The rise and fall of FAQ schema – and what it means for SEO today

FAQ schema is no longer a quick SEO win. 

In August 2023, Google reduced the visibility of FAQ rich results in search, restricting them to authoritative government and health websites.

The update effectively rendered the tactic useless for marketers who once relied on it to expand their SERP real estate. 

Google also clarified that FAQPage markup should never be used for advertising or promotional purposes. It belongs only on genuine FAQ pages created to answer user questions. 

For years, many SEOs – including myself – added structured FAQ data to marketing pages as a best practice. It’s time to rethink that habit.

Google’s shifting guidance isn’t new. Once-encouraged tactics, such as directory submissions, were later abandoned as well. 

FAQ schema is simply the latest reminder that even trusted SEO strategies can turn obsolete overnight.

Finding your stride in the AI-driven FAQ era

SEO often feels like running a race where the course keeps changing. 

Just as marketers adapted to the loss of FAQ rich results, AI is reshaping how questions and answers surface in search.

The “add FAQs everywhere” era is over, but the Q&A format still matters – especially as AI-driven search begins to rely on structured, factual answers.

Large language models rely on clear, structured information to generate responses. 

If your site isn’t providing direct, factual answers to user questions, you risk being invisible in AI-powered search results.

The challenge is knowing where FAQ content still fits – and how to structure it for both people and machines.

Keep marketing FAQs on the page for users, not for markup

Add Q&A content to product, service, or category pages to:

  • Address buyer objections.
  • Explain features.
  • Handle “what if” scenarios. 

Don’t apply FAQPage schema unless the page exists primarily to answer questions. 

This keeps you compliant with Google’s rules while ensuring your content remains structured and readable for LLMs.

Build genuine FAQ hubs

If you want to use FAQPage schema, create dedicated FAQ pages built around a single topic or high-intent theme. 

Each page should present a complete list of questions and answers in full text. 

This format helps LLMs map questions to authoritative responses and improves your chances of being cited in AI-driven search results.
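
If you do build such a hub, the markup itself is lightweight. Here’s a minimal sketch of FAQPage JSON-LD generated with a short Python script – the questions and answers are hypothetical placeholders, and you should validate the output with Google’s Rich Results Test before shipping it.

```python
import json

# Hypothetical Q&A pairs for a dedicated FAQ hub (placeholders, not real content).
faqs = [
    ("How long does shipping take?", "Orders typically ship within 2-3 business days."),
    ("What is your return policy?", "Unused items can be returned within 30 days of delivery."),
]

# Build the FAQPage structured data as a plain dictionary.
faq_schema = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": question,
            "acceptedAnswer": {"@type": "Answer", "text": answer},
        }
        for question, answer in faqs
    ],
}

# Print the JSON-LD block you would embed in the page.
print('<script type="application/ld+json">')
print(json.dumps(faq_schema, indent=2))
print("</script>")
```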

Dig deeper: How to create a helpful FAQ page (with 7 examples)

Write answers for both people and machines

Craft responses that are concise, factual, and written in natural language. Avoid filler or purely promotional copy. 

AI models perform best with content that reflects genuine expertise – direct, clear, and rich with relevant entities, facts, and relationships that reinforce topical authority.

Avoid promotional or advertising language in answers

Google explicitly warns against using FAQ markup for advertising. Even without schema, avoid turning answers into sales pitches. 

Focus on user value first, and let internal links or calls to action guide readers toward conversion naturally.

Follow the ‘single answer per question’ rule

Only apply FAQPage markup when one definitive, non-user-generated answer exists. 

If multiple perspectives are valid, use a QAPage or a long-form article with subheadings instead.

Monitor performance across both search and AI surfaces

Validate your markup with Google’s Rich Results Test, track visibility in Search Console, and monitor how your FAQs appear in AI search tools like Gemini, Bing Copilot, and ChatGPT. 

Even if FAQ schema no longer drives rich snippets, well-structured Q&A content remains key to helping AI systems retrieve your brand’s answers in response to user queries.

Dig deeper: How to create content that works for search and generative engines

Where to use FAQ schema – and where to skip it

Need a clear view of where FAQs belong, how they should be marked up, and when to leave the schema out entirely?

The right placement ensures your content complies with Google’s rules while also optimizing it for visibility in both traditional search and AI-driven results. 

This framework outlines when to apply a schema, when to omit it, and how to strike a balance between user experience and technical accuracy.

Use case | Purpose | FAQ placement | Schema usage | Best for
Dedicated FAQ Hub | Provide authoritative answers on a single topic | Standalone FAQ page with all Q&As visible | ✅ FAQPage schema | Rich results (if eligible), LLM retrieval, evergreen reference
Marketing / Product Pages | Address buyer questions and objections | Embedded Q&A sections within product, service, or category pages | ❌ No FAQPage schema (unless the page is primarily a FAQ) | On-page conversions, LLM retrieval, featured snippet potential
Knowledge Base / Support Docs | Answer technical or procedural questions | Article or help doc with clearly defined Q&A | ✅ FAQPage schema if the page is entirely FAQ format | LLM retrieval, voice assistant queries, customer self-service
AI-First Content Design | Optimize for generative AI inclusion | Q&A woven into structured, topical content | ✅ FAQPage schema if eligible and the page meets guidelines | Gemini, Bing Copilot, ChatGPT responses
Promotional Content | Drive sales or leads | Landing pages, campaign pages | ❌ Never use FAQPage schema | Sales-driven CTAs, brand awareness
  • ✅: Safe to use FAQPage schema under Google’s current rules.
  • ❌: Use Q&A format for user value, but do not apply schema to avoid violating guidelines.
  • LLM retrieval: Content designed so that LLMs can easily identify and extract answers for AI-powered search responses.

FAQ page wins and opportunities

The most effective FAQ pages share two traits: 

  • They genuinely answer user questions.
  • They present those answers in a clear, accessible format that search engines can easily interpret. 

Below are examples of pages that use FAQPage schema correctly to enhance visibility – and others that could benefit from adding it in a compliant way.

Best-in-class FAQ pages with effective schema

These examples follow Google’s guidelines by ensuring that FAQs are user-facing, non-promotional, and structured around genuine questions and answers.

  • Vashon-Maury.com FAQ: Locally oriented FAQs with clear, visible answers and a dedicated page structure.
  • Doctors Without Borders FAQ: Organized around the organization’s mission and operations, providing factual, non-promotional answers about humanitarian work, donations, and field logistics.
  • Apple Podcasts FAQ NYC: Niche but focused, serving a specific audience with concise answers and clear navigation to related content.
  • GitHub FAQ: Developed in 2020 from “People Also Ask” queries, this page addresses real user questions in plain language, improving both UX and search intent alignment. (Disclosure: I previously worked for GitHub.)

These examples demonstrate that a strong FAQ page is not solely about markup. It’s about building a genuine information resource. 

Schema should enhance an already solid foundation, not compensate for content that exists primarily to promote a product or service.

FAQ pages missing out on the benefits of schema

Many FAQ pages already meet Google’s guidelines and provide genuine user value but miss out on additional visibility because they don’t use structured data. 

The following examples show how adding FAQPage schema – where eligible – can enhance both search visibility and AI performance.

Website and page | Current state (no structured data) | Potential with FAQPage schema
American College of Surgeons Surgery FAQ | Clear, patient-focused Q&A; authoritative health content | Eligible for rich results under Google’s health guidelines; answers could appear directly in search and AI summaries for medical queries.
California Student Aid Commission FAQ | Single-answer financial aid Q&As; state government site | Eligible for government rich results; schema could help surface answers for student financial aid questions in SERPs and LLMs.
CMS Medicare FAQ | Government-run health FAQ; authoritative source | Fully eligible for health/government FAQ rich results; high potential for PAA inclusion and AI search visibility.
Medicaid.gov FAQ | Authoritative government health FAQs | Eligible for rich results; could increase visibility for Medicaid-related queries and improve AI answer sourcing.

From markup to meaning: The real SEO value of FAQs

The SEO strategies that worked yesterday may be restricted or retired tomorrow – and FAQ schema is only the latest example. 

The goal now is to adapt with intention: know when to apply structure, when to simplify, and when to shift your approach entirely.

By creating FAQ content that serves users first, aligns with Google’s guidelines, and remains structured enough for both traditional search and AI-driven systems, you’ll stay visible no matter how the landscape evolves.

Remember to:

  • Treat FAQ markup as a targeted tool, not a blanket tactic.
  • Use FAQPage schema only on genuine FAQ pages that meet Google’s eligibility criteria.
  • Keep marketing-page FAQs for user value and AI visibility, but without schema.
  • Write concise, factual answers that serve both people and LLMs.
  • Monitor results in Search Console and AI platforms, and adjust as needed.

SEO success comes from timing, precision, and adaptability – knowing when to build momentum, when to pause, and when to change direction entirely.

How to monitor your website’s performance and SEO metrics

Your website is live – now it’s time to measure what matters.

To sustain traffic growth, you need to track performance, collect meaningful data, and make informed, data-driven decisions that shape your site’s success.

Here are the key areas to monitor and the tools that can automate much of the work.

How to monitor your website for SEO performance

Website performance and uptime alerts

When a page loads slowly, conversions drop, engagement falls, and the user experience suffers.

Visitors expect pages to respond instantly, whether they’re comparing products or just beginning their research journey on your blog.

Monitor site speed with PageSpeed Insights, but that only scratches the surface of what you should be tracking. 

To keep your site running smoothly, focus on a few other key areas:

  • Uptime: Sites that go offline lose money, and downtime impacts crawl and index rates. Rankings can also drop temporarily.
  • Errors: Monitor and fix server and client errors (4xx and 5xx responses).
  • Redirects: Watch for broken links and faulty redirects.
  • Internal links: Track internal links and fix any that break.

Website performance covers a wide range of technical SEO best practices, and manual checks rarely make the best use of your time.

Automate site audits and set up alerts to catch issues, such as server errors or extended downtime, before they affect users or rankings.
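
If you want a feel for what such a check does under the hood, here’s a minimal sketch (the URLs and slowness threshold are placeholders, and it assumes the third-party requests library) that flags downtime, 4xx/5xx responses, and slow pages. The dedicated tools below add scheduling, alerting, and history on top of this.

```python
import requests  # third-party: pip install requests

# Hypothetical pages to watch - swap in your own URLs.
URLS = ["https://www.example.com/", "https://www.example.com/blog/"]
SLOW_THRESHOLD_SECONDS = 3.0

def check(url: str) -> None:
    """Fetch a URL and report downtime, error status codes, or slow responses."""
    try:
        response = requests.get(url, timeout=10, allow_redirects=True)
    except requests.RequestException as exc:
        print(f"DOWN  {url} ({exc})")
        return

    if response.status_code >= 400:
        print(f"ERROR {url} returned {response.status_code}")
    elif response.elapsed.total_seconds() > SLOW_THRESHOLD_SECONDS:
        print(f"SLOW  {url} took {response.elapsed.total_seconds():.1f}s")
    else:
        print(f"OK    {url} ({response.status_code})")

if __name__ == "__main__":
    for url in URLS:
        check(url)
```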

Tools that make life easier:

  • UptimeRobot.
  • Pingdom.
  • Semrush site audit.

Dig deeper: Core Web Vitals: How to measure and improve your site’s UX

Keyword rankings

A key goal of SEO is to enhance visibility through improved keyword rankings. 

If your site – or a client’s – isn’t appearing on Google, Bing, ChatGPT, or other platforms, you’re missing out on valuable traffic and potential revenue.

Manual tracking isn’t realistic, so rely on industry tools to monitor:

  • Current keyword positions.
  • Ranking changes.
  • Location.
  • Traffic potential.

To understand whether your efforts are helping or hurting, go beyond surface-level rankings and dig into deeper insights, such as:

  • Average positions.
  • Ranking distributions.
  • Volatility.
  • Trends over time.
  • Keyword difficulty.
  • Search intent type.
  • Click-through rate.

Keyword data keeps your optimization efforts focused on work that delivers a return on investment. Use rank-tracking tools to monitor the keywords you’re targeting rather than checking them manually.

Tools that make life easier:

Dig deeper: Keyword research for SEO: The ultimate guide

Website changes

Websites evolve constantly – content updates, design tweaks, and technical fixes occur daily. 

On enterprise sites or those managed by multiple teams, this creates plenty of room for error. 

While many of these changes overlap with performance monitoring, it’s better to track more than risk missing something important.

Use monitoring tools to track:

  • Web accessibility compliance.
  • Site speed.
  • Content changes, with before-and-after comparisons.
  • On-page changes to headings, meta tags, and body copy.
  • URL changes and redirects.

Proactive monitoring makes it easier to connect cause and effect. When rankings shift, you’ll have data showing what changed and when. For example, a content refresh on a service page might coincide with a keyword’s jump from position 23 to 1 – a clear signal of what worked.

As a site scales and more stakeholders contribute, automation becomes essential. Smaller teams may still manage manual tracking, but for most, monitoring tools are indispensable.

Tools that make life easier:

  • Semrush.
  • Screaming Frog.
  • VisualPing.
  • Botify.
  • Lumar.

Lead monitoring

Website, blog, and SEO channels now deliver the strongest ROI for B2B brands, according to HubSpot’s State of Marketing Report.

Clients want to justify their marketing budgets and see that SEO efforts are producing a return on investment.

Use lead tracking tools to identify B2B website visitors and pinpoint:

  • Opportunities in the pipeline.
  • Lead sources.
  • Conversion potential.

Lead tracking shouldn’t stop when visitors arrive. Monitoring behavior shows which pages they visit, where they exit, and what happens during form submissions.

For example, analyzing form data can uncover broken fields or incomplete submissions that cost potential leads. 

Knowing where prospects come from, how they convert, and what happens when they don’t provides insights that manual tracking can’t deliver.

Tools that make this possible include:

  • LeadForensics.
  • Formstory.io.

Traffic and analytics

Monitoring your website’s traffic and analytics is the heartbeat of your marketing performance. You need to know:

  • Total sessions: The number of overall visits to your site.
  • Unique visitors: The number of distinct people visiting your site.
  • Pageviews: The total number of pages viewed across your site.
  • Pages per session: Average number of pages visited by users.
  • Average session duration: Length of time visitors remain on your site.
  • Bounce rate: The percentage of visitors who leave your site without interacting with it.
  • Traffic sources: Where visitors are originating from.
  • Impressions: How often your site appears in the search results.
  • Clicks: The number of clicks from the search results.
  • Click-through rate: Ratio of clicks to impressions.

Analytics can also segment audiences, track behavior metrics, and set conversion goals. 

Tools like Google Analytics can report revenue per visitor, while Google Search Console surfaces crawl errors and other site issues.

SEOs need to understand how visitors find a site, which content or keywords drive traffic, and how those efforts connect to measurable results.

Tools that make life easier:

  • Google Analytics.
  • Google Search Console.
  • Looker Studio.
  • Fathom Analytics.

Dig deeper: How to measure organic traffic in GA4

Backlinks and brand mentions

Backlinks have long been tied to rankings and remain one of the strongest signals a site can earn. Track key factors such as:

  • Total links.
  • Referring domains.
  • Followed versus nofollowed links.
  • Anchor text and diversity.
  • New and lost links.
  • Top linking pages.

Brand mentions have become even more significant with the rise of generative engine optimization (GEO), helping LLMs associate content and context with your brand. 

Mentions across communities, social platforms, and online content all play a role.

Monitoring should also cover:

  • Linked and unlinked mentions.
  • Topic relevance.
  • Sentiment.
  • Volume.

Both backlinks and brand mentions play a role in building authority and driving visibility – and, in some cases, referral traffic.

To sustain growth, consistently track both.

Tools that make life easier:

  • Semrush.
  • Google Alerts.
  • Mention.

SSL/Domain expiration

Domain and SSL certificate expiration directly affect a site’s trust and uptime. Monitor the following:

  • SSL status.
  • Expiration date.
  • Mixed content errors.
  • HTTP to HTTPS redirects.

Though easy to overlook, these expirations can disrupt sales, erode trust, and take your site offline.

Use monitoring tools to send alerts and protect both your uptime and the credibility you’ve built with visitors.
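
For illustration, a small standard-library script like the sketch below (the hostname and warning window are placeholders) can read a certificate’s expiration date so you could wire it into your own alerting. The tools listed next handle scheduling, notifications, and mixed-content checks for you.

```python
import socket
import ssl
from datetime import datetime, timezone

HOSTNAME = "example.com"  # placeholder - use your own domain
WARN_DAYS = 30            # alert when fewer than this many days remain

def days_until_expiry(hostname: str, port: int = 443) -> int:
    """Open a TLS connection and return the number of days left on the certificate."""
    context = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
    expires_at = datetime.fromtimestamp(
        ssl.cert_time_to_seconds(cert["notAfter"]), tz=timezone.utc
    )
    return (expires_at - datetime.now(timezone.utc)).days

if __name__ == "__main__":
    remaining = days_until_expiry(HOSTNAME)
    status = "RENEW SOON" if remaining < WARN_DAYS else "OK"
    print(f"{HOSTNAME}: certificate expires in {remaining} days ({status})")
```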

Tools that make life easier:

  • Red Sift Certificates (formerly Hardenize).
  • UptimeRobot.
  • Datadog SSL Monitoring.
  • TrackSSL.
  • Host-Tracker.
  • HeyOnCall.

Building a monitoring system for lasting SEO growth

Tracking and monitoring your site’s metrics after launch provides the long-term data needed to make meaningful improvements. 

Use the tools and guides above to build a system that keeps your website healthy, competitive, and growing – catching issues early, improving performance, and driving sustainable SEO results.


Blogging, AI, and the SEO road ahead: Why clarity now decides who survives

For years, I told bloggers the same thing: make your content easy enough for toddlers and drunk adults to understand. 

That was my rule of thumb. 

If a five-year-old can follow what you’ve written and someone paying half-attention can still find what they need on your site, you’re doing something right.

But the game has changed. It’s no longer just about toddlers and drunk adults. 

You’re now writing for large language models (LLMs) quietly scanning, interpreting, and summarizing your work inside AI search results.

I used to believe that great writing and solid SEO were all it took to succeed. What I see now:

Clarity beats everything

The blogs winning today aren’t simply well-written or packed with keywords. They’re clean, consistent, and instantly understandable to readers and machines alike.

Blogging isn’t dying. It’s moving from being a simple publishing tool to a real brand platform that supports off-site efforts more than ever before.

You can’t just drop a recipe or travel guide online and expect it to rank using the SEO tactics of the past. 

Bloggers must now think of their site as an ecosystem where everything connects – posts, internal links, author bios, and signals of external authority all reinforce each other.

When I audit sites, the difference between those that thrive and those that struggle almost always comes down to focus. 

The successful ones treat their blogs like living systems that grow smarter, clearer, and more intentional with time.

But if content creators want to survive what’s coming, they need to build their sites for toddlers, drunk adults, and LLMs.

In this article, bloggers will learn how to do the following:

  • Understand the current blogging climate and why clarity now matters more than ever.
  • Adapt their content for AI Overviews, LLMs, and emerging retrieval systems.
  • Use recency bias and “last updated” signals to strengthen visibility.
  • Build a recognizable brand that LLMs can cite and retrieve with confidence.
  • See why professional SEO audits are one of the smartest investments bloggers can make.
  • Prepare for the next five years of AI-driven search with practical, proven strategies.

The 2026 blogging climate: Clarity amid chaos

Let’s be honest: the blogging world feels a little shaky right now.

One day, traffic is steady, and the next day, it’s down 40% after an update no one saw coming. 

Bloggers are watching AI Overviews and “AI Mode” swallow up clicks that used to come straight to their sites. Pinterest doesn’t drive what it once did, and social media traffic in general is unpredictable.

It’s not your imagination. The rules of discovery have changed.

We’ve entered a stage where Google volatility is the norm, not the exception. 

Core updates hit harder, AI summaries are doing the talking, and creators are realizing that search is no longer just about keywords and backlinks. It’s about context, clarity, and credibility.

But here’s the good news: the traffic that matters is still out there. It just presents differently. 

The strongest blogs I work with are seeing direct traffic and returning visitors climb. 

People remember them, type their names into search, open their newsletters, and click through from saved bookmarks. That’s not an accident – that’s the result of clarity and consistency.

If your site clearly explains who you are, what you offer, and how your content fits together, you’re building what I call resilient visibility. 

It’s the kind of presence that lasts through algorithm swings, because your audience and Google both understand your purpose.

Think of it this way: the era of chasing random keyword wins is over. 

The bloggers who’ll still be standing in five years are the ones who organize their sites like smart libraries: easy to navigate, full of expertise, and built for readers who come back again and again.

AI systems reward that same clarity. 

They want content that’s connected, consistent, and confident about its subject matter. 

That’s how you show up in AI Overviews, People Also Ask carousels, or Gemini-generated results.

In short, confusion costs you clicks, but clarity earns you staying power.

Takeaway

  • The blogging climate might feel chaotic, but the strategy hasn’t changed as much as people think. 
  • Focus on clarity, structure, and user trust. Build a brand that people – and AI – can easily recognize and rely on.

Dig deeper: Chunk, cite, clarify, build: A content framework for AI search

The AI acceleration: From search to retrieval

A few years ago, SEO was all about chasing rankings. 

You picked your keywords, wrote your post, built some links, and hoped to land on page one. 

Simple enough. But that world doesn’t exist anymore.

Today, we’re in what can best be called the retrieval era.

AI systems like ChatGPT, Gemini, and Perplexity don’t list links. They retrieve answers from the brands, authors, and sites they trust most.

Duane Forrester said it best – search is shifting from “ranking” to “retrieval.” 

Instead of asking, “Where do I rank?” creators should be asking, “Am I retrievable?” 

That mindset shift changes everything about how we create content.

Mike King expanded on this idea, introducing the concept of relevance engineering. 

Search engines and LLMs now use context to understand relevance, not just keywords. They look at:

  • How consistently you cover topics.
  • How well your pages connect.
  • Whether you’re seen as an authority in your niche.

This is where structure and clarity start paying off. 

AI systems want to understand who you are and where you stand. 

They learn that from your internal links, schema, author bios, and consistent topical focus. 

When everything aligns, you’re no longer just ranking in search – you’re becoming a known entity that AI can pull from.

I’ve seen this firsthand during site audits. Blogs with strong internal structures and clear topical authority are far more likely to be cited as sources in AI Overviews and LLM results. 

You’re removing confusion and teaching both users and models to associate your brand with specific areas of expertise.

Takeaway

  • Stop worrying about ranking higher. Start making yourself easier to retrieve. 
  • Build a site that clearly tells Google and AI who you are, what you offer, and why your content deserves to be cited.

Understanding recency bias: Why freshness is your friend

Here’s something I see a lot in my audits: two posts covering the same topic, both written by experienced bloggers, both technically sound. Yet one consistently outperforms the other. 

The difference? One shows a clear “Last updated” date, and the other doesn’t.

That tiny detail matters more than most people realize.

Research from Metehan Yesilyurt confirms what many SEOs have suspected for a while: LLMs and AI-driven search results favor recency, and it’s already being exploited in the name of research.

It’s built into their design. When AI models have multiple possible answers to choose from, they often prefer newer or recently refreshed content. 

This is recency bias, and it’s reshaping both AI search and Google’s click-through behavior.

We see the same pattern inside the traditional SERPs. 

Posts that display visible “Last updated” dates tend to earn higher click-through rates. 

People – and algorithms – trust fresh information.

That’s why one of the first things I check in an audit is how Google is interpreting the date structure on a blog. 

Is it recognizing the correct updated date, or is it stuck on the original publish date? 

Sometimes the fix is simple: remove the old “published on” markup and make sure the updated timestamp is clearly visible and crawlable. 

Other times, the page’s HTML or schema sends conflicting signals that confuse Google, and those need to be cleaned up.
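
As a reference point, cleaned-up Article markup keeps datePublished and dateModified explicit and consistent with what the page visibly shows. Here’s a minimal, hypothetical sketch generated in Python – the headline, dates, and author are placeholders, not a prescription for your CMS.

```python
import json

# Hypothetical article dates - keep them in sync with what the page visibly shows.
article_schema = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "Example post title",
    "datePublished": "2022-04-12",
    "dateModified": "2025-11-03",  # the freshness signal you want crawlers to pick up
    "author": {"@type": "Person", "name": "Example Author"},
}

# Print the JSON-LD block you would embed in the page.
print('<script type="application/ld+json">')
print(json.dumps(article_schema, indent=2))
print("</script>")
```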

When Google or an LLM can’t identify the freshness of your content, you’re handing visibility to someone else who communicates that freshness better.

How do you prevent this? Don’t hide your updates. Celebrate them.

When you update recipes, add new travel information, or test a product, update your post and make the date obvious. 

This will tell readers and AI systems, “This content is alive and relevant.”

Now, that being said, Google does keep a history of document versions. 

The average post may have dozens of copies stored, and Google can easily compare the recently changed version to its repository of past versions. 

Avoid making small changes that do not add value to users or republishing to a new date years later to fake relevancy. Google specifically calls that out in its guidelines.

Takeaway

  • Recency is a ranking and retrieval advantage. Keep your content updated, make that freshness visible, and verify that Google and LLMs are reading the right dates. 
  • The clearer your update signals, the stronger your trust signals.

Relevance, entities, and the rise of brand SEO

Let’s talk about what really gets remembered in this new AI-driven world.

When you ask ChatGPT, Gemini, or Perplexity a question, it thinks in entities – people, brands, and concepts it already knows.

The more clearly those models recognize who you are and what you stand for, the more likely you are to be retrieved when it’s time to generate an answer.

That’s where brand SEO comes in.

Harry Clarkson-Bennett in “How to Build a Brand (with SEO) in a Post AI World” makes a great point: LLMs reward brand reinforcement. 

They want to connect names, authors, and websites with a clear area of expertise. And they remember consistency. 

If your name, site, and author profiles all align across the web (same logo, same tone, same expertise), you start training these models to trust you.

I tell bloggers all the time: AI learns the same way humans do. It remembers patterns, tone, and repetition. So make those patterns easy to see.

  • Use a consistent author bio everywhere.
  • Build clear “About” pages that connect your name to your niche.
  • Link your best content internally so Google and AI can map your expertise.
  • Use structured data to reinforce entity relationships (i.e., author, organization, and sameAs markup).
  • And here’s something new I’ve started recommending to audit clients: AI Buttons. 

I originally discussed these AI buttons in my last article, “AI isn’t the enemy: How bloggers can thrive in a generative search world,” and provided a visual example.

These are simple on-site prompts encouraging readers to save or summarize your content using AI tools like ChatGPT or Gemini. 

When users do that, those models start seeing your site as a trusted example. Over time, that can influence what those systems recall and recommend.

Think of this as reputation-building for the AI era. It’s not about trying to game the system. It’s about making sure your brand is memorable, consistent, and worth retrieving.

Fortunately, these buttons are becoming more mainstream, with theme designers like Feast including them as custom blocks. 

And the buttons work – I’ve seen creators turn their blogs into small but powerful brands that LLMs now cite regularly.

They did it by reinforcing who they were, everywhere, and then using AI buttons to encourage their existing traffic to save their sites as high-quality examples to reference in the future.

Takeaway

  • Google and AI don’t just rank content anymore. They recognize entities and remember brands. 
  • The more consistent and connected your brand signals are, the more likely you’ll be retrieved, cited, and trusted in AI search results.

Why every blogger needs an SEO professional (now more than ever)

Blogging has never been easy, but it’s never been harder than it is right now.

Between core updates, AI Overviews, and shifting algorithms, creators are expected to keep up with changes that even seasoned SEOs struggle to track. 

And that’s the problem – too many bloggers are still trying to figure it all out alone.

If there’s one thing I’ve learned after doing more than 160 site audits this year, it’s this: almost every struggling blogger is closer to success than they think. They’re just missing clarity.

A good SEO audit does more than point out broken links or slow-loading pages. It shows you why your content isn’t connecting with Google, readers, and now LLMs. 

My audits are built around what I call the “Toddlers, Drunk Adults, and LLMs” framework. 

If your site works for those three audiences, you’re in great shape.

For toddlers

  • The structure is simple. Your content hierarchy makes sense. Every post has a clear topic, and your categories aren’t a maze.

For drunk adults

  • Your site is fast, responsive, and forgiving. People can find what they need even when they’re not fully focused.

For LLMs

  • Your data is clean, your entities are connected, and your expertise is crystal clear to AI systems scanning your site.

When bloggers follow this approach, the numbers speak for themselves. 

In 2025 alone, my audit clients have seen an average increase of 47% in Google traffic and RPM improvements of 21-33% within a few months of implementing recommendations.

This isn’t just about ranking better. Every audit is a roadmap to help bloggers position their sites for long-term visibility across traditional search and AI-powered discovery.

That means optimizing the fundamentals you can control. You can’t control Google’s volatility, but you can control how clear, crawlable, and connected your site is. That’s what gets rewarded.

And while I’ll always advocate for professional audits, this isn’t about selling a service. 

You need someone who can give you an honest, technical, and strategic look under the hood.

Why?

Because the difference between “doing fine” and “thriving in AI search” often comes down to a single, well-executed audit.

Takeaway

  • DIY SEO isn’t enough anymore. Professional audits are the most valuable investment a blogger can make in 2026 and beyond. 
  • Not for quick wins, but for building a site that’s understandable, adaptable, and future-ready for both Google and AI.

The road ahead: Blogging in 2026–2030

So where does all this lead? What does blogging even look like five years from now?

Here’s what I see coming.

We’re heading toward an increasingly agentic web, where AI systems do the searching, summarizing, and recommending for us. 

Instead of typing a query into Google, people will ask their personal AI for a dinner idea, a travel itinerary, or a product recommendation. 

And those systems will pull from a short list of trusted sources they already “know.”

That’s why what you’re doing today matters so much.

Every time you publish a post, refine your site structure, or strengthen your brand signals, you’re teaching AI who you are. 

You’re building a long-term relationship with the systems that will decide what gets shown and what gets skipped.

Here’s how I expect the next few years to unfold:

  • AI-curated discovery becomes normal: Instead of browsing through 10 links, users get custom recommendations from trusted sources. The blogs that survive are the ones AI already recognizes as reliable.
  • Brand-first SEO takes over: Ranking for a keyword will matter less than having your brand show up as the answer. Visibility won’t just depend on optimization, it’ll depend on reputation.
  • Entity-first indexing becomes the foundation: Google and AI models are increasingly indexing based on entities, not URLs. That means your author names, structured data, and topical focus all play a direct role in discoverability.
  • Human storytelling becomes the ultimate differentiator: AI can summarize information, but it can’t replicate lived experience, voice, or emotion. The content that stands out will be the content that feels human.

The creators who will win in this next chapter are the ones who stop trying to outsmart Google and start building systems that AI can easily understand and humans genuinely connect with.

It’s not about chasing trends or reinventing your site every time an update hits. It’s about getting the fundamentals right and letting clarity, trust, and originality carry you forward.

Because the truth is, Google’s not the gatekeeper anymore. You are. 

Your brand, expertise, and ability to communicate clearly will decide how visible you’ll be in search and AI-driven discovery.

Takeaway

  • The next five years of blogging will belong to those who build clear, human-centered brands that AI understands and audiences love. 
  • Keep your content fresh, your structure clean, and your voice unmistakably your own.

Clarity over chaos

If there’s one thing I want bloggers to take away from all this, it’s that clarity always wins.

We’re living through the fastest transformation in the history of search. 

AI is rewriting how content is discovered, ranked, and retrieved. 

Yes, that’s scary. But it’s also full of opportunity for those willing to adapt.

I’ve seen it hundreds of times in audits this year. 

Bloggers who simplify their sites, clean up their data, and focus on authority signals see measurable results. 

They show up in AI Overviews. They regain lost rankings. They build audiences that keep coming back, even when algorithms shift again.

This isn’t about fighting AI – it’s about working with it. The goal is to show the system who you are and why your content matters.

Here’s my advice, regardless of the professional you choose:

  • Get your site audited by someone who understands both SEO and AI search.
  • Keep your content updated and your structure clean.
  • Make your brand easy to recognize, both to readers and to machines.
  • Build for toddlers, drunk adults, and LLMs.

It’s never been harder to be a content creator, but it’s never been more possible to build something that lasts. 

The blogs that survive the next five years will be organized, human, and clear.

The future of blogging belongs to the creators who embrace clarity over chaos. AI won’t erase the human voice – it’ll amplify the ones that are worth hearing. 

Here’s to raised voices and future success. Good luck out there.

Dig deeper: Organizing content for AI search: A 3-level framework

Regex for SEO: The simple language that powers AI and data analysis

Regex is a powerful – yet overlooked – tool in search and data analysis. 

With just a single line, you can automate what would otherwise take dozens of lines of code.

Short for “regular expression,” regex is a sequence of characters used to define a pattern for matching text.

It’s what allows you to find, extract, or replace specific strings of data with precision.

In SEO, regex helps you extract and filter information efficiently – from analyzing keyword variations to cleaning messy query data. 

But its value extends well beyond SEO. 

Regex is also fundamental to natural language processing (NLP), offering insight into how machines read, parse, and process text – even how large language models (LLMs) tokenize language behind the scenes.

Regex uses in SEO and AI search

Before getting started with regex basics, I want to highlight some of its uses in our daily workflows.

Google Search Console has a regex filter functionality to isolate specific query types.

One of the simplest regex expressions commonly used is the brand regex brandname1|brandname2|brandname3, which is very useful when users write your brand name in different ways.

Google Analytics also supports regex for defining filters, key events, segments, audiences, and content groups.

Looker Studio allows you to use regex to create filters, calculated fields, and validation rules.

Screaming Frog supports the use of regex to filter and extract data during a crawl and also to exclude specific URLs from your crawl.

Screaming Frog regex

Google Sheets enables you to test whether a cell matches a specific regex. Simply use the function REGEXMATCH(text, regular_expression).

In SEO, we’re surrounded by tools and features just waiting for a well-written regex to unlock their full potential.

Regex in NLP

If you’re building SEO tools, especially those that involve content processing, regex is your secret weapon.

It gives you the power to search, validate, and replace text based on advanced, customizable patterns.

Here’s a Google Colab notebook with an example of a Python script that takes a list of queries and extracts different variations of my brand name. 

You can easily customize this code by plugging it into ChatGPT or Claude alongside your brand name.

Google Colab - BrandName_Variations
Fun fact: By building this code, I accidentally found a good optimization opportunity for my personal brand. 
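
The notebook is linked above; as a rough idea of the approach, here’s a minimal sketch using Python’s re module with a made-up brand name and query list – your own pattern would cover however your audience actually spells your brand.

```python
import re

# Made-up brand and queries - swap in your own Search Console export.
queries = [
    "acme analytics pricing",
    "acmeanalytics login",
    "acme-analytics vs competitor",
    "best seo tools 2025",
]

# One pattern covering common spellings: with a space, a hyphen, or no separator.
brand_pattern = re.compile(r"acme[\s-]?analytics", re.IGNORECASE)

for query in queries:
    match = brand_pattern.search(query)
    label = f"brand variation: {match.group()}" if match else "non-brand"
    print(f"{query!r}: {label}")
```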

How to write regex

I’m a fan of vibe coding – but not the kind where you skip the basics and rely entirely on LLMs. 

After all, you can’t use a calculator properly if you don’t understand numbers or how addition, multiplication, division, and subtraction work.

I support the kind of vibe coding that builds on a little coding knowledge – enough to use LLMs effectively, test what they produce, and troubleshoot when needed.

Likewise, learning the basics of regex helps you use LLMs to create more advanced expressions.

Simple regex cheat sheet

Symbol | Meaning
. | Matches any single character.
^ | Matches the start of a string.
$ | Matches the end of a string.
* | Matches 0 or more of the preceding character.
+ | Matches 1 or more of the preceding character.
? | Makes the preceding character optional (0 or 1 time).
{} | Matches the preceding character a specific number of times.
[] | Matches any one character inside the brackets.
\ | Escapes special characters or signals special sequences like \d.
` | Matches a literal backtick character.
() | Groups characters together (for operators or capturing).

Example usage

Here’s a list of 10 long-tail keywords. Let’s explore how different regex patterns filter them using the Regex101 tool.

  • “Best vegan recipes for beginners.”
  • “Affordable solar panels for home.”
  • “How to train for a marathon.”
  • “Electric cars with longest battery range.”
  • “Meditation apps for stress relief.”
  • “Sustainable fashion brands for women.”
  • “DIY home workout routines without equipment.”
  • “Travel insurance for adventure trips.”
  • “AI writing software for SEO content.”
  • “Coffee brewing techniques for espresso lovers.”

Example 1: Extract any two-character sequence that starts with an “a.” The second character can be anything (i.e., a, then anything).

  • Regex: a.
  • Output: (All highlighted words in the screenshot below.)
Regex usage - Example 1

Example 2: Extract any string that starts with the letter “a” (i.e., a is the start of the string, then followed by anything).

  • Regex: ^a.
  • Output: (All highlighted words in screenshot below.)
Regex usage - Example 2

Example 3: Extract any string that starts with an “a” and ends with an “e” (i.e., any line that starts with a, followed by anything, then ends with an e).

  • Regex: ^a.*e$
  • Output: (All highlighted words in the screenshot below.)
Regex usage - Example 3

Example 4: Extract any string that contains two consecutive “s” characters (i.e., “ss”).

  • Regex: s{2}
  • Output: (All highlighted words in the screenshot below.)
Regex usage - Example 4

Example 5: Extract any string that contains “for” or “with.”

  • Regex: for|with
  • Output: (All highlighted words in the screenshot below.)
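
If you’d rather test these patterns locally instead of in Regex101, a short script like the sketch below runs the same five expressions against the keyword list with Python’s re module, treating each keyword as its own line so ^ and $ anchor to the start and end of each keyword.

```python
import re

keywords = [
    "Best vegan recipes for beginners",
    "Affordable solar panels for home",
    "How to train for a marathon",
    "Electric cars with longest battery range",
    "Meditation apps for stress relief",
    "Sustainable fashion brands for women",
    "DIY home workout routines without equipment",
    "Travel insurance for adventure trips",
    "AI writing software for SEO content",
    "Coffee brewing techniques for espresso lovers",
]

# The five example patterns from above, applied case-insensitively to each keyword.
patterns = ["a.", "^a.", "^a.*e$", "s{2}", "for|with"]

for pattern in patterns:
    regex = re.compile(pattern, re.IGNORECASE)
    matches = [kw for kw in keywords if regex.search(kw)]
    print(f"{pattern!r}: {len(matches)} matching keyword(s)")
    for kw in matches:
        print(f"  - {kw}")
```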

I’ve also built a sample regex Google Sheet so you can play around, test, and experience the feature in Google Sheets, too. Check it out here.

Sample regex Google Sheet

Note: Cells in the Extracted Text column showing #N/A indicate that the regex didn’t find a matching pattern.

Where regex fits in your SEO toolkit

By exploring regex, you’ll open new doors for analyzing and organizing search data. 

It’s one of those skills that quietly makes you faster and more precise – whether you’re segmenting keywords, cleaning messy queries, or setting up advanced filters in Search Console or Looker Studio.

Once you’re comfortable with the basics, start spotting where regex can save you time. 

Use it to identify branded versus nonbranded searches, group URLs by pattern, or validate large text datasets before they reach your reports.

Experiment with different expressions in tools like Regex101 or Google Sheets to see how small syntax changes affect results. 

The more you practice, the easier it becomes to recognize patterns in both data and problem-solving. 

That’s where regex truly earns its place in your SEO toolkit.

Why your SEO and PPC teams need shared standards to unlock mutual gains

Most marketing teams still treat SEO and PPC as budget rivals, not as complementary systems facing the same performance challenges.

In practice, these relationships fall into three types:

  • Parasitism: One benefits at the other’s expense.
  • Commensalism: One benefits while the other remains unaffected.
  • Mutualism: Both thrive through shared optimization and accountability.

Only mutualism creates sustainable performance gains – and it’s the shift marketing teams need to make next.

Mutualism: Solving joint problems

One glaring problem unites online marketers: we’re getting less traffic for the same budget.

Navigating the coming years requires more than the coexistence many teams mistake for collaboration. 

We need mutualism – shared technical standards that optimize for both organic visibility and paid performance. 

Shared accountability drives lower acquisition costs, faster market response, and sustainable gains that neither channel can achieve alone.

Here’s what it looks like in practice: 

  • Fostering a culture of experimentation and learning.
  • PPC tests messaging while SEO builds long-term content assets​.
  • SEO uncovers search intent that PPC capitalizes on immediately​.
  • Both channels learn from shared incrementality testing (guerrilla testing)​.
  • Cross-pollination of keyword intelligence and conversion data​.
  • Combined technical standards (modified Core Web Vitals weights) align engineering with marketing goals​.
  • Feedback loops accelerate market insights and reduce wasted spend​.

Stabilizing performance during SEO volatility

During SEO penalties and core updates, PPC can maintain traffic until recovery. 

Core updates cause fluctuations in organic rankings and user behavior, which, in turn, can affect ad relevance and placements.

Do you involve SEO during a CPC price surge?

PPC-only landing pages affect the Core Web Vitals of entire sites, influencing Google’s default assumptions for URLs without enough traffic to calculate individual scores. 

Paid pages are penalized for slow loading just as much as organic ones, impacting Quality Score and, ultimately, bids.

New-market launch considerations

PPC should answer a simple question: Are we getting the types of results we expect and want? 

Setting clear PPC baselines by market and country provides valuable, real-time keyword and conversion data that SEO teams can use to strengthen organic strategies. 

By analyzing which PPC clicks drive signups or demo requests, SEO teams can prioritize content and keyword targets with proven high intent. 

Sharing PPC insights enables organic search teams to make smarter decisions, improve rankings, and drive better-qualified traffic.

Figure: New-market launch PPC framework – global baseline, market baseline, country baseline, and click yield, each step evaluating performance metrics and tracking conversion outcomes from clicks to strengthen SEO.
Source: SMX Advanced Berlin presentation by the author.

Dig deeper: The end of SEO-PPC silos: Building a unified search strategy for the AI era

Building unified performance measurement

One key question to ask is: how do we measure incrementality? 

We need to quantify the true, additional contribution PPC and SEO drive above the baseline. 

Guerrilla testing offers a lo-fi way to do this – turning campaigns on or off in specific markets to see whether organic conversions are affected. 

A more targeted test involves turning off branded campaigns. 

PPC ads on branded terms can capture conversions that would have occurred organically, making paid results appear stronger and SEO weaker. 

That’s exactly what Arturs Cavniss’ company did – and here are the results.

"Line and bar chart titled 'Cost and Purchases Over Time.' The chart shows daily advertising costs (bars) and purchases (line) from June to September 2025, with a highlighted section labeled 'Brand Spend Turned Off.' During this highlighted period, costs drop sharply while purchases remain steady, illustrating the impact of disabling branded ad spend."

For teams ready to operate in a more sophisticated way, several options are available. 

One worth exploring is Robyn, an open-source, AI/ML-powered marketing mix modeling (MMM) package.

Core Web Vitals

Core Web Vitals measures layout stability, rendering efficiency, and server response times – key factors influencing search visibility and overall performance. 

These metrics are weighted by Google in evaluating page experience.

Core Web Vitals metric | Google’s weight
First Contentful Paint | 10%
Speed Index | 10%
Largest Contentful Paint | 25%
Total Blocking Time | 30%
Cumulative Layout Shift | 25%

Core Web Vitals:

  • Affect PPC performance through CLS metrics.
  • Influence SEO rankings through search vitals.
  • Give engineering teams clear benchmarks that align development efforts with marketing goals.

You can create a modified weighted system to reflect a combined SEO and PPC baseline. (Here’s a quick MVP spreadsheet to get started.)
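
To make the idea concrete, here’s a small sketch that combines per-metric sub-scores into one weighted score using the weights from the table above – the page sub-scores and any modified SEO-PPC weighting you substitute are placeholders, not an official formula.

```python
# Google's weights from the table above; swap in modified weights to reflect
# a combined SEO + PPC baseline.
GOOGLE_WEIGHTS = {
    "first_contentful_paint": 0.10,
    "speed_index": 0.10,
    "largest_contentful_paint": 0.25,
    "total_blocking_time": 0.30,
    "cumulative_layout_shift": 0.25,
}

# Example per-metric sub-scores (0-100), e.g. from a Lighthouse run - placeholders.
page_scores = {
    "first_contentful_paint": 85,
    "speed_index": 78,
    "largest_contentful_paint": 62,
    "total_blocking_time": 55,
    "cumulative_layout_shift": 90,
}

def weighted_score(scores: dict, weights: dict) -> float:
    """Combine per-metric sub-scores into one weighted performance score."""
    return sum(scores[metric] * weight for metric, weight in weights.items())

print(f"Weighted score (Google weights): {weighted_score(page_scores, GOOGLE_WEIGHTS):.1f}")
```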

However, SEO-focused weightings don’t capture PPC’s Quality Score requirements or conversion optimization needs. 

Clicking an ad link can be slower than an organic one because Google’s ad network introduces extra processes – additional data handling and script execution – before the page loads. 

The hypothesis is that ad clicks may consistently load slower than organic ones due to these extra steps in the ad-serving process. 

This suggests that performance standards designed for organic results may not fully represent the experience of paid users.

Microsoft Ads Liaison Navah Hopkins notes that paid pages are penalized for slow loading just as severely as organic ones – a factor that directly affects Quality Score and bids.

Figure: “Before SEOs grumble…” – PPC-only landing pages can drag down sitewide Core Web Vitals and Google’s default assumptions for low-traffic URLs, ultimately hurting SEO performance.
Source: SMX Advanced Berlin presentation by the author.

SEOs also take responsibility for improving PPC-only landing pages, even without being asked. As Jono Alderson explains:

  • “All of your PPC-only landing pages are affecting the CWV of your whole site (and, Google’s default assumptions for all of your URLs that don’t have enough traffic to calculate), and thus f*ck with your SEO.”

PPC-only landing pages influence the Core Web Vitals of entire sites, shaping Google’s assumptions for low-traffic URLs.

INP gains importance with agentic AI

Agentic AI’s sensitivity to interaction delays has made Interaction to Next Paint (INP) a critical performance metric.

INP measures how quickly a website responds when a human or AI agent interacts with a page – clicking, scrolling, or filling out forms while completing tasks. 

When response times lag, agents fail tasks, abandon the site, and may turn to competitors. 

INP doesn’t appear in lab tools like Chrome Lighthouse because synthetic tests don’t simulate real interactions (PageSpeed Insights only reports it from field data). 

Real user monitoring helps reveal what’s happening in practice, but it still can’t capture the full picture for AI-driven interactions.

Bringing quality scoring to SEO

PPC practitioners have long relied on Quality Score – a 1-10 scale measuring expected CTR and user intent fit – to optimize landing pages and reduce costs. 

SEO lacks an equivalent unified metric, leaving teams to juggle separate signals like Core Web Vitals, keyword relevance, and user engagement without a clear prioritization framework.

You can create a company-wide quality score for pages to incentivize optimization and align teams while maintaining channel-specific goals. 

This score can account for page type, with sub-scores for trial, demo, or usage pages – adaptable to the content that drives the most business value.

The system should account for overlapping metrics across subscores yet remain simple enough for all teams – SEO, PPC, engineering, and product – to understand and act on. 

A unified scoring model gives everyone a common language and turns distributed accountability into daily practice. 

When both channels share quality standards, teams can prioritize fixes that strengthen organic rankings and paid performance simultaneously.
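
One way to prototype this is a simple weighted blend of normalized sub-scores with page-type-specific weights, as in the sketch below – the signals, weights, and page types are hypothetical placeholders meant to illustrate the mechanics, not a recommended standard.

```python
# Hypothetical, normalized sub-scores (0-100) and page-type weights - placeholders
# to illustrate a company-wide quality score, not an industry standard.
PAGE_TYPE_WEIGHTS = {
    "demo": {"core_web_vitals": 0.30, "keyword_relevance": 0.20, "ppc_quality": 0.30, "engagement": 0.20},
    "blog": {"core_web_vitals": 0.20, "keyword_relevance": 0.40, "ppc_quality": 0.10, "engagement": 0.30},
}

def page_quality(sub_scores: dict, page_type: str) -> float:
    """Blend channel sub-scores into one score using page-type-specific weights."""
    weights = PAGE_TYPE_WEIGHTS[page_type]
    return sum(sub_scores[signal] * weight for signal, weight in weights.items())

# Example sub-scores for a demo page.
demo_page = {"core_web_vitals": 70, "keyword_relevance": 85, "ppc_quality": 60, "engagement": 75}
print(f"Demo page quality score: {page_quality(demo_page, 'demo'):.1f}")
```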

Give a comprehensive view across channels

Display advertising and SEO rarely share performance metrics, yet both pursue the same goal – converting impressions into engaged users. 

Click-per-thousand impressions (CPTI) measures the number of clicks generated per 1,000 impressions, creating a shared language for evaluating content effectiveness across paid display and organic search.

For display teams, CPTI reveals which creative and targeting combinations drive engagement beyond vanity metrics like reach. 

For SEO teams, applying CPTI to search impressions (via Google Search Console) shows which pages and queries convert visibility into traffic – exposing content that ranks well but fails to earn clicks.

This shared metric allows teams to compare efficiency directly: if a blog post drives 50 clicks per 1,000 organic impressions while a display campaign with similar visibility generates only 15 clicks, the performance gap warrants investigation.

Figure: Reverse CPM – calculating when content has “paid for itself” and achieves ROI (e.g., 1 million monthly impressions should generate 1,000+ clicks). Concept inspired by Navah Hopkins.
Source: SMX Advanced Berlin presentation by the author.

Reverse CPM offers another useful lens. It measures how long content takes to “pay for itself” – the point where it reaches ROI. 

For example, if an article earns 1 million impressions in a month, it should deliver roughly 1,000 clicks. 

As generative AI continues to reshape traffic patterns, this metric will need refinement.
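
The arithmetic behind both metrics is simple. Here’s a small sketch using made-up numbers that mirror the comparison above – 50 clicks per 1,000 organic impressions versus 15 for display – plus the Reverse CPM-style check of roughly 1,000 clicks per million impressions.

```python
def cpti(clicks: int, impressions: int) -> float:
    """Click-per-thousand impressions: clicks generated per 1,000 impressions."""
    return clicks / impressions * 1_000

# Made-up numbers mirroring the comparison in the text above.
blog_cpti = cpti(clicks=5_000, impressions=100_000)     # organic page: 50 per 1,000
display_cpti = cpti(clicks=1_500, impressions=100_000)  # display campaign with similar visibility: 15
print(f"Blog CPTI: {blog_cpti:.0f} | Display CPTI: {display_cpti:.0f}")

# Reverse CPM-style check: at roughly 1 click per 1,000 impressions,
# 1 million monthly impressions should deliver about 1,000 clicks.
monthly_impressions = 1_000_000
expected_clicks = monthly_impressions / 1_000
print(f"Expected clicks from {monthly_impressions:,} impressions: {expected_clicks:,.0f}")
```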

Feedback loops

The most valuable insights emerge when SEO and PPC teams share operational intelligence rather than compete for credit. 

PPC provides quick keyword performance data to respond to market trends faster, while SEO uncovers emerging search intent that PPC can immediately act on. 

Together, these feedback loops create compound advantages.

SEO signals PPC should act on:

  • Google is testing a feature that impacts SEO rankings and traffic – PPC can maintain visibility during organic volatility.
  • SEO keyword research uncovers search intent, emerging keywords, seasonal patterns, and regional differences in query popularity.
  • Long-tail insights reveal shifting search intents after core updates, signaling format and content opportunities.

PPC signals SEO should act on: 

  • Some PPC keywords are effectively “dead.” They’ll never convert and are better handled by SEO.
  • PPC competitors bidding on brand keywords expose gaps in brand protection strategy.
  • PPC data highlights which product messaging, features, or offers resonate most with users, informing content priorities.
Figure: “Dead keywords in PPC” mind map – why keywords go dead (SERP feature dominance, zero-click searches, other visibility losses), the impact on campaigns (wasted budget, expected CTR shifts, Quality Score), and longer-term effects (account health, shifting user intent from SERP features, reduced competitiveness).

When both channels share intelligence, insights extend beyond marketing performance into product and business strategy.

  • Product managers exploring new features benefit from unified search data across both channels.
  • Joining Merchant Center and Google Search Console data in BigQuery provides a strong foundation for ecommerce attribution.

These feedback loops don’t require expensive tools – only an organizational commitment to regular cross-channel reviews in which teams share what’s working, what’s failing, and what deserves coordinated testing.

Optimizing the system, not the channel

Treat technical performance as shared infrastructure, not channel-specific optimization. 

Teams that implement unified Core Web Vitals standards, cross-channel attribution models, and distributed accountability systems will capture opportunities that siloed operations miss. 

As agentic AI adoption accelerates and digital marketing grows more complex, symbiotic SEO-PPC operations become a competitive advantage rather than a luxury.

The new SEO sales tactic: Selling the AI dream

Something’s shifting in how SEO services are being marketed, and if you’ve been shopping for help with search lately, you’ve probably noticed it.

AI search demand is real – but so is the spin

Over the past few months, “AI SEO” has emerged as a distinct service offering. 

Browse service provider websites, scroll through Fiverr, or sit through sales presentations, and you’ll see it positioned as something fundamentally new and separate from traditional SEO. 

Some are packaging it as “GEO” (generative engine optimization) or “AEO” (answer engine optimization), with separate pricing, distinct deliverables, and the implication that you need both this and traditional SEO to compete.

The pitch goes like this: 

  • “Traditional SEO handles Google and Bing. But now you need AI SEO for ChatGPT, Perplexity, Claude, and other AI search platforms. They work completely differently and require specialized optimization.”

The data helps explain why the industry is moving so quickly.

AI-sourced traffic jumped 527% year-over-year from early 2024 to early 2025. 

Service providers are responding to genuine market demand for AI search optimization.

But here’s what I’ve observed after evaluating what these AI SEO services actually deliver. 

Many of these so-called new tactics are the same SEO fundamentals – just repackaged under a different name.

As a marketer responsible for budget and results, understanding this distinction matters. 

It affects how you allocate resources, evaluate agency partners, and structure your search strategy. 

Let’s dig into what’s really happening so you can make smarter decisions about where to invest.

The AI SEO pitch: What you’re hearing in sales calls

The typical AI SEO sales deck has become pretty standardized. 

  • First comes the narrative about how search is fragmenting across platforms. 
  • Then, the impressive dashboard showing AI visibility metrics. 
  • Finally, the recommendation to treat AI optimization as a separate workstream, often with separate pricing.

Here are the most common claims I’m hearing.

‘AI search is fundamentally different and requires specialized optimization’ 

They’ll show you how ChatGPT, Perplexity, and Claude are changing search behavior, and they’re not wrong about that. 

Research shows that 82% of consumers agree that “AI-powered search is more helpful than traditional search engines,” signaling how search behavior is evolving.

‘You need to optimize for how AI platforms chunk and retrieve content’ 

The pitch emphasizes passage-level optimization, structured data, and Q&A formatting specifically for AI retrieval. 

They’ll discuss how AI values mentions and citations differently than backlinks and how entity recognition matters more than keywords.

‘Only 22% of marketers are monitoring AI visibility; you need to act now’

This creates urgency around a supposedly new practice that requires immediate investment.

The urgency is real. Only 22% of marketers have set up LLM brand visibility monitoring, but the question is whether this requires a separate “AI SEO” service or an expansion of your existing search strategy.

Understanding the rebranding trend

To be clear, the AI capabilities are real. What’s new is the positioning – familiar SEO practices rebranded to sound more revolutionary than they are.

When you examine what’s actually being recommended (passage-level content structure, semantic clarity, Q&A formatting, earning citations and mentions), you will find that these practices have been core to SEO for years. 

Google introduced passage ranking in 2020 and featured snippets back in 2014.

Research from Fractl, Search Engine Land, and MFour found that generative engine optimization “is based on similar value systems that advanced SEOs, content marketers, and digital PR teams are already experts in.”

Let me show you what I mean.

What you’re hearing: “AI-powered semantic analysis and predictive keyword intelligence.”

  • What’s actually happening: Keyword research using advanced tools to analyze search volume, competition, user intent, and content opportunities. The strategic fundamentals (understanding what your audience is searching for and why) haven’t changed.

What you’re hearing: “Machine learning content optimization that aligns with AI algorithms.”

  • What’s actually happening: Analyzing top-ranking content, understanding user intent, identifying content gaps, and creating comprehensive content. AI tools can accelerate analysis, which is valuable. But the strategic work (determining what topics matter for your business, how to position your expertise, and what content will drive conversions) still requires human insight.

What you’re hearing: “Entity-based authority building for AI platforms.”

  • What’s actually happening: Building quality mentions and citations, earning coverage from reputable sources, and establishing expertise in your industry. Authority building is inherently relationship-driven and time-dependent. No AI tool shortcuts to becoming a recognized expert in your space.

Dig deeper: AI search is booming, but SEO is still not dead

Where real differences exist (and why fundamentals still matter)

I want to be fair here. There’s genuine debate in the SEO community about whether optimizing for AI-powered search represents a distinct discipline or an evolution of existing practices.

The differences are real.

  • AI search handles queries differently from traditional search. 
  • Users write longer, conversational prompts rather than short keywords.
  • AI platforms use query fan-out to match multiple sub-queries. 
  • Optimization happens at the passage or chunk level rather than the page level. 
  • Authority signals shift from links and engagement to mentions and citations.

These differences affect execution, but the strategic foundation remains consistent.

You still need to:

  • Understand what users are trying to accomplish.
  • Create content demonstrating genuine expertise.
  • Build authority and credibility.
  • Ensure content is technically accessible.
  • Optimize for relevance and user intent.

And here’s something that reinforces the overlap.

SEO professionals recently discovered that ChatGPT’s Atlas browser directly uses Google search results. 

Even AI-powered search platforms are relying on traditional search infrastructure.

So yes, there are platform-specific tactics that matter. 

The question for you as a marketer isn’t whether differences exist (they do). 

The real question is whether those differences justify treating this as an entirely separate service with its own strategy and budget.

Or are they simply tactical adaptations of the same fundamental approach?

Dig deeper: GEO and SEO: How to invest your time and efforts wisely

The risk of chasing platform-specific tactics

The “separate AI SEO service” approach comes with a real risk.

It can shift focus toward short-term, platform-specific tactics at the expense of long-term fundamentals.

I’m seeing recommendations that feel remarkably similar to the black hat SEO tactics we saw a decade ago.

These tactics might work today, but they’re playing a dangerous game.

Dig deeper: Black hat GEO is real – Here’s why you should pay attention

AI platforms are still in their infancy. Their spam detection systems aren’t yet as mature as Google’s or Bing’s, but that will change, likely faster than many expect.

AI platforms like Perplexity are building their own search indexes (hundreds of billions of documents). 

They’ll need to develop the same core systems traditional search engines have: 

  • Site quality scoring.
  • Authority evaluation.
  • Anti-spam measures. 

They’re supposedly buying link data from third-party providers, recognizing that understanding authority requires signals beyond just content analysis.

The pattern is predictable

We’ve seen this with Google. 

In the early days, keyword stuffing and link schemes worked great.

Then, Google developed Panda and Penguin updates that devastated sites relying on those tactics. 

Overnight, sites lost 50-90% of their traffic.

The same thing will likely happen with AI platforms. 

Sites gaming visibility now with spammy tactics will face serious problems when these platforms implement stronger quality and spam detection. 

As one SEO veteran put it, “It works until it doesn’t.”

This is why fundamentals matter more than ever

Building around platform-specific tactics is like building on sand. 

Focus instead on fundamentals – creating valuable content, earning authority, demonstrating expertise, and optimizing for intent – and you’ll have something sustainable across platforms.

Where AI genuinely helps

I’m not anti-AI. Used well, it meaningfully improves SEO workflows and results.

AI excels at large-scale research and ideation – analyzing competitor content, spotting gaps, and mapping topic clusters in minutes.

For one client, it surfaced 73 subtopics we hadn’t fully considered. 

But human expertise was still essential to align those ideas with business goals and strategic priorities.

AI also transforms data analysis and workflow automation – from reporting and rank tracking to technical monitoring – freeing more time for strategy.

AI clearly helps. The real question is whether these AI offerings bring truly new strategies or familiar ones powered by better tools.

What to watch for when evaluating services

After working with clients to evaluate various service models, I’ve seen consistent patterns in proposals that overpromise and underdeliver.

  • They lead with technology, not strategy: If the conversation jumps immediately to tools and dashboards rather than starting with your business goals, that suggests a tools-first rather than strategy-first approach.
  • Vague explanations of their approach: Watch for responses about “proprietary algorithms” or “advanced machine learning” without concrete explanations of what specific problems this solves.
  • Focus on vanity metrics: “We generated 500 AI citations!” sounds impressive but doesn’t answer: Did qualified traffic increase? Did conversion rates improve? How did search contribute to revenue?
  • Case studies that focus on visibility, not business results: They might have increased AI mentions or improved rankings, but did it drive revenue growth? Did it increase qualified leads?

Questions to ask instead

When evaluating any service provider, ask:

  • How would you approach our business? Walk me through your strategic process. The best approaches start by understanding your business, not showcasing tools. If they jump immediately to AI tools or technical tactics without understanding your business context, that’s a red flag.
  • How do you determine content strategy and prioritization? Look for answers that balance data insights with business context and audience understanding, not just what AI tools suggest would perform well.
  • What specific results have you achieved for similar businesses? Push for concrete business metrics (revenue growth, lead generation, conversion improvements), not just traffic or ranking increases.
  • How do you integrate optimization across traditional search and AI platforms? This reveals whether they view these as separate disciplines requiring separate work or as interconnected parts of a unified search strategy.

What actually drives long-term success

After working in SEO for 20 years, through multiple algorithm updates and trend cycles, I keep coming back to the same fundamentals:

  • Deep audience understanding drives every strategic decision.
  • Quality and expertise still win (search algorithms are increasingly sophisticated at evaluating content quality).
  • Authority building takes time and authenticity (you can’t automate trust and credibility).
  • Business alignment drives meaningful results (rankings and AI citations are means to an end: revenue growth, customer acquisition, or whatever your primary business goals are).

Dig deeper: Thriving in AI search starts with SEO fundamentals

What sustainable SEO looks like in the AI era

AI is genuinely changing how we work in search marketing – and that’s mostly positive. 

The tools make us more efficient and enable analysis that wasn’t previously practical.

But AI only enhances good strategy. It doesn’t replace it. 

Fundamentals still matter – along with audience understanding, quality, and expertise.

Search behavior is fragmenting across Google, ChatGPT, Perplexity, and social platforms, but the principles that drive visibility and trust remain consistent.

Real advantage doesn’t come from the newest tools or the flashiest “GEO” tactics. 

It comes from a clear strategy, deep market understanding, strong execution of fundamentals, and smart use of technology to strengthen human expertise.

Don’t get distracted by hype or dismiss innovation. The balance lies in thoughtful AI integration within a solid strategic framework focused on business goals.

That’s what delivers sustainable results – whether people find you through Google, ChatGPT, or whatever comes next.

LLM optimization in 2026: Tracking, visibility, and what’s next for AI discovery


Marketing, technology, and business leaders today are asking an important question: how do you optimize for large language models (LLMs) like ChatGPT, Gemini, and Claude? 

LLM optimization is taking shape as a new discipline focused on how brands surface in AI-generated results and what can be measured today. 

For decision makers, the challenge is separating signal from noise – identifying the technologies worth tracking and the efforts that lead to tangible outcomes.

The discussion comes down to two core areas – and the timeline and work required to act on them:

  • Tracking and monitoring your brand’s presence in LLMs.
  • Improving visibility and performance within them.

Tracking: The foundation of LLM optimization

Just as SEO evolved through better tracking and measurement, LLM optimization will only mature once visibility becomes measurable. 

We’re still in a pre-Semrush/Moz/Ahrefs era for LLMs. 

Tracking is the foundation of identifying what truly works and building strategies that drive brand growth. 

Without it, everyone is shooting in the dark, hoping great content alone will deliver results.

The core challenges are threefold:

  • LLMs don’t publish query frequency or “search volume” equivalents.
  • Their responses vary subtly (or not so subtly) even for identical queries, due to probabilistic decoding and prompt context.
  • They depend on hidden contextual features (user history, session state, embeddings) that are opaque to external observers.

Why LLM queries are different

Traditional search behavior is repetitive – millions of identical phrases drive stable volume metrics. LLM interactions are conversational and variable. 

People rephrase questions in different ways, often within a single session. That makes pattern recognition harder with small datasets but feasible at scale. 

These structural differences explain why LLM visibility demands a different measurement model than traditional SEO or marketing analytics.

The leading method uses a polling-based model inspired by election forecasting.

The polling-based model for measuring visibility

A representative sample of 250–500 high-intent queries is defined for your brand or category, functioning as your population proxy. 

These queries are run daily or weekly to capture repeated samples from the underlying distribution of LLM responses.

Competitive mentions and citations metrics

Tracking tools record when your brand and competitors appear as citations (linked sources) or mentions (text references), enabling share of voice calculations across all competitors. 

Over time, aggregate sampling produces statistically stable estimates of your brand visibility within LLM-generated content.
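For illustration, here is a minimal sketch of the aggregation step, assuming the query sample has already been run through an LLM (via whichever tool or API you use) and each response stored as plain text; the brands and responses below are illustrative:

```python
# Minimal sketch: turn stored LLM responses into share-of-voice estimates.
# Assumes responses were collected elsewhere; brands and data below are illustrative.
from collections import Counter

BRANDS = ["Acme Analytics", "ExampleSoft", "Globex"]  # your brand plus competitors

# Each record: the query asked and the raw text the LLM returned for it.
responses = [
    {"query": "best seo reporting tools", "answer": "Acme Analytics and Globex are popular..."},
    {"query": "seo reporting for agencies", "answer": "Many agencies rely on ExampleSoft..."},
]

mention_counts = Counter()
for record in responses:
    text = record["answer"].lower()
    for brand in BRANDS:
        if brand.lower() in text:   # crude substring match; real tools use entity matching
            mention_counts[brand] += 1

total_mentions = sum(mention_counts.values()) or 1
for brand in BRANDS:
    share = mention_counts[brand] / total_mentions * 100
    print(f"{brand}: {mention_counts[brand]} mentions, {share:.1f}% share of voice")
```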

Early tools providing this capability include:

  • Profound.
  • Conductor.
  • OpenForge.
Early tools for LLM visibility tracking

Consistent sampling at scale transforms apparent randomness into interpretable signals – much like political polls deliver reliable forecasts despite individual variation.

Building a multi-faceted tracking framework

While share of voice paints a picture of your presence in the LLM landscape, it doesn’t tell the complete story. 

Just as keyword rankings show visibility but not clicks, LLM presence doesn’t automatically translate to user engagement. 

Brands need to understand how people interact with their content to build a compelling business case.

Because no single tool captures the entire picture, the best current approach layers multiple tracking signals:

  • Share of voice (SOV) tracking: Measure how often your brand appears as mentions and citations across a consistent set of high-value queries. This provides a benchmark to track over time and compare against competitors.
  • Referral tracking in GA4: Set up custom dimensions to identify traffic originating from LLMs. While attribution remains limited today, this data helps detect when direct referrals are increasing and signals growing LLM influence. A simple referrer-classification sketch follows this list.
  • Branded homepage traffic in Google Search Console: Many users discover brands through LLM responses, then search directly in Google to validate or learn more. This two-step discovery pattern is critical to monitor. When branded homepage traffic increases alongside rising LLM presence, it signals a strong causal connection between LLM visibility and user behavior. This metric captures the downstream impact of your LLM optimization efforts.
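Building on the GA4 referral point above, here is a minimal sketch of how LLM referrers might be classified; the referrer domains are illustrative and will change as AI products evolve:

```python
# Minimal sketch: classify referral traffic that likely originated from LLM interfaces.
# The referrer domains below are illustrative and will change as AI products evolve.
import re

LLM_REFERRER_PATTERN = re.compile(
    r"(chatgpt\.com|chat\.openai\.com|perplexity\.ai|gemini\.google\.com|copilot\.microsoft\.com)",
    re.IGNORECASE,
)

def classify_referrer(referrer_url: str) -> str:
    """Label a session's referrer as 'llm', 'other', or 'direct'."""
    if not referrer_url:
        return "direct"
    return "llm" if LLM_REFERRER_PATTERN.search(referrer_url) else "other"

# Example: rows exported from GA4 (e.g., via BigQuery) with a page_referrer field.
sessions = [
    {"page_referrer": "https://chatgpt.com/", "landing_page": "/pricing"},
    {"page_referrer": "https://www.google.com/", "landing_page": "/blog/guide"},
    {"page_referrer": "", "landing_page": "/"},
]

for s in sessions:
    print(s["landing_page"], "->", classify_referrer(s["page_referrer"]))
```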

Nobody has complete visibility into LLM impact on their business today, but these methods cover all the bases you can currently measure.

Be wary of any vendor or consultant promising complete visibility. That simply isn’t possible yet.

Understanding these limitations is just as important as implementing the tracking itself.

Because no perfect models exist yet, treat current tracking data as directional – useful for decisions, but not definitive.

Why mentions matter more than citations

Dig deeper: In GEO, brand mentions do what links alone can’t

Estimating LLM ‘search volume’

Measuring LLM impact is one thing. Identifying which queries and topics matter most is another.

Compared to SEO or PPC, marketers have far less visibility. While no direct search volume exists, new tools and methods are beginning to close the gap.

The key shift is moving from tracking individual queries – which vary widely – to analyzing broader themes and topics. 

The real question becomes: which areas is your site missing, and where should your content strategy focus?

To approximate relative volume, consider three approaches:

Correlate with SEO search volume

Start with your top-performing SEO keywords. 

If a keyword drives organic traffic and has commercial intent, similar questions are likely being asked within LLMs. Use this as your baseline.

Layer in industry adoption of AI

Estimate what percentage of your target audience uses LLMs for research or purchasing decisions:

  • High AI-adoption industries: Assume 20-25% of users leverage LLMs for decision-making.
  • Slower-moving industries: Start with 5-10%.

Apply these percentages to your existing SEO keyword volume. For example, a keyword with 25,000 monthly searches could translate to 1,250-6,250 LLM-based queries in your category.
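As a quick illustration of that arithmetic, here is a minimal sketch; the adoption percentages are the rough assumptions above, not measured values:

```python
# Minimal sketch: approximate LLM query volume from existing SEO keyword volume.
# The adoption rates are rough assumptions from the text, not measured values.
def estimate_llm_queries(monthly_searches: int, adoption_low: float, adoption_high: float) -> tuple[int, int]:
    """Return a (low, high) range of estimated monthly LLM-based queries."""
    return round(monthly_searches * adoption_low), round(monthly_searches * adoption_high)

# Full assumed range (5-25%) for the 25,000-search example above.
print(estimate_llm_queries(25_000, 0.05, 0.25))   # (1250, 6250)

# Slower-moving industry (5-10%) for the same keyword.
print(estimate_llm_queries(25_000, 0.05, 0.10))   # (1250, 2500)
```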

Using emerging inferential tools

New platforms are beginning to track query data through API-level monitoring and machine learning models. 

Accuracy isn’t perfect yet, but these tools are improving quickly. Expect major advancements in inferential LLM query modeling within the next year or two.

Optimizing for LLM visibility

The technologies that help companies identify what to improve are evolving quickly. 

While still imperfect, they’re beginning to form a framework that parallels early SEO development, where better tracking and data gradually turned intuition into science.

Optimization breaks down into two main questions:

  • What content should you create or update, and should you focus on quality content, entities, schema, FAQs, or something else?
  • How should you align these insights with broader brand and SEO strategies?

Identify what content to create or update

One of the most effective ways to assess your current position is to take a representative sample of high-intent queries that people might ask an LLM and see how your brand shows up relative to competitors. This is where the Share of Voice tracking tools we discussed earlier become invaluable.

These same tools can help answer your optimization questions:

  • Track who is being cited or mentioned for each query, revealing competitive positioning.
  • Identify which queries your competitors appear for that you don’t, highlighting content gaps.
  • Show which of your own queries you appear for and which specific assets are being cited, pinpointing what’s working.

From this data, several key insights emerge:

  • Thematic visibility gaps: By analyzing trends across many queries, you can identify where your brand underperforms in LLM responses. This paints a clear picture of areas needing attention. For example, you’re strong in SEO but not in PPC content. 
  • Third-party resource mapping: These tools also reveal which external resources LLMs reference most frequently. This helps you build a list of high-value third-party sites that contribute to visibility, guiding outreach or brand mention strategies. 
  • Blind spot identification: When cross-referenced with SEO performance, these insights highlight blind spots: topics or sources where your brand’s credibility and representation could improve.

Understand the overlap between SEO and LLM optimization

LLMs may be reshaping discovery, but SEO remains the foundation of digital visibility.

Across five competitive categories, brands ranking on Google’s first page appeared in ChatGPT answers 62% of the time – a clear but incomplete overlap between search and AI results.

That correlation isn’t accidental. 

Many retrieval-augmented generation (RAG) systems pull data from search results and expand it with additional context. 

The more often your content appears in those results, the more likely it is to be cited by LLMs.

Brands with the strongest share of voice in LLM responses are typically those that invested in SEO first. 

Strong technical health, structured data, and authority signals remain the bedrock for AI visibility.

What this means for marketers:

  • Don’t over-focus on LLMs at the expense of SEO. AI systems still rely on clean, crawlable content and strong E-E-A-T signals.
  • Keep growing organic visibility through high-authority backlinks and consistent, high-quality content.
  • Use LLM tracking as a complementary lens to understand new research behaviors, not a replacement for SEO fundamentals.

Redefine on-page and off-page strategies for LLMs

Just as SEO has both on-page and off-page elements, LLM optimization follows the same logic – but with different tactics and priorities.

Off-page: The new link building

Most industries show a consistent pattern in the types of resources LLMs cite:

  • Wikipedia is a frequent reference point, making a verified presence there valuable.
  • Reddit often appears as a trusted source of user discussion.
  • Review websites and “best-of” guides are commonly used to inform LLM outputs.

Citation patterns across ChatGPT, Gemini, Perplexity, and Google’s AI Overviews show consistent trends, though each engine favors different sources.

This means traditional link acquisition strategies – guest posts, PR placements, or brand mentions in review content – will likely evolve. 

Instead of chasing links anywhere, brands should increasingly target:

  • Pages already being cited by LLMs in their category.
  • Reviews or guides that evaluate their product category.
  • Articles where branded mentions reinforce entity associations.

The core principle holds: brands gain the most visibility by appearing in sources LLMs already trust – and identifying those sources requires consistent tracking.

On-page: What your own content reveals

The same technologies that analyze third-party mentions can also reveal which first-party assets – content on your own website – are being cited by LLMs. 

This provides valuable insight into what type of content performs well in your space.

For example, these tools can identify:

  • What types of competitor content are being cited (case studies, FAQs, research articles, etc.).
  • Where your competitors show up but you don’t.
  • Which of your own pages exist but are not being cited.

From there, three key opportunities emerge:

  • Missing content: Competitors are cited because they cover topics you haven’t addressed. This represents a content gap to fill.
  • Underperforming content: You have relevant content, but it isn’t being referenced. Optimization – improving structure, clarity, or authority – may be needed.
  • Content enhancement opportunities: Some pages only require inserting specific Q&A sections or adding better-formatted information rather than full rewrites.

Leverage emerging technologies to turn insights into action

The next major evolution in LLM optimization will likely come from tools that connect insight to action.

Early solutions already use vector embeddings of your website content to compare it against LLM queries and responses. This allows you to:

  • Detect where your coverage is weak.
  • See how well your content semantically aligns with real LLM answers.
  • Identify where small adjustments could yield large visibility gains.
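Here is a minimal sketch of the underlying idea; the embed() helper is a toy stand-in for whatever embedding model or API you actually use, and the threshold and URLs are illustrative:

```python
# Minimal sketch: compare your page content against LLM answers in embedding space.
# embed() is a toy stand-in for a real embedding model or API.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Toy stand-in for a real embedding model: hashed bag-of-words vector."""
    vec = np.zeros(256)
    for token in text.lower().split():
        vec[hash(token) % 256] += 1.0
    return vec

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def coverage_report(page_texts: dict[str, str], llm_answer: str, threshold: float = 0.75) -> None:
    """Flag pages that align weakly with what the LLM is actually answering."""
    answer_vec = embed(llm_answer)
    for url, text in page_texts.items():
        score = cosine_similarity(embed(text), answer_vec)
        status = "aligned" if score >= threshold else "weak coverage"
        print(f"{url}: similarity {score:.2f} ({status})")

pages = {
    "/blog/what-is-llm-optimization": "LLM optimization means improving how brands appear in AI answers.",
    "/products/widgets": "Our widgets ship in three sizes and two colors.",
}
coverage_report(pages, "LLM optimization focuses on how brands surface in AI-generated answers.")
```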

Current tools mostly generate outlines or recommendations.

The next frontier is automation – systems that turn data into actionable content aligned with business goals.

Timeline and expected results

While comprehensive LLM visibility typically builds over 6-12 months, early results can emerge faster than traditional SEO. 

The advantage: LLMs can incorporate new content within days rather than waiting months for Google’s crawl and ranking cycles. 

However, the fundamentals remain unchanged.

Quality content creation, securing third-party mentions, and building authority still require sustained effort and resources. 

Think of LLM optimization as having a faster feedback loop than SEO, but requiring the same strategic commitment to content excellence and relationship building that has always driven digital visibility.

From SEO foundations to LLM visibility

LLM traffic remains small compared to traditional search, but it’s growing fast.

A major shift in resources would be premature, but ignoring LLMs would be shortsighted. 

The smartest path is balance: maintain focus on SEO while layering in LLM strategies that address new ranking mechanisms.

Like early SEO, LLM optimization is still imperfect and experimental – but full of opportunity. 

Brands that begin tracking citations, analyzing third-party mentions, and aligning SEO with LLM visibility now will gain a measurable advantage as these systems mature.

In short:

  • Identify the third-party sources most often cited in your niche and analyze patterns across AI engines.
  • Map competitor visibility for key LLM queries using tracking tools.
  • Audit which of your own pages are cited (or not) – high Google rankings don’t guarantee LLM inclusion.
  • Continue strong SEO practices while expanding into LLM tracking – the two work best as complementary layers.

Approach LLM optimization as both research and brand-building.

Don’t abandon proven SEO fundamentals. Rather, extend them to how AI systems discover, interpret, and cite information.

How to balance speed and credibility in AI-assisted content creation


AI tools can help teams move faster than ever – but speed alone isn’t a strategy.

As more marketers rely on LLMs to help create and optimize content, credibility becomes the true differentiator. 

And as AI systems decide which information to trust, quality signals like accuracy, expertise, and authority matter more than ever.

It’s not just what you write but how you structure it. AI-driven search rewards clear answers, strong organization, and content it can easily interpret.

This article highlights key strategies for smarter AI workflows – from governance and training to editorial oversight – so your content remains accurate, authoritative, and unmistakably human.

Create an AI usage policy

More than half of marketers are using AI for creative endeavors like content creation, IAB reports.

Still, AI policies are not always the norm. 

Your organization will benefit from clear boundaries and expectations. Creating policies for AI use ensures consistency and accountability.

Only 7% of companies using genAI in marketing have a full-blown governance framework, according to SAS.

However, 63% invest in creating policies that govern how generative AI is used across the organization. 

Source: “Marketers and GenAI: Diving Into the Shallow End,” SAS

Even a simple, one-page policy can prevent major mistakes and unify efforts across teams that may be doing things differently.

As Cathy McPhillips, chief growth officer at the Marketing Artificial Intelligence Institute, puts it:

  • “If one team uses ChatGPT while others work with Jasper or Writer, for instance, governance decisions can become very fragmented and challenging to manage. You’d need to keep track of who’s using which tools, what data they’re inputting, and what guidance they’ll need to follow to protect your brand’s intellectual property.” 

So drafting an internal policy sets expectations for AI use in the organization (or at least the creative teams).

When creating a policy, consider the following guidelines: 

  • What the review process for AI-created content looks like. 
  • When and how to disclose AI involvement in content creation. 
  • How to protect proprietary information (not uploading confidential or client information into AI tools).
  • Which AI tools are approved for use, and how to request access to new ones.
  • How to log or report problems.

Logically, the policy will evolve as the technology and regulations change. 

Keep content anchored in people-first principles

It can be easy to fall into the trap of believing AI-generated content is good because it reads well. 

LLMs are great at predicting the next best sentence and making it sound convincing. 

But reviewing each sentence, paragraph, and the overall structure with a critical eye is absolutely necessary.

Think: Would an expert say it like that? Would you normally write like that? Does it offer the depth of human experience that it should?

“People-first content,” as Google puts it, is really just thinking about the end user and whether what you are putting into the world is adding value. 

Any LLM can create mediocre content, and any marketer can publish it. And that’s the problem. 

People-first content aligns with Google’s E-E-A-T framework, which outlines the characteristics of high-quality, trustworthy content.

E-E-A-T isn’t a novel idea, but it’s increasingly relevant in a world where AI systems need to determine if your content is good enough to be included in search.

According to evidence in U.S. v. Google LLC, quality remains central to ranking:

  • “RankEmbed and its later iteration RankEmbedBERT are ranking models that rely on two main sources of data: [redacted]% of 70 days of search logs plus scores generated by human raters and used by Google to measure the quality of organic search results.” 
Source: U.S. v. Google LLC court documentation

It suggests that the same quality factors reflected in E-E-A-T likely influence how AI systems assess which pages are trustworthy enough to ground their answers.

So what does E-E-A-T look like practically when working with AI content? You can:

  • Review Google’s list of questions related to quality content: Keep these in mind before and after content creation.
  • Demonstrate firsthand experience through personal insights, examples, and practical guidance: Weave these insights into AI output to add a human touch.
  • Use reliable sources and data to substantiate claims: If you’re using LLMs for research, fact-check in real time to ensure the best sources. 
  • Insert authoritative quotes either from internal stakeholders or external subject matter experts: Quoting internal folks builds brand credibility while external sources lend authority to the piece.
  • Create detailed author bios: Include:
    • Relevant qualifications, certifications, awards, and experience.
    • Links to social media, academic papers (if relevant), or other authoritative works.
  • Add schema markup to articles to clarify the content further: Schema can clarify content in a way that AI-powered search can better understand. (A small JSON-LD example follows below.)
  • Become the go-to resource on the topic: Create a depth and breadth of material on the website that’s organized in a search-friendly, user-friendly manner. You can learn more in my article on organizing content for AI search.
Source: “Creating helpful, reliable, people-first content,” Google Search Central
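As one small example of what that markup can look like, here is a minimal sketch that emits Article JSON-LD with author details; every name and URL in it is an illustrative placeholder:

```python
# Minimal sketch: emit Article JSON-LD with author details for an AI-assisted post.
# All names and URLs are illustrative placeholders.
import json

article_schema = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "How to balance speed and credibility in AI-assisted content",
    "author": {
        "@type": "Person",
        "name": "Jane Example",
        "jobTitle": "Senior Content Strategist",
        "sameAs": ["https://www.linkedin.com/in/jane-example"],
    },
    "datePublished": "2025-11-01",
    "publisher": {"@type": "Organization", "name": "Example Co"},
}

print('<script type="application/ld+json">')
print(json.dumps(article_schema, indent=2))
print("</script>")
```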

Dig deeper: Writing people-first content: A process and template

Train the LLM 

LLMs are trained on vast amounts of data – but they’re not trained on your data. 

Put in the work to train the LLM, and you can get better results and more efficient workflows. 

Here are some ideas.

Maintain a living style guide

If you already have a corporate style guide, great – you can use that to train the model. If not, create a simple one-pager that covers things like:

  • Audience personas.
  • Voice traits that matter.
  • Reading level, if applicable.
  • The do’s and don’ts of phrases and language to use. 
  • Formatting rules such as SEO-friendly headers, sentence length, paragraph length, bulleted list guidelines, etc. 

You can refresh this as needed and use it to further train the model over time. 

Build a prompt kit  

Put together a packet of instructions that prompts the LLM. Here are some ideas to start with: 

  • The style guide
    • This covers everything from the audience personas to the voice style and formatting.
    • If you’re training a custom GPT, you don’t need to do this every time, but it may need tweaking over time. 
  • A content brief template
    • This can be an editable document that’s filled in for each content project and includes things like:
      • The goal of the content.
      • The specific audience.
      • The style of the content (news, listicle, feature article, how-to).
      • The role (who the LLM is writing as).
      • The desired action or outcome.
  • Content examples
    • Upload a handful of the best content examples you have to train the LLM. This can be past articles, marketing materials, transcripts from videos, and more. 
    • If you create a custom GPT, you’ll do this at the outset, but additional examples of content may be uploaded, depending on the topic. 
  • Sources
    • Train the model on the preferred third-party sources of information you want it to pull from, in addition to its own research. 
    • For example, if you want it to source certain publications in your industry, compile a list and upload it to the prompt.  
    • As an additional layer, prompt the model to automatically include any third-party sources after every paragraph to make fact-checking easier on the fly.
  • SEO prompts
    • Consider building SEO into the structure of the content from the outset.  
    • Early observations of Google’s AI Mode suggest that clearly structured, well-sourced content is more likely to be referenced in AI-generated results.

With that in mind, you can put together a prompt checklist that includes:

  • Crafting a direct answer in the first one to two sentences, then expanding with context.
  • Covering the main question, but also potential subquestions (“fan-out” queries) that the system may generate (for example, questions related to comparisons, pros/cons, alternatives, etc.).
  • Chunking content into many subsections, with each subsection answering a potential fan-out query to completion.
  • Being an expert source of information in each individual section of the page, meaning it’s a passage that can stand on its own.
  • Providing clear citations and semantic richness (synonyms, related entities) throughout. 
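One way to keep those instructions consistent is to assemble them programmatically. Here is a minimal sketch, assuming your style guide and checklist live as plain-text snippets; all content below is illustrative:

```python
# Minimal sketch: assemble a reusable prompt from a style guide, brief, and SEO checklist.
# All content below is illustrative placeholder text.
STYLE_GUIDE = "Voice: plain, expert, no hype. Reading level: grade 8. Avoid superlatives."

SEO_CHECKLIST = [
    "Open with a direct one-to-two sentence answer, then expand with context.",
    "Cover likely fan-out subquestions (comparisons, pros/cons, alternatives).",
    "Break content into self-contained subsections that each fully answer one question.",
    "Include clear citations and related entities/synonyms throughout.",
]

def build_prompt(topic: str, audience: str, goal: str) -> str:
    """Combine the style guide, brief details, and SEO checklist into one prompt."""
    checklist = "\n".join(f"- {item}" for item in SEO_CHECKLIST)
    return (
        f"You are writing as a subject-matter expert for {audience}.\n"
        f"Topic: {topic}\nGoal: {goal}\n\n"
        f"Style guide:\n{STYLE_GUIDE}\n\n"
        f"Structural requirements:\n{checklist}\n"
    )

print(build_prompt("LLM visibility tracking", "in-house SEO leads", "educate and earn citations"))
```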

Dig deeper: Advanced AI prompt engineering strategies for SEO

Create custom GPTs or explore RAG 

A custom GPT is a personalized version of ChatGPT that’s trained on your materials so it can better create in your brand voice and follow brand rules. 

It mostly remembers tone and format, but that doesn’t guarantee the accuracy of output beyond what’s uploaded.

Some companies are exploring RAG (retrieval-augmented generation) to further train LLMs on the company’s own knowledge base. 

RAG connects an LLM to a private knowledge base, retrieving relevant documents at query time so the model can ground its responses in approved information.

While custom GPTs are easy, no-code setups, RAG implementation is more technical – but there are companies/technologies out there that can make it easier to implement. 

That’s why GPTs tend to work best for small or medium-scale projects or for non-technical teams focused on maintaining brand consistency.

Create a custom GPT in ChatGPT

RAG, on the other hand, is an option for enterprise-level content generation in industries where accuracy is critical and information changes frequently.

Run an automated self-review

Create parameters so the model can self-assess the content before further editorial review. You can create a checklist of things to prompt it.

For example:

  • “Is the advice helpful, original, people-first?” (Perhaps using Google’s list of questions from its helpful content guidance.) 
  • “Is the tone and voice completely aligned with the style guide?” 

Have an established editing process 

Even the best AI workflow still depends on trained editors and fact-checkers. This human layer of quality assurance protects accuracy, tone, and credibility.

Editorial training

About 33% of content writers and 24% of marketing managers added AI skills to their LinkedIn profiles in 2024.

Writers and editors need to continue to upskill in the coming year, and, according to the Microsoft 2025 annual Work Trend Index, AI skilling is the top priority.  

Microsoft 2025 Annual Work Trend Index
Source: 2025 Microsoft Work Trend Index Annual Report

Professional training creates baseline knowledge so your team gets up to speed faster and can confidently handle outputs consistently.

This includes training on how to effectively use LLMs and how to best create and edit AI content.

In addition, training content teams on SEO helps them build best practices into prompts and drafts.

Editorial procedures

Ground your AI-assisted content creation in editorial best practices to ensure the highest quality. 

This might include:

  • Identifying the parts of the content creation workflow that are best suited for LLM assistance.
  • Conducting an editorial meeting to sign off on topics and outlines. 
  • Drafting the content.
  • Performing the structural edit for clarity and flow, then copyediting for grammar and punctuation.
  • Getting sign-off from stakeholders.  
AI editorial process

The AI editing checklist

Build a checklist to use during the review process for quality assurance. Here are some ideas to get you started:

  • Every claim, statistic, quote, or date is accompanied by a citation for fact-checking accuracy.
  • All facts are traceable to credible, approved sources.
  • Outdated statistics (more than two years old) are replaced with fresh insights. 
  • Draft meets the style guide’s voice guidelines and tone definitions. 
  • Content adds valuable, expert insights rather than being vague or generic.
  • For thought leadership, ensure the author’s perspective is woven throughout.
  • Draft is run through an AI detector, aiming for a conservative score of 5% or less AI-generated content. 
  • Draft aligns with brand values and meets internal publication standards.
  • Final draft includes explicit disclosure of AI involvement when required (client-facing/regulatory).

Grounding AI content in trust and intent

AI is transforming how we create, but it doesn’t change why we create.

Every policy, workflow, and prompt should ultimately support one mission: to deliver accurate, helpful, and human-centered content that strengthens your brand’s authority and improves your visibility in search. 

Dig deeper: An AI-assisted content process that outperforms human-only copy

Why a lower CTR can be better for your PPC campaigns

Why a lower CTR can be better for your Google Ads campaigns

Many PPC advertisers obsess over click-through rates, using them as a quick measure of ad performance.

But CTR alone doesn’t tell the whole story – what matters most is what happens after the click. That’s where many campaigns go wrong.

The problem with chasing high CTRs

Most advertisers assume the ad with the highest CTR is the best one. It should have a high Quality Score and attract lots of clicks.

However, lower CTR ads often outperform higher CTR ads in terms of total conversions and revenue.

If all I cared about was CTR, then I could write an ad:

  • “Free money.”
  • “Claim your free money today.”
  • “No strings attached.”

That ad would get an impressive CTR for many keywords, and I’d go out of business pretty quickly, giving away free money. 

When creating ads, we must consider:

  • The type of searchers we want to attract.
  • Whether those users are qualified.
  • The expectations we set for the landing page.

I can take my free money ad and refine it:

  • “Claim your free money.”
  • “Explore college scholarships.”
  • “Download your free guide.”

I’ve now:

  • Told searchers they can get free money for college through scholarships if they download a guide.
  • Narrowed down my audience to people who are willing to apply for scholarships and willing to download a guide, presumably in exchange for some information.

If you focus solely on CTR and don’t consider attracting the right audience, your advertising will suffer. 

While this sentiment applies to both B2C and B2B companies, B2B companies must be exceptionally aware of how their ads appear to consumers versus business searchers. 

B2B companies must pre-qualify searchers

If you are advertising for a B2B company, you’ll often notice that CTR and conversion rates have an inverse relationship. As CTR increases, conversion rates decrease.

The most common reason for this phenomenon is that many B2B keywords are searched by both consumers and businesses. 

B2B companies must try to show that their products are for businesses, not consumers.

For instance, “safety gates” is a common search term. 

The majority of people looking to buy a safety gate are consumers who want to keep pets or babies out of rooms or away from stairs. 

However, safety gates and railings are important for businesses with factories, plants, or industrial sites. 

These two ads are both for companies that sell safety gates. The first ad’s headlines for Uline could be for a consumer or a business. 

It’s not until you look at the description that you realize this is for mezzanines and catwalks, which is something consumers don’t have in their homes. 

As many searchers do not read descriptions, this ad will attract both B2B and B2C searchers. 

OSHA compliance - Google Ads

The second ad mentions Industrial in the headline and follows that up with a mention of OSHA compliance in the description and the sitelinks. 

While both ads promote similar products, the second one will achieve a better conversion rate because it speaks to a single audience. 

We have a client who specializes in factory parts, and when we graph their conversion rates by Quality Score, we can see that as their Quality Score increases, their conversion rates decrease. 

They review their keywords and ads whenever any term that could attract either B2B or B2C searchers reaches a Quality Score of 5 or higher. 

This same logic does not apply to B2B-only search terms. 

Searchers looking for B2B services and products often include jargon or qualifying statements in their queries. 

For those terms, B2B advertisers don’t have to spend ad copy characters weeding out B2C consumers and can focus their ads solely on B2B searchers.

How to balance CTR and conversion rates

As you are testing various ads to find your best pre-qualifying statements, it can be tricky to examine the metrics. Which one of these would be your best ad?

  • 15% CTR, 3% conversion rate.
  • 10% CTR, 7% conversion rate.
  • 5% CTR, 11% conversion rate.

When examining mixed metrics like CTR and conversion rate, we can use additional metrics to identify our best ads. My two favorites are:

  • Conversion per impression (CPI): This is a simple formula dividing your conversions by the number of impressions (conversions/impressions). 
  • Revenue per impression (RPI): If you have variable checkout amounts, you can instead use your revenue metrics to decide your best ads by dividing your revenue by your impressions (revenue/impressions).

You can also multiply the results by 1,000 to make the numbers easier to digest instead of working with many decimal points. So, we might write: 

  • CPI = (conversions/impressions) x 1,000 

By using impression metrics, you can find the opportunity for a given set of impressions. 

CTR | Conversion rate | Impressions | Clicks | Conversions | CPI
15% | 3% | 5,000 | 750 | 22.5 | 4.5
10% | 7% | 4,000 | 400 | 28 | 7
5% | 11% | 4,500 | 225 | 24.75 | 5.5

By doing some simple math, we can see that option 2, with a 10% CTR and a 7% conversion rate, gives us the most total conversions.
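Here is a minimal sketch of that math, using the same three hypothetical ads:

```python
# Minimal sketch: compute conversions per impression (CPI) for the three hypothetical ads above.
ads = [
    {"name": "Ad A", "impressions": 5_000, "ctr": 0.15, "cvr": 0.03},
    {"name": "Ad B", "impressions": 4_000, "ctr": 0.10, "cvr": 0.07},
    {"name": "Ad C", "impressions": 4_500, "ctr": 0.05, "cvr": 0.11},
]

for ad in ads:
    clicks = ad["impressions"] * ad["ctr"]
    conversions = clicks * ad["cvr"]
    cpi = conversions / ad["impressions"] * 1_000   # conversions per 1,000 impressions
    print(f'{ad["name"]}: {conversions:.2f} conversions, CPI {cpi:.1f}')

# Ad B (10% CTR, 7% conversion rate) wins on both total conversions and CPI.
```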

Dig deeper: CRO for PPC: Key areas to optimize beyond landing pages

Focus on your ideal customers

A good CTR helps bring more people to your website, increases your audience size, and can influence your Quality Scores.

However, high CTR ads can easily attract the wrong audience, leading you to waste your budget.

As you are creating headlines, consider your audience. 

  • Who are they? 
  • Do people outside your audience search for your keywords?
    • How do you dissuade users who don’t fit your audience from clicking on your ads? 
  • How do you attract your qualified audience?
  • Are your ads setting proper landing page expectations?

By considering each of these questions as you create ads, you can find ads that speak to the type of users you want to attract to your site. 

These ads rarely have your highest CTRs. Instead, they balance the appeal of a strong CTR with pre-qualifying statements that ensure the clicks you receive have the potential to turn into your next customer. 

The agentic web is here: Why NLWeb makes schema your greatest SEO asset


The web’s purpose is shifting. Once a link graph – a network of pages for users and crawlers to navigate – it’s rapidly becoming a queryable knowledge graph.

For technical SEOs, that means the goal has evolved from optimizing for clicks to optimizing for visibility and even direct machine interaction.

Enter NLWeb – Microsoft’s open-source bridge to the agentic web

At the forefront of this evolution is NLWeb (Natural Language Web), an open-source project developed by Microsoft. 

NLWeb simplifies the creation of natural language interfaces for any website, allowing publishers to transform existing sites into AI-powered applications where users and intelligent agents can query content conversationally – much like interacting with an AI assistant.

Developers suggest NLWeb could play a role similar to HTML in the emerging agentic web.

Its open-source, standards-based design makes it technology-agnostic, ensuring compatibility across vendors and large language models (LLMs). 

This positions NLWeb as a foundational framework for long-term digital visibility.

Schema.org is your knowledge API: Why data quality is the NLWeb foundation

NLWeb proves that structured data isn’t just an SEO best practice for rich results – it’s the foundation of AI readiness. 

Its architecture is designed to convert a site’s existing structured data into a semantic, actionable interface for AI systems. 

In the age of NLWeb, a website is no longer just a destination. It’s a source of information that AI agents can query programmatically.

The NLWeb data pipeline

The technical requirements confirm that a high-quality schema.org implementation is the primary key to entry.

Data ingestion and format

The NLWeb toolkit begins by crawling the site and extracting the schema markup. 

The schema.org JSON-LD format is the preferred and most effective input for the system. 

This means the protocol consumes every detail, relationship, and property defined in your schema, from product types to organization entities. 

For any data not in JSON-LD, such as RSS feeds, NLWeb is engineered to convert it into schema.org types for effective use.
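As a rough illustration of that first step, here is a minimal sketch that pulls JSON-LD blocks out of raw HTML; it uses a naive regex for brevity and is not NLWeb’s actual implementation:

```python
# Minimal sketch: pull schema.org JSON-LD blocks out of a page's HTML.
# A naive regex is used for brevity; production extraction would use a proper HTML parser.
import json
import re

JSON_LD_RE = re.compile(
    r'<script[^>]*type="application/ld\+json"[^>]*>(.*?)</script>',
    re.DOTALL | re.IGNORECASE,
)

def extract_json_ld(html: str) -> list[dict]:
    blocks = []
    for match in JSON_LD_RE.findall(html):
        try:
            blocks.append(json.loads(match))
        except json.JSONDecodeError:
            continue  # skip malformed markup rather than failing the crawl
    return blocks

html = """<html><head>
<script type="application/ld+json">
{"@context": "https://schema.org", "@type": "Product", "name": "Example Widget"}
</script>
</head></html>"""

for block in extract_json_ld(html):
    print(block["@type"], "-", block.get("name"))
```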

Semantic storage

Once collected, this structured data is stored in a vector database. This element is critical because it moves the interaction beyond traditional keyword matching. 

Vector databases represent text as mathematical vectors, allowing the AI to search based on semantic similarity and meaning. 

For example, the system can understand that a query using the term “structured data” is conceptually the same as content marked up with “schema markup.” 

This capacity for conceptual understanding is absolutely essential for enabling authentic conversational functionality.

Protocol connectivity

The final layer is the connectivity provided by the Model Context Protocol (MCP). 

Every NLWeb instance operates as an MCP server, an emerging standard for packaging and consistently exchanging data between various AI systems and agents. 

MCP is currently the most promising path forward for ensuring interoperability in the highly fragmented AI ecosystem.

The ultimate test of schema quality

Since NLWeb relies entirely on crawling and extracting schema markup, the precision, completeness, and interconnectedness of your site’s content knowledge graph determine success.

The key challenge for SEO teams is addressing technical debt. 

Custom, in-house solutions to manage AI ingestion are often high-cost, slow to adopt, and create systems that are difficult to scale or incompatible with future standards like MCP. 

NLWeb addresses the protocol’s complexity, but it cannot fix faulty data. 

If your structured data is poorly maintained, inaccurate, or missing critical entity relationships, the resulting vector database will store flawed semantic information. 

This leads inevitably to suboptimal outputs, potentially resulting in inaccurate conversational responses or “hallucinations” by the AI interface.

Robust, entity-first schema optimization is no longer just a way to win a rich result; it is the fundamental barrier to entry for the agentic web. 

By leveraging the structured data you already have, NLWeb allows you to unlock new value without starting from scratch, thereby future-proofing your digital strategy.

NLWeb vs. llms.txt: Protocol for action vs. static guidance

The need for AI crawlers to process web content efficiently has led to multiple proposed standards. 

A comparison between NLWeb and the proposed llms.txt file illustrates a clear divergence between dynamic interaction and passive guidance.

The llms.txt file is a proposed static standard designed to improve the efficiency of AI crawlers by:

  • Providing a curated, prioritized list of a website’s most important content – typically formatted in markdown.
  • Attempting to solve the legitimate technical problems of complex, JavaScript-loaded websites and the inherent limitations of an LLM’s context window.

In sharp contrast, NLWeb is a dynamic protocol that establishes a conversational API endpoint. 

Its purpose is not just to point to content, but to actively receive natural language queries, process the site’s knowledge graph, and return structured JSON responses using schema.org. 

NLWeb fundamentally changes the relationship from “AI reads the site” to “AI queries the site.”

Attribute | NLWeb | llms.txt
Primary goal | Enables dynamic, conversational interaction and structured data output | Improves crawler efficiency and guides static content ingestion
Operational model | API/protocol (active endpoint) | Static text file (passive guidance)
Data format used | Schema.org JSON-LD | Markdown
Adoption status | Open project; connectors available for major LLMs, including Gemini, OpenAI, and Anthropic | Proposed standard; not adopted by Google, OpenAI, or other major LLMs
Strategic advantage | Unlocks existing schema investment for transactional AI uses, future-proofing content | Reduces computational cost for LLM training/crawling

The market’s preference for dynamic utility is clear. Despite addressing a real technical challenge for crawlers, llms.txt has failed to gain traction so far. 

NLWeb’s functional superiority stems from its ability to enable richer, transactional AI interactions.

It allows AI agents to dynamically reason about and execute complex data queries using structured schema output.

The strategic imperative: Mandating a high-quality schema audit

While NLWeb is still an emerging open standard, its value is clear. 

It maximizes the utility and discoverability of specialized content that often sits deep in archives or databases. 

This value is realized through operational efficiency and stronger brand authority, rather than immediate traffic metrics.

Several organizations are already exploring how NLWeb could let users ask complex questions and receive intelligent answers that synthesize information from multiple resources – something traditional search struggles to deliver. 

The ROI comes from reducing user friction and reinforcing the brand as an authoritative, queryable knowledge source.

For website owners and digital marketing professionals, the path forward is undeniable: mandate an entity-first schema audit.

Because NLWeb depends on schema markup, technical SEO teams must prioritize auditing existing JSON-LD for integrity, completeness, and interconnectedness. 

Minimalist schema is no longer enough – optimization must be entity-first.

Publishers should ensure their schema accurately reflects the relationships among all entities, products, services, locations, and personnel to provide the context necessary for precise semantic querying. 

The transition to the agentic web is already underway, and NLWeb offers the most viable open-source path to long-term visibility and utility. 

It’s a strategic necessity to ensure your organization can communicate effectively as AI agents and LLMs begin integrating conversational protocols for third-party content interaction.

Could Galaxy S26 Plus delay One UI 8.5 Beta launch?

Samsung is reportedly preparing for One UI 8.5, which could debut alongside the Galaxy S26 series early next year. However, recent reports suggest the company might be running late with the Galaxy S26 launch, possibly pushing the event beyond January 2026.

The delay appears to be connected to changes in Samsung’s phone lineup. Earlier rumors suggested the regular Galaxy S26 might be renamed “Pro” and that a slim “Edge” model would replace the “Plus.”

Now, those plans are reportedly canceled. Samsung is going back to the familiar lineup – Galaxy S26, Galaxy S26 Plus, and Galaxy S26 Ultra. The Plus model is back, while the Edge and Pro names are gone.

This could also affect One UI 8.5. Since Galaxy S26 Plus development is running late, the release of the One UI 8.5 Beta may also be delayed. If the Galaxy Unpacked event is postponed to late February or early March, users will also have to wait longer for the next major update.

Samsung Galaxy S26 Series

Phones in picture – Galaxy S25 Ultra, Plus and vanilla

However, the One UI 8.5 Beta program might still start in late November, giving users an early look at new features. But if the phone launch is postponed, the beta could run for several months before the official release, which may feel long for eager users. Alternatively, Samsung might delay the One UI 8.5 beta program itself.

Despite these delays, the changes could be beneficial. Samsung seems focused on improving hardware and software, with performance and camera upgrades expected in the next series. Returning to a simpler naming system also makes the lineup easier to understand.

While fans might be disappointed by the delay, it could mean a more polished experience when new phones and software finally launch. Samsung has not confirmed any dates yet, so users will have to wait for official announcements.

The return of Galaxy S26 Plus and the lineup reshuffle may push back the One UI 8.5 beta, but it could result in better phones and a smoother software update for users in 2026. Stay tuned.



Your ads are dying: How to spot and stop creative fatigue before it tanks performance


The death of an ad, like the end of the world, doesn’t happen with a bang but with a whimper. 

If you’re paying attention, you’ll notice the warning signs: click-through rate (CTR) slips, engagement falls, and cost-per-click (CPC) creeps up. 

If you’re not, one day your former top performer is suddenly costing you money.

Creative fatigue – the decline in ad performance caused by overexposure or audience saturation – is often the culprit. 

It’s been around as long as advertising itself, but in an era where platforms control targeting, bidding, and even creative testing, it’s become one of the few variables marketers can still influence.

This article explains how to spot early signs of fatigue across PPC platforms before your ROI turns sour, and how to manually refresh your creative in the age of AI-driven optimization. 

We’ll look at four key factors: 

  • Ad quality.
  • Creative lifecycle.
  • Audience saturation.
  • Platform dynamics.

1. Ad quality

Low-quality ads burn out much faster than high-quality ones. 

To stand the test of time, your creative needs to be both relevant and resonant – it has to connect with the viewer. 

But it’s important to remember that creative fatigue isn’t the same as bad creative. Even a brilliant ad will wear out if it’s shown too often or for too long. 

Think of it like a joke – no matter how good it is, it stops landing once the audience has heard it a dozen times.

The data behind ad quality

To track ad quality, monitor how your key metrics trend over time – especially CTR, CPC, and conversion rate (CVR). 

A high initial CTR followed by a gradual decline usually signals a strong performer reaching the end of its natural run.

Because every campaign operates in a different context, it’s best to compare an ad’s results against your own historical benchmarks rather than rigid KPI targets. 

Factor in elements like seasonality and placement to avoid overgeneralizing performance trends. 

And to read the data accurately, make sure you’re analyzing results by creative ID, not just by campaign or ad set.
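
Here’s a minimal sketch of that kind of roll-up, assuming a CSV export with date, creative_id, impressions, clicks, conversions, and cost columns (adjust the names to match your platform’s report):

```python
import pandas as pd

# Assumed export columns: date, creative_id, impressions, clicks, conversions, cost
df = pd.read_csv("ad_performance.csv", parse_dates=["date"])

# Roll performance up to weekly totals per creative ID.
weekly = (
    df.set_index("date")
      .groupby("creative_id")
      .resample("W")[["impressions", "clicks", "conversions", "cost"]]
      .sum()
      .reset_index()
)

# Derive the trend metrics discussed above.
weekly["ctr"] = weekly["clicks"] / weekly["impressions"]
weekly["cvr"] = weekly["conversions"] / weekly["clicks"]
weekly["cpc"] = weekly["cost"] / weekly["clicks"]

# Compare each creative's weekly CTR against its own historical average,
# rather than a rigid account-wide KPI target.
benchmark = weekly.groupby("creative_id")["ctr"].transform("mean")
weekly["ctr_vs_benchmark"] = weekly["ctr"] / benchmark - 1

print(weekly.sort_values(["creative_id", "date"]).tail(10))
```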

Dig deeper: How Google Ads’ AI tools fix creative bottlenecks, streamline asset creation

2. Creative lifecycle 

Every ad has a natural lifespan – and every platform its own life expectancy. 

No matter how timely or novel your ad was at launch, your audience will eventually acclimate to its visuals or message. 

Keeping your creative fresh helps reset the clock on fatigue.

Refreshing doesn’t have to mean reinventing.

Sometimes a new headline, a different opening shot, or an updated call to action is enough to restore performance. (See the table below for rule-of-thumb refresh guidelines by platform.)

The data behind creative lifecycle

To distinguish a normal lifecycle from an accelerated one that signals deeper issues, track declining performance metrics like CTR and frequency – how many times a user sees your ad. 

A high-performing ad typically follows a predictable curve.

Engagement drops about 20-30% week over week as it nears the end of its run. Any faster, and something else needs fixing.
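
A rough check of that rule of thumb, assuming you already have weekly engagement totals per creative, might look like this:

```python
# Flag creatives whose week-over-week decline is steeper than the
# roughly 20-30% drop you'd expect from a normal lifecycle.
def decay_alerts(weekly_engagement: dict[str, list[float]], max_drop: float = 0.30) -> list[str]:
    flagged = []
    for creative_id, series in weekly_engagement.items():
        for prev, curr in zip(series, series[1:]):
            if prev > 0 and (prev - curr) / prev > max_drop:
                flagged.append(creative_id)
                break
    return flagged

# Hypothetical weekly engagement counts per creative:
# ad_101 decays normally; ad_102 collapses faster and gets flagged.
print(decay_alerts({"ad_101": [900, 720, 600], "ad_102": [800, 400, 150]}))
```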

Your refresh rate should also match your spend. Bigger budgets drive higher frequency, which naturally shortens a creative’s lifespan.

3. Audience saturation

You’ve got your “cool ad” – engaging visuals, a catchy hook, and a refresh cadence all mapped out. 

You put a big budget behind it, only to watch performance drop like a stone after a single day. Ouch.

You’re likely running into the third factor of creative fatigue: audience saturation – when the same people see your ad again and again, driving performance steadily downward. 

Failing to balance budget and audience size leads even the strongest creative to overexposure and a shorter lifespan.

The data behind audience saturation

To spot early signs of saturation, track frequency and reach together. 

Frequency measures how many times each person sees your ad, while reach counts the number of unique people who’ve seen it. 

When frequency rises but reach plateaus, your ad hits the same people repeatedly instead of expanding to new audiences. 

Ideally, both numbers should climb in tandem.
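
A minimal sketch of that check, assuming you can pull weekly reach and frequency per creative or ad set, could look like this:

```python
# Flag the point where frequency keeps climbing while reach has stalled,
# i.e. the same people are seeing the ad instead of new ones.
def saturation_week(reach: list[int], frequency: list[float],
                    reach_growth_floor: float = 0.02) -> int | None:
    for week in range(1, len(reach)):
        reach_growth = (reach[week] - reach[week - 1]) / max(reach[week - 1], 1)
        frequency_rising = frequency[week] > frequency[week - 1]
        if frequency_rising and reach_growth < reach_growth_floor:
            return week  # first week that looks saturated
    return None

# Hypothetical weekly numbers where reach stalls while frequency keeps climbing.
print(saturation_week(reach=[40_000, 52_000, 53_000, 53_500],
                      frequency=[1.8, 2.4, 3.1, 3.9]))
```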

Some platforms – including Google, Microsoft, LinkedIn, and DSP providers – offer frequency caps to control exposure. 

Others, like Meta, Amazon, and TikTok, don’t.

Dig deeper: How to beat audience saturation in PPC: KPIs, methodology and case studies

4. Platform dynamics

These days, algorithms don’t just reflect performance – they shape it. 

Once an ad starts to underperform, a feedback loop kicks in.

Automated systems reduce delivery, which further hurts performance, which leads to even less delivery.

How each platform evaluates creative health – and how quickly you respond before your ad is demoted – is the fourth and final factor in understanding creative fatigue.

The data behind platform dynamics

Every platform has its own system for grading creative performance, but the clearest sign of algorithmic demotion is declining impressions or spend despite stable budgets and targeting.

The tricky part is that this kind of underdelivery can look a lot like normal lifecycle decline or audience saturation. In reality, it’s often a machine-level penalty. 

To spot it, monitor impression share and spend velocity week over week, at the creative level (not by campaign or ad set).
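
As an illustrative sketch (the 40% threshold and field names are assumptions, not platform rules), comparing this week’s delivery to last week’s at the creative level can surface likely demotions:

```python
# Flag creatives whose delivery is shrinking even though budget and
# targeting haven't changed, which suggests algorithmic demotion.
def demotion_candidates(this_week: dict[str, dict], last_week: dict[str, dict],
                        drop_threshold: float = 0.40) -> list[str]:
    flagged = []
    for creative_id, current in this_week.items():
        previous = last_week.get(creative_id)
        if not previous:
            continue
        impression_drop = 1 - current["impressions"] / max(previous["impressions"], 1)
        spend_drop = 1 - current["spend"] / max(previous["spend"], 1e-9)
        if impression_drop > drop_threshold and spend_drop > drop_threshold:
            flagged.append(creative_id)
    return flagged

# Hypothetical weekly pulls keyed by creative ID.
last_week = {"ad_201": {"impressions": 120_000, "spend": 950.0}}
this_week = {"ad_201": {"impressions": 54_000, "spend": 410.0}}
print(demotion_candidates(this_week, last_week))
```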

What to do when the algorithm punishes you

When impressions or spend drop despite a stable budget and consistent targeting, your ad has likely been demoted by the platform. 

That doesn’t necessarily mean it’s poor quality. 

This usually means the algorithm has lost “confidence” in its ability to achieve your chosen goal, such as engagement or conversions.

Here’s how to recover:

  • Check your performance metrics: Sharp declines in CTR, engagement, or conversions can trigger a penalty. Compare the trend line to earlier in the campaign.
  • Assess audience saturation: If frequency exceeds 3 for prospecting or 5 for retargeting, your audience may be too small for the budget. Broaden targeting or reduce spend.
  • Refresh the creative: Launch new or updated versions under new ad IDs so the system re-enters its learning phase.
  • Don’t make drastic edits: Frequent budget, bid, or targeting changes reset learning and slow recovery.

When the algorithm cools your ad, don’t panic. 

Act quickly to identify whether the issue lies in quality, freshness, audience, or budget – and make deliberate adjustments, not hasty ones.

Turning creative fatigue into a performance signal

Creative fatigue, like death and taxes, is inevitable. Every ad has a beginning, middle, and end. 

The key is recognizing those stages early through vigilant data monitoring, so you can extend performance instead of waiting for the crash.

While automation may be taking over much of marketing, ad creative and copy remain one arena where humans still outperform machines. 

Great marketers today don’t just make good ads. They know how to sustain them through smart refreshes, rotations, and timely retirements.

Because when you can see the whimper coming, you can make sure your next ad lands with a bang.

Dig deeper: 7 best AI ad creative tools, for beginners to pros

Your Q4 ecommerce checklist for peak holiday sales

Q4 is here – and for ecommerce brands, that means the biggest sales opportunities of the year are just ahead.

Black Friday, Cyber Monday, and Christmas are just around the corner. Preparation is key to hitting your targets, and it’s not too late to act.

Use this checklist to get up to speed quickly and set your account up for success.

Website and UX

Review site speed 

Start with a website audit to identify any red flags. Tools like PageSpeed Insights can help diagnose technical issues. 

Encourage clients to review key pages and the checkout process on multiple devices to ensure there are no bottlenecks. 

If resources allow, use heatmap or session analysis tools such as Microsoft Clarity or Hotjar to better understand user behavior and improve the on-site experience.
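
If you’d rather script the audit than run it page by page, PageSpeed Insights also exposes an API. A minimal sketch, assuming a short list of hypothetical key URLs (add an API key for regular use):

```python
import requests

PSI_ENDPOINT = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"

# Hypothetical key pages to audit before the sales period.
pages = [
    "https://example.com/",
    "https://example.com/black-friday",
    "https://example.com/checkout",
]

for url in pages:
    resp = requests.get(PSI_ENDPOINT, params={"url": url, "strategy": "mobile"}, timeout=120)
    resp.raise_for_status()
    data = resp.json()
    # Lighthouse reports the performance category score on a 0-1 scale.
    score = data["lighthouseResult"]["categories"]["performance"]["score"]
    print(f"{url}: mobile performance score {score * 100:.0f}")
```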

Confirm tracking setup

Double-check that all tracking is configured correctly across platforms. 

Don’t just verify that tags are firing – make sure all events are set up to their fullest potential. 

For example, confirm high match rates in Meta and ensure Enhanced Conversions is fully configured.

Add VIP sign-ups/pop-ups

Before the sales period begins, encourage users to join a VIP list for Black Friday or holiday promotions. 

This can give them early access or exclusive deals. Set up a separate automated email flow to follow up with these subscribers.

Launch sale page early

Publish your sale page as soon as possible so Google can crawl and index it for SEO. 

The page doesn’t need to be accessible from your site navigation or populated with products right away – the key is to get it live early. 

If possible, reuse the same URL from previous years to build on existing SEO equity. 

You can also add a data capture form to collect VIP sign-ups until the page goes live with products.

Display cutoffs clearly

If shipping cutoff dates aren’t clear, many users won’t risk placing an order close to the deadline. 

Clearly display both standard and express delivery cutoff dates on your website.

Highlight sales sitewide with banners

Don’t rely solely on a homepage carousel to promote your sale. 

Add a banner or header across all pages so users know a sale is happening, no matter where they land.

Dig deeper: Holiday ecommerce to hit record $253 billion – here’s what’s driving it

Creative and messaging

Run pre-sale lead gen ads

As with the VIP pop-ups mentioned above, supplementing that strategy with lead generation ads can help grow your email list and build early buzz around your upcoming sale.

Launch simple, clear primary sale ads

These will be your Black Friday or holiday sale ads running for most of the campaign. 

Keep the messaging and promotion straightforward. Any confusion in a crowded feed will make users scroll past. 

Use strong branding, put the offer front and center, and include a clear CTA. On Meta, this often works best as a simple image ad.

Create Cyber Monday-specific ads

Many brands simply extend their Black Friday sale rather than creating Cyber Monday-specific ads and web banners. 

Take advantage of the opportunity to give your campaign a fresh angle – both in messaging and offer. 

Since it’s often the final day of your sale, you can go bigger on discounts for one day or add a free gift with purchases over a certain amount. 

It’s also a great way to move slower-selling inventory left over from Black Friday.

Refresh primary ads with ‘last days’ urgency

Add urgency to your messaging as the sale nears its end by including countdowns or end dates. 

This tactic works especially well for longer campaigns where ad fatigue can set in.

Finalize all creative assets early

November and December are busy months for ad builds and platform reviews. 

Make sure all sale assets are ready several weeks before launch to avoid rushed builds and delays from longer approval times.

Advertising and data

Audit product feeds

Make sure item disapprovals and limited products are kept to a minimum. Double-check that your setup is current. 

For example, if your return window has changed, update that information in Google Merchant Center.

Refresh first-party data and remarketing lists

Update any lists you plan to use this season. 

If you don’t have direct integrations, upload new or revised lists manually. 

Review your integrations and confirm that data is flowing correctly.

Build lookalike and custom audiences early

Start building audiences as soon as your first-party and remarketing lists are refreshed. 

Create Meta Lookalike Audiences, Performance Max audience signals, and Custom Audiences. 

If you run into volume issues, you’ll have time to adjust or explore alternatives.

Finalize budget by week, not just month

Agree on budgets early so you know your spending limits. Don’t plan just by month. Map out weekly spend, too. 

You’ll likely want to invest more heavily in the final week of November than in the first.

Use title and description extensions or ad customizers

Updating search ad copy can be tedious and time-consuming. 

These tools let you control and update copy dynamically without editing every RSA manually – saving hours in campaign builds.

Use ad assets, promo sitelinks, and GMC promotions

Enable sale-related sitelinks, callouts, and promotion extensions across search campaigns so your offers appear everywhere. 

In Shopping, set up Google Merchant Center promotions to highlight deals and incentives in your Shopping ad annotations.

Apply countdown features

Add a dynamic countdown timer to search ads to show exactly when your sale ends. 

This feature helps your ads stand out and adds urgency as the sale nears its close.

Launch search remarketing activity

Bid on generic keywords you wouldn’t normally target, but limit them to remarketing or first-party data audiences. 

For example, people searching for “Black Friday deals” who have purchased from your site in the past 30 days already know your brand and are primed to buy again.

Apply seasonality adjustments

If you use Google Ads or Microsoft Ads with a target ROAS strategy, apply seasonality adjustments to prepare the algorithm for higher conversion rates during the sale period. 

Remember to apply a negative adjustment once the sale ends to prevent unnecessary spend spikes.

Dig deeper: Seasonal PPC: Your guide to boosting holiday ad performance

Focus on what matters most for Q4 success

Not every tactic will fit your business or resources – and that’s OK. 

The key is to focus on what will have the biggest impact on your store. 

By addressing most of the points in this checklist, you’ll build a solid foundation for a strong Q4 and set yourself up to capture more sales during the busiest shopping season of the year.

Preparation is everything. The earlier you audit, test, and launch, the smoother your campaigns will run when traffic – and competition – start to surge.

Black hat GEO is real – Here’s why you should pay attention

In the early days of SEO, ranking algorithms were easy to game with simple tactics that became known as “black hat” SEO – white text on a white background, hidden links, keyword stuffing, and paid link farms. 

Early algorithms weren’t sophisticated enough to detect these schemes, and sites that used them often ranked higher. 

Today, large language models power the next generation of search, and a new wave of black hat techniques is emerging to manipulate rankings and prompt results for advantage.

The AI content boom – and the temptation to cut corners

Up to 21% of U.S. users access AI tools like ChatGPT, Claude, Gemini, Copilot, Perplexity, and DeepSeek more than 10 times per month, according to SparkToro. 

Overall adoption has jumped from 8% in 2023 to 38% in 2025. 

It’s no surprise that brands are chasing visibility – especially while standards and best practices are still taking shape.

One clear sign of this shift is the surge in AI-generated content. Graphite.io and Axios report that the share of articles written by AI has now surpassed the share created by humans.

Two years ago, Sports Illustrated was caught publishing AI-generated articles under fake writer profiles – a well-intentioned shortcut that backfired. 

The move damaged the brand’s credibility without driving additional traffic. 

Its authoritativeness, one of the pillars of Google’s E-E-A-T (experience, expertise, authoritativeness, and trustworthiness) framework, was compromised.

While Google continues to emphasize E-E-A-T as the North Star for quality, some brands are testing the limits. 

With powerful AI tools now able to execute these tactics faster and at scale, a new wave of black hat practices is emerging.

The new black hat GEO playbook

As black hat GEO gains traction, several distinct tactics are emerging – each designed to exploit how AI models interpret and rank content.

Mass AI-generated spam

LLMs are being used to automatically produce thousands of low-quality, keyword-stuffed articles, blog posts, or entire websites – often to build private blog networks (PBNs). 

The goal is sheer volume, which artificially boosts link authority and keyword rankings without human oversight or original insight.

Fake E-E-A-T signals

Search engines still prioritize experience, expertise, authoritativeness, and trustworthiness. 

Black hat GEO now fabricates these signals using AI to:

  • Create synthetic author personas with generated headshots and fake credentials.
  • Mass-produce fake reviews and testimonials.
  • Generate content that appears comprehensive but lacks genuine, human-validated experience.

LLM cloaking and manipulation

A more advanced form of cloaking, this tactic serves one version of content to AI crawlers – packed with hidden prompts, keywords, or deceptive schema markup – and another to human users. 

The goal is to trick the AI into citing or ranking the content more prominently.
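
One rough way to check whether a page is serving different content to AI crawlers is to request it with a normal browser user-agent and again with a crawler user-agent, then compare the responses. A sketch, with simplified user-agent strings for illustration (dynamic pages can differ for benign reasons, so treat a mismatch as a prompt to investigate, not proof):

```python
import hashlib
import requests

def fetch_hash(url: str, user_agent: str) -> str:
    # Fetch the page with a given user-agent and hash the response body.
    resp = requests.get(url, headers={"User-Agent": user_agent}, timeout=30)
    resp.raise_for_status()
    return hashlib.sha256(resp.content).hexdigest()

url = "https://example.com/some-page"  # hypothetical page to check
browser_ua = "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"  # simplified browser UA
crawler_ua = "GPTBot"  # simplified AI crawler UA for illustration

if fetch_hash(url, browser_ua) != fetch_hash(url, crawler_ua):
    print("Different content served to the AI crawler - inspect the page for cloaking.")
else:
    print("Same response for both user-agents.")
```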

Schema misuse for AI Overviews

Structured data helps AI understand context, but black hat users can inject misleading or irrelevant schema to misrepresent the page’s true purpose, forcing it into AI-generated answers or rich snippets for unrelated, high-value searches.

SERP poisoning with misinformation

AI can quickly generate high volumes of misleading or harmful content targeting competitor brands or industry terms. 

The aim is to damage reputations, manipulate rankings, and push legitimate content down in search results.

Dig deeper: Hidden prompt injection: The black hat trick AI outgrew

The real risks of black hat GEO

Even Google surfaces YouTube videos that explain how these tactics work. But just because they’re easy to find doesn’t mean they’re worth trying. 

The risks of engaging in – or being targeted by – black hat GEO are significant and far-reaching, threatening a brand’s visibility, revenue, and reputation.

Severe search engine penalties

Search engines like Google are deploying increasingly advanced AI-powered detection systems (such as SpamBrain) to identify and penalize these tactics.

  • De-indexing: The most severe penalty is the complete removal of your website from search results, making you invisible to organic traffic.
  • Manual actions: Human reviewers can issue manual penalties that lead to a sudden and drastic drop in rankings, requiring months of costly, intensive work to recover.
  • Algorithmic downgrading: The site’s ranking for targeted keywords can be significantly suppressed, leading to a massive loss of traffic and potential customers.

Reputation and trust damage 

Black hat tactics inherently prioritize manipulation over user value, leading to poor user experience, spammy content, and deceptive practices.

  • Loss of credibility: When users encounter irrelevant, incoherent, or keyword-stuffed content – or find that an AI-cited answer is baseless – it damages the perception of the brand’s expertise and honesty.
  • Erosion of E-E-A-T: Since AI relies on E-E-A-T signals for authoritative responses, being caught fabricating these signals can permanently erode the brand’s trustworthiness in the eyes of the algorithm and the public.
  • Malware distribution: In some extreme cases, cybercriminals use black hat SEO to poison search results, redirecting users to sites that install malware or exploit user data. If a brand’s site is compromised and used for such purposes, the damage is catastrophic.

AI changes the game – not the rules

The growth of AI-driven platforms is remarkable – but history tends to repeat itself. 

Black hat SEO in the age of LLMs is no different. 

While the tools have evolved, the principle remains the same: best practices win. 

Google has made that clear, and brands that stay focused on quality and authenticity will continue to rise above the noise.
