
Today — 28 February 2026 | Search Engine Land

Microsoft Ads launches self-serve negative keyword lists

27 February 2026 at 23:16
Microsoft Ads

Self-serve negative keyword lists are now live in Microsoft Advertising, according to Ads Liaison Navah Hopkins — giving advertisers long-requested control without submitting support tickets.

What’s happening. Advertisers can now create and manage shared negative keyword lists directly in the UI. Lists support up to 5,000 negative keywords (one per line) and can be applied at either the campaign or account level. Match types function the same way in Performance Max as they do in traditional Search campaigns.

  • Lists can also be edited, exported as CSV files, or removed from campaigns as needed.
  • Microsoft notes that match type formatting requires brackets for exact match and quotation marks for phrase match — not hyphens.
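Those formatting rules are easy to get wrong when building lists by hand. Here is a minimal sketch of a helper that applies them — the function name and structure are illustrative, not part of any Microsoft tool:

```python
def format_negative(term: str, match_type: str) -> str:
    """Format a negative keyword for a shared list (one per line).

    Per Microsoft's documented conventions: brackets for exact match,
    quotation marks for phrase match, the bare term for broad match.
    """
    term = term.strip()
    if match_type == "exact":
        return f"[{term}]"
    if match_type == "phrase":
        return f'"{term}"'
    return term  # broad match

# A list supports up to 5,000 entries, one per line:
print(format_negative("free download", "exact"))  # [free download]
print(format_negative("cheap", "phrase"))         # "cheap"
```

A validator like this can also catch the hyphen mistake Microsoft warns about before a list is uploaded.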

Why we care. Negative keywords are critical for filtering irrelevant traffic and protecting budgets. Making lists self-serve streamlines workflow, reduces reliance on support tickets, and gives advertisers faster control over search query exclusions.

The bottom line. Microsoft is handing more operational control back to advertisers — and eliminating friction in one of the most essential levers for campaign efficiency.

Dig deeper. How to add keywords that won’t trigger my ads (negative keywords)

Yesterday — 27 February 2026 | Search Engine Land

Google publishes new Google Ads passkey help doc

27 February 2026 at 22:48

Google published a new help document outlining how passkeys work in Google Ads — a timely move as advertisers face a rise in account hacks and phishing attempts.

What’s happening. The new help page explains how passkeys function as a passwordless, phishing-resistant login method in Google Ads, and clarifies when they’re required — including for sensitive actions like user access changes and account linking updates.

The documentation walks advertisers through device requirements, setup steps and security considerations.

Why we care. Ad accounts are increasingly being targeted by attackers, with compromised logins leading to budget theft, campaign disruption and data loss. Clearer guidance from Google gives advertisers a straightforward path to strengthening account defenses at a critical moment.

The bottom line. As account takeovers become more common, better education around security tools like passkeys is a practical win for advertisers looking to lock down access and reduce risk.

Dig deeper. About Google Ads account passkey

Google patent hints it could replace your landing pages with AI versions

27 February 2026 at 20:44

A Google patent suggests Search may take you from the results page to a super-personalized AI-generated page that answers your query instead of sending you to a website.

Patent. The patent, AI-generated content page tailored to a specific user, was filed by Google about a year ago and granted last month.

This patent describes a system that uses AI to automatically create a custom landing page when you perform a search. Instead of sending you to a generic homepage, it dynamically generates a page tailored to your intent and the organization’s content.

Patent abstract. Here is a copy of the abstract of the patent:

“Techniques for generating an artificial intelligence (AI)-generated page for a first organization. The system can include a machine-learned model configured to generate the AI-generated page. The system can receive from a user device associated with a user account, the user query. Additionally, the system can generate a search result page for the user query. The search result page can include a first result associated with a first landing page of the first organization. The system can calculate a landing page score for the first landing page. The system can generate an updated search result page based on the landing page score exceeding a threshold value, the updated search result page having a navigation link to an AI-generated page for the first organization. The system can cause a presentation, on a display of the user device, the updated search result page.”

Example. Here’s a fictitious example: You search for “waterproof hiking boots for wide feet” on a large retailer like REI or Amazon. Normally, clicking a result takes you to a generic Hiking Boots page, and you have to filter it yourself. Instead, Google could use AI to generate a new page that delivers a more customized, pre-filtered result.

Credits. This was spotted by Brandon Lazovic and posted by Joshua Squires on LinkedIn. Squires wrote:

  • “In short, Google would use AI to generate a page that looks like your website but rebuilds the entire structure of a page dynamically, in real time, and places it at the top of the SERP. This throws up all kinds of red flags to me.”

Glenn Gabe wrote:

  • “If you thought AIOs angered people, just wait for AI-generated landing pages from Google. Yes, Google could create new landing pages from the SERPs if yours isn’t good enough (based on this patent).”

And Lily Ray added that this is “Terrifying to be honest.”

Why we care. This is just a patent and doesn’t mean Google is doing this now or will in the future. Some may see it as similar to AI Overviews or AI Mode. Either way, it’s worth reading if you want insight into how Google is thinking.

OpenAI: ChatGPT now has 900 million weekly active users

27 February 2026 at 20:31
ChatGPT growth

ChatGPT now has more than 900 million weekly active users, OpenAI announced. It's the first time the company has publicly cited that figure.

Why we care. User behavior continues to fragment beyond traditional search. If 900 million people use ChatGPT weekly, discovery, research, and product comparisons are increasingly happening within AI interfaces. That said, many of those actions tend to lead users to traditional search for confirmation.

The details. OpenAI shared the figure of 900 million weekly active users while announcing a new $110 billion funding round. The company also reported more than 50 million consumer subscribers and over 9 million paying business users.

What it means. ChatGPT is a place where you compete for queries, commercial intent, and brand visibility. While not all behavior here is “search” in the strict sense, you need to understand how content is surfaced, cited, or summarized in AI-generated answers — and how that impacts conversions.

OpenAI’s announcement. Scaling AI for everyone

You can now build PPC tools in minutes with vibe coding

27 February 2026 at 20:00

You can now generate custom PPC tools in plain English. With GPT-5 enabling complete program generation, the competitive edge belongs to those who master AI-assisted automation.

Frederick Vallaeys is building tools in minutes, not days or months, with AI. Vallaeys spent 10 years at Google building tools like Google Ads Editor, then another 10 building tools at Optmyzr, where he’s CEO.

He’s watched automation evolve firsthand, and vibe coding is the next leap. At SMX Next 2025, he shared his journey with vibe coding.

The traditional script problem

If you work in PPC, automation has always been top of mind. In the early days, you relied on Google Ads scripts. Scripts are great because there’s always more work than fits in a day.

But here’s the problem: when Vallaeys asks who actually writes their own scripts, only three to five out of 100 raise their hands. Most people copy and paste scripts because they don’t know how to code.

This works, but it’s limiting. You’re stuck with what someone else built instead of implementing your own secret sauce.

GPT changes the game

A couple of years ago, GPT made it easy to write scripts without knowing how to code.

The best part? Large language models are multimodal. You can take a whiteboard flowchart of your campaign decision tree, give the image to AI, and it’ll write the full Google Ads script.

Vallaeys suggests rethinking meetings. Instead of seeing client meetings as more work, treat them as prompt-engineering sessions.

It’s easy to get frustrated when clients add more to your plate. But with a mindset shift, the meeting becomes the prompt that tells AI what to execute.

What is vibe coding?

Instead of writing lines of code, you describe what you want the software to do, and the AI handles the technical implementation. That’s vibe coding.

Imagine your team needs software that does X, Y, and Z. Write down what it needs to do, give it to a coding tool, and it builds the software. As Vallaeys says, it’s mind-blowing.

Scripts are old news. Vibe coding is the new frontier.

A live example: Building a persona scorer

Vallaeys showed how fast this works. He went to Lovable and said, “Build me a persona scorer for an ad that shows how well it resonates with five different audiences.”

In less than 20 seconds, the AI responded with its design vision, features, and approach. It explained exactly what it would build, so he could immediately say, “Actually, make it 10 audiences instead of five.”

You work with it like a human developer — without touching code. You just describe what you want changed.

The framework: What should you automate?

Traditionally, you automated two types of work: quick, frequent tasks (like reviewing search terms) and long, infrequent tasks (like monthly reporting with analysis).

Vallaeys advises you not to limit automation to what you already do. Think about what you wish you could do more often but haven’t because it’s too time-consuming. That’s prime automation territory.

The old way vs. The new way

The old process was painful. Launching something took at least a month.

You’d spend days writing specs. Engineers would spend days building. You’d find bugs, coordinate meetings, and repeat.

The other problem? Traditional code was deterministic — pure if/then logic. Great for reliability, but terrible for nuanced decisions like, “Is this a competitor term?” It’s nearly impossible to program every variation of competitor keywords.

The promise of on-demand software

Sam Altman announced GPT-5, leading with “on-demand software generation.” The industry is moving beyond software-as-a-service to true on-demand software.

The new way? Write a one-paragraph spec (five minutes), give it to AI (15-minute build), then review and iterate (three minutes per change). In under an hour, you have working automation.

This new code is flexible, not just deterministic. LLMs can answer nuanced questions like, “Is this a competitor term?” with high probability. It’s the best of both worlds.
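The brittleness of pure if/then logic is easy to demonstrate. A toy sketch (the competitor names are made up):

```python
# Pure deterministic logic: an exact blocklist of competitor names.
COMPETITORS = {"acme ads", "acme advertising"}

def is_competitor_term(query: str) -> bool:
    # Reliable for exact matches, but blind to every variation.
    return query.strip().lower() in COMPETITORS

print(is_competitor_term("Acme Ads"))          # True  — exact match: caught
print(is_competitor_term("acme adz pricing"))  # False — misspelled variant: missed
```

Queries that slip through a filter like this are exactly the long tail an LLM can classify probabilistically.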

The expanding scope of automation

With vibe coding, anything you can explain to a human, a machine can build. Landing pages that follow brand guidelines? Done. Custom audience tools? Done.

Here’s the radical shift: you can now automate tasks that take just 90 minutes by hand. Build throwaway software for one-time tasks. Even if it breaks next month, it saved you time today.

What can you build with vibe coding?

You can build landing pages, microsites, interactive web apps, browser extensions (including Chrome extensions), and WordPress plugins — all through simple prompts.

Available tools

Start with Claude or ChatGPT — tools you likely already subscribe to. They’re great for data analysis, calculators, and quick visualizations.

For more complex apps that need databases or login systems, use Lovable, V0.dev, Replit, or Bolt. They handle the complexity, so you don’t have to.

If you’re more technical, try Codex, Bolt.new, or Cursor. But for most people, the simpler tools handle almost everything.

Case Study 1: Seasonality analysis tool

Vallaeys asked someone on his team who had never coded to build a seasonality analysis tool. She fed PPC Town Hall podcast videos into Claude.

The process was simple: gather resources, write a prompt, give it to AI, and test it in the browser. No installation required.

The team iterated on the fly, asking for different plots and forecasting methods. In minutes, they had advanced enhancements. The AI knew where to add help text and simplify the interface because it’s trained on millions of web apps.

Case Study 2: Panel of experts tool

Vallaeys wanted multiple custom GPTs to review his blog posts in sequence, each giving feedback from its persona. Then a consolidator GPT would summarize the most common feedback into three to five bullet points.

He vibe-coded this in V0.dev by describing what he wanted. It generated a clean tool with text input, the ability to add custom GPTs, and everything worked.

Case Study 3: Chrome extension for demos

For customer demos, Vallaeys needed to blur sensitive numbers. He wanted options:

  • Fully redact or just blur?
  • Include currencies or only numbers?
  • Handle different separators?

He built a Chrome extension with all those options using simple prompts. Problem solved.

Prompting tips for success

Always include the use case. Say “seasonality tool” instead of vague terms like “time series analysis.” The AI makes better assumptions and may suggest approaches you hadn’t considered.

Ask questions: “How did you approach this?” or “Where do you store data?” It helps you learn.

Use chat mode to explore alternatives without changing the code. Ask for three approaches, pick one, go deeper, then say, “Execute that.”

The PPC audience analyzer

The audience analyzer Vallaeys’ team built is available to try. You can grab the code, add your logo, turn insights into action items — whatever you need. Just tell it what to change, and it updates.

Final thoughts: Stay competitive

Vallaeys makes one point clear: you’re not competing against AI. You’re competing against people who use it better than you do.

Try vibe coding today. Go to one of these tools and give it a single prompt. See what happens. The first time Vallaeys tried it, his mind was blown.

Now that you’ve learned something new, use it to get better at AI. That’s how you stay ahead.


Learn how vibe coding helps you build custom PPC tools in minutes using simple AI prompts instead of traditional coding.

How to build a context-first AI search optimization strategy

27 February 2026 at 19:00
From keywords to context: Rethinking content optimization for LLMs

AI-based discovery offers a new level of sophistication in surfacing content, without relying solely on keywords. Beyond keyword-string-first approaches, contextual and semantic elements are now more important than ever.

Optimization is no longer about just reinforcing the keyword. It’s also about constructing a retrievable semantic environment around it.

This impacts how we write, create, and think about content. It applies whether you write every word yourself or employ automated workflows.

Reframing your publishing strategy around context

Much has already been written about the concepts covered here. This discussion focuses on tying them together into a more cohesive publishing strategy and tactical approach.

If you’re already operating in a context mindset, you’re likely making these elements work for you. If you’re still using keyphrase-first approaches and want a stronger grasp of deeper contextual and semantic strategy, keep reading.

Context, semantics, meaning, and intent have long been core to optimization. What’s changed is how content is presented and discovered, particularly within LLM-based platforms.

This shift affects how context is categorized and structured across a website. It applies to site taxonomy, schema, internal linking, and content chunking and clustering.

It also means moving away from verbose word counts and getting to the point. That benefits both the machine layer and the human reader.

Keywords aren’t obsolete. But they don’t function as isolated optimization tactics. Context-led strategies aren’t new. However, they require greater attention to define what your publishing strategy means moving forward.

Dig deeper: If SEO is rocket science, AI SEO is astrophysics

Structure for a contextual-density approach

When considering the keyphrase as a multidimensional point for building semantics, it may be more productive to think of these combined concepts within a single framework. In essence, every topic exists as a semantic field rather than a word or phrase. These areas include:

  • Axis term (primary topic/keyphrase).
  • Structural context (secondary and tertiary concepts).
  • Problem context (intent).
  • Linguistic variants (stemmed or fanned phrasing).
  • Entity associations.
  • Retrieval units (chunk-level readability).
  • Structural signals (internal links, schema, and taxonomy).

While the main keyphrase is the anchor and axis point for the linguistic dimensions that surround it, it's the surrounding elements — not the keyword itself — that define true performance and meaning.

In other words, the sum of all the “other” words — headings, subheadings, references to related concepts, and various entities related to the keyphrase — is just as important as the keyphrase itself. This is a very basic concept in producing well-thought-out writing, but it’s now more important.

Context density and SERP-level linguistic analysis

One way to think about this shift is by comparing keyword-level linguistic analysis with search engine results page-level linguistic analysis.

SERP-level linguistic analysis isn’t new. One of the first major tools to address this concept was Content Experience by Searchmetrics and Marcus Tober.

The platform launched around 2016 — priced for enterprises — and focused on scraping the top results page for a given keyword, then averaging and weighting the other words common across high-ranking pages.

The idea was that those additional words and entities, which helped define a comprehensive set of results for a topic, would yield key semantic indicators for content performance.

These reports provided stemmed concepts, entities, and specific language modifiers to add hyper-context to the main topic.

Other tools, such as Clearscope, used different methods to achieve similar results.

In my experience, these types of analyses have been very useful for creating high-performing content.

They’ve worked well competitively and have been especially effective in linguistic areas where competitors lacked this level of analysis in their own content.

Dig deeper: Content scoring tools work, but only for the first gate in Google’s pipeline

Using secondary and tertiary keyphrases as contextual linguistic struts

Understanding this type of analysis helps you delve deeper into semantic page construction by categorizing and emphasizing ancillary language into a hierarchy, particularly in second- and third-tier levels. You can go as deep with the hierarchy as your content scope permits.

Secondary and tertiary keywords should form what I often refer to as “linguistic struts” — supporting elements that reinforce your main topic while expanding its scope and relevance.

Think of them as context stabilizers or intent differentiators for a given topic or theme. The choices you make here ultimately define the context and relevance of your content.

Each secondary keyword should serve a specific purpose within your page architecture, whether it’s introducing a new subtopic, answering a related question, or providing additional context for your primary theme.

Once you’ve defined this secondary and tertiary language, it can guide your outline and then the final writing. 

This approach applies to everything from manually written work to fully automated and synthetic processes.

Stemmed linguistics

One of the most powerful aspects of comprehensive contextual keyword optimization is its ability to capture stemmed and fanned-out searches — related queries that share common roots or concepts with your optimized keywords.

In other words, related keyphrases and searches you may not have directly optimized for within the primary topic. These types of searches can be extremely valuable, often more so than the primary keyphrase, because they reflect more refined and deliberate intent.

For example, if you’ve created a comprehensive guide for “content marketing,” your page might also rank for searches such as “implementing content marketing strategies,” “content marketing strategy implementation,” or “hire B2B content marketing expert.”

The sum of these stemmed variations often represents significantly higher-intent search volume than any individual keyword.

The more thoroughly you cover secondary and tertiary keywords, the more stemmed and fanned searches you’re likely to capture.
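The idea of shared roots can be illustrated with a deliberately naive stemmer (production systems use proper algorithms such as the Porter stemmer; this sketch exists only to show how differently phrased queries collapse onto common stems):

```python
def stem(word: str) -> str:
    # Crude suffix stripping, for illustration only.
    for suffix in ("ation", "ies", "ing", "ed", "es", "s", "y"):
        if word.endswith(suffix) and len(word) - len(suffix) > 2:
            return word[: -len(suffix)]
    return word

queries = [
    "content marketing strategy implementation",
    "implementing content marketing strategies",
]
stem_sets = [{stem(w) for w in q.lower().split()} for q in queries]
print(stem_sets[0] & stem_sets[1])  # the shared semantic roots
```

Both queries reduce to the same set of stems, which is why one comprehensive page can capture the whole fanned-out family of searches.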

Dig deeper: How to use relationships to level up your SEO

High-level technical foundations for contextual emphasis

When discussing the move from a string-based strategy to a context-based strategy, it’s as much about how machines process content as it is about writing.

LLM-powered platforms evaluate context at multiple layers — how content is segmented, how topics are structurally connected, and how meaning is formally implied.

Retrieval mechanics: From pages to chunks

Large language models retrieve segments of content — referred to as “chunks” — that have been transformed into vector representations.

In simplified terms, your page is broken into retrievable units. Those units are evaluated for contextual similarity to a prompt, and the LLM selects the chunks that best align with the intent and semantic patterns in the query.

Contextual similarity emerges from co-occurring terms, related entities, problem points, and semantic density within a chunk.

If a chunk lacks contextual depth — in other words, if it simply repeats a primary term without expanding the surrounding semantic field — it becomes thin in the embedding layer.

Thin chunks are less likely to be retrieved, even if the page ranks well in traditional search.

The implication for your writing is straightforward: Getting to the point faster can be a significant advantage at both the page and site levels. It can improve machine readability and create a better human reading experience, serving multiple KPIs.
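In rough terms, that retrieval scoring can be sketched with a toy bag-of-words similarity. Real systems use dense neural embeddings, but the intuition — contextually dense chunks score higher against a query than thin, repetitive ones — carries over:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; real systems use dense vectors.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

query = "waterproof hiking boots for wide feet"
rich_chunk = "waterproof hiking boots in wide sizes for wide feet and rugged trails"
thin_chunk = "hiking boots hiking boots hiking boots"

# The contextually denser chunk wins retrieval.
print(cosine(embed(query), embed(rich_chunk)) >
      cosine(embed(query), embed(thin_chunk)))  # → True
```

The thin chunk repeats the primary term without expanding the semantic field around it, so it scores lower despite matching the head keyword.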

Dig deeper: Chunk, cite, clarify, build: A content framework for AI search

Structural context: Architecture as meaning

How your content is organized structurally also conveys meaning within LLM-based discovery. Beyond providing a taxonomical hierarchy, structure acts as a contextual signal.

Architecture teaches the system how your topics relate to one another. Internal links apply inference and meaning to related topics and entities.

Taxonomy implies the semantic mapping of your connected content within a domain or across domains. URL naming and structure further signal hierarchy and topical relationships.

When a page sits within a clearly defined topical cluster and links to related concepts and subtopics, it inherits contextual reinforcement.

An LLM understands what the page says and where it lives conceptually within your broader domain.

Schema and entity context

There’s also a layer of meaning that can be formally stated through schema markup.

Schema markup and entity modeling provide explicit clarification of what something is, who is involved, and how elements relate to one another.

Where linguistic context builds meaning implicitly through unstructured writing, schema states its intended meaning through structured data.

In doing so, it formalizes entity relationships, reduces ambiguity, and reinforces identity and topic signals across platforms.

This doesn’t replace strong writing, but it strengthens it by ensuring machine-readable contextual emphasis.
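As a concrete sketch, schema.org markup is typically embedded as JSON-LD. Here it is generated with Python — the headline, names, and organization are placeholders, while `Article`, `Person`, `Organization`, and `Thing` are real schema.org types:

```python
import json

# Placeholder entities; the @type values are real schema.org types.
article = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "A Guide to Content Marketing Strategy",
    "about": {"@type": "Thing", "name": "content marketing"},
    "author": {"@type": "Person", "name": "Jane Example"},
    "publisher": {"@type": "Organization", "name": "Example Co"},
}

# Emit a block ready for a <script type="application/ld+json"> tag.
print(json.dumps(article, indent=2))
```

Each nested entity explicitly states who wrote the page, who published it, and what it is about — relationships a machine would otherwise have to infer from unstructured prose.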

In a contextual discovery environment, every technical element exists to strengthen semantic retrievability.

For a deeper dive into the technical shift in content discovery in the age of AI, I recommend Duane Forrester’s book, “The Machine Layer.”

Dig deeper: Organizing content for AI search: A 3-level framework

Moving to a context-first strategy

When you align linguistics, structure, and declaration around a clear topical axis, the strategy centers on the contextual environment.

Transitioning from a purely keyphrase-centered strategy may seem daunting at first, but it’s something you can begin doing today in how you write and research your content.

In simple terms, moving to a context-first strategy is about how you approach writing at both the page and site levels and making your content as machine-readable as possible.

The dark SEO funnel: Why traffic no longer proves SEO success

27 February 2026 at 18:00

SEO is transitioning from rank, click, and convert to get scraped, summarized, and recommended. 

We’ve entered the era of invisible attribution known as the dark SEO funnel — where traditional top-of-funnel (TOFU) traffic is collapsing, the messy middle is getting messier, and SEO success can no longer be measured by clicks. 

Up to 84% of B2B buyers now use AI for vendor discovery, and 68% start their search in AI tools before they ever touch Google, new data from Wynter reveals. Buyers are using ChatGPT to narrow down their options and Google to verify.

If you’re still judging SEO success by traffic, you’re optimizing for a model that no longer exists. Here’s how to brace for impact. 

Defining the dark SEO funnel

Marketing leaders are already familiar with the concept of dark social — the idea that buyers share content in private channels (Slack, DMs, WhatsApp) where tracking pixels can’t see them. Dark SEO is the algorithmic search equivalent.

In dark social, a peer recommends the brand, and the buyer Googles it. In dark SEO, an LLM recommends the brand, and the buyer then Googles it.

The journey from ingestion to recommendation to verification is invisible to traditional analytics:

  • Ingestion: An LLM consumes your content and understands your entity.
  • Recommendation: A user asks a problem-aware question (e.g., “best tools for X”), and the LLM recommends your brand as a solution.
  • Verification: The user, now aware of you, goes to Google and searches for your brand name to validate the choice.

The credit conveniently goes to “direct” or “branded search.” Meanwhile, the work was done by SEO or GEO.

This is the dark SEO funnel: where discovery happens in a non-click environment, attribution gets wiped out, and SEO looks like it’s “underperforming” even while it’s actively filling the pipeline.

The role of Google has fundamentally changed. As one surveyed CMO explained:

  • “I use Google only if I have certainty about which specific software types or products I want.”

AI is for evaluating. Google is for verifying. This is a radical shift.

Dig deeper: Rand Fishkin proved AI recommendations are inconsistent – here’s why and how to fix it

The new discovery paradigm

The strategic shift: Brand mentions vs. LLM citations

Winning in the dark funnel era requires an understanding of two types of visibility.

In traditional SEO, the goal was clicks from a blue link. In AI search, the goal is inclusion, which happens in two different ways. 

Brand mentions vs LLM citations

Brand mentions

This is when an LLM explicitly names your company as a solution.

  • Users ask: “Who are the top enterprise ABM platforms?”
  • AI answer: “The top recommendations are 6sense, Demandbase, and [Your Brand].”

You can’t “technical SEO” your way into this. It’s driven by entity strength — how often your brand appears alongside relevant topics across the web — and influenced by PR, podcast appearances, customer reviews, and what we’ve long called “surround sound SEO.”

Dig deeper: How to earn brand mentions that drive LLM and SEO visibility

URL citations

This is when an AI tool links to your content as a source of truth because you provided unique data or you were simply the most relevant result. 

  • Users ask: “What is a good NRR benchmark for Series B SaaS?”
  • AI answer: “According to [Your Brand]’s 2026 State of SaaS Report, the median NRR for Series B companies has dropped to 109% due to budget tightening.”

This is driven by information gain. If you publish unique data, contrarian views, and proprietary information, the AI cites you to ground its answer.

LLMs learn from the ecosystem. If you want to be recommended, you should optimize around the most relevant neighborhoods:

  • Review sites: G2, Capterra (where AI verifies sentiment).
  • Communities: Reddit, Quora (where AI verifies consensus).
  • Third-party publishers: Industry blogs and news sites.

If AI sees your brand mentioned consistently across a relevant neighborhood, it assigns you the authority to be recommended.

How to measure SEO in the dark funnel era

When traffic is no longer the north star KPI, leadership still wants proof that SEO is working. 

The strongest teams are pivoting to defensible signals that track revenue and reputation rather than just clicks. 

If brand discovery happens in AI, but the last click conversion happens on Google, your attribution model is fundamentally broken. 

Metrics to de-emphasize 

  • Broad informational traffic: “What is X” searches are now answered by AI. Losing this traffic is often a sign of efficiency.
  • Search impressions: This is tough to justify. I’ve never met a CMO who places high importance on impressions.
  • Isolated rankings: Ranking No. 1 for a given keyword doesn’t guarantee your brand will get recommended. 
  • CTR: In 2023, Michael King accurately predicted the 10 blue links will get fewer clicks because the AI snapshot will push the standard organic results down. The 30-45% click-through rate (CTR) for Position 1 will drop precipitously.

Metrics to elevate 

  • Recommendations from LLMs: Are you visible for high-intent, comparison queries (e.g., “best CRM for enterprise”)? These are the queries users perform after the AI has educated them.
  • Branded traffic as a leading indicator: This is a great proxy for dark funnel success. Non-branded visibility leads to brand searches in this new era. And branded searches lead to conversions.
  • Product and solutions page traffic: Generally, this content is less volatile and less susceptible to traffic losses — therefore performance should remain level. 
  • Landing page conversion rates: If you’re getting less traffic, but higher-intent visitors, there should be an improvement in conversion rates. 
  • Self-reported attribution: This isn’t always perfect, but it’s directionally reliable. When website leads fill out forms asking “how did you hear about us?” they should be citing things like “online search” or “ChatGPT” or “Perplexity.”

The most powerful slide you can show in a meeting is this:

  • Informational traffic: ↓ (Declining)
  • Demo conversion rate: ↑ (Rising)
  • Pipeline: → (Stable or growing)

That isn’t a decline. That is what I call the Great Normalization of SEO. You are trading high-volume noise for high-intent signal.

Dig deeper: How to get cited by ChatGPT: The content traits LLMs quote most

Brand visibility is the trophy, traffic is just the byproduct

To thrive in the dark funnel era, you must stop playing the old SEO game.

The brands that adapt aren’t chasing cheap clicks. They will dominate inclusion, recommendation, and commercial intent — even as the modern SEO funnel grows darker.

Here’s your mandate for 2026:

  • Narrow your focus: Track 30-50 high-intent money prompts instead of thousands of vanity keywords.
  • Surround sound marketing: Invest in third-party visibility and narrative control (surround sound SEO), not just your own domain.
  • Information gain: Aim to blend search-driven topics with opinionated, research-backed, information-gain insights.
  • Highlight revenue metrics: Report on the organic contribution to pipeline, not just click volumes. 

As we saw with dark social, CTR and attribution from social platforms declined with the rise of zero-click marketing. It’s now time to concede defeat on traffic as we apply those same learnings to dark SEO. 

How to become an SEO freelancer without underpricing or burning out

27 February 2026 at 17:00

Many SEO professionals enter freelancing for the same reason: freedom. They dream of fewer meetings, flexible hours, and the ability to choose their own projects. 

What they don’t expect? Freelancing isn’t just “SEO without a boss.” It’s SEO plus sales, scoping, contracts, billing, and client management. Without those essential pieces, even the strongest SEOs struggle to make freelancing sustainable. 

We’ll break down each step in this process to bridge the gap between dream and reality. By the end of this article, you’ll know exactly how to build a sustainable freelance practice so you can become a digital nomad answering client emails and enjoying mojitos from a beach in Bali (if you so choose). 

Before you get started: Understand what you’re actually building

Let’s make one thing clear: SEO freelancing doesn’t look like attending quarterly planning meetings to fight for budget or sending another sad Slack to the product team asking them to prioritize your recommendations.

In that scenario, you’re closer to a contractor embedded in someone’s workflow than an independent freelancer. And that distinction matters. It determines how much control you have over your time, scope, and pricing. 

SEO freelancing typically includes:

  • A clearly scoped engagement with a defined start and end.
  • Ownership over how the work is delivered, not just what’s delivered.
  • Pricing tied to outcomes or deliverables instead of availability.
  • The ability to say no when a project doesn’t fit.

So before you quit your job to take on your first client, make sure you know exactly what you’re signing up for. 

Step 1: Pick one thing and get unreasonably good at it

Now that you know exactly what your SEO freelancing gigs should look like, here’s the secret sauce to how some freelancers can charge $200/hour while others still struggle to get $40: 

Specialization. 

Generalist freelancers compete on availability and price. “I do SEO” means you’re fighting everyone who just “does SEO.” You win projects by being there when the client needs someone — and your price is what they’re willing to pay. 

Specialists, on the other hand, compete on expertise, speed, and pay-off. An expert who “audits JavaScript rendering issues for React migrations” will face a much smaller pool of competitors. Because of that, you can price based on what you’ve delivered. 

When it comes to SEO freelancing, those high-value specializations look like: 

  • Technical SEO audit for site migrations: Companies budget for migrations because they’re terrified of what could go wrong. They pay well for any de-risking an expert can offer. 
  • Programmatic SEO implementation: Sites make money from organic traffic at scale, so they understand well the ROI of investing in your services. 
  • Technical enterprise ecommerce SEO: These high-stakes sites with complex templates, faceted navigation, and crawl budget demand high budgets and timely deliverables. 
  • SEO that actually gets you ChatGPT visibility: Yes, GEO is a selling point that everyone wants to buy, and yes, offering that specific skill (and backing it up with data) will put you on the map. 

What doesn’t work? 

  • SEO “guru” positioning: Claiming broad expertise without clearly defining the problem you solve or the outcome you deliver. 
  • Lack of specialization: Offering every SEO service under the sun with no defined specialty makes it harder for prospects to understand where your expertise actually lies. 
  • Competing on price: When price is your main differentiator, you’re positioning yourself as interchangeable instead of valuable. Experience-driven specialists rarely win or lose work based solely on their hourly rate. 

Most freelancers resist specializing, thinking, “What if I turn away work?” 

You are! That’s the point. Turning down misaligned work is how you protect your time, pricing, and the quality of your work. 

Dig deeper: How to keep your SEO skills sharp in an AI-first world

Step 2: Turn that one thing into something you can sell 100 times

The line between “I’ll do an SEO strategy customized to your needs” and “I deliver a technical SEO strategy with these eight components, this deliverable format, and this timeline” is productization. It’s the difference between delivering consistent, repeatable work and reinventing the wheel for every new client. 

Many freelancers misstep here by customizing too early. A client might say, “We also need help with content,” and you, as a freelancer, reply with “Sure, I can help with that.” Now you’re not delivering a productized audit — you’re doing custom work with an undefined scope. 

Here’s what you need to define to keep your deliverables consistent: 

  • Scope: What’s included in the work. 
  • Deliverable format: What the final product should look like (e.g., prioritized spreadsheet, slide deck, kickoff call). 
  • Timeline: Define this at the very least as starting from the moment the client signs your proposal. 
  • Price: We’ll get into this can of worms in a second. 

Depending on the services you’re offering, you’ll also want to specify whether you include: 

  • Content audits.
  • Competitive analysis. 
  • Keyword research.
  • Implementation support. 
  • Ongoing monitoring. 
  • Additional stakeholder presentation.

The key to building out a strong productized proposal is this: you cut back on ambiguity. 

The prospect either needs what you’re offering, or they don’t. If they need more, you can follow up with another proposal including the additional pricing. 

Tip: If you do have a client asking, “Can you also look at our blog content, subdomain, redirects, or something that’s outside of the scope of this current project,” you don’t have to say no. 

You can say, “Yes, but that’s another project that I’ll need to scope out.” Just make sure you say anything but “Sure, I can take a quick look.” Resist. 

Dig deeper: How to build lasting relationships with SEO clients


Step 3: Price it like you’re running a business

Arguably, this is the trickiest side of freelancing. It can be hard to put a price on your time and expertise — and even harder to defend your pricing while selling your services.

There are three pricing models you can try here: hourly, project-based, and retainer. Most start with hourly since that’s the easiest to understand, and yes, that is a bit of a trap.

Hourly pricing: Good for beginners, terrible for experts

Setting an hourly rate makes sense when you’re starting out and aren’t sure how much to charge. Take your day-job salary, work out what that pays per hour, and estimate what your benefits are worth to you. Add the two together, and boom! Hourly rate.

For example, say you got paid $100,000 at your full-time job. That’s about $48 per hour. And the average cost per hour for private industry benefits is about $13. That means if you want to make exactly what you were before, you’ll need to be paid at least $61 per hour.

In practice, SEO freelance rates range from $75 to $200 per hour, though entry-level freelancers might start closer to $50. Consider your experience and expertise, and price yourself carefully so you don’t get locked into a too-low rate.
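The arithmetic above is easy to sketch in code. A minimal example, using the figures from the paragraph above (2,080 working hours a year and roughly $13/hour in benefits are assumptions from that example, not universal numbers):

```python
# Back into a baseline freelance hourly rate from a salaried job.
# 2,080 hours/year (40 hrs x 52 weeks) and $13/hour in benefits
# mirror the example above; adjust both for your situation.

def baseline_hourly_rate(annual_salary, benefits_per_hour=13.0,
                         hours_per_year=2080):
    """Minimum hourly rate needed to match salary plus benefits."""
    return annual_salary / hours_per_year + benefits_per_hour

rate = baseline_hourly_rate(100_000)
print(f"${rate:.0f}/hour")  # about $61/hour, matching the example
```

From there, price upward for experience and specialization rather than treating this floor as your rate.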

Hourly pricing is great to start, but it falls short once you’re good at your job: you’re rewarded for working slower and penalized for getting faster.

Project-based pricing: The model for productized work 

Once you’ve productized your services, you can start using project-based pricing. If you’ve delivered the same audit 15 times, you know how much work it takes you — and you know how much it’s worth.

The client doesn’t care if something takes you 20 hours or 15. They care about getting a quality deliverable in a timely fashion.

But it can be hard to get out of that hourly mindset. Here’s how to price projects when you’re starting out with freelancing:

  • Estimate how long the work will take you (or go with your best guess if you’ve never done it).
  • Multiply that by 1.5 times to account for communication overhead, revisions, and unexpected complexity.
  • Track actual time spent (yes, even though you’re not charging by the hour).
  • Deliver the project.
  • Adjust pricing for the next client based on real data (and client results).
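The steps above reduce to simple arithmetic. A sketch (the 1.5x buffer comes from the list above; the $100/hour internal rate is a hypothetical placeholder):

```python
# Quote a flat project fee from an hours estimate, then check the
# effective hourly rate after delivery against time actually tracked.

BUFFER = 1.5  # covers communication overhead, revisions, complexity

def project_price(estimated_hours, internal_hourly_rate):
    """Flat fee: padded hours estimate times your internal rate."""
    return estimated_hours * BUFFER * internal_hourly_rate

def effective_rate(price, actual_hours):
    """What you really earned per hour -- feeds the next estimate."""
    return price / actual_hours

quote = project_price(estimated_hours=20, internal_hourly_rate=100)
print(quote)                                             # 3000.0
print(round(effective_rate(quote, actual_hours=26), 2))  # 115.38
```

Comparing the effective rate against your baseline after each project is the "adjust pricing based on real data" step in practice.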

After your first five projects, you’ll know your actual costs. Up until then, you’ll be making educated guesses, but that’s OK. Everyone starts by guessing. 

Tip: Remember, the thing you’re charging for here is your knowledge, not your time. What the client is paying for is the results you offer. Always tie your work to how it can help your client achieve their goals. No one can put a price tag on exceeded KPIs. 

Retainer pricing: Useful for recurring work, but dangerous without boundaries 

Retainer pricing makes sense when the client needs consistent monthly deliverables, such as technical reviews, advisory support, and optimization recommendations.

You just have to be careful here to avoid scope creep. “We’re paying you $5,000 a month” can quickly turn into “Can you help with this product launch, this email campaign, this competitive analysis?” Guard your time wisely.

Here’s how to structure your retainers so they work for you:

  • Define the exact monthly deliverable: Clearly outline the tasks you’ll be working on each month. For example, “one technical audit per month” or “three page reviews a month.” 
  • Set rollover limits: Explain what happens if tasks fall by the wayside or projects get put on pause. This might look like saying “unused hours expire after 60 days” or “a maximum rollover of one month’s unused hours.” 
  • Exclude ad hoc requests: Clearly note that additional projects require separate proposals. 

For example, say you have a client who pays $6,000 a month for “monthly technical SEO review and eight hours of advisory support.” 

  • Month 1: The client uses six hours. Those two unused hours roll into month two. 
  • Month 2: They use 10 hours (unused two hours plus standard eight hours). 
  • Month 3: The client asks for a content audit. That project is separate and has its own pricing. 
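The rollover terms in that example are easy to encode, which is also a good test of whether your retainer language is unambiguous. A sketch assuming the cap is one month's hours (an assumption, since the example doesn't state a cap):

```python
# Track retainer hours with a one-month rollover cap, per the
# example above (8 advisory hours/month; unused hours carry once).

MONTHLY_HOURS = 8
MAX_ROLLOVER = MONTHLY_HOURS  # at most one month's unused hours

def available(rollover):
    """Hours the client can draw on this month."""
    return MONTHLY_HOURS + min(rollover, MAX_ROLLOVER)

def close_month(rollover, hours_used):
    """Compute next month's rollover after this month's usage."""
    unused = max(available(rollover) - hours_used, 0)
    return min(unused, MAX_ROLLOVER)

r = close_month(0, hours_used=6)  # month 1: 6 of 8 used -> 2 roll over
print(available(r))               # month 2: 10 hours available
```

If you can't express your rollover policy this plainly, neither can your client, and that's where scope disputes start.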

The best path here for a new SEO freelancer? Start with project-based pricing for your core offerings. Add retainers only after you’ve delivered the same project multiple times and you know exactly what you’re committing to. 

Tip: Only offer retainers when you know you can firmly hold a client to a set scope of work. Be confident in what you’re selling and how long it takes to deliver, so you make the best use of your time. 

Dig deeper: 7 ways to increase SEO revenue without losing clients

Step 4: Build systems before you’re underwater

The key to keeping all of this consistent? Systems. 

As a freelancer, you are the project manager, account manager, and delivery owner. Systems are what keep work moving when no one’s checking in on you. 

Here’s what you need to create a solid system so nothing slips through the cracks: 

  • Client onboarding. 
  • Email (follow-ups and replies).
  • Billing.
  • Contracts.
  • Deliverable templates.
  • Offboarding.

Client onboarding: Get everyone up front

The biggest delay to any project? Waiting on access for tools, documentation, and basic questions. The right onboarding process means you can hit the ground running. 

Here’s what you should always ask for before work starts: 

  • Tool access: Google Search Console, Google Analytics 4, crawl tool permissions, CMS login.
  • Stakeholder contacts: Who approves deliverables, who answers technical questions, who handles billing.
  • Project context: Known issues, previous SEO work, business priorities, previous project timelines (migration, updates, product launches). 

You can get this without seven days of email tennis. Just send over an immediate request for this information, and don’t schedule any next steps until you have what you need. 

Template everything here. Each client gets the same questionnaire and contract structure. 

Contracts

You know what every freelancer loves? Getting paid. You know what you need to get paid? Getting it in writing.

Set your contract terms ahead of time so you don’t just hit a prospect with “uh” when they ask you how much and when. Here’s what you should have prepared:

  • Payment terms: Common options include 50% upfront and 50% on delivery for project work, or monthly invoicing for retainers and recurring work. Net-30 or Net-14 terms are also standard; they’re just fancy ways of saying you get paid 30 or 14 days after you bill. Choose a structure that protects your cash flow while remaining reasonable for your clients.
  • Deliverable format and timelines: Spell out exactly what the client will receive and when, so there are no surprises at delivery.
  • Communication expectations: Explain the meeting cadence, preferred channels, and response times to avoid surprises.
  • What’s not included in your scope: Just so everyone is completely clear on what work is being done and what isn’t.

And don’t feel married to the first contract term you define. Be flexible. That’s the joy of being a freelancer — you can always change things up when you need to. 

You can either Google Docs your way to success here, or you can look into investing in tools: 

  • Contract signature: PandaDoc or DocuSign.
  • Invoicing and payment tracking: Wave, FreshBooks, or Bonsai.

Note: Pick one of each and use it for every client. Don’t switch unless you have a reason. 

Deliverable templates

Deliverable templates save hours of formatting. They mean you don’t need to mentally run through a checklist of everything you need to review. You can just open a blank template of what you’ve done in the past and move forward.

Here are some good examples of templates to have on hand:

  • Audit spreadsheet with consistent columns: Include the issue, location, impact (high, medium, low), effort to fix (usually in hours), priority, and any additional notes.
  • Executive summary templates: This should just be how you break things down for the client in layman’s terms.
  • Delivery email template: This offers next steps and support window details.

The goal here is to keep things consistent across clients. You’re providing the same quality work every time, no matter how busy you are.

Communication

Clients don’t need daily check-ins. They need to know the project is moving forward and nothing important is blocked.

What that looks like depends on the client’s needs. It could be: 

  • Weekly async updates via email: Explain what was completed this week, what’s coming up next, and what’s blocked.
  • Biweekly or monthly calls: Explain the same things, but this time over the phone. You should also schedule a call if you’re doing a kickoff or delivering a project.
  • Monthly emails: This is better for hands-off clients you trust (and who trust you) to get things done.

Note: If a client is pushing for daily Slack access or unscheduled calls, review your scope and pricing. You can always update your scope of work if new needs arise. 

Offboarding

No one likes to see a client go, but how you handle parting is key to making a positive, lasting impression. Make sure to include: 

  • Final deliverable handoff: This should include the rest of your work and a video walkthrough if you didn’t have a chance for a call. 
  • Transition documentation: If you were working with another team to implement your recommendations, provide guidance on how to implement changes and include any technical context they’ll need to know. 
  • Post-project support window: Define a clear support period (e.g., “two weeks of email support for clarification questions about the deliverable”). After the window, additional support is a new engagement. 
  • Request feedback: Ask for a testimonial or LinkedIn recommendation while the work is fresh. Most freelancers wait too long. 

Make sure to document what you’ve learned about yourself, the client, and your process once things are done. Think about what went well, what went poorly, and what to charge your next client for similar services. 

Dig deeper: 12 tips for better SEO client meetings

Avoid these pitfalls

Most freelancers go back to full-time employment because they feel burnt out, underpaid, and overworked. 

Those who build a sustainable career treat freelancing like a business, not just a flexible job. Yes, drinking your mojito in Bali is fun — but you still need to answer client emails within 24 hours, even when you’re off the clock. 

The biggest pitfalls that almost all beginner SEO freelancers fall into are: 

  • Saying yes to misaligned projects: Beginner freelancers are usually worried about cash flow, but saying yes to a project that doesn’t fit is what gets you stuck in a feast-famine cycle where short-term cash flow decisions prevent you from building stable, repeatable work. 
  • Delivering different things for each project: You can’t optimize what you don’t understand. Keep your offering consistent so you know what works, what doesn’t, and what’s just a client quirk. 
  • Starting from scratch with each client: Every new client should feel easier. If onboarding Client No. 5 feels as chaotic as Client No. 1, you need a better system (or just any system). 
  • Pricing for payment and forgetting sustainability: Pricing too low to “get your first client” might get you started, but it’s not how you stay in freelancing. It’s better to work on two well-priced projects than five underpriced ones. Carefully judge your workload — and savings — so you can hold out for the right client. 

What you’re actually building as a successful SEO freelancer

Freelancing isn’t just “SEO with flexible hours.” It’s a service business where you define the offering, set the terms, and manage the business. 

If that sounds like more work than having a boss, you’re right. Freelancing means trading predictable employment for control over everything: scope, pricing, schedule. Some people thrive on that trade because they get to be their own ultimate manager. Others realize they’d rather someone else handle that for them. Both are valid choices. 

The key here is if you’re going freelance, treat it like the business it is:

  • Pick a specialization. 
  • Turn it into a repeat project.
  • Price it properly.
  • Build systems that scale.
  • Say no to everything that doesn’t fit.

That’s the framework. The rest is execution, iteration, and always improving the parts of the business that speak to you — be that SEO audits, content strategy, link building, or even client management — to build something sustainable. 

The Data Doppelgänger problem by AtData

27 February 2026 at 16:00

Somewhere inside your CRM is a customer who does not exist.

They open emails at impossible hours. They redeem promotions with machine-like precision. They browse product pages across three devices in under five minutes. They convert, unsubscribe, re-engage and transact again. On paper, they look highly active. In reality, they may be a composite of behaviors stitched together from AI assistants, shared accounts, recycled addresses, autofill tools and automated workflows.

This is the Data Doppelgänger Problem. And it is about to become one of the most expensive blind spots in modern marketing.

For years, identity resolution was framed as a hygiene issue. Clean the data. Remove duplicates. Suppress invalid records. That work still matters. But the ground has shifted. Today, the bigger risk is not dirty data. It is convincing data that is wrong.

AI agents are no longer theoretical. Consumers are using them to summarize emails, compare products, track prices, fill forms and in some cases complete purchases. Shared credentials remain common across households and small businesses. Browser privacy changes have pushed attribution models into probabilistic territory. Add subscription commerce, loyalty programs and cross-device behavior, and you begin to see the pattern.

One person can generate multiple digital identities. Multiple actors can generate activity that appears to belong to one person. What you see in your dashboards may not reflect a human with consistent intent, but a digital echo assembled from overlapping signals.

The result is not just noise. It’s distortion.

When high engagement lies

Most marketing systems reward engagement. Opens, clicks, transactions and recency are treated as proxies for value. But what if the engagement is partially automated?

Email clients increasingly prefetch content. AI tools summarize messages without requiring a human to scroll. Assistive shopping agents monitor price drops and trigger interactions on behalf of users. To your analytics layer, these actions can look identical to high-intent behavior.

Now layer in recycled or repurposed email addresses. A dormant account gets reassigned by a provider. A corporate alias forwards to multiple employees. A consumer rotates through alternate emails to capture new user discounts. On the surface, these look like legitimate records. Underneath, the identity is unstable.

You may be optimizing campaigns around engagement that doesn’t reflect loyalty. You may be suppressing records that are valuable but appear inactive because their activity is fragmented across identities. You may be feeding machine learning models with signals that only compound the errors.

This is where seasoned professionals feel the frustration. The dashboards are clean, segments are defined and the attribution model runs on schedule. Yet outcomes drift, conversion rates plateau and fraud creeps in through legitimate-looking channels. Acquisition costs rise without a clear explanation.

The problem is not effort. It is identity confidence.

Doppelgängers create operational risk

The Data Doppelgänger Problem is not limited to marketing efficiency. It crosses into risk, compliance and revenue protection.

Promotional abuse is often framed as external fraud. In reality, much of it exploits weak identity resolution. A single individual can appear as multiple new customers. Conversely, multiple individuals can appear as one trusted account. Loyalty points are pooled, discounts are stacked, and survey data becomes unreliable.

As AI agents become more capable, this risk becomes harder to detect. An automated assistant acting on behalf of a legitimate customer is not inherently fraudulent. But it can blur behavioral signals that historically differentiated genuine intent from scripted abuse.

Traditional rules-based systems look for anomalies. The next wave of risk will look normal.

If you cannot distinguish between a stable, persistent identity and a composite one, you cannot confidently calibrate friction. Add too much friction and you punish real customers. Add too little and you subsidize exploitation.

The only sustainable path is to move beyond static identifiers and into continuous identity validation. Not just confirming that an email address is deliverable, but understanding how it behaves over time, how it connects to other digital attributes, and how it fits within a broader activity network.

The collapse of the Golden Record

Many organizations still pursue a single source of truth. A golden record that reconciles identifiers into one master profile. The aspiration is understandable. But in a world of AI mediation and shared signals, the notion of a fixed record is increasingly unrealistic.

Identity is not a snapshot. It is a moving target.

The more relevant question is not whether you can unify data into one profile. It is whether you can quantify how confident you are that the activity associated with that profile represents a coherent individual.

That shift sounds subtle. It is not.

When identity is treated as binary, either matched or unmatched, you miss nuance. When identity is treated as a spectrum of confidence, you gain leverage. You can weight signals differently. You can suppress low-confidence interactions from modeling. You can prioritize outreach to high-confidence segments. You can apply graduated friction to transactions that sit in ambiguous territory.
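As an illustration of identity-as-a-spectrum, a single confidence score can drive both model weighting and graduated friction. The thresholds and tier names below are arbitrary examples, not AtData's methodology:

```python
# Map an identity-confidence score (0.0-1.0) to graduated treatment.
# Thresholds are illustrative; calibrate against your own loss data.

def friction_tier(confidence):
    """Choose checkout friction from identity confidence."""
    if confidence >= 0.8:
        return "none"          # coherent identity: stay frictionless
    if confidence >= 0.5:
        return "step_up"       # ambiguous: e.g., email/SMS verification
    return "manual_review"     # likely composite: hold and inspect

def model_weight(confidence, floor=0.5):
    """Down-weight low-confidence interactions instead of dropping them."""
    return confidence if confidence >= floor else 0.0

print(friction_tier(0.92), friction_tier(0.6), friction_tier(0.2))
```

The point of the sketch is the shape, not the numbers: ambiguous identities get proportionate treatment instead of a binary allow/deny.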

This is where data becomes a strategic asset rather than a reporting function.

From volume to validity

Marketing technology has long rewarded scale. Bigger lists, broader reach and more signals. But scale without validation creates false precision.

The Data Doppelgänger Problem forces a harder question. Would you rather have ten million records with unknown stability, or eight million records you understand deeply?

The brands that win over the next few years will not be those with the most data. They will be those with the most defensible data.

Defensible means continuously validated. Network-informed. Contextualized against real patterns of activity. Integrated across marketing, analytics, and risk workflows so that improvements in one area compound across the organization.

When identity confidence increases, targeting improves. When targeting improves, engagement quality strengthens. When engagement quality strengthens, attribution stabilizes. When attribution stabilizes, forecasting becomes more reliable. And when forecasting improves, budget allocation becomes less political and more performance-driven.

This compounding effect is measurable. It is also fragile. Feed unstable identities into the loop and the entire system drifts.

What seasoned professionals should be asking

If you are leading marketing, analytics or risk, the uncomfortable questions are no longer about data access. They are about data integrity at scale.

How many of your active profiles represent coherent individuals?

How often are identities revalidated against fresh activity?

Can you detect when one identity splits into several, or when several collapse into one?

Are your fraud controls calibrated to behavior, or to assumptions about behavior that may no longer hold?

These questions do not require panic. They require evolution.

This is not a crisis. It is a signal that the digital ecosystem has matured. Consumers are delegating more tasks to software. Devices are proliferating. Privacy changes are fragmenting identifiers. This is the environment we operate in.

The brands that adapt will treat identity not as a static field in a database, but as a living construct that must be observed and refined continuously, using advanced activity networks to anchor identity in its current reality.

Those that do will spend less on wasted acquisition. They will protect margins without alienating customers. They will trust their analytics because they understand the confidence behind the numbers.

And perhaps most importantly, they will know who they are actually engaging. Because somewhere in your CRM, there is a customer who does not exist.

The question is whether you can find them before they find your budget.

How to see AI search prompts inside Google Search Console

27 February 2026 at 16:00

We’re getting a lot of questions about prompt tracking. Many of our current and prospective clients are tracking their visibility using tools such as Profound, Athena, and Peec.

The million-dollar question that always comes up is: “Which prompts should I be tracking?” In an incredibly personalized and complex ecosystem, it’s extremely difficult to know what our buyers are even asking LLMs about our company.

There are no data sources I feel great about right now. This isn’t like traditional search, where Keyword Planner data was publicly provided. It’s unlikely that OpenAI or Google will ever fully open up this data for us to analyze. There have been some recent proposals by the UK CMA around Google and data transparency, but let’s all expect the bare minimum to be done here.

So LLM tracking is a complete black box. Are there any data sources that we can possibly use to see which prompts to track?

Maybe.

OpenAI data leaking into Search Console

Last November, Jason Packer published some extremely interesting reporting around this: a report analyzing how searches from ChatGPT were actually getting leaked into Search Console reports. An accidental test revealed quite a few queries in the Search Console data containing PII.

The story was eventually picked up by Ars Technica and confirmed by sources at OpenAI. OpenAI has since claimed to have fixed the specific problem and said that “only a small number of queries were leaked.”

However, this is confirmation that ChatGPT queries are available in some Search Console profiles. Obviously, there are huge implications for privacy, PII, etc., but that’s beyond the scope of this article. The point is, we know it’s not impossible for queries from LLM systems to show up in Search Console.

AI Mode data available in Search Console

We also know from the amazing reporting of Barry Schwartz that data from AI Mode will be available in Search Console. So more evidence that Search Console will have the capabilities to collect data points for how users are searching within an LLM.

From what we’ve analyzed so far, I believe this is where the data is likely coming from. When you look at the data after applying this filter, you can see steady rises in impressions over the last 3 months:

This lines up pretty well with Google’s aggressive rollout of AI Mode-based features during Fall 2025/Winter 2026.

How to mine for your prompt-like Search Console queries

So how could we possibly access this data from user prompts in Search Console? The best method is to look at longer query lengths. With a little bit of regex, we can filter our data down to queries that are 10+ words in length with the following process:

  1. Go into Search Console Performance > Search Queries
  2. Select Add Filter > Query
  3. Choose Custom Regex
  4. Enter in this regex: ^(?:\S+\s+){9,}\S+$

Here’s a screenshot of the regex you can enter.
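The same filter works offline too. Here's a sketch that applies the regex to a Search Console query export (the "Top queries" column name is an assumption about the standard CSV export; adjust to your file):

```python
# Apply the 10+ word filter locally to a Search Console query export.
# The regex is the one from step 4 above: nine "word + whitespace"
# groups followed by a final word, i.e., ten or more words total.

import csv
import re

PROMPT_LIKE = re.compile(r"^(?:\S+\s+){9,}\S+$")

def prompt_like_queries(csv_path, column="Top queries"):
    """Yield export rows whose query reads like a long prompt."""
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            if PROMPT_LIKE.match(row[column].strip()):
                yield row

# Quick sanity check of the regex itself:
samples = [
    "best crm software",
    "which sales enablement platforms are most cost-effective "
    "for enterprise pipeline analytics in France",
]
matches = [s for s in samples if PROMPT_LIKE.match(s)]
print(len(matches))  # 1 -- only the long, prompt-style query matches
```

Running it over a full export gives you the same prompt-like subset as the UI filter, but in a form you can feed straight into further analysis.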

I’ve done this for a few properties now, and the results are pretty astounding. When you start looking at the Search Console queries that are 10+ words in length, they very clearly read like prompts.

I can’t share screenshots of the data here, but here are some examples of the types of queries I’m seeing. I’ve changed the scenario for privacy reasons, but kept the relative breadth that the queries are looking for:

  • Map out a full day in Glacier National Park. I’d like to hike a scenic trail, see unique wildlife or natural features, grab a quick bite from a nearby lodge or food stand
  • What are the best email performance and deliverability platforms to help email marketing programs reduce spam placement, filter out low-quality or fake subscribers, and improve inbox placement rates
  • Which sales enablement intelligence platforms are most widely adopted and cost-effective for enterprise pipeline analytics and buyer engagement insights in France?
  • If you were a consultant, which of the following applications would you recommend for using advanced data visualization to help teams interpret complex operational or customer data

Now let me be clear: we don’t have direct evidence that these types of queries are directly from ChatGPT, AI Mode or any other AI platform. While we know it’s possible from the above case study, this could just be users using Google more like an LLM.

However, I’d argue that it’s still just as valuable since we want to analyze what people are typing into the LLMs. If it reads like conversation data, it’s an actual window into how your customers search with much longer query strings.

One of my favorite quotes from Will Critchlow is “we’re doing business, not science.” That’s even more true as we continue to hurtle toward a zero-click, low-attribution landscape. This data is available; you’ll need to decide whether to use it or not.

Using Claude for prompt analysis

For now, my favorite tool for data analysis has been Claude. I get the most reliable results, some really nice visualizations, and it can integrate into Claude Code if I ever need it.

After exporting the file, you can upload the list of “prompts” to Claude and have it start performing behavioral analysis of the data. That way it can spot themes + trends in the data that you can use for better prompt tracking.
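Before uploading, a quick local pass can surface the most common question framings on its own. This is a minimal sketch, with a hypothetical prompt list standing in for the real export:

```python
from collections import Counter

# Hypothetical exported "prompts" (long Search Console queries).
prompts = [
    "what are the best email deliverability platforms for reducing spam placement",
    "what are the most common ways users frame questions about inbox placement",
    "which sales enablement platforms are most widely adopted in france",
    "if you were a consultant which tool would you recommend for data visualization",
]

# Tally the first three words of each prompt to see how users frame questions.
framings = Counter(" ".join(p.lower().split()[:3]) for p in prompts)

for framing, count in framings.most_common():
    print(f"{count}x  {framing}")
```

Even this crude tally hints at the framing clusters (“what are the…”, “which…”, “if you were…”) that a fuller Claude analysis can then dig into.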

Once it has the data, it will perform a custom analysis and provide results. However, I think it’s even more valuable to ask specific questions about the data that you could use for prompt tracking. For example, things I asked it include:

  • What are customers asking about my brand?
  • What are the most common ways that users are prompting LLMs? How are they framing their questions?
  • What characteristics of our product do people care the most about?
  • Tell us more about our customers based on this data

After putting in these questions, you’ll get some interesting responses.

Once again, the actual answers to these questions were far more valuable than the sanitized examples I can share here. Claude was able to find some really great business insights into what customers were looking for.

Just by analyzing this data, I found some really valuable insights into how people may be using LLMs to ask questions about these websites.

Immediately some of the insights I found include:

  • A PR issue from 3+ years ago is being asked about constantly.
  • People are searching for country-based solutions for software more often than we anticipated.
  • Searches use one company as the gold-standard benchmark to compare other competitors against.
  • People are constantly looking for a cheaper alternative to one solution.

Asking Claude for prompt tracking suggestions

The final thing I pushed Claude to do, based on the data it found, was to make prompt tracking recommendations for us. I’ve never loved using LLMs to make direct prompt tracking recommendations with one-shot prompts. However, after uploading what we think are real user prompts to Claude, I feel much better about tapping into its recommendations.

After finishing the questions, I had Claude create prompts it thinks would make sense for us to track based on what it found in its research. The prompts it identified lined up well with what I saw in the data myself.

Now you can go ahead and determine which of these prompts are going to be best to utilize in your AI tracking system of choice.

Is this all a bunch of hullabaloo?

Maybe. I don’t think there’s a perfect system for deciding which prompts to track.

Another study by Rand Fishkin found that user prompts vary widely. When he asked 142 respondents to provide prompts they’d use for the same query, he found an average similarity of just 0.081. So I don’t think you’ll ever be able to tap into the exact prompts users are typing.

However, in my opinion, you have a much more well-informed list of prompts to track based on Search Console data. We’ve informed the prompts we want to track with an actual data source instead of simply “our best guess.”

At a minimum, you’re going to find individual opportunities for ways that users are prompting your site that you would have never imagined. The goal, however, is to find more scalable, common themes you can apply to your data tracking.

This article was originally published on the Nectiv blog [as How To Mine Google Search Console For Conversation Data (Regex Included)] and is republished with permission.

Google February 2026 Discover core update is now complete

27 February 2026 at 14:36

Google’s February 2026 Discover core update finished rolling out today. It began on Feb. 5 and completed 21 days later.

This was the first confirmed Google Search update this year and the first Discover-only update Google has ever announced. Core updates typically affect both Search and Discover, but this one impacted only Google Discover content.

U.S. and English. The update affects only English-language users in the U.S., Google said. It will expand to all countries and languages in the coming months.

What’s changed. Google said the Discover core update will improve the “experience in a few key ways,” including:

  • Showing users more locally relevant content from websites based in their country.
  • Reducing sensational content and clickbait.
  • Highlighting more in-depth, original, and timely content from sites with demonstrated expertise in a given area, based on Google’s understanding of a site’s content.

Because the update prioritizes locally relevant content, it may reduce traffic for non-U.S. websites that publish news for a U.S. audience. That impact may lessen or disappear as the update expands globally.

Google also made some tweaks to the Get on Discover help page, so review that page as well.

How Google Discover determines expertise

Google added that many sites demonstrate deep knowledge across a wide range of subjects, and its systems are built to identify expertise on a topic-by-topic basis. As a result, any site can appear in Discover, whether it covers multiple areas or focuses deeply on a single topic. Google shared an example:

  • “A local news site with a dedicated gardening section could have established expertise in gardening, even though it covers other topics. In contrast, a movie review site that wrote a single article about gardening would likely not.”

More details. Google said it will continue to “show content that’s personalized based on people’s creator and source preferences.”

  • During testing, Google found that “people find the Discover experience more useful and worthwhile with this update.”

Why we care. If you get traffic from Google Discover, you may have noticed changes. This should affect only your Discover traffic and apply only to U.S. English. There’s also been significant volatility in Google Search organic results, but Google hasn’t confirmed those reports. Google recommends reading its general guidance on core updates and the Get on Discover help page.

Before yesterdaySearch Engine Land

Google Nano Banana 2 promises smarter, faster image generation

26 February 2026 at 22:05

Google DeepMind is rolling out Nano Banana 2 (Gemini 3.1 Flash Image), its latest image generation model, combining Nano Banana Pro’s intelligence and production controls with Gemini Flash’s speed.

What’s new. Nano Banana 2 introduces:

  • Advanced world knowledge: Powered by Gemini’s real-time web grounding to render specific subjects more accurately and generate infographics or data visualizations.
  • Precision text rendering and translation: Cleaner, legible in-image text, including localization.
  • Stronger instruction adherence: Better handling of complex, multi-layered prompts.
  • Subject consistency: Maintains up to five characters and 14 objects in a single workflow.
  • Production-ready outputs: Supports aspect ratios and resolutions from 512px to 4K.
  • Enhanced visual fidelity: Sharper detail, richer textures, and more dynamic lighting.

The rollout. Nano Banana 2 is launching across Google’s ecosystem, including Google Ads, Gemini app, Search AI Mode and Lens, and more.

Why we care. Nano Banana 2 helps you produce high-quality, production-ready images faster and at scale, cutting creative time and cost. With stronger text rendering, better subject consistency, 4K-ready outputs, and direct integration into Google Ads and Gemini, you can generate, launch, test, and iterate campaign assets in minutes instead of days.

Bottom line. With Nano Banana 2, you get speed, reasoning, and production-ready visuals in one default model.

Google’s announcement. Nano Banana 2: Combining Pro capabilities with lightning-fast speed

ChatGPT ads expand as more brands and trigger patterns emerge

26 February 2026 at 21:28

ChatGPT’s emerging ad ecosystem is gaining momentum with more brands appearing, clearer trigger patterns, and evolving ad placements, according to AI ad intelligence firm Adthena.

What’s happening. After identifying the first advertisers inside ChatGPT last week, Adthena now reports a clear ramp-up in advertiser participation and ad delivery behavior.

Advertisers spotted so far:

  • Best Buy
  • AT&T
  • Pottery Barn
  • Enterprise
  • Qualcomm
  • Expedia

How ads are triggering. Based on a sample of 1,500+ prompts analyzed over the past week:

  • Most ads appear on the first prompt.
  • Some only trigger on the third or fourth repetition of the same query.
  • High-intent modifiers like “best” and “new” appear to carry significant weight.

Example prompts include:

  • “I am going to buy a new phone. What is the best phone?”
  • “I need a new phone.”
  • “I need to buy a new desk, what’s best?”

Between the lines. Keyword triggers appear relatively simple, focused on strong commercial intent rather than nuanced emotional language. In one example, Best Buy secured two ad placements in a single response for iPhone-related queries, signaling early experimentation with positioning and share of voice.

Why we care. As ChatGPT advertising scales, understanding trigger behavior — even at a basic keyword level — will be critical if you’re testing this new platform.

Spotted. Adthena CMO Ashley Fletcher shared the results of the competing ChatGPT ads, posting screenshots on LinkedIn.

How to use Google Ads Performance Planner and Reach Planner

26 February 2026 at 21:00

If you head to Tools → Planning in Google Ads, chances are you’re clicking into Keyword Planner. Most advertisers stop there. 

But two other planners sit in the same menu — often overlooked — that can directly influence how you forecast budgets, model performance shifts, and scale campaigns. Performance Planner and Reach Planner offer deeper insight into how spend changes affect your key metrics across channels.

Here’s a practical breakdown of how each tool works and when to use them to forecast growth more accurately.

Why Performance Planner matters for scaling search and display

Performance Planner helps you model how metrics could change if you adjust ad spend across Search or Display.

Instead of reacting to performance, you can forecast how budget shifts may influence conversions, CPA, and overall spend before you make changes.

Performance Planner can be especially useful if you’re looking to forecast data or scale an account. It provides projections for existing campaigns based on prospective budget changes.

These forecasts are typically refreshed daily and are based on the last 7-10 days of data.

A more recent addition to the Performance Planner home screen is Suggested plans, where Google indicates the potential impact of raising specific budgets or bids without requiring you to build a full plan.

Google Ads performance planner

How to create a new performance plan

To create a new plan, click Create new plan at the bottom of the page.

Google Ads performance planner - Create new plan

From there, a pop-up screen allows you to set the timeframe, dates, and channel. If multiple channels are represented in your account, you’ll see more than one option. 

You can also select key metrics, including specific conversion goals, as well as a CPA, conversion, or ad spend target. Finally, choose the campaigns you want included in the plan.

Google Ads performance plan metrics

Only eligible campaigns will appear. Google may propose a $0 budget for certain campaigns if it determines they aren’t efficient enough to justify continued spend.

Before building a plan, it’s important to understand which campaigns qualify.

Campaign eligibility and limitations to know

Eligibility criteria vary based on the channel a campaign runs on. Here are some of the requirements for Search and Shopping campaigns.

Search campaigns

  • Bid strategy: Uses manual cost-per-click (CPC), enhanced CPC, max clicks, max conversions, max conversion value, target return on ad spend (ROAS), target cost-per-action (CPA), or target impression share bidding strategies, and has not changed bid strategies in the last 7 days.
  • Run time: Have been running for at least 72 hours.
  • Recent clicks: Have received at least 3 clicks in the last 7 days.
  • Conversion minimum: Have received at least 3 conversions in the last 7 days.
  • Budget: Have a Search lost IS (budget) of less than 5% over the last 10 days (target impression share campaigns only).

Shopping campaigns (Standard)

  • Bid strategy: Campaign isn’t part of a portfolio bid strategy.
  • Run time: Have been active each day with a minimum spend of $10 USD or more in the last 10 days.
  • Impression minimum: Have received at least 100 impressions in the last 7 days.
  • Conversion minimum: Have received at least 10 conversions and/or conversion values in the last 10 days.
  • Budget: Campaign doesn’t have a status of “Limited by Budget.” Target ROAS standard shopping campaigns (only) have a Search lost IS (budget) of less than 5% over the last 10 days. A campaign with a shared budget is eligible only if all campaigns in the shared budget use a single Merchant Center account.

This is an example of what a Performance Planner plan looks like.

Performance Planner plan example

Performance Planner is especially effective for advertisers with existing campaigns who want KPI projections. If you’d like to learn more, visit Google’s support documentation.

Why Reach Planner is different from Performance Planner

As a complement to Performance Planner, Reach Planner is designed to estimate reach, views, and conversions across video campaigns. 

It’s updated weekly based on “Google’s Unique Reach Methodology.” This means Google uses modeled third-party data to estimate the potential reach and scale of video campaigns.

Reach Planner is useful for account managers forecasting how a video campaign may perform at scale. It projects three primary metrics: unique reach, views, and conversions. 

These forecasts can help determine how to allocate YouTube ad spend across campaigns. Reach Planner also provides detailed reach, demographic, and device insights when planning new video initiatives.


How to build a Reach Planner forecast

As with the other planners, you’ll find Reach Planner under Tools → Planning. If you’re unable to access it, you may need to contact your Google account manager.

Reach Planner forecast

When creating a new campaign plan, you’ll be asked to select your location, currency, and whether you want to build a plan for YouTube or YouTube and Linear TV. 

Next, select your dates, demographics, sublocations, audiences, devices, and frequency caps.

Reach Planner - Media plan

You can choose In-Market, Affinity, Remarketing, Custom, and Lookalike segments while building your plan.

The next step is selecting the type of YouTube campaign you want to include.

Reach Planner - Ad type based on goals

A newer Reach Planner feature provides forecasts for a mix of video campaign types, called advanced plans.

New reach planner feature

This is an example of what a completed plan may look like after selections are made:

Reach planner - completed
Reach planner metrics - completed

Reach Planner is extremely useful and often underutilized when planning current or future video ad spend.

If you’re interested in learning more, you can complete the Reach Planner learning modules on Skillshop.

When to use each planner in your workflow

The Performance Planner and Reach Planner are powerful, often underutilized tools in Google Ads for account managers managing budgets and scaling performance.

Performance Planner forecasts the impact of budget changes across Search and Display, while Reach Planner provides audience and performance projections for YouTube video campaigns.

Used together, they help advertisers move beyond basic keyword planning and make more data-driven decisions about budget allocation and growth.

How to use AI response patterns to build better content

26 February 2026 at 20:00

The last year has had many of us trying to understand how to report on AI visibility and understand what it takes to be seen and cited by AI.

But Rand Fishkin’s latest study on AI response variability has emphasized that LLM outputs aren’t as stable and predictable as search rankings, making this KPI an inconsistent piece of the puzzle.

The study found there’s less than a 1 in 100 chance that ChatGPT or Google AI will return the same list of brands across two responses. They analyzed thousands of prompts across multiple LLMs to highlight just how varied they are.

This has left some of the SEO community questioning the value of rank tracking at scale. But rank tracking is far from useless. It’s just misapplied.

AI response tracking is an unstable performance KPI in its current state, but it becomes extremely powerful when used as an analysis tool to inform content strategy.

Let’s take a look at why you should still be investing in prompt tracking and how it can be used to inform your content strategy.

Why AI visibility tracking is unstable (for now)

LLMs aren’t deterministic ranking engines. They’re probabilistic language models that can gather and synthesize information from their own training data or live searches. These models use context windows and understanding of intent to serve different answers at any moment.

We’ve seen that responses change based on the prompt, and the same question can be written in many different ways. That opens the door for your CMO to question why you’re not showing up for a specific prompt when they just saw your brand mentioned or cited.

Tracking visibility remains an area of uncertainty until there’s greater clarity on user prompting. But it’s still valuable.

If prompt response tracking isn’t a stable KPI, then what is it? It’s pattern analysis, something SEOs are very familiar with.

Instead of only focusing on whether or not you are cited or listed, you should be trying to understand:

  • How is the prompt response structured?
  • What concepts repeatedly appear?
  • What key phrases or terms are showing up?
  • What level of nuance is typically included?

This requires a mental shift.

Dig deeper: 7 hard truths about measuring AI visibility and GEO performance

Traditional SEO vs. AI pattern analysis

In traditional SEO, we reverse engineer what’s already ranking. With AI search, we can apply the same thinking by reverse engineering the patterns we see in results. 

Traditional SEO        | AI pattern analysis
-----------------------|--------------------------------
Measures rankings      | Understanding concept synthesis
Content gap analysis   | Topic associations
Fixed results (SERPs)  | Dynamic responses
Determined signals     | Probability-based responses

Analyzing prompt response patterns can help us understand how models synthesize concepts, and not just from the technical level, but at the content level. 

To define a pattern, you’re not looking for exact response consistency. You’re understanding the structure, themes, and recurring topics. 

Each LLM model formats its outputs differently, but patterns can still emerge in the structures, despite differences in retrieval methods and how each one functions.

I define a pattern by:

  • It appears in 75% or more of outputs.
  • It appears in at least two different AI models (like GPT vs. Gemini).
  • It shows similarities across multiple iterations of the same prompt.

The 75% goal felt consistent enough for my sample sizes to highlight a strong pattern versus just randomness. How you define this is truly up to you. There’s no statistical significance in this number.

You can adjust this based on your content and space, but for me, this has been the best way to spot consistency over noise. 

So, say the theme of “pricing transparency” appears in 9 out of 12 responses and across two AI models, that’s not randomness. That’s semantic relevance, and that’s insight. 
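That definition is easy to encode as a quick check. A minimal sketch, where the 75% threshold and two-model minimum are the working heuristics described above, not statistical tests:

```python
def is_pattern(appearances: int, total_outputs: int, models_seen: int,
               threshold: float = 0.75, min_models: int = 2) -> bool:
    """Flag a theme as a pattern if it appears in at least `threshold`
    of outputs and in at least `min_models` different AI models."""
    return (appearances / total_outputs) >= threshold and models_seen >= min_models

# "Pricing transparency" in 9 of 12 responses across 2 models: a pattern.
print(is_pattern(9, 12, 2))   # 9/12 = 0.75, two models
print(is_pattern(6, 12, 2))   # only 50% of outputs
```

Lowering `threshold` to 0.6, as suggested later for smaller teams, is a one-argument change.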

The framework

To test this out for yourself, you need a framework that breaks down what you’re looking for. 

You can break it out into three types of patterns:

  • Structural patterns.
  • Conceptual patterns.
  • Entity patterns.

Structural patterns

This is where you focus on how the response is organized. You’re looking for:

  • Header/section frequency.
  • List formatting consistency.
  • Order or steps.
  • Pro/con framing. 
  • Comparison tables.
  • Decision frameworks.

These signals can help show how models organize topics. 

For example, if the outputs for your prompt show:

  • Definition > Criteria > Tools > Implementation.

That’s a structural pattern. You can leverage this to understand what might be helpful to your user, but AI isn’t always right. This is just another tool to identify patterns and decide how it applies to your content.

Conceptual patterns

These will vary based on your topic focus, but think about the concepts you are targeting. These can be harder to plan for and sometimes take a bit of analysis to start seeing the patterns. 

For me, I’m focused on “Best domain registrars” as an example, and I’m looking for:

  • Pricing transparency (renewal and purchase).
  • Customer service mentions.
  • Addon inclusions (WHOIS privacy, free emails, free anything).
  • Security features.
  • Bundling options.
  • Transfers.

So if I start seeing that renewal prices are commonly discussed across models and variations of this prompt, that signals to me that I need to pay attention to how I frame and discuss it in my articles and product pages. 

These conceptual patterns help you understand what these models associate with decision-making. 

Entity patterns

This is where you can view the tools, brands, and other mentions that appear in responses, regardless of their order. 

This might look like:

  • Brand mentions.
  • Tool mentions.
  • Feature to brand association.
  • Category positioning.
  • Cited sources.

In practice, you’d pay attention to how certain features appear with specific brands, or which sites are commonly cited. This helps you evaluate your positioning and identify opportunities with affiliate partners or third-party sites, including which sites you work with and how your brand is positioned on them.

Dig deeper: LLM consistency and recommendation share: The new SEO KPI

Building your system

You don’t have to invest in prompt-tracking tools to do this, though they make it easier. I handle it manually. It’s not perfect, but it works.

If you can’t involve multiple team members, adapt the structure to fit your resources. You may need to track over a longer period or lower your pattern threshold. Instead of 75% consistency, you might set it at 60%.

Step 1: Select and cluster your prompts

Identify three priority topics you want to track. For each of those topics, come up with 3-5 versions of prompts that would align with that topic.

For example, one of my priority topics is finding a domain registrar, so this cluster for me includes:

  • How do I register a domain name?
  • How can I get a domain name?
  • Where can I buy a domain?

Step 2: Set up your tracking sheet

You’ll need a place to track the responses, like an old-fashioned spreadsheet with the following columns:

Prompt | LLM | Web Search? Y/N | Date | Response | Sources (If Applicable) | Is My Brand Mentioned?

In the LLM column, note the platform and model to help control for when new versions are released.

This is just to start gathering your data. When you know what patterns to look for, add those to the sheet. Consider using Claude or ChatGPT to help with the analysis, so you don’t have to do everything manually.
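If you’d rather generate the sheet than build it by hand, here’s a minimal sketch using Python’s csv module; the row values are hypothetical:

```python
import csv
import io

# Columns from the tracking sheet described above.
COLUMNS = ["Prompt", "LLM", "Web Search? Y/N", "Date", "Response",
           "Sources (If Applicable)", "Is My Brand Mentioned?"]

buf = io.StringIO()  # swap in open("tracking.csv", "w", newline="") for a real file
writer = csv.DictWriter(buf, fieldnames=COLUMNS)
writer.writeheader()
writer.writerow({
    "Prompt": "Where can I buy a domain?",
    "LLM": "ChatGPT (hypothetical model label)",
    "Web Search? Y/N": "Y",
    "Date": "2026-02-26",
    "Response": "Lists several registrars...",
    "Sources (If Applicable)": "example.com",
    "Is My Brand Mentioned?": "N",
})
print(buf.getvalue())
```

Keeping the column names stable makes it easier to append pattern columns later without breaking earlier exports.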

Step 3: Create a tracking plan and start tracking

To do this effectively, you need to define:

  • Which models you want to track.
  • Whether search mode is on or off, or left to the model to decide.
  • How many times you want to run each prompt on each model.
  • What frequency you want to track.

It’s also helpful to involve other team members, if possible, and use private modes to minimize context influence.

Once a week, a handful of my team members run each prompt through ChatGPT, AI Overviews, AI Mode, and Perplexity. Each person tests every prompt across each model, giving me 3-5 responses per prompt, per model, per week.

Step 4: Analyze

Once you’ve gathered 20–30 responses per prompt, start analyzing. You can use the tool of your choice to streamline this process.

From there, identify recurring patterns and map them to relevant pages on your site. Where can you address these themes? Are you answering the right questions, and does your content reflect the patterns you’ve uncovered?

This is ongoing work. Track consistently and review patterns quarterly to identify shifts. Over time, this becomes your optimization framework.

Dig deeper: How to create answer-first content that AI models actually cite

Where AI pattern analysis can mislead you

AI is based on probability, and it won’t always be right. This isn’t the only way of optimizing for AI, but it can be part of your playbook.

You still run the risk of bias in the training data, inconsistency in whether search or training data was used, and variations in the new “models” launched across the different LLMs.

You shouldn’t blindly align with the AI outputs. Use your best judgment and understanding of your target audience to decide whether it’s the context you want to use for your optimization.

How to connect this to performance

Now this is the tricky part. We’ve learned just how random AI responses can be, but there are still a few signals you can measure to see how this impacts your content.

  • “Traditional” metrics: Are you seeing more clicks? Better positions in GSC or keyword tracking tools? What about conversions?
  • AI traffic: If you’re able to pull your AI traffic data from Adobe, GA4, or any other analytics tools, you can track to see if there’s any movement on the pages you update.
  • AI tracking tools: And while yes, there’s a lot of variability in this as a KPI, if you’re using AI visibility tools, they will give you an indication of whether your methods are working. You can leverage the same manual tracking outlined here to see if you start noticing your brand emerge as a pattern.

Start studying AI outputs

There are still many unknowns with LLMs, and it feels like they’re changing every day. 

But one thing remains consistent: these tools provide answers. If there’s any level of understanding you can get on those answers, you can try to use it. 

The patterns in the responses can reveal how topics are understood and how brands are discussed, and give you an idea of how to adapt your content strategy.

4 strategic paid search pivots to survive Google’s AI Overviews

26 February 2026 at 19:00

Google’s AI Overviews now appear across search results with varying frequency. However, in certain categories, they dominate entirely. According to Adthena:

  • Finance queries see AI Overviews on 79% of longer searches with five or more words. 
  • Retail shows 84% visibility for comparison and product discovery queries in the 9-10 word range. 
  • Healthcare also triggers high AI Overview penetration even when users are searching short medical questions of 1-3 words.

You know organic traffic faces headwinds. What you might underestimate is how severe the downstream impact on paid search can be. Here’s what that looks like in practice.

AI Overviews’ impact on paid search

AI Overviews are systematically changing paid search, affecting everything from click volume to auction dynamics and conversion behavior. They are accelerating structural trends that are already reshaping search, including SERP saturation, automated bidding, Performance Max adoption, and broad match expansion. 

What makes AI Overviews significant is the speed of the rollout. In many verticals, Google has compressed what would normally have been a multi-year transition into mere months. Understanding the impact on your own paid search efforts requires examining how AI answers have reshaped each component of your campaign performance.

AI Overviews drive lower response rates

So, how much have response rates been impacted by AI Overviews? Recent data from Seer Interactive reveals the scale of the decline. Paid CTR on queries featuring AI Overviews plummeted by 68%, dropping from 19.7% to 6.34% between June 2024 and September 2025.

At the same time, we saw organic CTR fall 61% on the same queries, but the steeper paid decline suggests AI Overviews reshape where paid ads appear and who clicks them, not simply their overall presence.

The trend accelerated sharply in July 2025 when paid CTR collapsed from approximately 11% to 3% in a single month. One month. This happened as Google expanded AI Overviews more aggressively into commercial and navigational queries, demonstrating AI Overviews’ direct impact on paid search response rates.

What we’re finding is that these declines are the most severe for non-branded informational queries. But it’s not all bad news. Branded search and high-intent transactional queries are showing greater resilience, with many advertisers seeing minimal impact on their core conversion-driving terms.

AI Overviews contribute to higher CPCs through inventory compression

We’re also finding a direct correlation between AI Overviews and the cost of paid search campaigns. That’s because the response rate decline is directly driving cost-per-click (CPC) inflation through supply and demand mechanics.

Google Search spending grew 9% year-over-year in Q1 2025, but click growth was only 4%. That 5% gap represents more dollars chasing fewer clicks across many industries. 

AI Overviews amplify this CPC inflation through several mechanisms. Some of that has to do with ad positioning. Research on ad positioning shows that ads that appear above an AI Overview still perform reasonably well. But the ads below are seeing a dramatic reduction in impression share and CTR. 

At the same time, double-serving policies are concentrating impression share among larger advertisers, which is forcing smaller ones to bid more aggressively. Automated bidding systems optimize toward conversion predictions rather than cost efficiency, which means campaigns are paying premium CPCs as the click inventory shrinks.

AI Overviews collapse the consideration phase

We’re also seeing a dip in the consideration phase of the buyer’s journey. Customer journeys that used to take days can now be compressed into minutes, as AI Overviews handle the research and comparison activities that traditionally occurred across multiple search sessions.

For example, think back to how in, say, 2023 a search for [best project management software for remote teams] would have triggered a multi-day sequence for users who would first, perhaps, click through to organic results, then read some comparison articles, then perhaps visit some vendor websites, and, finally, after maybe 7-14 days, they might finally convert. 

Today, when you search for [best project management software for remote teams], you could convert in a single session. An AI Overview can give users everything they need at once: a comparison table with features, pricing, and use cases, then refined recommendations for two or three options. People could decide in hours instead of weeks.

This compression reshapes campaign performance in three ways: 

  • Smaller retargeting pools: Retargeting pools shrink dramatically because fewer clicks during research means fewer users entering remarketing audiences. Google has lowered audience minimums from 1,000 to 100 users, a change meant to help small business campaigns, but a campaign that historically would have built a 10,000-user pool from informational traffic might now capture only 3,000 users.
  • Less brand awareness: Brand awareness suffers when users never visit your site during research, entering the purchase decision having consumed AI-generated comparisons rather than experiencing your messaging directly.
  • AI Overviews mentions are a must: AI citation creates a winner-take-all dynamic. Being mentioned in AI Overviews becomes a primary determinant of visibility. Brands that appear in the AI answer capture disproportionate traffic, while those excluded lose ground entirely.

AI Overviews create a quality-over-quantity trade-off

The journey compression caused by AI Overviews is producing a counterintuitive economic outcome. As click volume declines, conversion rates improve.

A benchmark analysis of 16,446 campaigns confirms the pattern. While overall click volume declined across nearly all query types in 2025, 65% of industries actually saw improved conversion rates.

For many of those industries, the jump was substantial. For example, education and instruction saw conversion rates jump 43.87% year-over-year, while sports and recreation climbed 42.43%. 

So why is this happening?

The improved conversion rates reflect AI Overviews pre-qualifying users by answering their basic questions before they click ads. This filters out users who are simply seeking general information with no intention to convert, leaving only high-intent prospects.

These improved conversion rates could partially offset CPC inflation in many scenarios. For example, say a business software campaign generates 1,000 clicks at a $2.00 CPC with a 5% conversion rate, resulting in 50 conversions at a $40 CPA.

Then, suppose Google rolls out AI Overviews for its keywords, compressing the customer journey. The same campaign might generate fewer clicks, say 700, at a $2.90 CPC and a higher 7% conversion rate, producing 49 conversions at a $41.43 CPA. The effective cost increase is only 3.6% despite 45% CPC inflation and a 30% volume decline.
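The arithmetic in that scenario can be checked with a few lines of Python. The figures are the hypothetical ones from the example above, not benchmark data:

```python
# CPA = total spend / conversions; with a flat conversion rate this
# reduces to CPC / conversion rate.
def cpa(clicks: int, cpc: float, conv_rate: float) -> float:
    conversions = clicks * conv_rate
    return (clicks * cpc) / conversions

before = cpa(1000, 2.00, 0.05)            # $40.00 CPA
after = cpa(700, 2.90, 0.07)              # ~$41.43 CPA
cpa_increase = (after - before) / before  # ~0.036, i.e. ~3.6%
```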

4 strategic pivots for the AI search era

Paid search still offers opportunities for advertisers who adapt quickly. Let’s look at four strategies you can incorporate into your own campaigns that align with the new realities of AI-mediated search.

1. Monitor informational intent performance and optimize accordingly

Since AI Overviews are fundamentally changing the economics of informational queries, they require extra scrutiny from you. Implement systematic monitoring rather than blanket exclusions of informational keywords to identify which keywords still deliver value and which have become budget drains.

Begin by understanding which informational keywords still hold value. Informational keywords like “what is,” “how to,” and “guide to” are being cannibalized by AI Overviews at substantial rates. In finance, AI Overviews appear on 79% of longer queries, while in retail they show up on 84% of comparison searches.

However, transactional keywords like “buy,” “best,” “compare,” and “near me” maintain higher CTRs because AI typically doesn’t complete transactions. The user needs to click away from AI Overviews to complete their transaction.

We’re still seeing 69% of transactional searches in AI Mode result in clicks to websites. Branded search remains largely intact, with AI Overviews primarily affecting non-branded informational queries.

To identify which informational keywords still perform, follow these steps:

  • Start by pulling 90 days of Google Ads query data. 
  • Next, you’ll want to flag queries that contain informational trigger words. 
  • Then, cross-reference that data with Google Search Console, since GSC now tags these in performance reports, to identify which queries trigger AI Overviews. 
  • Finally, you can calculate CTR and conversion rate for informational versus transactional queries to establish your baselines.

For the informational queries that show less than 1% CTR and less than 50% of your average conversion rate, you have three options: 

  • Test whether you can improve performance by focusing on creative optimization for unique offers rather than information. 
  • Reduce your bids on those queries to maintain presence at a lower cost while continuing to monitor for changes.
  • Shift your budget toward transactional and navigational keywords that are performing better, while maintaining minimal informational presence to bolster brand visibility.
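As a sketch, the flagging and triage steps above might look like this in Python. The column names, trigger words, and sample rows are illustrative assumptions, not a real export schema:

```python
# Triage informational queries from a (hypothetical) search terms export.
INFO_TRIGGERS = ("what is", "how to", "guide to")

def triage(rows, avg_conv_rate):
    """Tag underperforming informational queries for bid reductions."""
    results = []
    for r in rows:
        if not any(t in r["query"] for t in INFO_TRIGGERS):
            continue  # transactional/navigational: out of scope here
        ctr = r["clicks"] / r["impressions"]
        conv_rate = r["conversions"] / r["clicks"] if r["clicks"] else 0.0
        # Thresholds from the text: <1% CTR and <50% of the account average.
        weak = ctr < 0.01 and conv_rate < 0.5 * avg_conv_rate
        results.append((r["query"], "reduce bids" if weak else "keep"))
    return results

rows = [
    {"query": "what is crm software", "impressions": 5000, "clicks": 30, "conversions": 0},
    {"query": "how to migrate a crm", "impressions": 2000, "clicks": 80, "conversions": 4},
    {"query": "buy crm software", "impressions": 1000, "clicks": 90, "conversions": 6},
]
print(triage(rows, avg_conv_rate=0.05))
# [('what is crm software', 'reduce bids'), ('how to migrate a crm', 'keep')]
```

Transactional queries like "buy crm software" are skipped entirely; the triage only applies to informational traffic.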

Note: An important exception applies for brands that are consistently cited in AI Overviews. Since cited brands are seeing a 91% paid CTR lift, these informational keywords could become strategic assets.

If your brand appears in AI Overviews for informational queries like “best accounting software for freelancers,” it may warrant maintaining or increasing bids on those terms. You’ll also want to scrutinize uncited queries more aggressively to see whether you’re missing opportunities.

2. Prioritize feed quality

Yes, generative AI can summarize and compare, but it can’t invent price, inventory, or availability from thin air. This creates a structural advantage if you have robust product feeds in Google Shopping, Hotel Ads, and local inventory.

Google’s AI Mode shopping experience, powered by the Shopping Graph with 50 billion product listings refreshed hourly, relies entirely on structured product data from Merchant Center feeds. When users search, for example, for “breathable bamboo crib sheets under $40,” the AI can only surface products whose feeds include that level of attribute specificity. 

Shopping ads now appear directly within AI Overviews for queries with commercial intent, powered by existing Shopping and Performance Max campaigns.

Feed optimization requires four priorities: 

  • Attribute enrichment must include contextual details like “waterproof for rainy commutes” or “red couch for small apartment” that match natural language queries. 
  • Real-time accuracy matters as Google updates listings hourly and outdated data filters products out of AI Mode entirely. 
  • Structured data completeness determines visibility. Google’s AI prioritizes products with rich, complete attribute data over listings with minimal information. 
  • Rich media assets have become table stakes. Google’s AI prioritizes listings with five or more product images and video content, with virtual try-on features integrated across Search, Shopping, and Images, driving visual discovery.
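As a concrete illustration of attribute enrichment and completeness checking, a feed entry might be modeled like this. The attribute names mirror common Merchant Center fields, but the values and the required-field set are assumptions for the sketch:

```python
# Hypothetical enriched feed entry; attribute names mirror common
# Merchant Center fields, values are invented for illustration.
enriched_item = {
    "id": "SKU-1042",
    "title": "Breathable Bamboo Crib Sheets, 2-Pack",
    "description": "Soft, breathable bamboo crib sheets for standard crib mattresses.",
    "price": "34.99 USD",
    "availability": "in_stock",
    "image_link": "https://example.com/img/sheets-1.jpg",
    "product_highlight": ["Breathable bamboo fabric", "Fits standard crib mattresses"],
}

# Completeness check: entries missing core attributes risk being
# filtered out of AI-powered surfaces entirely.
REQUIRED = {"id", "title", "description", "price", "availability", "image_link"}
missing = sorted(REQUIRED - enriched_item.keys())
print(missing)  # [] means the entry passes this basic check
```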

3. Craft creative that differentiates

Since users have already learned about the features and benefits they were querying in AI Overviews before clicking, your ad must answer why they should choose you, and why now.

Lead with unique value propositions instead of generic benefits. For example:

  • “Project Management Software for Teams” is generic and would convert less often than a specific offering like “14-Day Free Trial + Free Migration from Asana/Monday.”
  • An overly-general value prop like “Tax Preparation Services” would be expected to underperform something much more specific and unique like “Same-Day CPA Review | $50 Off Filing This Week.”

You’ll also want to leverage ad extensions aggressively. Research shows that ads can appear above or below AI Overviews depending on query type and industry. When an AI Overview pushes everything down the page, extensions are your way to stay visible.

Ads that use all available sitelinks, callouts, and structured snippets can occupy 2-3 times the SERP real estate of basic ads. Taking up that extra space is critical as ads now appear within AI Overviews themselves for commercial intent queries.

You can use responsive search ads (RSAs) to test value proposition hypotheses at scale. Start by loading them with diverse headlines that test:

  • Urgency (e.g., “Limited Availability”).
  • Risk reversal (e.g., “No Credit Card Required”).
  • Social proof (e.g., “4.9 Stars, 5,000+ Reviews”).
  • Differentiation (e.g., “Only Platform with Native Zapier Integration”).

Then let Google’s machine learning identify which messages resonate with high-intent users who’ve already completed their research.
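A simple way to keep those hypotheses organized for later analysis is to group headline variants by the idea they test. The copy below is hypothetical; the 15-headline ceiling is the standard RSA limit:

```python
# Group hypothetical RSA headline variants by the hypothesis they test.
headline_tests = {
    "urgency": ["Limited Availability", "Offer Ends Friday"],
    "risk_reversal": ["No Credit Card Required", "Cancel Anytime"],
    "social_proof": ["4.9 Stars, 5,000+ Reviews"],
    "differentiation": ["Only Platform with Native Zapier Integration"],
}

# RSAs accept up to 15 headlines, so verify the mix fits before upload.
all_headlines = [h for group in headline_tests.values() for h in group]
print(len(all_headlines), len(all_headlines) <= 15)
```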

If your brand is cited in AI Overviews for specific use cases, reference those directly. For example, if AI Overviews consistently recommend your accounting software for “freelancers,” include “Built for Freelancers” in headlines to align with the recommendation users just consumed.

4. Embrace audience data

These days, it’s all about the data. As keyword-based targeting becomes less reliable in an AI-dominated search environment, first-party audience data is increasingly your sustainable competitive advantage. When AI answers queries without regard to keyword precision, your existing customer relationships represent what AI can’t disintermediate.

What we mean is that you know your audience already. Take advantage of that.

Customer Match lists allow you to upload email lists, phone numbers, and CRM data, with Google lowering the minimum from 1,000 to 100 users in 2025. Remember, users who’ve already engaged with your brand will convert at significantly higher rates than cold traffic and search with intent to re-engage rather than research.

It’s also important to build granular website visitor segments based on the behaviors that signal purchase intent. These segments should capture prospects who have moved beyond research:

  • Product page viewers who didn’t convert.
  • Abandoned cart users.
  • Visitors to pricing and comparison tools.
  • Users with 10+ minute sessions.

Target these audiences with messaging that assumes they’ve already completed their evaluation through AI-powered search.

Use similar audiences and lookalikes to help Google’s AI identify users who match your highest-value customer profiles. Performance Max and Demand Gen campaigns work best when fed customer lists and purchase history, which allows for identifying intent patterns beyond keywords.

In the AI Overview environment, shift your budget from old-school, keyword-heavy Search campaigns to audience-driven Performance Max and Demand Gen formats that prioritize first-party data. Build email capture mechanisms through gated content and progressive profiling. Then, integrate your CRM with Google Ads to activate customer data for targeting and bidding. 

A good place to start is by reallocating budget from underperforming informational queries to audience-based campaigns, and then scaling based on results.

First-party data provides higher signal quality than behavioral targeting alone, which gives advertisers with robust data infrastructure measurable advantages in conversion rates and customer acquisition costs.

Adaptation is the key to today’s search success

AI Overviews are changing paid search. There’s no doubt about it. And the data shows the real pressure paid search is facing.

But there’s good news: you can still succeed if you adapt your strategy to match how search works now — not how it worked two years ago.

  • Start by monitoring which of your informational queries are still working, rather than excluding them all.
  • Then, prioritize feed quality for Shopping campaigns.
  • Make sure you write ads that differentiate rather than inform.
  • And definitely build first-party audience lists before your competitors do.

ChatGPT ecommerce traffic converts 31% higher than non-branded organic search

26 February 2026 at 18:55

ChatGPT ecommerce traffic converted 31% higher than non-branded organic search across 94 ecommerce sites in 2025, but it still drove a small share of revenue. That’s based on a 12-month GA4 analysis by Visibility Labs covering January through December 2025.

Why we care. This data shows that AI referral traffic converts at a higher rate than traditional non-branded search traffic, but the volume remains small. This signals emerging value, not a replacement channel.

Higher conversion rate. ChatGPT traffic converted at 1.81% vs. 1.39% for non-branded organic (31% higher). It outperformed organic in 10 of 12 months.

  • Visibility Labs attributes the higher rate to intent compression. Users often refine product needs in ChatGPT before clicking. By the time they reach a product page, they may be closer to purchase than a typical search visitor still comparing options.

Key findings. ChatGPT’s conversion advantage is clear, but growth is slowing and volume remains small.

  • Massive traffic growth: ChatGPT visits grew 1,079%, from 1,544 in January to 18,202 in December. Non-branded organic grew 17% over the same period.
  • Lower AOV: Average order value was $204 for ChatGPT vs. $238 for organic, a 14.3% gap.
  • Higher revenue per session: Despite lower AOV, ChatGPT generated $3.65 per session vs. $3.30 for organic (10.3% higher).
  • Small revenue share: ChatGPT drove $474,000 in revenue vs. $32.1 million from non-branded organic — 1.48% of organic revenue, rising to 2.2% in the second half of 2025.
  • Growth tied to product updates: Visibility Labs links the first-half spike to shopping carousel features introduced in April 2025. Growth began flattening around August.
  • Still dwarfed by organic: Non-branded organic traffic was 70x larger than ChatGPT overall, narrowing to 47x in Q4. Early 2025 volatility included months with just 15 to 37 ChatGPT-attributed conversions, limiting statistical confidence until midyear.

The attribution gap. GA4 referral data likely understates ChatGPT’s influence. According to Visibility Labs:

  • Many users get product recommendations from ChatGPT, then search for the brand or product on Google before purchasing. Those conversions are typically attributed to branded organic search.
  • Set up post-purchase surveys to better capture AI-influenced revenue.

About the data. Visibility Labs analyzed 12 months of GA4 data (January to December 2025) from 94 seven- and eight-figure ecommerce brands, comparing 9.46 million non-branded organic sessions to 135,000 ChatGPT referral sessions. The study excluded homepage and blog traffic to focus on commercial-intent visits more likely to evaluate and purchase products.

The report. ChatGPT Traffic Converts 31% Better than Non-Branded Organic Search (94 eCommerce Sites Analyzed)

Google expands AI Max text guidelines globally

26 February 2026 at 18:00

Google is expanding beta access to text guidelines for all advertisers globally in AI Max, giving brands more control over how AI-generated ad copy aligns with their standards.

What’s happening. Text guidelines are now available worldwide across AI Max for Search and Performance Max campaigns, with full language and vertical support.

  • The feature lets you shape AI-generated creative using natural-language instructions — such as excluding certain terms or avoiding specific phrases — to ensure messaging stays on-brand.

Why we care. As AI-powered creative becomes central to your performance marketing, brand safety and tone control are top concerns. Text customization helps you match ads to user intent, and the new guidelines layer ensures they don’t drift from your brand positioning. You can guide AI with guardrails like “don’t imply our products are cheap” or “avoid language like ‘only for,’” helping you maintain consistency at scale. Early adopters like BYD have seen more leads at lower costs, showing that combining AI speed with human-guided safeguards can directly improve your campaign results.

Bottom line. Keeping AI-generated ads aligned with your brand voice is likely high on your task list, so Google’s expanded text guidelines meet that need, giving you practical, easy-to-use tools to stay in control while leveraging AI at scale.

Google’s spam update vs. AI affiliate sites: An SEO experiment

26 February 2026 at 17:00

Remember when it was easy to rank partial-match domains and headings for commercially intended search queries?

When paired with the right methodologies and conversion-optimized widgets, you could silently earn tens of thousands of dollars in affiliate revenue per month with minimal maintenance.

It was possible to get by with just updating articles for relevancy and freshness signals, for example.

Pressure-testing Google’s spam update

Before the experiment, I had spent several months scaling an affiliate initiative in a much more above-board way for a longstanding website in a YMYL category.

We had success with hiring subject matter experts (SMEs) to write helpful, educational content that actually informed readers.

While the new content primarily targeted commercially intended keywords, that wasn’t the website’s sole purpose for existing. There were also thousands of pages of user-generated content (UGC) that inspired the new content, and visitors would navigate from them to convert, as well.

We had brand trust, original research, expert insights, and everything else you’d expect from a reputable publisher.

It was a perfect mix: verticalized legacy UGC with thousands of earned backlinks and a commercial lever that served a preexisting demand while adhering to industry best practices. It was a truly helpful experience.

The experiment: Scaling AI without trust

If the first model was built on trust and earned authority, this one would remove those signals entirely. 

At the time, influencers on LinkedIn were promoting exactly this kind of play: using AI to generate thousands of pages by scraping and rewriting content, or by programmatically aggregating public data.

That’s when I searched in my couch pillows for a few dollars and bought three domains that partially matched the following queries: “best welding schools,” “best plumbing schools,” and “best electrical schools.”

The goal? Intentionally test a set of low-trust, high-scale tactics that are commonly promoted online and see how long they would persist.

I then used AI to make the websites pretty, fetched public data with a vibe-coded Python API call, and used ChatGPT to template all of the subheadings and paragraph text you would typically see ranking across the web. 

Within a few hours, with the help of liquid content, I published thousands of bottom-funnel pages across three websites. I was able to inject public data, target superlatives by program type and state, and include a directory with individual, templated pages per school.
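For illustration only, the templated-page approach described above can be sketched in a few lines. The template, trades, states, and figures are all invented:

```python
# Toy sketch of programmatic page generation with injected public data.
from string import Template

PAGE = Template(
    "Best $trade Schools in $state\n"
    "There are $count accredited $trade programs in $state. "
    "Average tuition: $$${tuition}."
)

rows = [  # hypothetical aggregated public data
    {"trade": "Welding", "state": "Ohio", "count": 14, "tuition": "6,200"},
    {"trade": "Plumbing", "state": "Texas", "count": 22, "tuition": "7,800"},
]

pages = [PAGE.substitute(row) for row in rows]
print(pages[0].splitlines()[0])  # Best Welding Schools in Ohio
```

Multiply one template by every trade-and-state combination and you get thousands of near-identical pages in hours, which is exactly the scale, and the thinness, the experiment set out to test.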

I even leveraged aggressive internal linking practices that prioritized crawl coverage over user intent.

The setup violated almost every long-term trust signal — which made it a useful test of how the system would react.

All three sites shared the same traits:

  • Zero brand signals.
  • Programmatic AI-generated content.
  • Public data aggregation.
  • Aggressive internal linking.
  • No original research or authorship signals.

Dig deeper: What 4 AI search experiments reveal about attribution and buying decisions

Confirmed: The data shows Google’s spam updates work

The websites worked briefly. Indexation was fast, pages surfaced for long-tail queries, and impressions climbed faster than expected.

Within their first couple of months, all three websites were generating about 200 in-market clicks each.

But they flatlined hard during the first December spam update after their launch. In fact, clicks dropped to zero.

I tried turnkey data updates and adding a few performance-boosting plugins, but they never recovered.

In isolation, I’m not certain that any single one of these tactics caused the failure more than another. In combination, they produced a site whose only defensible value was ranking in and of itself. Once that signal stopped being useful, nothing remained.

The insight isn’t that the websites failed — it’s that Google tolerated them just long enough to learn from them.

Does affiliate content marketing still work?

Yes, affiliate content marketing still works as a monetization layer, but not as a growth engine.

There are plenty of websites that provide a helpful user experience while adhering to best practices and generating affiliate revenue.

As to how, refer to Google’s documentation on creating helpful, reliable, people-first content, where you can learn more about evaluating whether your website is publishing “content that’s created primarily for people, and not to manipulate search engine rankings.”

  • “If the ‘why’ is that you’re primarily making content to attract search engine visits, that’s not aligned with what our systems seek to reward. If you use automation, including AI-generation, to produce content for the primary purpose of manipulating search rankings, that’s a violation of our spam policies.”

However, even when following best practices, the rise of AI Overviews, the great decoupling, and dozens of other factors have made affiliate marketing less successful than it once was.

Fortunately, there’s an alternative. 

Dig deeper: Inside Google’s secret search systems: 1,200 experiments, AI agents, and entities

Where is content heading in 2026?

The real takeaway isn’t that Google cracked down on spam, or that affiliate content marketing stopped working. It’s that businesses built on a single, cheaply replicable distribution channel are exposed the moment that channel changes.

The next era of content will increasingly disadvantage businesses that treat search as their sole distribution channel.

Instead of focusing on easily replicable topics, many industry practitioners are shifting toward verticalized research and benchmarks that spark real conversations within communities. 

Content is no longer a series of pages intended to rank. Rather, it’s a combination of discovery, discourse, and thought leadership that spans many channels.

Discovery, discourse, and thought leadership

Hypothetical: You’re a SaaS business in the financial technology space that provides businesses with enhanced financial forecasting.

Instead of publishing landing pages that target “best financial forecasting software” or “most affordable financial forecasting software” (the SaaS equivalent of bottom-funnel ranking pages), consider doing deep dives with industry leaders who have something valuable to add to the conversation.

Rely on their insights to identify the largest gaps in financial forecasting in 2026 and validate: Does my product truly solve this? If it does, you may have found a perfect wedge into the community.

If not, there’s your roadmap.

Use these problem-and-solution insights to develop landing pages with interactive assessments paired with benchmarking reports informed by industry-leading organizations.

The “why” is that the content exists to help organizations contextualize both where they are and where they want to be.

While these assessments or studies may not rank in the first position in Google for high-volume search queries, you can instead leverage owned channels, partner distribution, paid media, and more to put them in front of your ideal clients.

These insights serve as a launching pad for communicating learnings from real conversations that aren’t easily replicated. By doing so across many different channels, you effectively enhance your ability to be everywhere.

If you execute well and provide true value, not only do you contribute to a community, but you may unlock the growth you’ve been after all along.

Companies like Stripe, with its “Developer Coefficient” report, and HubSpot, with its “State of Marketing,” are doing exactly this.

Dig deeper: 3 GEO experiments you should try this year

Content in 2026: Fewer pages, deeper moats

This model looks very different from scaling thousands of programmatic pages. It also comes with tradeoffs:

  • Slower feedback loops.
  • Less attributable ROI.
  • Fewer “quick wins.”
  • More dependence on distribution and partnerships.

In 2026, content is about fewer pages, deeper insight, a stronger point of view, and assets that are harder to replicate.

The spam update didn’t just kill my niche websites for Christmas; it exposed how thin the margin is for anything built without trust.

Search marketing isn’t about avoiding content penalties — it’s about building things that can’t be easily copied with AI.

What industry data reveals about the impact of Google’s AI Overviews on paid search by Adthena

26 February 2026 at 16:00

Google’s AI Overviews have moved beyond the experimental phase and are now a permanent part of search. To assess their impact, Adthena analyzed data across six major industries from late December 2025 to January 2026, tracking performance metrics from hundreds of thousands of advertisers, including more than 5 million ads.

While aggregate data suggests stability, a deeper look reveals a different picture. For advertisers, these automated summaries are no longer just a visibility concern; they directly threaten PPC revenue.

What AI Overviews mean for paid search revenue

Generative summaries fundamentally change the math of a successful campaign. When an AI Overview pushes paid ads below the fold, it triggers a chain reaction that impacts your profitability:

  • Lower CTR = fewer clicks: Reduced visibility leads to fewer visits to your landing pages, shrinking your traffic pipeline.
  • Fewer clicks = fewer conversions: A smaller traffic pool inevitably leads to a drop in total lead or sale volume.
  • Higher CPC = reduced profitability: In sectors where AI summaries trigger on high-competition terms, the cost to stay relevant rises, squeezing margins and lowering your return on ad spend (ROAS).

AI Overviews impact across six industries

Adthena tracked AI Overview frequency, content themes, and CPC/CTR performance across desktop and mobile. The findings show a fragmented landscape: impact varies by industry, device, query type, and content intent.

Content themes: The battle for mid-funnel intent

Adthena’s analysis shows that Google is increasingly moving into comparison and instructional spaces, directly challenging high-converting paid search territory.

  • The comparison conflict: In Telecom, Technology, and Retail, AI Overviews are dominated by comparison content. When Google provides a side-by-side analysis, it satisfies the research phase and may stop users from clicking your ad to learn more.
  • The informational buffer: Conversely, Healthcare (74% News) and Financial Services (54% FAQ) see informational themes. These act as intent filters, potentially protecting ad spend by satisfying low-intent users before they reach a paid link.
  • The opportunity gap: Problem-solving content remains virtually untouched at 0-2%. This is a safe harbor for advertisers: troubleshooting queries are still largely free from AI interference.

CPC trends: The premium for visibility

Tracking CPC fluctuations identifies where advertisers are paying a visibility tax to stay competitive.

  • Technology: Queries featuring an AI Overview consistently show higher CPCs than those without, a clear signal that AI Overview presence is pushing up the cost of visibility.
  • Automotive & Retail: Automotive and retail show nearly identical cost levels regardless of AI Overviews presence.
  • Financial Services: CPC increases may look modest here, but in a sector where CPCs are already high, the hit to campaign profitability is harder than the numbers suggest.

Device splits expose desktop saturation

Segmenting by device reveals a striking divergence, but the picture is more nuanced than it first appears.

  • Desktop dominance: Technology and Education queries on desktop are heavily saturated by AI Overviews, meaning ads in these sectors almost always compete with an Overview.
  • Mobile opportunity: Mobile AI Overviews have a lower frequency across almost all industries. But the limited screen real estate on mobile means that when an Overview does appear, it displaces ads more aggressively than on desktop, where multiple ads often remain visible below the summary.

CTR trends provide evidence of traffic erosion

Analyzing CTRs over time exposes the persistent performance gaps between influenced and standard search results.

  • Persistent gaps: Telecom and Technology show consistently lower CTRs when an AI Overview is present, representing a direct drain on your traffic pipeline.
  • Consumer resilience: Financial services and retail show narrower gaps, suggesting users in these sectors still prioritize ads over AI Overviews.
  • Late-month volatility: Sudden spikes in healthcare illustrate how quickly performance shifts as Google iterates on its AI rollout.

Distribution data reveals the zero-click reality

This final layer of data exposes the winner-takes-all scenario that average metrics often hide.

  • The baseline gap: Where AI Overviews are absent, CTR holds up well across industries, Retail in particular. Where they’re frequent, it doesn’t always, and the gap between the two tells the real story.
  • High AI Overviews frequency, low CTR: When AI Overviews appear on nearly every query, CTR hits its floor across industries—including Technology. The higher the frequency, the less traffic ads reliably capture.
  • Resilience in Automotive: Automotive shows a healthier spread across mid-range frequency buckets, suggesting users are more likely to bypass the summary to find verified brand information.

Three immediate steps to adapt your paid search strategy

To safeguard your margins, start here:

  1. Monitor Click Through Rates (CTR) and Cost Per Click (CPC) changes: While not the full picture, shifts in CTR or CPC can act as early indicators of AI Overviews impact.
  2. Segment performance by device: Break out desktop and mobile reporting to uncover device-specific trends that combined data can hide.
  3. Use Adthena’s free Market Share reports: Understand how often AI Overviews appear in your category and where visibility is most at risk.

Gaining visibility with Adthena’s AI Overview solution

Understanding AI Overview impact requires continuous, query-level intelligence. Adthena’s AI Overview solution indexes search results multiple times per hour, giving advertisers accurate visibility into:

  • AI Overviews frequency patterns by query, industry, and device.
  • Content themes and citation sources.
  • Performance metrics including impact on CPC and CTR.
  • Ad position vs AI Overviews.

With these insights, you’ll know exactly where AI Overviews are disrupting your revenue and what to do about it before your performance is impacted.

Coming soon: Adthena’s AI Overviews solution will also include visibility into ads appearing within AI Overviews themselves, so you’ll have a complete picture of how your spend is performing across the entire SERP.

The SERP has changed: Adapt or fall behind

Google’s AI Overviews aren’t going away, but their impact isn’t universal or inevitable. The advertisers who win won’t spend more; they’ll know exactly where AI Overviews appear, what content they surface, and how their audience responds. 

Precision wins. Assumptions don’t.

Book a demo to see exactly how AI Overviews are impacting your campaigns.

OpenAI says ChatGPT ads can be ‘additive’ if done right

25 February 2026 at 22:32

The U.S. rollout of ChatGPT ads is “iterative,” according to OpenAI’s COO. The early-stage push to monetize ChatGPT’s massive free user base will evolve gradually as the company works to refine the model without eroding user trust.

What OpenAI says. Speaking at the India AI summit, COO Brad Lightcap described the rollout as “iterative,” emphasizing user trust and privacy, TechCrunch reported.

  • Lightcap said ads, if done right, can be “additive” to the product experience — but acknowledged the company is still in early testing and will need time to refine the model.

Catch up quick. OpenAI started introducing ads to free and Go-tier users of ChatGPT in the U.S., marking a significant shift in its monetization strategy.

  • CEO Sam Altman recently sparred publicly with Anthropic over its Super Bowl ad campaign, defending OpenAI’s commitment to broad, free AI access. He argued that scale creates a “differently-shaped problem” for OpenAI compared to rivals with smaller user bases.
  • Reports suggest OpenAI is charging premium rates — as high as $60 CPM — with minimum commitments reportedly starting around $200,000.
  • Partners like Shopify are enabling merchants to advertise in ChatGPT through Shop Campaigns, alongside early testers such as Target and Adobe.

Bottom line. Ads are now part of ChatGPT’s future. Stay tuned to see whether OpenAI can monetize without compromising the product experience that fueled its growth.

Google to change budget pacing for campaigns using ad scheduling

25 February 2026 at 22:18

Google is rolling out a significant update to how average daily budgets pace in campaigns that use ad scheduling — and it could materially change monthly spend totals.

What’s happening. Starting March 1, 2026, Google Ads will begin proactively pacing budgets to spend up to the full 30.4x monthly limit, even if campaigns only run on specific days via ad scheduling.

How it works:

  • The 2x daily overspend rule stays in place.
  • The 30.4x average daily budget monthly cap remains unchanged.
  • Campaigns will not run outside scheduled hours.
  • But Google will now attempt to hit the full monthly ceiling within the allowed schedule.

Why we care. Until now, advertisers running limited schedules — like weekends only — effectively spent less per month because Google paced against active days. Campaigns using ad scheduling may start spending significantly more per month — even though daily budgets and billing caps haven’t changed.

Google will now push harder to hit the full 30.4x monthly limit within scheduled days, which could double spend for weekend-only or limited-hour campaigns. Without adjusting daily budgets, marketers risk unintentionally overshooting their intended monthly targets.

Example. A campaign set to weekends only with a $100 daily budget previously spent about $800/month (roughly eight weekend days).

Under the new pacing logic, it could spend up to $1,600/month — hitting $200 (2x daily budget) on each scheduled day.
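The math in this example can be sketched in a few lines. This is a simplified illustration of the reported rules (the 2x daily overspend cap and the 30.4x monthly cap), not Google’s actual pacing algorithm:

```python
# Illustrative budget math for a weekend-only campaign (simplified sketch;
# Google's real pacing behavior is not public).
DAILY_BUDGET = 100.0
MONTHLY_CAP_MULT = 30.4    # monthly cap = 30.4 x average daily budget
DAILY_OVERSPEND_MULT = 2   # Google may spend up to 2x daily budget on any day
SCHEDULED_DAYS = 8         # roughly eight weekend days per month

# Old behavior: pacing effectively targeted the active days only.
old_monthly = DAILY_BUDGET * SCHEDULED_DAYS                           # $800

# New behavior: Google tries to hit the monthly ceiling within the schedule,
# limited by the 2x daily overspend rule.
monthly_cap = DAILY_BUDGET * MONTHLY_CAP_MULT                         # $3,040
schedule_max = DAILY_BUDGET * DAILY_OVERSPEND_MULT * SCHEDULED_DAYS   # $1,600
new_monthly = min(monthly_cap, schedule_max)                          # $1,600

# To keep spending ~$800/month under the new logic, lower the daily budget
# so that 2x spend across the scheduled days lands on the old total.
adjusted_daily = old_monthly / (DAILY_OVERSPEND_MULT * SCHEDULED_DAYS)  # $50

print(old_monthly, new_monthly, adjusted_daily)
```

The worst case assumes Google hits the full 2x on every scheduled day; actual spend will depend on auction dynamics and campaign objectives.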

What Google says. According to Google Ads Liaison Ginny Marvin, the goal is to better align pacing behavior with advertisers’ expectations around monthly spend limits. Spend will still be driven by campaign objectives like conversions or conversion value, and no campaign will exceed the existing billing caps.

Marvin also clarified that only advertisers who received notifications about this update will be affected, and that the change will roll out gradually.

Between the lines. This is less about raising limits — and more about how aggressively Google uses existing ones. For advertisers relying on ad scheduling to naturally suppress spend, this could lead to unexpected increases unless daily budgets are recalibrated.

What to do now:

  • Review campaigns using ad scheduling.
  • Recalculate daily budgets based on true monthly goals.
  • Lower daily budgets if you want to maintain previous monthly spend levels.

The bottom line. Google isn’t changing how much you can spend — it’s changing how quickly you will spend it. Flighted and part-time campaigns should adjust before March 2026.

First spotted. This update was first spotted by Jordan Fry, who shared the Google notification he received on LinkedIn.

What 13 months of data reveals about LLM traffic, growth, and conversions

25 February 2026 at 20:00
What 13 months of data reveals about LLM traffic, growth, and conversions

LLMs and their influence on traffic to a brand’s website are a major topic in our client conversations. Everyone wants to know what’s happening, how they can do better, and what the best practices are.

My recommendation to brands right now is to start with the data and focus on what they can know for sure.

To glean insights into how LLM traffic is influencing key metrics, we analyzed our dataset of LLM prompt referral traffic in Google Analytics across our customer base over the last 13 months (Jan. 1, 2025 to Feb. 7, 2026). 

We focused on traffic from various LLM models to brand sites and the conversion events closest to true business outcomes. In some cases, that’s a purchase. In others, it’s a generated lead.

When we look at this dataset, four major findings rise to the surface:

  • LLM referral traffic is still small.
  • LLM traffic is growing fast.
  • The sources referenced in responses are shifting.
  • LLMs convert at a very high rate compared to other channels.

LLM referral traffic is still small

LLM referral traffic accounts for less than 2% of total referral traffic on average, according to our dataset. In other words, fewer than 2 out of 100 visitors to a site come from an LLM referring source.

The range is 0.15%-1.5% of referral traffic coming from various LLMs, including ChatGPT, Perplexity, Gemini, and Claude.
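If you want to reproduce this kind of measurement on your own GA4 referral data, a minimal sketch might classify session referrers against a list of LLM hostnames. The hostname list below is an assumption for illustration; verify it against the referrers you actually see in your reports:

```python
import re

# Illustrative referrer patterns for major LLM surfaces (an assumption for
# this sketch; check the hostnames appearing in your own GA4 data).
LLM_REFERRER = re.compile(
    r"(chatgpt\.com|chat\.openai\.com|perplexity\.ai|gemini\.google\.com|claude\.ai)",
    re.IGNORECASE,
)

def llm_referral_share(sessions):
    """sessions: iterable of (referrer, session_count) pairs from an export."""
    total = sum(count for _, count in sessions)
    llm = sum(count for ref, count in sessions if ref and LLM_REFERRER.search(ref))
    return llm / total if total else 0.0

# Hypothetical export rows:
sample = [
    ("https://www.google.com/", 900),
    ("https://chatgpt.com/", 12),
    ("https://www.perplexity.ai/", 3),
    ("", 85),  # direct / unset referrer
]
print(f"{llm_referral_share(sample):.2%}")  # 1.50%
```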

So while this is a major topic of conversation, it isn’t the highest priority for near-term bottom-line impact for many businesses.

LLM traffic is growing fast

LLMs, as a referral source, are growing quickly, according to our data. Comparing the first half of 2025 with the second half, we saw an average growth rate of 80% in LLM referral traffic.

There was a wide range across the dataset. Some companies saw just 10% growth, while others experienced 300% increases.

Below is the aggregate referral traffic by month in 2025. It shows a steady month-by-month increase, building to 3x referral traffic growth from January to December.

That means it’s not enough to understand your volume of LLM traffic. You also need to monitor the velocity of that growth.

LLMs are expanding as consumer adoption grows, and prompt algorithms keep changing. Between those two variables, you can see dramatic swings that you need to monitor.
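Velocity monitoring can be sketched simply: given monthly referral counts, compute the month-over-month growth rates and the cumulative multiple. The visit figures below are hypothetical:

```python
def growth_report(monthly_visits):
    """Return (month-over-month growth rates, total multiple first -> last)."""
    mom = [(curr - prev) / prev
           for prev, curr in zip(monthly_visits, monthly_visits[1:])]
    multiple = monthly_visits[-1] / monthly_visits[0]
    return mom, multiple

# Hypothetical monthly LLM referral sessions, January through June:
visits = [100, 115, 140, 180, 230, 300]
mom, multiple = growth_report(visits)
print([f"{g:.0%}" for g in mom], f"{multiple:.1f}x")
```

Tracking the rate series (not just the latest total) is what surfaces the swings described above.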

Dig deeper: LLM optimization in 2026: Tracking, visibility, and what’s next for AI discovery

Sources referenced in responses are shifting

The sources cited in LLM responses are changing quickly.

Here’s a look at our dataset since September of last year. The data comes from monitoring more than 5,000 prompts and their responses across various LLM APIs, including Gemini, ChatGPT, and Perplexity.


YouTube links and citations have increased over the last 30 days. Reddit saw similar growth, though that traffic recently leveled off.

These shifts in citations and links will affect the traffic that eventually reaches your site, and they may also influence your ad and content strategies.

If you don’t monitor this data, you won’t see these changes. LLMs don’t provide this information directly — you can only access it through a third-party tool.

LLMs convert at a very high rate compared to other channels

This is likely the most interesting and important finding. When you compare conversion rates alongside the total percentage of traffic, the contrast becomes clear.

LLM referrals are the highest-converting traffic source across our customer base, with an approximate 18% conversion rate. That’s higher than any other tactic, including paid shopping, SEO, and PPC.

However, LLM referrals account for the lowest percentage of total traffic to a brand’s website, roughly 1/25th the volume of SEO or direct.

Dig deeper: How to better measure LLM visibility and its impact

Get the newsletter search marketers rely on.


What brands should do next

Based on these findings, you should take the following actions to prepare for the evolving LLM landscape.

1. Establish dedicated monitoring

While LLM traffic volume is still low, its growth rate and volatility, including shifts between sources like YouTube and Reddit, make monitoring essential.

  • Track velocity: Don’t just look at volume. Monitor the rate of growth in LLM referrals to understand when this channel crosses a meaningful threshold for your business.
  • Monitor citation sources: Use available third-party tools to understand which LLMs and which types of platforms, including forums, videos, and news, are driving the most citations and subsequent traffic.

2. Capitalize on high-value traffic

An 18% conversion rate suggests LLM-referred users are highly qualified. They often arrive with clear intent or after their query has already been answered or validated by the LLM.

  • Analyze high-converting journeys: Review the user journey for LLM referrals. What content are they landing on? What queries are being answered that lead to conversion?
  • Optimize for intent: Focus content and landing page optimization on the high-intent needs reflected in the LLM’s citation context. Treat this traffic as a premium audience.

3. Plan for future growth

Given rapid LLM adoption, today’s low traffic volume won’t last.

  • Develop a content strategy for AI: Build a strategy that anticipates how LLMs summarize, cite, and reference your material. This isn’t traditional SEO. It’s about being the authoritative source LLMs choose to link to.
  • Allocate budget: While this may not drive immediate bottom-line impact, dedicate a small budget to tools and resources focused on understanding and optimizing the LLM referral channel.

This space is evolving fast. Hopefully, this dataset shows how things are progressing and motivates action within your organization.

This is a time of change. If you innovate, stay focused, and use data, you have a clear opportunity to outperform your competition.

Dig deeper: LLM consistency and recommendation share: The new SEO KPI

From emerging channel to strategic signal

LLM referral traffic is still a small share of overall volume, but it’s growing fast, shifting where it cites, and driving strong conversions.

Don’t overreact. Monitor the trend lines, understand where citations come from, and watch how this audience behaves once it lands. This space is moving fast, and if you stay close to the data, you’ll be better positioned as it evolves.

How Google Discover qualifies, ranks, and filters content: Research

25 February 2026 at 19:50
Google Discover pipeline

Google Discover runs on a structured, multi-stage pipeline with hard publisher blocks, strict image requirements, freshness decay, and heavy experimentation shaping what users see, according to new SDK-level research by Metehan Yesilyurt.

Why we care. Google Discover can drive massive traffic, but it often feels unpredictable. This research gives you a clearer view of how your content qualifies, gets ranked, or gets blocked — and where things can break before ranking even begins.

The details. Yesilyurt analyzed observable signals in Google’s Discover app framework and mapped a nine-stage flow. Google:

  • Crawls and understands your content.
  • Reads key meta tags like your image and title.
  • Classifies your content type (e.g., breaking news or evergreen).
  • Checks whether you’re blocked.
  • Matches your content to user interests.
  • Applies a server-side click-through rate prediction model.
  • Builds the feed layout.
  • Delivers your content.
  • Records user feedback.

One key finding. The publisher-level block happens before interest matching and ranking. If a user blocks you, your content never reaches the ranking stage.

  • Publisher blocking is powerful. One “Don’t show content from this site” action can suppress your entire domain. There’s no similar sitewide “boost” mechanism.

The ranking model. Your title, image quality, and engagement history are part of the evaluation process. The system uses a predicted click-through rate (pCTR) model on Google’s servers to estimate how likely someone is to click. The model isn’t visible, but the app shows which signals are sent to Google before ranking decisions, including:

  • Your page title (from og:title).
  • Your image size and quality.
  • How new your content is.
  • Past click and impression data for your URL.
  • Whether your images load successfully.

Freshness matters. Google Discover groups content into time windows:

  • 1 to 7 days old: strongest boost.
  • 8 to 14 days: moderate visibility.
  • 15 to 30 days: limited visibility.
  • 30+ days: gradual decline.

There’s a separate classification for strong evergreen content, but by default, newer content has an advantage.
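The reported freshness windows can be expressed as a simple lookup. This is purely an illustration of the tiers the research describes, not Google’s actual scoring:

```python
def freshness_tier(age_days: int) -> str:
    """Map content age to the visibility tier reported in the research."""
    if age_days <= 7:
        return "strongest boost"
    if age_days <= 14:
        return "moderate visibility"
    if age_days <= 30:
        return "limited visibility"
    return "gradual decline"

print(freshness_tier(3))   # strongest boost
print(freshness_tier(45))  # gradual decline
```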

Image and meta tag requirements. Google Discover reads six key page-level tags, including og:image and og:title. No image means no card.

  • To qualify for large, prominent cards, your images must be at least 1200px wide. Smaller images typically appear as thumbnails and often earn fewer clicks.
  • If certain tags are missing, Google Discover looks for backups — for example, it will try the Twitter title tag or the HTML title if og:title isn’t present.
  • Two specific meta tags — “nopagereadaloud” and “notranslate” — can stop your page from entering Google Discover entirely.

Personalization layers. Google Discover personalizes content using:

  • Google’s broader interest data tied to user behavior.
  • Publisher signals, including Publisher Center registration.
  • Individual actions like follows, saves, and dismissals.
  • Engagement signals, such as time spent reading.

If a user dismisses your story, the system stores that action permanently for that specific URL. It won’t resurface.

Experiments everywhere. During one observed session, about 150 server-side experiments were running simultaneously. Another 50+ feature controls affected how cards were displayed.

  • That means two similar users could see noticeably different feeds simply because they’re in different experiment groups.

Real-time feed updates. Google Discover isn’t static. The system can add, remove, or reorder content while someone is browsing, without a refresh.

The big takeaways. Success in Google Discover depends less on tricks and more on eligibility, trust, strong visuals, and sustained engagement — in a system that can filter you out before ranking even starts.

  • Publisher blocks happen before ranking.
  • Freshness is built into the system.
  • Strong images and clear titles are essential.
  • User dismissals are permanent.
  • Heavy experimentation makes volatility normal.

The research. Google Discover Architecture: Clusters, Classifiers, OG Tags, NAIADES – What SDK Telemetry Reveals

The AI writing tics that hurt engagement: A study

25 February 2026 at 19:00
The AI writing tics that hurt engagement- A study

The web has strong opinions about what “AI-written” content looks like, and even stronger ones about what’s supposedly wrong with it. Scroll any content marketer’s LinkedIn feed, and you’ll find confident claims that em dashes and other AI “tells” signal bad, automated writing.

The problem with these debates is that they often confuse taste with performance. What counts as “bad writing” will always be subjective. But if the goal for content marketers is to communicate clearly and compete in the information marketplace, the practical question should be: which LLM habits actually turn readers off?

To find out, we analyzed a large dataset of content marketing pages to measure which of the most commonly called-out AI writing “tics” actually turn readers off — and which ones we may be condemning for no reason.

How we built our ‘AI tics’ study

At this point, you’ve probably all seen them, too:

  • “In today’s fast-paced digital landscape…”
  • “It’s important to note that…”
  • “Not only… but also” (repeated over, and over and over…)
  • “In conclusion” (even when nothing has been concluded)

The second you notice them, it’s hard not to see them everywhere an LLM has helped produce copy. Many readers report hating these LLM patterns. But how exactly are they impacting user engagement?

To find out, we gathered a list of the most common AI writing tells we and others have noticed. These include:

  • “Not only… but also” constructions: “Not only does X do Y, but it also does Z.”
  • Sentence starts with “then,” “this,” or “that”: “Then you should…” “Then the system…” “This shows…” “This means…”
  • Introductory filler: “In this article,” “We’ll explore,” and “Let’s take a look.”
  • “Conclusion” starters: “In conclusion,” or other AI equivalents of clearing your throat.
  • Em dashes: The most infamous punctuation mark in today’s content marketing.

From there, we built a dataset of:

  • 10 domains of varied site size and monthly traffic, in a wide array of industries including tech, ecommerce, healthcare, education, analytics, and more
  • 1,000+ content marketing URLs, built from a mix of workflows including posts that were either fully human-written, written collaboratively by humans and AI, or completely AI-generated.

Then we standardized our dataset by: 

  • Aligning shorter posts and cornerstone content by standardizing every writing tic as occurrences per 1,000 words. Since longer articles naturally contain more of, well, everything, a 3,000-word guide would otherwise look “worse” than a 600-word post simply because it has more sentences.
  • Excluding any page under 500 words. Very short pages don’t give enough room for stylistic patterns to emerge, and their engagement metrics are likely driven more by search intent than by writing style.
  • Prioritizing engagement rate as the primary performance metric. Engagement rate best captures a reader’s first real decision: “Do I stay, or do I leave?” GA4 registers an engaged session as any lasting 10 or more seconds. While 10 seconds may sound brief to assess whether a post is AI, it’s long enough for a user to skim an introduction, notice awkward or repetitive writing patterns, and scan headings to decide whether the content feels worth continuing.

Dig deeper: A smarter way to approach AI prompting


Why tracking total AI tics wasn’t enough

Our first instinct was to average the number of AI tics per 1,000 words and compare the pages’ performance.

At a glance, this seemed like a clean way to separate human writing from AI-influenced writing. But the picture quickly got complicated by one tic in particular — the infamous em dash — which dominated the dataset and heavily skewed the averages.

Content marketing across 10 domains

The issue pointed to a larger problem: AI tics are messy by definition. AI is trained on human writing. So if certain patterns show up frequently, that doesn’t mean they’re uniquely “AI.” It may just mean they’re common in English prose. 

To compare, we ran the same tic counter on two known controls. The first was a novel I published in 2021 (which I can guarantee was written without ChatGPT, Grammarly, or other AI-assisted tools). It scored a startlingly above-average 6.9 tics per 1,000 words.

Next, we scored “Hamlet,” the famous Shakespearean play, which scored an even higher ≈11.4 tics per 1,000 words. Shakespeare, it turns out, is more “AI-coded” than many AI-generated blog posts.

Ultimately, we assessed that this is almost entirely due to the em dash, which is likely to appear in droves in many human writers’ prose as well as AI-produced copy.

With this in mind, we analyzed each “tell” individually, still standardizing per 1,000 words. The story became much clearer — and far more useful for writers trying to decide what’s actually worth avoiding.

Dig deeper: How to make your AI-generated content sound more human


The AI tics impacting performance

Not all posts are the same, and many different factors impact the success or failure of any page of content marketing. That’s perhaps why our data showed that most AI “tells” didn’t correlate strongly with performance or non-performance.

We treated any correlation smaller than ±0.1 as statistically insignificant. A handful of tics, however, stood out with a larger impact than the rest.

‘Not only’ and ‘not just’ structures may be driving users away

Phrases built around “not only…” or “not just…but also” stood out with larger-than-average negative correlations with engagement rate. While these constructions can add emphasis when used occasionally, the data shows that frequent use is associated with lower engagement rates.

AI-assisted writers and editors should take note, as many of the AI-generated posts we reviewed tripped over themselves with these constructions. In one instance, we found a single blog post that used “not only” and “but also” 12 separate times.

Starting headers with ‘conclusion’ was the strongest negative signal

The strongest negative correlation in the entire dataset came from section headers beginning with “Conclusion,” typically placed just before a call to action. Posts with these headers had the largest negative correlation (≈ -0.118) with post engagement rate, making this the clearest AI stylistic red flag we found.

Since this tic traditionally appears at the end of a post, readers may be scrolling quickly through the entirety of these posts before bouncing — or posts with these final headings may simply be lower-quality on average.

Em dashes correlated slightly positively

Em dashes were by far the most common stylistic tic in the dataset. They also produced one of the most surprising results: a slight positive correlation with engagement rate.

Despite widespread online chatter treating em dashes as an “AI artifact,” this data suggests they’re not hurting performance, and they may even align with better engagement. (As someone who genuinely likes em dashes — this was deeply validating.)

A plausible explanation may be that writers who use em dashes tend to write more explanatory, nuanced sentences rather than short, flat declarations. Those kinds of sentences often appear in longer, more thoughtful content that many readers actually engage with.

That said, this doesn’t mean em dashes cause engagement. Too much of a good thing is still too much of a good thing. But it does challenge the idea that em dashes are the bugbear content marketers make them out to be. 

Dig deeper: An AI-assisted content process that outperforms human-only copy

3 practical takeaways for content teams

Here’s what content marketers can act on today.

1. Don’t over-optimize for AI detection

Google doesn’t hand out rankings according to a punishment score for “AI style.” Most phrases we looked at didn’t correlate with engagement at all.

Don’t rewrite content just because someone declared a phrase “AI writing.” Write for reader usefulness and clarity above all.

2. Be mindful of how you wrap up

Explicit conclusion blocks aren’t bad — but generic, formulaic patterns are likely turning readers away.

Consider blending conclusions into analysis, using subtler transitions, or adding new value with headers, instead of signposting obvious structure. 

3. Use the punctuation that makes sense 

If your style calls for em dashes, use them. In this dataset, they were actually associated with better reader engagement.

Don’t miss the forest for fake plastic trees

AI is likely here to stay in content workflows. But the issues with “bad” AI writing aren’t limited to linguistic tics and punctuation. While we all have our stylistic opinions, we should be careful about turning stylistic hot takes into editorial law. 

Write valuable content. Think about readers first. And don’t panic every time someone on Twitter or LinkedIn decrees that “X phrase = AI.”

Anthropic clarifies how Claude bots crawl sites and how to block them

25 February 2026 at 18:52
Anthropic bots

Anthropic updated its crawler documentation this week, clarifying how its Claude bots access websites and how you can block them.

  • Anthropic’s document explains what each bot does, how it affects AI training and search visibility, and how to opt out through robots.txt.

Why we care. If you publish or own content, you want control over how AI systems use it. Anthropic separates training crawlers, user-triggered fetches, and search indexing. Blocking one bot doesn’t block the others. Each choice carries different visibility and training trade-offs.

The robots. Anthropic uses three separate user agents:

  • ClaudeBot collects public web content that may be used to train and improve Anthropic’s generative AI models. If you block ClaudeBot in robots.txt, Anthropic said it will exclude your site’s future content from AI training datasets.
  • Claude-User retrieves content when a user asks Claude a question that requires access to a webpage. If you block Claude-User, Anthropic can’t fetch your pages in response to user queries. The company says this may reduce your visibility in user-directed search responses.
  • Claude-SearchBot crawls content to improve the quality and relevance of Claude’s search results. If you block Claude-SearchBot, Anthropic won’t index your content for search optimization, which may reduce visibility and accuracy in Claude-powered search answers.

How to block them. The bots respect standard robots.txt directives, including “Disallow” rules and the non-standard “Crawl-delay” extension, Anthropic said. To block a bot across your entire site:

User-agent: ClaudeBot
Disallow: /

  • You must add directives for each bot and each subdomain you want to restrict.
  • IP blocking may not work reliably because its bots use public cloud provider IP addresses, Anthropic said. Blocking those ranges could prevent the bot from accessing robots.txt. The company doesn’t publish IP ranges.
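Putting the directives together, a robots.txt that opts all three Anthropic bots out sitewide would look like this (remember to repeat it on each subdomain you want covered):

```
User-agent: ClaudeBot
Disallow: /

User-agent: Claude-User
Disallow: /

User-agent: Claude-SearchBot
Disallow: /
```

You can, of course, include only the user agents you want to restrict — for example, blocking ClaudeBot (training) while leaving Claude-User and Claude-SearchBot (visibility) unblocked.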

The document. Does Anthropic crawl data from the web, and how can site owners block the crawler?

How ChatGPT uses SEO to drive growth and revenue

25 February 2026 at 18:00
How ChatGPT uses SEO to drive growth and revenue

Generative search engines like ChatGPT have successfully used SEO as part of their growth strategy, even as the echo chambers of the web claim they’re killing this powerful marketing channel.

Let’s take a look at how ChatGPT, Perplexity, and Claude fare in SEO, and why ChatGPT’s investment in the strategy is paying off.

Directional SEO ROI forecast

A $600,000 annual SEO investment could generate outsized returns for generative AI platforms.

Using Semrush-reported monthly organic traffic — 76.5 million visits for ChatGPT, 908,000 for Claude, and 1.7 million for Perplexity — plus a conservative 0.5% conversion rate and a $20/month entry price, projected returns range from approximately $92 million in annual revenue for ChatGPT (15,200% ROI) to 82%-240% ROI for Claude and Perplexity.
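The directional math behind those figures can be reproduced with the article’s own inputs. Note that this is a back-of-the-envelope sketch: it ignores any costs beyond the estimated SEO investment and assumes every conversion is a $20/month subscriber:

```python
def projected_roi(monthly_visits: float, conv_rate: float,
                  price_per_month: float, annual_seo_cost: float):
    """Directional ROI: organic visits -> paying subscribers at a flat price."""
    annual_revenue = monthly_visits * conv_rate * price_per_month * 12
    roi = (annual_revenue - annual_seo_cost) / annual_seo_cost
    return annual_revenue, roi

# ChatGPT inputs from the article: 76.5M monthly organic visits,
# 0.5% conversion rate, $20/month, ~$600K annual SEO investment.
rev, roi = projected_roi(76_500_000, 0.005, 20, 600_000)
print(f"${rev / 1e6:.1f}M/yr, {roi:.0%} ROI")
```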

OpenAI’s investment in SEO and content for ChatGPT’s growth

OpenAI understands the value of SEO to drive growth. It was offering $310,000-$393,000 to hire a content strategist with SEO experience.

OpenAI content strategist job listing

It then launched a second job search for a growth role focused on SEO, CRO, and web strategy.

OpenAI growth role focused on SEO, CRO, and web strategy

Looking at SEO-focused growth role salaries in the U.S. ($100,000 to $295,000), we can estimate OpenAI invested between $410,000 and $600,000 for two SEO roles, not including benefits and other expenses.

SEO has more lives than an agile alley cat because it leverages human behavior. Searching is core to our survival. When we were cave dwellers needing food or shelter, we searched for it. Search engines amplify this behavior.

ChatGPT is not replacing Google

ChatGPT is expanding search behavior and, in certain use cases, increasing Google searches. Overall Google search volume, however, declined 20% from 2024 to 2025.

That shift makes visibility even more critical. As the search landscape evolves — and as Google’s AI Overviews take clicks away — increasing your website’s ability to be found through SEO becomes more important, not less.

The OpenAI team understands this. That’s why it built SEO into ChatGPT’s foundation and continues investing in it.

Evaluating the SEO foundations of ChatGPT, Claude, and Perplexity

I conducted a brief analysis of how ChatGPT, Perplexity, and Claude use SEO to grow. The goal was to uncover the good, the bad, and the ugly so brands can apply what works and discard the rest.

Using Semrush for competitive keyword analysis, I examined domain authority, brand versus non-brand keyword distribution, and total keyword rankings.

Competitive keyword analysis- ChatGPT, Claude and Perplexity

Brand authority and demand

ChatGPT’s website has an authority score of 99, followed by Perplexity’s 81 and Claude’s 75. Building a strong product and brand, then leveraging public relations, news, and social media, is the best way to grow your website’s authority. 

The branded term “ChatGPT” gets 45.5 million searches monthly, while Perplexity gets 1 million and Claude gets 500,000. That’s enormous demand, and these brands use SEO to convert it into traffic, signups, and revenue.

Paid and organic search

All brands spend money on Google Ads, but SEO drives most of their search traffic. None are integrating search, which is a big opportunity.

Integrated search is the practice of targeting highly valuable, often expensive, keywords with both SEO and PPC. It increases conversions, lowers Google Ads cost per click, and helps you take up more real estate on the search engine results pages, pushing competitors lower.

Keyword rankings

  • ChatGPT: ~287,800 keywords.
  • Perplexity: ~184,800 keywords.
  • Claude: ~36,000 keywords.
Total keyword rankings by brand

ChatGPT leveraged user-generated content to build a massive indexable surface area.

Perplexity focused on financial, stock-driven content pages. Claude uses blog articles targeting high-intent professional audiences.

Neither can match the kinetic energy ChatGPT generates from its branding efforts and the SEO built into its technical foundation.


The 3Cs SEO and AI optimization framework


Below, I applied our agency’s 3Cs SEO and AI optimization approach: code (technical foundation), content strategy and optimization, and conversions.

Code

For this section, I focused on indexability, or how well search engines can find important website content. This is important to rank in traditional search, including Google and Bing, and LLMs like ChatGPT.

Robots.txt

ChatGPT has a highly optimized robots.txt file that includes multiple sitemaps, location recommendations, and blocks for certain crawlers while allowing others.

ChatGPT robots txt

Perplexity’s robots.txt file isn’t as optimized or detailed. Claude was missing a robots.txt but added it recently. Not having a robots.txt file or having it return a 404 can be detrimental to your rankings.

Note that ChatGPT and Claude block each other from crawling their websites via robots.txt.

ChatGPT and Claude blocked robots txt

Recommendation: Optimize your robots.txt, include all sitemaps, block pages you don’t want indexed, and allow LLM crawlers like ChatGPT to crawl your site.

URL structure

According to Google representative John Mueller, keywords in a URL are a “very, very small ranking factor.” That said, there’s a clear performance difference among ChatGPT, Claude, and Perplexity tied to whether keywords appear in their URLs.

Including keywords in a URL is an important way to give search engines, LLMs, and users information about what the page is about.

Imagine you went to a restaurant and wanted a burger. You say, “I want a burger,” and you’ll get a burger. If you said, “I want 2387d2e3,” you’d only get frustration from your waiter. That’s what happens with LLMs and traditional search engines. 

Yes, search engines can look at other elements of the page to understand what it’s about, but you win when you check all the boxes. That’s the difference between ChatGPT and Claude’s URLs below.

ChatGPT uses the name of what a user creates and shares in the URL, which helps it rank better for those words.

Keyword in URLs for SEO

Claude doesn’t. Its public artifact URLs don’t have descriptive words built into them.

Google search - logo creator

Recommendation: Use short URLs with keywords in them. I’ve seen this have major ranking impacts, even though Google and others say it’s a minor ranking factor. Changing this is hard after the fact. Semantic URLs are also important to rank in LLMs like ChatGPT.
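
For sites that generate pages from user or product titles, a slug helper is an easy way to get short, semantic URLs. A minimal Python sketch; the function name and word limit are illustrative assumptions, not any platform’s actual implementation:

```python
import re


def slugify(title: str, max_words: int = 6) -> str:
    """Turn a page title into a short, keyword-bearing URL slug."""
    # Lowercase, strip punctuation, keep letters/digits/spaces/hyphens
    cleaned = re.sub(r"[^a-z0-9\s-]", "", title.lower())
    words = cleaned.split()
    return "-".join(words[:max_words])


# "Free Logo Creator for Startups" -> "free-logo-creator-for-startups"
# An opaque ID like "2387d2e3" stays opaque -> "2387d2e3"
```

The point isn’t the code itself but the habit: generate the slug from the human-readable name at creation time, because changing URLs later means redirects and lost equity.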

Content

All of the LLM websites use content for marketing, including use cases, partnerships, target industries, and more.

ChatGPT use cases

Claude’s blog has highly targeted articles for professionals.

Claude blog posts

Perplexity has a discovery hub with timely, useful content.

However, the content isn’t optimized for SEO. Meta titles, descriptions, URLs, and canonical tags aren’t optimized, which confuses search engines. 

Perplexity not optimized

Some pages aren’t even indexed on Google as a result.

Perplexity pages not indexed

Perplexity’s blog doesn’t optimize images either. Image names should be descriptive, like “perplexity-deep-research.webp,” to improve ranking ability.

Perplexity images not optimized

Perplexity is giving itself death by a thousand optimization cuts.

Even though Claude and Perplexity target highly relevant keywords, an analysis of the top 50,000 keywords each website ranks for shows ChatGPT winning with sustained month-over-month growth.

Keyword rankings 2025

Recommendation: Build SEO into your technical foundation. Fully optimize blog articles. Leverage user-generated content through forums or community hubs.

Conversions

All these brands understand SEO needs to drive conversions, not just traffic and rankings. There are two types of conversions: free conversions for people not ready to buy and paid conversions for people ready to buy.

ChatGPT, Perplexity, and Claude all give users a trial of their product, then ask them to convert or sign in.

ChatGPT conversions

Recommendation: Convert as much website traffic as possible. Use AI coding platforms to build interactive content for conversion, as Google AI Overviews take traffic away.

If OpenAI bets on SEO, should you?

OpenAI is spending up to $600,000 on SEO talent to grow ChatGPT, not including staff benefits and other costs. This analysis breaks down how ChatGPT, Perplexity, and Claude use SEO to fuel growth. 

If disruptive technology companies like OpenAI are doubling down on SEO and content to drive growth, you should include it in your growth strategy as well. It works.

An SEO audit of ChatGPT, Claude, and Perplexity reveals how the platforms use technical optimization, content, and conversions to scale.

How to read Meta Ads metrics like a system, not a scoreboard

25 February 2026 at 17:00
How to read Meta Ads metrics like a system, not a scoreboard

Every week, thousands of media buyers perform the same ritual, opening Meta Ads Manager, scanning metrics, and deciding which campaigns and ads were winners and which were losers. If ROAS is positive, they’re pleased. If not, the mouse quickly heads toward the toggle button to disable the asset. This is the scoreboard trap some advertisers fall into.

When you treat metrics like a scoreboard, you’re looking at the outcome without understanding the full picture or how to improve going forward. The score of the game doesn’t include the fact that your strikers aren’t getting any passes from midfield. 

To scale performance, it’s important to move from reporting to diagnosing the issues at hand. Start looking at your metrics as independent KPIs and as a system of interdependent signals to better tell the story of what’s happening in your account and accurately inform your next optimization steps.

The dashboard illusion

Meta’s interface is designed as a linear grid, which can create a false sense of clarity. It may suggest that a high CPM is the problem in one column and, in another, that a low CTR is the culprit. In reality, these metrics are deeply intertwined. 

A high CPM might not mean your audience is expensive. It may indicate your creative is low quality, so Meta is charging you more for a poor user experience on its platform. 

Conversely, a high CTR might look like a win at first glance, but if your CVR is plummeting, it’s not a win, and you’re paying for high-intent customers your landing page can’t close. 

The dashboard tells you what happened, and the system tells you why.

A visual of an example of Meta Ads Manager CTR and CPM reporting columns.

Dig deeper: Inside Meta’s AI-driven advertising system: How Andromeda and GEM work together

The team metrics framework

To better understand the system, let’s think of metrics as a sports team. Each player has a specific role. If the team loses, you don’t bench the whole team. You review the play to see what happened so you can improve your chances of winning next time.

The scouts: CPM and reach

CPM is the auction’s feedback on your total value: a combination of your bid, estimated action rates, and the value your ad offers the user. Together, CPM and reach gauge market resonance.

If CPM spikes relative to your historical average, these metrics signal the market is either too crowded or your creative isn’t effective enough to maintain volume.

The midfielders: CTR and hook rate

Their role is to move the ball from the ad placement in Meta’s ecosystem to your website. If you have a high hook rate but a low CTR, your ad is great at getting attention but terrible at passing the ball. You’re stopping the scroll effectively, but your content isn’t enticing people to click.

The strikers: CVR and AOV

These metrics are the final step in the journey and rely on your website. If CTR is high and CPC is low, but ROAS is low, something is amiss. Your ad did its job well, but your landing page or offer didn’t because people aren’t converting.

Dig deeper: Rethinking Meta Ads AI: Best practices for better results

Diagnosing system gaps

The real diagnosis happens between the columns you see in Ads Manager.

Hook vs. hold rates

Quickly diagnose creative fatigue before it impacts ROAS by looking at the ratio between hook rate and hold rate.

  • If you have a high hook rate and a low hold rate, your ad is successfully grabbing attention but then losing interest. This is a good opportunity to adjust the latter portion of your ad, make it more compelling, and end it with a clear, strong CTA.
  • If you have a low hook rate but a high hold rate, you’re losing most people at the beginning, but those who stay are likely to convert. This presents a good opportunity to test new hooks that fit with the rest of your video to grab more attention up front and help drive more conversions.
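
The two patterns above can be folded into a quick diagnostic helper. A minimal Python sketch; the function name, metric definitions, and threshold defaults are illustrative assumptions, not Meta benchmarks:

```python
def diagnose_creative(hook_rate: float, hold_rate: float,
                      hook_floor: float = 0.30, hold_floor: float = 0.10) -> str:
    """Classify a video ad from its hook rate (e.g., 3-second views /
    impressions) and hold rate (e.g., ThruPlays / impressions).
    The floor values are placeholder defaults; tune them to your account."""
    if hook_rate >= hook_floor and hold_rate < hold_floor:
        return "strong hook, weak body: rework the back half and the CTA"
    if hook_rate < hook_floor and hold_rate >= hold_floor:
        return "weak hook, strong body: test new openings"
    if hook_rate < hook_floor and hold_rate < hold_floor:
        return "weak throughout: rebuild the creative"
    return "healthy: scale or iterate"
```

Run it per ad rather than per campaign, since creative fatigue rarely hits every asset at the same time.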

Link clicks vs. landing page views

The gap between these two metrics is important and often overlooked. If you have 1,000 clicks but only 450 landing page views, you may have a technical leak somewhere. Check your page speed and whether your tracking is working properly. 

It’s unlikely this is a creative issue, as a significant drop-off rate like this is likely caused by a slow server. People expect a site to load quickly. If it doesn’t, they’ll bounce, and your budget will be wasted.
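
The gap check itself is simple arithmetic. A sketch using the 1,000-click example above (the function name is illustrative):

```python
def lpv_drop_off(link_clicks: int, landing_page_views: int) -> float:
    """Share of link clicks that never register a landing page view.
    A large gap usually points to slow pages or broken tracking."""
    if link_clicks == 0:
        return 0.0
    return 1 - landing_page_views / link_clicks


# 1,000 clicks but only 450 landing page views -> 0.55 (55% of clicks lost)
```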

CPA vs. frequency

If a rising CPA feels like a mystery, look at frequency. If both metrics are increasing, your audience is likely seeing the same ad too often and getting fatigued. 

A tired audience and system need something fresh, not just a bid or budget increase. Swap out creative assets or expand your targeting if it’s too narrow.

A visual of an example of Meta Ads Manager reporting columns.

Dig deeper: Meta Ads for lead gen: What you need to know


From reporting to diagnosing

When a campaign or creative underperforms, ask yourself:

  • Is volume constant, or have spend and impressions decreased? The system may have devalued or rejected your ad, specifically the creative.
  • Where is the friction taking place? Follow the ball down the field. Is it hook rate, CTR, or CVR?

Once you identify the bottleneck, change only that variable. If you change too many variables, you won’t clearly understand which part was broken. If CVR is low, don’t change the ad. Instead, improve the landing page experience. 

Are you sending people to a product detail page while showcasing numerous products in a single creative? Remove the friction and create a product collection landing page instead, so everyone interested in a component of your ad can seamlessly and intuitively shop once they click.

Becoming a media architect

With Meta’s AI taking the lead in targeting, it’s now our job as media buyers to evolve into system architects.

A scoreboard tells you something isn’t winning. A system map tells the full story, like when site speed is tanking ROAS or creative is hooking the wrong people.

Next time you look at your account, ignore the ROAS column at first glance. Instead, look at the ratios, trace the user’s path through your metrics, and unlock the story of the journey from ad to website. When you stop looking for winners and start looking for friction points, you’ll begin engineering more meaningful growth.

Dig deeper: 4 Facebook ad templates that still work in 2026 (with real examples)

Google fixed a serving issue with search results

25 February 2026 at 16:08

Google confirmed it had an issue serving search results at around 1:30 am ET on Wednesday, February 25. The issue was fixed quickly, and we didn’t see a large number of complaints about it.

Google posted a notice saying, “We fixed the issue with serving search results. There will be no more updates.”

Why we care. If your website noticed a drop in traffic around midnight last night, it may be related to this serving issue.

Again, the serving issue was discovered and fixed quickly. But the fact that Google posted and resolved the notice within a minute doesn’t mean the outage lasted only a minute; that’s simply when Google published the notices. Google did say the issue lasted about 15 minutes.

Here is a screenshot of the status dashboard notice:

Google Ad Grants now lets nonprofits optimize for shop visits

24 February 2026 at 22:28
How to tell if Google Ads automation helps or hurts your campaigns

Google Ad Grants accounts can now optimize for real-world foot traffic. If you use the nonprofit program, you can set “shop visits” as an account-level goal, allowing your campaigns to optimize for in-person visits.

Driving the news. Previously, if you tried to mark shop visits as a goal in Ad Grants, you’d get an error. That restriction appears to be lifted, allowing eligible accounts to include store visit conversions in their primary goal configuration.

  • This update lets you align bidding and optimization with physical visits — especially for visibility in Maps placements and location-driven search results.

Why we care. If you run a nonprofit, museum, place of worship, community center, or other location-based organization, digital engagement doesn’t always translate into mission impact. Optimizing for shop visits bridges that gap, tying ad performance directly to foot traffic.

What to do. If you use Ad Grants, review your account-level goals and confirm shop visits are enabled where eligible. Optimizing for foot traffic could materially improve your local impact — especially if you rely on in-person engagement.

Between the lines. As Google continues to emphasize local intent and Maps-based discovery, bringing store visit optimization to Ad Grants expands your ability to compete for nearby audiences. It shifts the focus from clicks and website traffic to measurable offline action.

First seen. Google Ads expert Jason King spotted this update and shared it on LinkedIn.

Merchant Center becomes a central video hub as Google auto-imports content

24 February 2026 at 22:09
When Google reps push Performance Max before your account is ready

Google’s unified video manager in Merchant Center is no longer empty. After months of showing up in accounts without visible content, the Video Assets section is now automatically populating with sourced videos.

Driving the news. Videos are now automatically pulled in, including content from external sources like YouTube.

  • The feature — first introduced at Google Marketing Live 2025 — was designed to centralize video content inside Google Merchant Center. It began rolling out in September, but many advertisers saw a blank interface with no assets.

Why we care. This confirms Google is moving ahead with its plan to make Merchant Center a central hub for commerce-ready creative — not just product feeds. With videos now auto-populating, you may gain additional visibility across Shopping and Performance Max without extra upload work, but you’ll also need to ensure your YouTube and site videos are optimized for commerce. In short, video is becoming embedded in retail ad delivery, and if you manage it proactively, you’ll have a competitive edge.

Between the lines. By centralizing videos from your website, social platforms, and potentially AI-generated sources, Google is turning Merchant Center into a more comprehensive creative hub—not just a product feed manager. That aligns with the broader shift toward video-first shopping experiences across Search, Shopping, and Performance Max.

What to watch. It’s still unclear how performance reporting, optimization controls, and editing tools will evolve in the Video Assets section. But the shift from an empty placeholder to a populated library shows the infrastructure is now active.

First spotted. PPC News Feed founder Hana Kobzová first spotted this update.

How to keep your content fresh in the age of AI

24 February 2026 at 20:03
How to keep content fresh in an AI-saturated web

AI has made publishing faster and easier than ever. And the result is saturation.

As AI lowers the barrier to production, the web is filling with content that is technically sound, reasonably optimized, and increasingly indistinguishable. When everything looks polished and competent, standing out becomes harder.

AI has changed content output, but users still arrive with intent. They scan headlines, page titles, and descriptions before choosing what to click. They reward clarity, relevance, and usefulness. On a saturated results page, those fundamentals matter more than ever.

Keeping content fresh in the age of AI isn’t about chasing novelty or abandoning proven practices. It’s about returning to what makes content distinct: clear messaging, thoughtful structure, and a strong understanding of what your audience wants.

The real problem with AI content

The biggest issue with AI-generated content isn’t accuracy. It’s sameness.

Because AI models train on vast amounts of existing material, they reproduce familiar patterns: similar phrasing, predictable structures, and safe conclusions. On their own, these outputs read as competent and coherent. In aggregate, they become indistinguishable.

This is why so much content today feels interchangeable. Even when the topic is relevant, the experience of reading it rarely is.

Search engines and users are reacting accordingly. When every result looks and sounds the same, differentiation matters. Freshness still ensures relevance and credibility, but it’s no longer a competitive advantage in itself. What separates one result from another is voice, perspective, and lived experience.

Ironically, AI has made originality more valuable, not less. As automated content floods the web, signals like specificity, usefulness, and intent alignment become stronger indicators of quality. Content that communicates clearly and answers people’s real questions rises above, regardless of whether AI assisted in its creation.

This is where many teams go wrong. In an attempt to compete with AI, they focus on output volume or trendy formats instead of fixing the fundamentals.

Freshness isn’t created by novelty alone. It’s created when content feels unmistakably helpful and unmistakably human.

Fresh, unique content is still built on classic SEO principles

Despite the evolution of content creation tools, the way people use search engines has remained remarkably consistent. Users still arrive with a problem to solve, scan results quickly, and choose the option that feels most relevant to them.

That behavior hasn’t changed because AI exists.

Page titles, headings, and meta descriptions continue to act as the first point of contact between a piece of content and its audience. In search results, they function less like technical fields and more like ad copy.

Yet many organizations assume these elements are outdated or that AI-generated content will somehow compensate for vague or generic positioning. In reality, the opposite is true. As more content competes for attention, clarity becomes a differentiator.

Classic SEO principles still underpin freshness:

  • Clear alignment with search intent.
  • Descriptive, specific language.
  • A logical structure that helps users scan.
  • Messaging that sets accurate expectations before the click.

None of these concepts is new. What’s changed is their importance.

When search results are crowded with similar-looking pages, small improvements in clarity can produce incremental gains. A more descriptive title doesn’t just help search engines understand a page. It helps users recognize that it answers their question.

AI may assist in generating drafts or variations, but it doesn’t replace the need for human judgment in deciding what information matters most or how it should be framed. Fresh content still starts with understanding intent and communicating clearly.

Small SEO changes can lead to a strong impact

To understand why traditional SEO still matters, consider a recent experiment conducted on our website focused on service-based search terms.

The hypothesis was straightforward: If page titles were more descriptive and more clearly aligned with search intent or user pain points, would users be more likely to click? Could visibility and engagement improve without rewriting content or making technical changes?

Before this test, titles followed a familiar format: the service name followed by the company name. While accurate, these titles were vague and did little to communicate value or differentiate the page in search results.

After the update, titles were rewritten to be more specific and benefit-oriented. Instead of simply naming a service, the new titles clarified what the service helped users achieve and reflected the intent behind the search.

One page, for example, shifted from a generic service title to a more descriptive version focused on optimization and lead generation. The result was a 247% increase in clicks on that page alone.

Encouraged by this early signal, similar title updates were rolled out across multiple service pages and allowed to run for approximately one month. The aggregated results were as follows.

As the table above shows, average position didn’t improve on every page. But several key services moved closer to the top of the results, reflected in a lower average position, while earning more clicks and impressions. This suggests clearer, intent-aligned titles helped the right pages surface more prominently and perform better once they did.

Not every page saw improvements, which is precisely the point of testing. There were no dramatic rewrites and no reliance on AI-driven optimization tactics. The improvement came from clearer communication.

The takeaway is simple: This wasn’t an example of AI SEO outperforming traditional methods. It demonstrated that when content aligns more closely with human intent, performance follows.

Strategies for keeping content fresh in an AI-saturated world

Staying fresh in the age of AI doesn’t require abandoning proven practices or chasing every new tool. It requires greater intentionality in how content is created, positioned, and maintained. The strategies below focus on what works, even as the volume of content online continues to grow.

1. Treat intent as the strategy

Traditional SEO is often mischaracterized as keyword stuffing or mechanical optimization. In reality, its foundation has always been search intent.

Before creating or updating content, ask:

  • What problem is the searcher trying to solve?
  • What does a “good” answer look like in their context?
  • What would make this page immediately feel relevant?

AI tools can suggest keywords, but they can’t fully interpret intent. That requires understanding audience behavior, industry nuance, and real-world constraints. When content is shaped around intent first, optimization becomes a byproduct, not the goal.

Freshness emerges when a page answers the right question clearly, not when it targets more variations of the same term.

2. Use page titles and headlines as tools

In an AI-driven content environment, page titles still matter. Search results are crowded with pages that look nearly identical at a glance.

A well-written title is often the deciding factor in whether a user clicks or scrolls past. This is where traditional SEO fundamentals quietly outperform more complex tactics.

Effective titles:

  • Clearly state what the page offers.
  • Reflect the language users search with.
  • Set accurate expectations instead of teasing vague benefits.

Small improvements in specificity can produce meaningful gains.

3. Refresh before you create

One of the most overlooked ways to keep content fresh is to improve what already exists.

In many cases, underperforming content doesn’t fail because it’s outdated or incorrect. It fails because it’s unclear. Updating introductions, tightening headlines, improving structure, and clarifying takeaways can have a greater impact than publishing something new.

A practical approach:

  • Identify pages with impressions but low click-through rates.
  • Review whether titles and descriptions match intent.
  • Adjust framing before expanding content.
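
The first two steps can be sketched as a simple filter over Search Console-style export data. The field names and thresholds below are illustrative assumptions, not a prescribed workflow:

```python
def refresh_candidates(pages, min_impressions=1000, max_ctr=0.01):
    """Pages with plenty of impressions but weak click-through:
    prime candidates for title/description rewrites before new content."""
    return [
        p["url"]
        for p in pages
        if p["impressions"] >= min_impressions
        and p["clicks"] / p["impressions"] < max_ctr
    ]


# Placeholder data in the shape of a Search Console export
pages = [
    {"url": "/seo-services", "impressions": 5200, "clicks": 31},   # CTR ~0.6%
    {"url": "/blog/ai-seo", "impressions": 900, "clicks": 4},      # too few impressions
    {"url": "/ppc-audit", "impressions": 4100, "clicks": 98},      # CTR ~2.4%, healthy
]
```

Only the first page survives the filter: it is already being surfaced, but its snippet isn’t earning the click, which is exactly where a framing fix pays off.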

This strategy is particularly effective in an AI-heavy environment, where new content is abundant but thoughtful updates can deliver stronger results.

4. Lean into specificity and constraints

AI excels at general advice. Humans excel at context.

Content becomes fresh when it reflects specific scenarios, limitations, or trade-offs. Rather than aiming for universal coverage, focus on clearly defined use cases, audiences, or situations.

Specificity might include:

  • Addressing common misconceptions.
  • Explaining why a tactic works in one context but not another.
  • Acknowledging constraints like budget, time, or expertise.

This level of nuance signals credibility and separates genuinely helpful content from generic summaries.

5. Use AI as an accelerator

AI is most effective when it accelerates tasks that don’t require decision-making. Drafting outlines, summarizing research, or generating alternative phrasing can save time. Choosing the angle, defining the message, and interpreting results remain human responsibilities.

A healthy AI-assisted workflow includes:

  • Editorial oversight.
  • Performance review and iteration.
  • Clear ownership of voice and perspective.

When AI is used as a support tool rather than a substitute, content remains intentional and aligned with business goals.

6. Measure freshness by behavior

Publishing more content doesn’t make it fresher; engagement does.

Instead of tracking success by volume, pay attention to signals that reflect real interest:

  • Click-through rates
  • Time on page
  • Scroll depth
  • Return visits

These metrics reveal whether content resonates. Fresh content earns attention because it feels useful.

7. Accept that ‘traditional’ doesn’t mean outdated

The temptation in any technological shift is to assume that what came before no longer applies. But AI hasn’t replaced the need for clarity, structure, and relevance. It has made those qualities more valuable.

Traditional SEO works because it aligns with how people search, decide, and engage. When those fundamentals are executed well, they break through regardless of how content is produced.

Why fresh content actually wins

AI has changed how some content is produced. It has increased speed, lowered costs, and removed many of the barriers that once limited who could publish and how often. What hasn’t changed is how people decide what to read, click, and ultimately trust.

Fresh content wins because it is clear and relevant when someone is looking for an answer — not just because it was generated faster.

The growing presence of AI has exposed a hard truth: Much of what passes for fresh content was never truly differentiated. When similar ideas are repeated at scale, fundamentals like intent alignment, descriptive titles, thoughtful structure, and honest messaging become the strongest signals of quality.

So what’s the path forward? Being more disciplined about how content is framed, maintained, and measured. Successful brands and publishers will treat freshness as a function of usefulness, not output.

AAO: Why assistive agent optimization is the next evolution of SEO

24 February 2026 at 19:00
AAO- Why assistive agent optimization is the next evolution of SEO

Search engine optimization (SEO) — be found. Answer engine optimization (AEO) — be the answer. AI engine optimization (AIEO) — be the recommendation. Assistive agent optimization (AAO) — be chosen when no human is in the loop. Four stages where each clearly absorbs the last.

The word that stays constant across the last two is “assistive,” and that’s important because it names the purpose: what the system does for the user. The word that changes is just one: engine becomes agent — a single pivot that tracks the real shift in our industry, from systems that recommend to systems that act.

For me, everything else in the naming debate is a distraction. The SEO industry is fractured across at least six competing terms for what’s functionally the same discipline. Each term has a constituency, each constituency is spending energy defending its label, and while we argue about what to call the work, we’re not doing the work.

So skip a step with me: I’ll explain in the next few paragraphs why AAO is a good solution — then we can all get back to our jobs.

Every competing acronym covers part of the job, none covers all of it

Every AI system that makes recommendations or takes autonomous action — Google, Bing, ChatGPT, Perplexity, Copilot, and any other engine that glides into view — runs on three components: large language models, knowledge graphs, and traditional search. I call this the algorithmic trinity.

The balance differs by platform (ChatGPT leans LLM-heavy, Google leans on its knowledge graph), but the trinity itself is universal. Even Google team members I’ve spoken with agree on this architecture.

SEO also described the purpose the engine served, which I’ve always liked. So here’s a quick look at the competing acronyms against those three components.

  • GEO describes mechanism, not purpose. It covers the LLM layer, includes search by necessity, but misses the knowledge graph entirely. Because “generative” is a technology label, the term expires when the technology evolves. “Generative agent optimization” describes nothing, which tells you the term wasn’t built to scale.
  • Entity SEO covers the knowledge graph layer (entities live there), treats search as the delivery mechanism, and tangentially acknowledges LLMs. The term also fails the glossary test, which I now try my best to apply to my own writing. If a non-specialist can’t understand a term on first encounter, it was named for the speaker, not the listener. Every time I use the word “entity” to describe “brand” in conversations with business leaders, I have to explain myself.
  • LLM optimization is honest about its scope, but that’s one-third of the job, ignoring the knowledge graph and search entirely.
  • AI SEO bolts “AI” onto the old term, which makes it easy access for outsiders, but it doesn’t have long-term legs. Already in 2026, people aren’t searching, they’re researching, and some have agents researching for them.

All of them are incomplete, and I’d argue that incomplete terminology produces incomplete strategy because practitioners naturally optimize for the leg their acronym covers and neglect the others.

Assistive agent optimization (AAO) evolves neatly from answer engine optimization and covers everything we need to build a meaningful, complete strategy: 

  • “Assistive” names the purpose across the full algorithmic trinity. 
  • “Agent” names the actor that uses all three components to make a decision. 
  • “Optimization” is what we do. 

That’s a three-legged stool with all three legs the same length, which, if you’ve ever sat on one, is the only stool that doesn’t wobble.

Dig deeper: SEO, GEO, or ASO? What to call the new era of brand visibility in AI [Research]

The glossary test says AAO isn’t perfect, but it’s the closest we’ve got

Generative engine optimization requires the listener to know what a generative engine is, entity SEO requires them to know what an entity means in a technical context, and LLM optimization requires them to know what an LLM is — all three fail the glossary test.

Assistive agent optimization doesn’t pass perfectly either because “assistive” requires half a second to process. But “agent” is mainstream vocabulary now (every tech company on earth is selling us agents), and “optimization” is self-explanatory. Two out of three words land with zero friction, and the third doesn’t need explaining after half a second’s thought.

If you have a better term that covers the full algorithmic trinity — pull and push (see below) — and passes the glossary test more cleanly, I’m open, because the discipline matters more than the term.

More importantly, AAO describes a role (optimize so the assistive agent chooses your brand), not a technology, and roles outlast technologies. The term that names what you do is the one you’ll still be using in five years, regardless of which model architecture or retrieval method is fashionable.


Here’s what changes when you adopt the AAO frame

Your brand identity becomes the foundation, not a nice-to-have. When an agent books a hotel, selects a supplier, or recommends a consultant, it doesn’t scan a list of pages and pick the one with the best title tag. It evaluates what it knows about the brand itself: who this company is, what it does, who it serves, why it would be a reliable solution, and how confident the agent is in those facts. 

That confidence starts at the entity home — the one page you control that anchors everything the algorithmic trinity knows about you — and cascades outward through every corroborating source. If the agent doesn’t understand your brand clearly, it will pick a brand it does confidently understand.

The funnel moves inside the agent. The traditional acquisition funnel (awareness, consideration, decision) used to happen with a bouncing on-and-off-your-website dance, where the search engine was one traffic source that sent people to you. 

Under AAO, the entire funnel happens inside the AI, without the user ever seeing a list of options. The agent becomes aware of you, considers you against alternatives, and decides — all before delivering the result. Your role is no longer to attract visitors to a funnel on your site, it’s to be the answer when the agent runs its own funnel internally.

You might be thinking, “We’re not there yet.” You’re right. We’re not, for most people.

But the funnel is already in the assistive engine (ChatGPT, Perplexity, Google AI Mode), and they bring people to the perfect click — the zero-sum moment in AI where they present one single solution to the user. Most people take the solution they’re offered. The only thing missing is the agent clicking the buy button.

The web index is losing its monopoly as the source of truth. For two decades, the crawled web was effectively the only dataset that mattered: if Google hadn’t indexed it, it didn’t exist. That monopoly is breaking on two fronts. 

  • Proprietary datasets are feeding agents directly as search evolves into what I’d call ambient research, where in-app push recommendations surface your brand inside the tools people are already using, without anyone typing a query. 
  • Agents and engines already pull from APIs, booking systems, internal databases, and structured feeds that never touch a traditional web index. The web index doesn’t disappear (your website is still the entity home — the anchor), but it’s no longer the sole gatekeeper, and you should already be building your strategy on that basis.

The push layer is back, too. For 20 years, we got lazy: Google and Bing crawled our sites, rendered our JavaScript, and figured out what our pages meant even when we made it hard, while we published and waited. That pull model will continue, but you’ll need to account for several new push channels. 

IndexNow (Fabrice Canel has been building this at Bing for years), MCP, and whatever Google eventually ships all do the same thing: they let you push structured information to the systems that act, rather than waiting for those systems to come and find it. It’s the 1990s again — submitting URLs and actively feeding the ecosystem. 
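Pushing to IndexNow is a single HTTP call. Here is a minimal sketch in Python of assembling the batch payload the protocol expects; the host, key, and URL are hypothetical placeholders, and the key file must actually be hosted at the key location for the endpoint to accept the push:

```python
import json

def build_indexnow_payload(host, key, urls):
    """Assemble the JSON body the IndexNow endpoint expects for a batch URL push."""
    return {
        "host": host,
        "key": key,
        "keyLocation": f"https://{host}/{key}.txt",  # your key file must live here
        "urlList": urls,
    }

# Hypothetical host and key; replace with your own domain and verified key file.
payload = build_indexnow_payload(
    "www.example.com",
    "a1b2c3d4e5f6",
    ["https://www.example.com/new-page"],
)

# To submit, POST the JSON to the shared endpoint, e.g. with urllib:
#   urllib.request.urlopen(urllib.request.Request(
#       "https://api.indexnow.org/indexnow",
#       data=json.dumps(payload).encode("utf-8"),
#       headers={"Content-Type": "application/json; charset=utf-8"}))
print(json.dumps(payload, indent=2))
```

One submission fans out to all participating engines (Bing, Yandex, and others), which is what makes the push layer cheap to adopt.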

My guess is that Google hasn’t adopted IndexNow not because it’s a bad idea — it’s a brilliant idea — but because it wasn’t Google’s idea, and Google would rather ship a proprietary version. 

The technical generosity we’d been leaning on comes back to bite us, too: JavaScript rendering was a favor Google extended, not a standard the industry can rely on, because most AI agent bots don’t render JavaScript. If your content sits behind client-side rendering, a growing number of agents simply never see it.
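A quick way to gauge your exposure is to check whether key content appears in the raw server-delivered HTML, which is all a non-rendering bot ever sees. A minimal sketch, using canned HTML strings in place of a live fetch (the page content and phrase are hypothetical):

```python
def visible_without_js(raw_html: str, phrase: str) -> bool:
    """Return True if the phrase is present in the server-delivered HTML,
    i.e. visible to a crawler that never executes JavaScript."""
    return phrase.lower() in raw_html.lower()

# Server-rendered page: the copy is in the initial HTML.
ssr_page = "<html><body><h1>Plumbing Services in Austin</h1></body></html>"

# Client-rendered page: an empty root div that JavaScript fills in later.
csr_page = '<html><body><div id="root"></div><script src="app.js"></script></body></html>'

print(visible_without_js(ssr_page, "Plumbing Services"))  # True
print(visible_without_js(csr_page, "Plumbing Services"))  # False
```

In practice you would run the same check against the HTML returned by a plain HTTP request (no browser), which approximates what most AI agent bots receive.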

(All of this maps to the 10-gate DSCRI-ARGDW pipeline I’ll lay out next in this series.)

Dig deeper: The origins of SEO and what they mean for GEO and AIO

Your SEO skills still apply. The target moves from the engine to the agent.

You don’t need to master every intermediate stage before adopting the AAO frame, because AAO contains AIEO contains AEO contains SEO — the skills stack — and only the target changes: be chosen when the agent acts, recommended when the user researches, and mentioned when the user asks.

The compounding advantage I documented in “Rand Fishkin proved AI recommendations are inconsistent – here’s why and how to fix it” also applies here. The top performers in our data captured 59.5% of all citability by February, up from 30.9% in December — nearly doubling their share in two months. 

People who adopt this frame will be able to reliably build pipeline confidence while everyone else argues about acronyms — and the gap will widen over time.

The discipline has a name, the agents are already acting, the push layer is here, and the lazy days are over.

The first two articles were the “what” and the “why.” Next week, the how begins. I’ll open up the 10-gate pipeline I’ve been referencing, DSCRI-ARGDW, which stands between your content and a conversion from an AI engine.

  • Discovered: The bot finds you exist.
  • Selected: The bot decides you’re worth fetching.
  • Crawled: The bot retrieves your content.
  • Rendered: The bot translates what it fetched into what it can read.
  • Indexed: The algorithm commits your content to memory.
  • Annotated: The algorithm classifies what your content means across 24+ dimensions.
  • Recruited: The algorithm pulls your content to use.
  • Grounded: The engine verifies your content against other sources.
  • Displayed: The engine presents you to the user.
  • Won: The engine gives you the perfect click at the zero-sum moment in AI.

The perfect local business contact page built for Google and conversions

24 February 2026 at 18:00

When you hear the term “contact page,” you probably think of a simple page containing contact info and maybe a form. 

I’m here to tell you why that’s a big miss from a local SEO perspective and show you how to build a contact page that builds your prominence with Google and helps you convert more leads.

Google pays special attention to your contact page

The former head of Google Business Profile Support, Joel Headley, once told me that Google specifically crawls and parses your contact page to gather information about your business.

This led me to realize that most businesses have awful contact pages. They list their name, address, and phone number (NAP), embed a contact form, and call it a day.

Google is saying, “Give me data about your business,” and you’re saying, “No data for you.”

What you need to do instead is give your contact page the same level of care and attention as a multi-location landing page.

Here are the must-haves for a contact page that converts site visitors into paying customers:

  • Business identity.
  • Contact information.
  • Trust factors and social proof.
  • Location-specific content.
  • Amenities.
  • Call to action.

1. Business identity

Just like every other page on your site, your contact page should reflect your brand. This means you should include:

  • Your business logo (that matches all your other marketing materials and real-world signage).
  • Your slogan (bonus points if you can work some keywords into it for added SEO value).
  • A short introduction that explains what your business does, where it’s located, and what your unique value proposition (UVP) is.

Dig deeper: The local SEO gatekeeper: How Google defines your entity

2. Complete contact information

You won’t believe how many businesses forget to include their contact information on their contact page. Here’s what you absolutely have to include:

  • Full business name.
  • Contact form and an email address people can write to (I recommend both).
  • Complete address.
  • Phone and text numbers.
  • Social media links.
  • Hours of operation (including any holiday, seasonal, or special hours).
  • Shopping options (e.g., in-store pickup, curbside pickup, delivery, appointment only).
  • Embedded Google Map to your business (not your address).
    • A common mistake businesses make is embedding a map of their business address on Google Maps instead of their actual Google Business Profile.
    • Make sure you embed a map of your business listing on Maps so that whenever someone clicks it, they send engagement signals to your profile. Practically, this means:
      • Search for your business name on Google Maps.
      • Bring up your profile.
      • Click the Share button.
      • Click the Embed a map tab.
      • Copy and paste the code into your contact page.
  • A link to your Google Maps listing.
    • A few years ago, Holly Starks conducted a case study to test whether driving directions affected local rankings. She set up Google Maps driving directions on 100 cell phones, put them in her car, and drove to the business. The results were dramatic. The business’s rankings jumped from the 20s to number 1.
    • In the past, I recommended writing driving and walking directions on your contact page. Now, with Starks’ findings in mind, adding a link to your Google Maps listing with anchor text like “Get driving directions” is even better. It encourages people to use Google Maps driving directions and can increase engagement signals to your Business Profile.
  • Accepted payments.
  • Parking details.
Sample embedded Google Maps link

Including detailed business information helps customers contact and visit you and signals to traditional search engines and AI search tools that your business is legitimate and credible.
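The same contact details can also be exposed as schema.org LocalBusiness structured data, giving crawlers and LLMs a machine-readable copy of your NAP. A minimal sketch in Python that emits a JSON-LD block; every business detail here is a hypothetical placeholder:

```python
import json

# Hypothetical business details; mirror your real name, address, and phone (NAP).
business = {
    "@context": "https://schema.org",
    "@type": "LocalBusiness",
    "name": "Example Plumbing Co.",
    "telephone": "+1-555-0100",
    "address": {
        "@type": "PostalAddress",
        "streetAddress": "123 Main St",
        "addressLocality": "Austin",
        "addressRegion": "TX",
        "postalCode": "78701",
    },
    "openingHours": "Mo-Fr 08:00-17:00",
    "sameAs": ["https://www.facebook.com/exampleplumbing"],  # social profiles
}

# Paste the resulting tag into the contact page's HTML.
json_ld = '<script type="application/ld+json">' + json.dumps(business) + "</script>"
print(json_ld)
```

Keep the marked-up values identical to what appears on the page and in your Google Business Profile; mismatches undercut the legitimacy signal.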

Bonus tips for your contact form:

  • Add a compelling call to action (you can use the same CTA throughout your page).
  • Set up form conversion tracking.
  • Avoid spam by including reCAPTCHA, using a plugin, requiring double opt-in, and formatting your email address so bots can’t read it (e.g., hello (at) domain (dot) com).
  • Make sure your contact section matches your Google Business Profile as a signal of legitimacy.
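The email-formatting trick in the tips above can be automated. A minimal sketch (the address is a placeholder):

```python
def obfuscate_email(address: str) -> str:
    """Rewrite an email address so simple harvesting bots can't parse it."""
    local, _, domain = address.partition("@")
    return f"{local} (at) {domain.replace('.', ' (dot) ')}"

print(obfuscate_email("hello@domain.com"))  # hello (at) domain (dot) com
```

Note the trade-off: an obfuscated address is harder for legitimate visitors to copy, which is one reason to pair it with a contact form.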

3. Trust factors and social proof

Your contact page shouldn’t just tell people how to reach you. It should prove they’re making the right decision before they ever click or call.

Clear expectations


Be clear about what a customer can expect once they reach out to you and confirm they’ve made the right choice in contacting you:

  • How long are response times? 24 hours? 2 business days?
  • What are the next steps? What can they expect from your team?
  • Is there any useful information you can give them about your team, your location, or anything else that sets you apart from your competitors?

Experience and credentials


Reinforce trust and increase your page’s conversion rate by listing any:

  • Industry associations you’re a member of (locally and nationally).
  • Local chamber of commerce groups.
  • Professional groups and associations.
  • Meetup groups.
  • Neighborhood associations.
  • Better Business Bureau rating.

Tip: Link each association name to your business’s profile on its website.

Dig deeper: Local SEO sprints: A 90-day plan for service businesses in 2026

Awards and accomplishments

Sample business awards

Include any awards your business has received or mentions in the press, and link each one to the relevant article or website. If you’ve been mentioned frequently in the press, you can create a dedicated media section on the page.

Reviews and testimonials


Embed reviews from other sites and include testimonials on your contact page to build trust. You can increase reviewers’ credibility by including their photos, names, cities, and a link to their websites or directly to the review platform they used.

Be sure to include your overall review rating and total number of reviews.

Remember, customers don’t expect your business to have a perfect 5-star rating. A rating around 4.7-4.9 signals you’re a real business, not one that’s purchased all its reviews.

Customer reviews not only build trust and increase conversions, they also add unique, locally relevant content to the page, which is great for traditional and AI search performance.

Tip: This section is also great for requesting reviews, since repeat customers might visit your contact page. Add a Google review request link with a call to action to generate more reviews for your Google Business Profile. 

Dig deeper: 7 local SEO wins you get from keyword-rich Google reviews


4. Location-specific content


Create content that references local information and explains exactly what your business does, where it’s located, and why prospects should choose you.

Here are some ideas for local content:

  • Include photos and descriptions of your team members.
  • Tell visitors about the customers you serve and your areas of expertise.
  • If you’re located in a popular neighborhood or area, mention that in your content.
  • Highlight any customer satisfaction guarantees or price-match policies.
  • Mention any upcoming events, volunteer efforts, or relevant partnerships.

Dig deeper: Top SEO tips for location-specific websites

5. Amenities

Business information - amenities

Start by reviewing your Google Business Profile’s attributes section and consider listing those attributes on your contact page, such as whether the business is family- or women-owned, neurodivergent-friendly, or offers outdoor seating or home delivery. 

Then list any other attributes your business has that Google doesn’t provide as options. Detailed business attributes help search engines, LLMs, and customers understand that you meet specific needs.

This can be especially useful for AI search, where people use more conversational queries, such as “Give me a list of cafes in Seattle that are wheelchair accessible and have free WiFi.”

6. A clear CTA button

Sample CTA button

If you’re going to do all this work to make a killer contact page, don’t forget to put the cherry on top. Sprinkle strategically placed calls to action throughout the page to encourage visitors to contact you. Make them bright, animated, eye-catching, and convincing.

Treat your contact page like a local SEO asset

If you want a contact page that helps people reach out to you, informs search engines and LLMs about your business, and converts visitors into customers, treat it like a multi-location landing page. Save this list so you remember every section your contact page needs.

Must-have sections of a contact page

Do this, and your contact page will outperform 99% of your competitors’ contact pages, because most businesses do a terrible job with them.

How to write paid search ads that outperform your competitors

24 February 2026 at 17:00

How often do you review your PPC ad copy? Not just analyzing the performance of each asset within the ad platform, but also reviewing your ads in the context of how they appear next to competitor ads?

Are you using the exact same messaging as your competitors? Does your offer stand out from theirs? Which ads are bland and generic, and which provide concrete calls to action and compelling selling points?

Let’s walk through several tips for writing paid search copy that stands out in search results and converts customers for your brand.

1. Think about how assets will appear together, not just individually

When you’re writing Responsive Search Ads, it’s easy to fall into the trap of simply filling in all 15 headline options and all four descriptions. 

However, if each headline essentially says the same thing with slightly different wording, your ad copy will appear bland and repetitive in the SERP when two or three headlines are shown together.

Zoho Google Ads

For instance, if this example ad showed the following, it would be less helpful:

  • “Project Management Software – Project Management Solution – Project Management”

Instead, it says:

  • “Project Management Software – Trusted by 3 Million Users”

If you want to test multiple headlines with slightly different wording, pin them to the same position so the ad platform can rotate between them, but not show both at the same time. Zoho appears to be doing this by using both “Preferred by 3 Million Users” and “Trusted by 3 Million Users” as options.

Zoho Google Ads - Trusted by 3 Million Users

Dig deeper: The anatomy of compelling search ad copy

2. Don’t obsess over ad strength

The visibility of the ad strength rating looms over every Google Ads account. Don’t let chasing an Excellent score consume your focus.

Focus more on making sure each headline and description speaks accurately to your benefit points than on including the maximum number of each. Pinning may negatively impact ad strength, but as discussed above, it can help make your messaging cleaner.

3. Use AI as a partner, but don’t blindly outsource all your copy to AI

Google and Microsoft make ad writing easy, generating text for all your ad assets with a single click. Your LLM of choice can also spin out halfway acceptable copy with the right prompt.

These tools can provide a helpful starting point, but they shouldn’t be the final result you use without careful review. Don’t skip the human touch when reviewing the copy you get back.

Problems can range from copy that doesn’t reflect your brand voice to flat-out inaccuracies. In industries such as finance and healthcare, where legal guidelines matter, AI-generated copy may not be compliance-friendly.

Dig deeper: How to write high-performing Google Ads copy with generative AI


4. Include value propositions, and back them up

It’s not enough to claim that you’re the “Best Local Contractor” in your area. Think of concrete ways to reinforce superlative statements like this.

For instance, “Voted Best Local Contractor by [News Outlet]” provides a tangible source for the claim. Mention awards or rankings from organizations your prospective customers are likely to recognize.

Incorporating numbers, where possible, also helps bring credibility to your messaging claims.

  • Years in business. If you’ve been around a long time, stating this positions you well against newer players in the market.
  • Number of customers served.
  • Number of locations for physical businesses.
  • Number of connectors for a software product.
  • Number of active users.
  • Number of trips booked.
  • Number of properties managed.

One word of caution: If you include numbers that are likely to change over time, such as how many customers you serve, revisit them periodically and update them for accuracy. Ranges are fine, too: for example, “Over 500 Locations.”

5. Highlight ease of effort

In today’s busy culture, saving time and hassle can be one of your biggest selling points. Think about where the product or service you’re promoting can reduce effort for your target audience.

  • Open an account in 10 minutes.
  • Complete your application online.
  • Schedule a same-day appointment.
  • Conduct your consultation remotely.
  • Repairs done while you wait.

Make sure you can back up what you promise here, and consider whether current customer reviews reflect the experience your claims describe.

Dig deeper: How to assemble captivating Google Ads copy

6. Offer a ‘free’ hook

Just like free samples at Trader Joe’s, mentions of “free” in ad copy immediately draw a user’s attention. What can you offer as a free entry point for potential customers?

  • Free demo.
  • Free trial.
  • Bonus for new customers.
  • Free college application.
  • Free quote.
  • Free content, such as ebooks, whitepapers, or webinars.

Whether it’s a trial of a software product or a free visit to your home to assess what’s needed for pest control, this type of offer can be what convinces prospects to fill out a form and enter your sales funnel.

For instance, Strayer University highlights, “Pass 3 Bachelor’s Courses, Earn 1 Tuition Free.” In an age of skyrocketing college costs, that’s an attractive reason to click and learn more.

Strayer University PPC ad

7. Turn off automated assets

If you’re not careful with your account settings, Google and Microsoft can automatically generate assets, from ad copy to sitelinks, without your review. That can create concerns for compliance and for overall messaging accuracy.

Make sure you turn off this option at the account level to avoid issues with unwanted copy or unexpected links to irrelevant pages.

Dig deeper: When to trust Google Ads AI and when you shouldn’t

8. Highlight pricing where it makes sense for your brand

When people are comparison shopping, they usually want quick visibility into cost. Of course, providing pricing may be more or less straightforward depending on your business, and price isn’t always a primary selling point for every brand.

If you’re in an industry where showing a cost is simple, including it in your ad copy can help. When your pricing is competitive, mentioning it helps you stand out.

If your pricing is higher than most competitors, showing that cost may help filter out people you don’t want clicking your ads. For example, lower-priced competitors may cater to small businesses, while your company serves enterprise-level organizations that need more robust solutions. 

If you offer multiple price tiers or clearly defined costs for different services, consider using price assets to highlight them. For example, you might break out cost by number of users for a SaaS product.

9. Mention locations in regional campaigns

If your business serves a particular region, mention locations in your ad copy to create a local connection.

For example, if you just opened a new store in Buckwheat County, including “Now Open in Buckwheat County” can help appeal to users in that area. Your ad will likely stand out against national brands running generic messaging.

You can set up ad groups based on regional keywords and tweak your headlines to reference those locations. Also consider using location insertion to dynamically include regions in your copy.

Dig deeper: Localization in Google Ads: How to structure multi-market campaigns

10. Review and revise your ad copy

Now that we’ve covered ways to improve your paid search copy, take a moment to review your current ads.

  • Where can you better think through how assets combine?
  • What value propositions aren’t you mentioning yet?
  • How can you tailor your wording more directly to customers’ concerns, such as by highlighting pricing or regions?

Start creating new copy variants and testing them to improve your PPC performance.

Your ad doesn’t compete in isolation — it competes in the SERP

Paid search success isn’t about filling every field or chasing an Excellent ad strength score. It’s about how your messaging appears next to competitors in the SERP.

Review your ads in context. Look at how assets combine. Strengthen value propositions, highlight what makes you different, and test new variations.

If your ad sounds like everyone else’s, it won’t stand out. Make sure it does.

Google Ads support now requires account change authorization

23 February 2026 at 22:43

Advertisers contacting Google Ads support may now need to grant explicit authorization before they can even submit a help request — giving a Google specialist permission to access and make changes directly inside their account.

Here’s what’s happening. Users are first routed to a beta AI chat. If they opt to submit a support form instead, they must tick an “Authorisation” box. The wording allows a Google Ads specialist, on behalf of the company, to reproduce and troubleshoot issues by making changes directly in the account.

The fine print is clear. Google doesn’t guarantee results. Any adjustments are made at the advertiser’s own risk. And the advertiser remains solely responsible for the impact on campaign performance and spending.

Why we care. The required checkbox shifts more responsibility onto advertisers at a time when automation and AI already limit hands-on control. If support makes changes, the performance and spend risk still sits with the advertiser.

Between the lines. This creates a trade-off between speed and control. Granting access could accelerate troubleshooting, but it also opens the door to account-level changes that may affect live campaigns — without any assurance of improved outcomes.

The bottom line. Getting support may now mean temporarily handing over the keys — while keeping full accountability for whatever happens next.

First seen. This new caveat to getting support was spotted by PPC specialist Arpan Banerjee, who shared the message on LinkedIn.

What it takes to make demand gen work for B2B and ecommerce

23 February 2026 at 21:00

Demand Gen marks a shift in Google Ads toward visual advertising beyond keywords and text. Relying on traditional strategies when testing it wastes budget, hurts performance, and limits opportunity. To succeed, you have to think more like a social advertiser than a search advertiser.

At SMX Next, Industrious Marketing owner Jack Hepp explained why many businesses struggle with demand gen campaigns — especially in B2B and lead generation — while also sharing insights relevant to ecommerce.

Understanding the shift: From intent to interruption

Demand Gen reflects Google’s shift from intent-first search advertising to visual, discovery-based campaigns.

Instead of targeting users actively searching for your service, you reach them as they scroll through YouTube, Gmail, or Discovery feeds.

This changes your approach: visual creative becomes the new keyword, replacing traditional targeting.

Common misalignments in Demand Gen strategy

Applying outdated search strategies can lead to failure with Demand Gen. The four main mistakes:

  • Expecting bottom-of-funnel CPAs from mid-funnel traffic.
  • Using overly broad, “spray and pray” targeting.
  • Running bland, generic creative.
  • Not knowing how to optimize without negative keywords.

Success requires a social advertising mindset.

Campaign structure: Understanding the hierarchy

Demand Gen uses a two-level structure.

  • Campaign-level settings control broad parameters like bidding strategy, conversion goals, and device targeting.
  • Ad group–level settings control audiences, locations, and channels.

Each ad group learns independently (insights don’t transfer), allowing precise audience segmentation with tailored creative.

Creating interruption-based creative

You must stop a user’s scroll within 3-4 seconds. Your creative must capture attention immediately, speak to a specific pain point, and present your solution.

Unlike search ads — where users are actively looking for you — Demand Gen interrupts browsing, so your message must be instantly compelling and problem-focused.

Aligning visuals to the customer journey

Match your offer to audience readiness.

  • Cold audiences need educational content like free guides or diagnostic tools.
  • Warm audiences respond to case studies, webinars, and comparison tools.
  • Hot audiences are ready for demos and direct purchase offers.

Misaligning them — like pushing demos to cold audiences — guarantees failure from the start.

The power of problem-focused creative

Generic ads with stock photos and basic headlines get scrolled past. Winning creative uses bold headlines, striking visuals, and problem-focused messaging.

  • For example, “43% of cyberattacks target small businesses” speaks to a specific pain point, making the ad stand out and prompting engagement instead of a scroll.

Bidding and budget strategies

Demand Gen uses campaign goals rather than traditional bidding strategies: conversion-focused, click-focused, or conversion-value-focused.

  • Aim for 50+ conversions per month and budget 10–15x your target CPA to build enough data.
  • For click-based bidding, set budget based on desired traffic volume and target CPC.
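The budget guidance above is easy to sanity-check. A minimal sketch, assuming a hypothetical $50 target CPA and reading the 10-15x multiplier as a daily budget (the session didn't specify the period):

```python
target_cpa = 50            # hypothetical target cost per acquisition, in dollars
min_monthly_conversions = 50

# Budget 10-15x the target CPA (read here as a daily budget).
daily_budget_low = 10 * target_cpa    # 500
daily_budget_high = 15 * target_cpa   # 750

# At the target CPA over a 30-day month, the low end of the range funds:
monthly_conversions = (daily_budget_low * 30) // target_cpa  # 300

print(monthly_conversions >= min_monthly_conversions)  # True
```

The point of the arithmetic: check that your budget, at your realistic CPA, actually funds the conversion volume the algorithm needs before you launch.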

Demand Gen is highly data-reliant, so hitting these thresholds is critical to performance.

Can Demand Gen work with small budgets?

Yes, with strategic planning.

Focus on mid- or upper-funnel audiences and optimize for MQLs instead of bottom-funnel conversions. This helps you reach 50+ monthly conversions for data density, even with smaller budgets.

Align your goals, targeting, and budget to generate enough conversion data.

Building the right audience

Avoid two extremes:

  • Audiences that are too broad (billions of impressions), where Google can’t identify your target.
  • Audiences that are too narrow (a few thousand impressions), where you can’t build data density.

The sweet spot: start with custom segments based on search terms or competitor websites, then layer in lookalike segments and strategic first-party data. Avoid optimized targeting at first — it works best to expand already successful campaigns.

The role of creative in targeting

Your creative shapes who Google targets. The people who engage with your ads teach Google who to show them to next.

Performance peaks when your creative speaks to your ideal customer profile. Align messaging to the buyer’s stage — cold audiences need different messaging than hot prospects.

Strategic exclusions

Use exclusions surgically, not broadly. It’s tempting to exclude like negative keywords, but over-excluding shrinks your audience too much.

Focus only on clear non-converters (e.g., specific age groups, locations, or audiences you know won’t respond). Give Google room to find engaged users within your parameters, rather than narrowing to the point of ineffectiveness.

Optimization: Where to focus

Without negative keywords, optimize through three levers: creative, audience, and offer. Test multiple formats (video, image, carousel) and styles (UGC, testimonials, problem-focused messaging). Continuously refine what works with new hooks and data points.

Test offers to match audience readiness — cold audiences need educational content, while hot audiences need direct CTAs.

Prioritize post-click optimization: improve landing pages, strengthen tracking with CRM integration, and ensure clean data feeds Google’s learning.

Real-world case study

A telecommunications company targeting B2B managed IT services drove strong results by aligning all three elements.

  • Offer: An interactive quiz showing businesses how managed IT could reduce costs.
  • Targeting: Custom segments based on proven search terms and competitor website visitors.
  • Creative: Problem-focused messaging about cybersecurity threats to small businesses.

Results:

  • $10 cost per MQL.
  • 3.8% conversion rate.
  • 40% of quiz takers became SQLs.
  • 20% increase in total SQLs.

Key takeaways

As you plan your next campaign:

  • Match your creative to your customer and their stage in the journey.
  • Target the right audience at the right point in that journey.
  • Test and optimize creative and offers to find what resonates and drives action.


Mastering generative engine optimization in 2026: Full guide by Tor.app

23 February 2026 at 16:00
Traditional search results vs AI-generated answer with brand citations

Gartner predicted traditional search volume will drop 25% this year as users shift to AI-powered answer engines. Google’s AI Overviews now reach more than 2 billion monthly users, ChatGPT serves 800 million users each week, and Perplexity processes hundreds of millions of queries every month.

Getting found online is no longer just about ranking on Page 1. It’s about being the source AI engines cite when they generate an answer.

That’s the job of generative engine optimization (GEO) — and in 2026, it’s no longer optional. This guide shows you how to build, execute, and measure a GEO strategy that actually works.

What is GEO — and why 2026 is the tipping point

GEO is the practice of structuring your content and digital presence so that AI-powered search platforms — including ChatGPT, Google AI Overviews, Perplexity, Claude, and Copilot — can retrieve, cite, and recommend your brand when answering user questions.

If traditional SEO was about earning a spot among 10 blue links, GEO is about earning a place among the two to seven domains large language models typically cite in a single response. The competition is tougher, but the payoff is big: when an AI engine names your brand in its answer, it delivers an implicit endorsement no organic listing ever could.

SEO vs generative engine optimization key differences comparison chart

Several forces make 2026 the tipping point. AI search adoption is moving beyond experimentation as users form platform loyalty, choosing their preferred AI engine the way they once chose between Google and Bing.

At the same time, GEO has gone mainstream at the enterprise level, with dedicated conferences, agency specializations, and a growing ecosystem of purpose-built tools. Academic research reinforces this shift. A Princeton study that coined the term, along with a 2025 paper on citation bias in AI search, shows that AI engines strongly favor earned media (authoritative third-party sources) over brand-owned content.

Understanding this dynamic isn’t optional. It’s the foundation of any effective GEO strategy.

A practical GEO framework: assess, optimize, measure, iterate

Treating GEO as a one-time content tweak is the biggest mistake you can make. In reality, GEO demands the same ongoing discipline as SEO. The framework below lays out a repeatable structure to get it right.

Four-phase GEO framework: assess, optimize, measure, iterate cycle

Phase 1: Assess your AI search readiness

Before you optimize anything, you need a baseline. Most brands obsess over Google rankings yet have no visibility into how AI engines perceive and present their brand. That’s like running a business without ever checking your bank balance.

An effective GEO audit should answer a few core questions:

  • Are major AI engines citing your content at all? 
  • Can AI crawlers read and understand your structured data? 
  • How does your brand show up in AI-generated answers — accurate, positive, neutral, or wrong? 
  • Where are competitors earning AI citations that you’re missing?

The audit doesn’t need to take months. Tools like Geoptie’s free GEO Audit can assess your site’s AI search readiness and surface actionable insights in minutes—giving you a clear starting point before you invest in optimization.

Phase 2: Optimize your content for AI engines

This is the tactical core of any GEO strategy. Focus your optimization on four areas: content structure, entity authority, technical foundations, and content freshness.

Structure content for AI retrieval

AI engines don’t read content the way people do. They break pages into individual passages and evaluate each one for relevance, clarity, and factual density. Every section needs to stand on its own.

Start each section with a clear, direct answer. Then expand with context.

  • Use a clean heading hierarchy (H2 and H3) to signal the topic of each passage.
  • Add brief TL;DR statements under key headings so they can stand alone as answers.
  • Include FAQ sections. AI engines rely heavily on clear question-and-answer pairs when building responses.
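
Passage-level retrieval can be illustrated with a short sketch. The `split_into_passages` helper below is hypothetical, not any engine's real chunker; it simply shows why each H2/H3 section needs to read as a standalone answer:

```python
import re

def split_into_passages(markdown_text):
    """Split a markdown page into standalone passages, one per H2/H3
    section, roughly the way a retrieval system might chunk it.
    Illustrative sketch only -- real AI crawlers use their own rules."""
    passages = []
    current_heading, current_lines = "Intro", []
    for line in markdown_text.splitlines():
        match = re.match(r"^(#{2,3})\s+(.*)", line)
        if match:
            if current_lines:
                passages.append((current_heading, " ".join(current_lines)))
            current_heading, current_lines = match.group(2), []
        elif line.strip():
            current_lines.append(line.strip())
    if current_lines:
        passages.append((current_heading, " ".join(current_lines)))
    return passages

page = """## What is GEO?
GEO structures content so AI engines can cite it.

### Why it matters
Each section must stand alone as an answer."""

for heading, body in split_into_passages(page):
    print(f"{heading}: {body}")
```

If the passage under a heading can't answer a question by itself, it's unlikely to be selected as a citation, no matter how strong the full page is.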

Build entity authority

GEO focuses on entities — your brand, your people, your products — not just individual pages. Strengthen those entity signals to increase the odds that AI engines recognize your brand and cite it with confidence.

  • Keep your brand mentions consistent across the web. 
  • Publish clear, detailed About and author bio pages. 
  • Pursue a Wikipedia presence when it makes sense. 
  • Actively build and manage your knowledge panel.

Research shows AI engines favor earned media — third-party coverage, reviews, and industry mentions — over content on your own site.

Digital PR and thought leadership aren’t just brand plays anymore. They’re direct GEO levers.

Nail the technical foundations

Technical GEO optimization overlaps with traditional SEO, but it adds AI-specific layers.

  • Implement schema markup — especially Article, Organization, FAQ, HowTo, and Breadcrumb — to help AI engines parse your content.
  • Review your robots.txt file to ensure AI crawlers like GPTBot, ClaudeBot, and PerplexityBot aren’t blocked.
  • Consider adding an llms.txt file to guide AI systems on how to interpret your site.
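
Checking AI-crawler access doesn't require a special tool. Python's standard `urllib.robotparser` can evaluate a robots.txt file against the relevant user agents (the robots.txt contents and URL below are hypothetical):

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt -- substitute your own site's file.
ROBOTS_TXT = """
User-agent: GPTBot
Disallow: /private/

User-agent: *
Allow: /
"""

def check_ai_crawlers(robots_txt, url="https://example.com/blog/geo-guide"):
    """Report whether common AI crawlers may fetch a given URL."""
    results = {}
    for bot in ("GPTBot", "ClaudeBot", "PerplexityBot"):
        parser = RobotFileParser()
        parser.parse(robots_txt.splitlines())
        results[bot] = parser.can_fetch(bot, url)
    return results

print(check_ai_crawlers(ROBOTS_TXT))
```

Run this against your live robots.txt before assuming AI engines can even see the content you're optimizing.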

And don’t ignore the fundamentals. Fast load times, clean site architecture, and mobile optimization still drive discoverability and crawlability.

Prioritize freshness and depth

AI engines weigh recency when selecting sources. A guide published in 2024 with no updates will lose ground to a 2026 article on the same topic.

Refresh your cornerstone content regularly. Add updated data, new insights, and a clear “Last updated” timestamp.

Original research, proprietary data, and expert commentary attract citations. If you publish something no one else has — a benchmark study, a unique dataset, or a framework built from your experience — AI engines have a reason to cite you over a dozen lookalike alternatives.

GEO content optimization checklist with nine actionable items

Phase 3: Measure your AI search performance

Measurement is the biggest gap in most GEO strategies today. Marketers who’ve spent years refining Google Analytics dashboards often have no comparable visibility into AI search performance.

Track the metrics that matter:

  • Measure AI citation frequency — how often your brand appears in AI-generated answers.
  • Track share of voice — your mentions versus competitors across AI platforms.
  • Monitor citation sentiment — whether AI accurately and positively presents your brand.
  • And measure AI-referred traffic — visits and conversions from AI search, tracked through GA4 attribution.
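
If you sample AI answers yourself, the first two metrics reduce to simple counting. The monitoring log below is hypothetical, and real platforms automate the sampling, but the arithmetic is the same:

```python
from collections import Counter

# Hypothetical monitoring log: which brands each sampled AI answer cited.
sampled_answers = [
    {"prompt": "best crm for startups", "cited": ["YourBrand", "CompetitorA"]},
    {"prompt": "crm pricing comparison", "cited": ["CompetitorA"]},
    {"prompt": "top crm tools 2026", "cited": ["YourBrand", "CompetitorB"]},
]

def citation_metrics(answers, brand):
    mentions = Counter()
    for answer in answers:
        mentions.update(set(answer["cited"]))  # count each brand once per answer
    total_citations = sum(mentions.values())
    return {
        # Share of sampled answers that cite you at all.
        "citation_frequency": mentions[brand] / len(answers),
        # Your slice of all brand citations across the sample.
        "share_of_voice": mentions[brand] / total_citations,
    }

print(citation_metrics(sampled_answers, "YourBrand"))
```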

The challenge is that traditional SEO tools don’t track these metrics. You need purpose-built GEO platforms that query AI engines directly and monitor brand performance over time.

If you want a quick snapshot, Geoptie’s free Rank Tracker shows your position across multiple AI engines instantly. It’s a practical starting point before you commit to a full monitoring setup.

Phase 4: Iterate and scale

GEO isn’t a launch-and-forget initiative. The AI search landscape shifts fast — models update, citation patterns change, and competitors adapt. Your strategy needs to evolve just as quickly.

Use your performance data to see what’s earning citations — and why. Identify which AI platforms drive the most value in your vertical. Track where competitors are gaining or losing ground.

Then scale what works. Repurpose high-performing content across formats. Turn a well-cited guide into a data page, a video script, and a set of targeted FAQ entries.

Build a cross-functional GEO workflow. Generative engine optimization isn’t just the content team’s job. It lives at the intersection of content marketing, SEO, digital PR, and product marketing.

Platforms like Geoptie bring audit reports, competitor intelligence, citation analytics, and content optimization into one dashboard. That makes it practical to manage the entire cycle in one place instead of stitching together multiple tools.

Geoptie dashboard tracking AI search visibility across multiple engines

Now is the time to build GEO capability

GEO isn’t a passing trend. It’s the new foundation of digital discovery. 

As AI search adoption accelerates through 2026 and beyond, the gap between brands that invest now and those that wait will only widen.

The playbook is straightforward:

  • Assess where you stand today. 
  • Optimize your content and technical foundation for AI retrieval. 
  • Measure performance across the platforms that matter. 
  • Then iterate relentlessly.

Brands that build this discipline into their marketing stack now will earn compounding advantages as AI becomes the primary way customers discover, evaluate, and decide who to trust.

The question isn’t whether GEO matters. It’s whether you’ll lead or follow.

Ready to take control of your AI visibility?

Geoptie gives you everything you need to master GEO from one platform. Run comprehensive GEO audits, track AI rankings across ChatGPT, Google AI, Perplexity, Claude, and more, analyze competitors, monitor citations, and build AI-first content—all in one place.

Whether you’re new to GEO or scaling an established strategy, Geoptie turns insight into action from day one. Start your free 14-day trial and see exactly where your brand stands in AI search.

Content scoring tools work, but only for the first gate in Google’s pipeline

23 February 2026 at 19:00
Content scoring tools work, but only for the first gate in Google’s pipeline

Most SEO professionals give Google too much credit. We assume Google understands content the way we do — that it reads our pages, grasps nuance, evaluates expertise, and rewards quality in some deeply intelligent way. The DOJ antitrust trial told a different story.

Under oath, Google VP of Search Pandu Nayak described a first-stage retrieval system built on inverted indexes and postings lists, traditional information retrieval methods that predate modern AI by decades. Court exhibits from the remedies phase reference “Okapi BM25,” the canonical lexical retrieval algorithm that Google’s system evolved from. The first gate your content has to pass through isn’t a neural network. It’s word matching.

Google does deploy more advanced AI further down the pipeline, including BERT-based models, dense vector embeddings, and entity understanding systems. But those operate only on the much smaller candidate set traditional retrieval produces. We’ll walk through where each technology enters the process.

This matters for content optimization tools like Surfer SEO, Clearscope, and MarketMuse. Their core methodology — a mix of TF-IDF analysis, topic modeling, and entity evaluation — maps directly to how that first retrieval stage scores documents. The tools are built on the right foundation. The problem is that most people use them incorrectly, and the studies backing them have real limitations.

Below, I’ll explain how first-stage retrieval works and why it still matters, what the research on content scoring tools actually shows — and doesn’t show — and most importantly, how to use these tools to produce content that earns its way into the candidate set without wasting time chasing a perfect score.

How first-stage retrieval works and why content tools map to it

Best Matching 25 (BM25) is the retrieval function most commonly associated with Google’s first-stage system. 

Nayak’s testimony described the mechanics it formalizes: an inverted index that walks postings lists and scores topicality across hundreds of billions of indexed pages, narrowing the field to tens of thousands of candidates in milliseconds. 

Here’s what matters for content creators:

  • Term frequency with saturation: The first mention of a relevant term captures roughly 45% of the maximum possible score for that term. Three mentions get you to about 71%. Going from three to thirty adds almost nothing. Repetition has steep diminishing returns.
  • Inverse document frequency: Rare, specific terms carry more scoring weight than common ones. “Pronation” is worth roughly 2.5 times more than “shoes” in a running shoe query because fewer pages contain it.
  • Document length normalization: Longer documents get penalized for the same raw term count. All of these scoring algorithms are essentially looking at some degree of density relative to word count, which is why every content tool measures it.
  • The zero-score cliff: If a term doesn’t appear in your document at all, your score for that term is exactly zero. Not low. Zero. You’re invisible for every query containing it.

That last point is the single most important reason content optimization tools have value. If you write a comprehensive rhinoplasty article but never mention “recovery time,” you score zero for that entire cluster of queries, regardless of how good the rest of your content is. 
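
These mechanics come straight from the textbook BM25 formula. The sketch below uses common library defaults (k1 = 1.2, b = 0.75); Google's production parameters are not public, so treat the exact percentages as illustrative:

```python
import math

def bm25_term_score(tf, doc_len, avg_doc_len, n_docs, docs_with_term,
                    k1=1.2, b=0.75):
    """Per-term BM25 score with the usual k1/b defaults (assumption:
    Google's real parameters are not public)."""
    if tf == 0:
        return 0.0  # the zero-score cliff: absent term, zero score
    # IDF: rarer terms weigh more ("pronation" beats "shoes").
    idf = math.log(1 + (n_docs - docs_with_term + 0.5) / (docs_with_term + 0.5))
    # Length normalization: longer docs need more mentions for the same score.
    norm = 1 - b + b * (doc_len / avg_doc_len)
    return idf * (tf * (k1 + 1)) / (tf + k1 * norm)

# Saturation: share of the max possible term score at different frequencies
# (document at average length, so the length penalty is neutral).
max_score = bm25_term_score(10_000, 1000, 1000, 1_000_000, 1_000)
for tf in (1, 3, 30):
    score = bm25_term_score(tf, 1000, 1000, 1_000_000, 1_000)
    print(f"tf={tf:>2}: {score / max_score:.0%} of max")  # ~45%, ~71%, ~96%
```

The curve makes the editorial rule concrete: the first mention is worth more than the next 29 combined.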

Google has systems like synonym expansion and Neural Matching — RankEmbed — that can supplement lexical retrieval and surface additional documents. But counting on those systems to rescue a page with vocabulary gaps is a risky strategy when you can simply cover the term.

After first-stage retrieval, the pipeline gets progressively more expensive and more sophisticated. RankEmbed adds candidates keyword matching missed. Mustang applies roughly 100+ signals, including topicality, quality scores, and NavBoost — accumulated click data over 13 months, described by Nayak as “one of the strongest” ranking signals. 

DeepRank applies BERT-based language understanding to only the final 20 to 30 results because these models are too expensive to run at scale. The practical implication is clear: no amount of authority or engagement signals helps if your page never passes the first gate. Content optimization tools help you get through it. What happens after is a different problem.


What the research on content tools actually shows

Three major studies have examined whether content tool scores correlate with rankings: Ahrefs (20 keywords, May 2025), Originality.ai (~100 keywords, October 2025), and Surfer SEO (10,000 queries, July 2025). All found weak positive correlations in the 0.10 to 0.32 range.

A 0.24 to 0.28 correlation is actually meaningful in this context. But these numbers need serious qualification. Every study was conducted by a vendor, and in every case, the vendor’s own tool performed best. 

No study controlled for confounding variables like backlinks, domain authority, or accumulated click data. The methodology is fundamentally circular: the tools generate recommendations by analyzing pages that already rank in the top 10 to 20, then the studies test whether pages in the top 10 to 20 score well on those same tools.

The real question — whether following tool recommendations helps a new, unranked page climb — has never been rigorously tested. Clearscope’s Bernard Huang put it directly: “A 0.26 correlation is not the brag they think it is.” 

He’s right. But a weak positive correlation is exactly what you’d expect if these tools solve the retrieval problem — getting into the candidate set — without solving the ranking problem — beating competitors once there. Understanding that distinction is what makes these tools useful rather than misleading.

Why not skip these tools altogether?

Expert writers are terrible at predicting how their audience actually searches. MIT Sloan’s Miro Kazakoff calls it the curse of knowledge. Once you know something, you forget what it was like before you knew it. 

Clearscope’s case study with Algolia illustrates the problem precisely. Algolia’s writers were technical experts producing genuinely excellent content that sat on Page 9. The problem wasn’t quality. The team was using internal jargon instead of the language their audience actually typed into Google. 

After adopting Clearscope, their SEO manager Vince Caruana said the tool helped the organization “start writing for our audience instead of ourselves” by breaking out of internal vocabulary. Blog posts moved from Page 9 to Page 1 within weeks. Not because the writing improved, but because the vocabulary finally matched search behavior.

Google’s own SEO Starter Guide acknowledges this dynamic, noting that users might search for “charcuterie” while others search for “cheese board.” Content optimization tools surface that gap by showing you the actual vocabulary of pages that have already demonstrated retrieval success. 

You can do everything a tool does manually by reading top results and noting common themes, but the tools automate hours of SERP analysis into minutes. At $79 to $399 per month, the investment is justified when teams publish frequently in competitive niches or assign work to freelancers lacking domain expertise. For a solo blogger publishing once or twice a month, manual analysis works fine.

What about AI-powered retrieval?

Dense vector embeddings are the same core technology behind LLMs and AI-powered search features. They compress a document into a fixed-length numerical representation and can match semantically similar content even without shared keywords. Google uses them via RankEmbed, but they supplement lexical retrieval rather than replace it.

The reason is computational: A 768-dimensional embedding can preserve only so much information, and research from Google DeepMind’s 2025 LIMIT paper showed that single-vector models max out at roughly 1.7 million documents before relevance distinctions break down — a small fraction of Google’s index. Multiple studies, including findings on the BEIR benchmark, show hybrid approaches combining BM25 with dense retrieval outperform either method alone.
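
The hybrid idea is easy to sketch: normalize a lexical score and a dense-similarity score onto the same scale and blend them. The weights, vectors, and normalization below are illustrative assumptions, not any production system's values:

```python
def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = (sum(a * a for a in u) ** 0.5) * (sum(b * b for b in v) ** 0.5)
    return dot / norm if norm else 0.0

def hybrid_score(bm25, dense_sim, alpha=0.6, bm25_max=10.0):
    """Blend a lexical score with a dense-similarity score.
    alpha and the bm25_max normalization are illustrative assumptions."""
    return alpha * min(bm25 / bm25_max, 1.0) + (1 - alpha) * dense_sim

# Toy embeddings: doc B shares no query keywords (bm25 = 0) but is
# semantically close to the query, so it can still surface.
query_vec = [0.9, 0.1, 0.3]
doc_a = {"bm25": 7.2, "vec": [0.1, 0.9, 0.2]}   # keyword match, off-topic
doc_b = {"bm25": 0.0, "vec": [0.8, 0.2, 0.35]}  # no shared terms, on-topic
for name, doc in (("A", doc_a), ("B", doc_b)):
    print(name, round(hybrid_score(doc["bm25"], cosine(query_vec, doc["vec"])), 3))
```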

The bottom line for practitioners is clear: The AI layer matters, but it sits lower in the pipeline, and the traditional retrieval stage your content tools map to still does the heavy lifting at scale.

How to actually use content scoring tools

This is where most guidance on content tools falls short. The typical advice is “use Surfer/Clearscope, get a high score, rank better.” 

That misses the point entirely. Here’s a framework built on how these tools actually intersect with Google’s retrieval mechanics.

Prioritize zero-usage terms over everything else

The highest-leverage action these tools identify is a term with zero mentions in your content. That’s a term where your retrieval score is literally zero, and you’re invisible for every query containing it. Going from zero to one mention is the single most impactful edit you can make. Going from four mentions to eight is nearly worthless because of the saturation curve.

When reviewing tool recommendations, filter for terms you haven’t used at all. Clearscope’s “Unused” filter does this explicitly. 

Ask yourself: Does this missing term represent a subtopic my audience would expect me to cover? If yes, work it in naturally. If the tool suggests a term that doesn’t fit your angle — a beginner’s guide doesn’t need advanced technical terminology — skip it. 

A high score achieved by forcing irrelevant terms into your content is worse than a moderate score with genuinely useful writing. As Ahrefs noted in its 2025 study, “you can literally copy-paste the entire keyword list, draft nothing else, and get a high score.” That tells you everything about the limits of chasing the number.
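
The zero-usage check itself is simple enough to prototype. The sketch below (hypothetical draft and competitor text, plus a crude stopword list) flags terms that multiple competitor pages share but your draft never mentions:

```python
import re
from collections import Counter

STOPWORDS = {"a", "an", "and", "is", "of", "the", "to", "in", "for"}

def tokenize(text):
    return [t for t in re.findall(r"[a-z]+", text.lower())
            if t not in STOPWORDS]

def zero_usage_terms(draft, competitor_pages, min_pages=2):
    """Terms appearing on at least `min_pages` competitor pages but
    zero times in the draft -- the highest-leverage gaps to review."""
    draft_terms = set(tokenize(draft))
    page_counts = Counter()
    for page in competitor_pages:
        page_counts.update(set(tokenize(page)))  # once per page
    return sorted(term for term, count in page_counts.items()
                  if count >= min_pages and term not in draft_terms)

draft = "Our rhinoplasty guide covers candidacy, cost, and surgeon selection."
competitors = [
    "Rhinoplasty recovery time is usually one to two weeks of swelling.",
    "Expect a recovery time of two weeks; swelling subsides gradually.",
]
print(zero_usage_terms(draft, competitors))
```

Each flagged term is still a judgment call: add it only if it represents a subtopic your audience expects you to cover.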

Be selective about which competitor pages you analyze

Default settings on most tools pull from the top 10 to 20 ranking pages, which frequently includes Wikipedia, major media outlets, and enterprise sites with overwhelming domain authority. These pages often rank despite their content, not because of it. Their term patterns reflect authority advantage, not content quality, and they’ll skew your recommendations.

A better approach: Look for pages that rank for a high number of organic keywords on mid-authority domains. 

Ahrefs’ data shows the average page ranking No. 1 also ranks in the top 10 for nearly 1,000 other keywords. A page ranking for 500 keywords on a DR 35 site has demonstrated broad retrieval success through vocabulary and topical coverage, not just backlinks. Those pages contain term patterns proven effective across hundreds of separate retrieval events, not just one. 

In most tools, you can manually exclude specific URLs from competitor analysis. Remove the Wikipedia pages, the Amazon listings, and any high-authority site where you know authority is doing the work. What’s left gives you a much cleaner picture of what content actually needs to include.

Use tools during research, not during writing

The worst workflow is writing with the scoring editor open, watching your number tick up in real time. That pulls your attention toward keyword insertion instead of communicating expertise. Practitioners reporting the worst experiences with these tools tend to be the ones writing to a live score.

The better workflow: Run the tool first. Review the term list. Identify gaps in your outline, especially terms with zero usage that represent subtopics you should cover. Then close the tool and write for your reader. 

Run it again at the end as a sanity check. Did you miss any major subtopics? Add them. Is the score significantly lower than competitors? That’s information worth investigating. But your job is to build the best page on the internet for this topic, not to match a number.

Understand that content is one player in the game

NavBoost, RankEmbed, PageRank-derived quality scores, site authority, click data, and engagement signals all operate on the candidate set that first-stage retrieval produces. Content optimization gets you through the gate. It doesn’t win the race. 

If you optimize a page, push the score to 90, and don’t see ranking improvements, that doesn’t mean the tool failed. It likely means the other ranking factors — backlinks, domain authority, and click signals — are doing more work for your competitors than content alone can overcome.

This is especially important when scoping on-page optimization projects. Be honest about what content changes can and can’t accomplish. If a page is on a DR 15 domain competing against DR 70+ sites, perfect content optimization is necessary but probably not sufficient. 

When a client asks why they’re not ranking after you pushed their score to 95, the answer shouldn’t be “we need more content.” It should be a clear explanation of which part of the problem content solves — retrieval — which parts it doesn’t — authority, engagement, brand — and what the next strategic move actually is.

Focus on going beyond, not just matching

The philosophy behind these tools — structure your content after what top results cover — is sound. You need to demonstrate topical relevance to enter the candidate set. But the goal isn’t to produce another version of what already exists.

The pages that rank broadly, the ones that show up for hundreds or thousands of keywords, consistently do more than match the competitive baseline. They add original research, practitioner experience, specific examples, or angles the existing results don’t cover.

Surfer SEO’s December 2024 study supports this. It measured “facts coverage” across articles and found that top-performing content by keyword breadth had significantly higher coverage scores than bottom performers.

The content that ranks for the most queries doesn’t just include the right terms. It includes more information, more specifically. Use the tool to establish the floor of topical coverage. Then build the ceiling with value the tool can’t measure.

A note on entities

Google’s Knowledge Graph contains an estimated 54 billion entities. Entity understanding becomes most powerful in the later ranking stages where BERT and DeepRank process final candidates. 

Some content tools are starting to incorporate entity analysis, but even the best versions present entities as flat keyword lists, missing the relationships between entities that Google’s systems actually evaluate. 

Knowing that “Dr. Smith” and “rhinoplasty” appear on your page is different from understanding that Dr. Smith is a board-certified surgeon with published research at a specific institution. That relational depth is what Google processes, and no content scoring tool currently captures it. 

Treat entity coverage as an additional layer beyond what keyword-focused tools measure, not a replacement for the fundamentals.


Retrieval before ranking

Content optimization tools work because they’ve reverse-engineered the vocabulary of the retrieval stage. That’s a less exciting claim than “they’ve cracked Google’s algorithm,” but it’s the honest one, and it’s supported by what the DOJ trial revealed about Google’s infrastructure.

Use these tools to identify missing terms and subtopics. Be skeptical of exact frequency targets. Exclude high-authority outliers from your competitor analysis. Prioritize zero-usage terms over further optimization of terms you’ve already covered. 

Understand that a perfect content score addresses one stage of a multi-stage pipeline and use the competitive baseline as your floor, not your ceiling. The content that ranks the broadest isn’t the content that best matches what already exists. It’s the content that covers what already exists and then goes further.

SerpApi moves to dismiss Google scraping lawsuit

23 February 2026 at 18:41
Bot detection maze

SerpApi is asking a federal court to dismiss Google’s lawsuit, arguing the company is misusing copyright law to restrict access to public search results.

  • The motion was filed Feb. 20, according to a blog post by SerpApi CEO and founder Julien Khaleghy.
  • Google sued SerpApi in December, alleging it bypassed technical protections to scrape and resell content from Google Search.

The details: SerpApi argues Google is improperly invoking the Digital Millennium Copyright Act (DMCA). According to Khaleghy:

  • The DMCA protects copyrighted works, not websites or ad businesses.
  • Google doesn’t own the underlying content displayed in search results.
  • Accessing publicly visible pages isn’t “circumvention” under the statute.

Google’s complaint alleged SerpApi:

  • Circumvented bot-detection and crawling controls.
  • Used rotating bot identities and large bot networks.
  • Scraped licensed content from Search features, including images and real-time data.

SerpApi said it doesn’t decrypt systems, disable authentication, or access private data. Khaleghy said SerpApi retrieves the same information available to any user in a browser, without requiring a login.

Khaleghy also argued Google admitted its anti-bot systems protect its advertising business — not specific copyrighted works — which he said undermines the DMCA claim.

SerpApi cites the Ninth Circuit’s hiQ v. LinkedIn decision warning against “information monopolies” over public data. It also cites the Sixth Circuit’s Lexmark v. Static Control ruling to argue that public-facing content can’t be shielded by technical measures alone.

Catch up quick: The lawsuit follows months of escalating legal fights over scraping and AI data use.

  • Oct. 22: Reddit sued SerpApi, Perplexity, Oxylabs, and AWMProxy in federal court, alleging they scraped Reddit content indirectly from Google Search and reused or resold it. Reddit claimed the companies hid their identities and scraped at “industrial scale.” Reddit said it set a “trap” post visible only to Google’s crawler that later appeared in Perplexity results. Reddit is seeking damages and a ban on further use of previously scraped data.
  • Oct. 29: SerpApi said it would “vigorously defend” itself, calling Reddit’s language “inflammatory” and arguing public search data should remain accessible.
  • Dec. 19: Google sued SerpApi, alleging it bypassed security protections, ignored crawling directives, and scraped licensed Search content for resale. SerpApi responded that it operates lawfully and that accessing public search data is protected by the First Amendment.

By the numbers: SerpApi claims that, under Google’s interpretation of the DMCA, statutory damages could theoretically total $7.06 trillion — a figure it said exceeds U.S. GDP. The number reflects SerpApi’s calculation of potential per-violation penalties, not an actual damages demand.

What’s next. The case now moves to the court’s decision on whether Google’s claims can proceed.

Why we care: The outcome could reshape how SEO platforms, AI tools, and competitive intelligence software access SERP data. A win for Google could make third-party search data harder or riskier to obtain. A win for SerpApi could strengthen arguments that publicly accessible search results can be scraped and collected.

The blog post. Google v. SerpApi: We’re filing a Motion to Dismiss. Here’s why we’re in the right.

Dig deeper. Inside SearchGuard: How Google detects bots and what the SerpAPI lawsuit reveals

The SEO’s guide to Google Search Console

23 February 2026 at 18:00
Google Search Console

Search Console is a free gift from Google for SEO professionals that tells you how your website is performing. It’s the closest thing to X-ray vision we can get. 

Packed with data, Search Console lets SEO professionals scavenge for hidden nuggets like clicks and impressions from search queries, Core Web Vitals, and whatever other surprises lie within your website. 

Custom regex filters take you around your million-page website. 

And while all SEO professionals hope to avoid any catastrophic SEO-related events with Google’s AI Overview, all we can really do is be prepared. 

For starters, keep reading this guide below on Search Console. 

It’s engineered to withstand zombie pages, Helpful Content bloodbaths, core update mood swings, and AI Overview siphoning your clicks like we’re in Mad Max, the Search Edition. This guide is exactly what you need when the SEO industry gets dicey. 

What does Search Console do? And how does it help SEO?

Search Console is a free website analytics and diagnostic tool provided by Google. Search Console tracks your website’s performance in Google search results (and, hopefully soon, in Gemini and AI Mode). 

This is the closest thing we have to first-party search truth. 

As an SEO director, I use Search Console daily. I monitor content performance, validate technical fixes, and track branded and non-branded query growth. It helps me prioritize what I should focus on in my SEO strategy. 


How do I set up Search Console?

Getting set up on Search Console is quick and easy, but may require technical support. 

First, you need to have a Google account. 

Next, go to Search Console: https://search.google.com/search-console

If you don’t see any profiles listed, you’ll need to choose a domain or prefix URL and verify your website ownership. 

google-search-console-domain-url-prefix

So, how do you choose between a domain versus a prefix URL? Let me walk you through the differences. 

Domain property is the default recommendation

A domain property includes all subdomains but no protocols (HTTP:// or HTTPS://) and no path strings (/sub/folder/). 

A domain property provides a comprehensive view of how your website performs in Google search results because it automatically includes the HTTP, HTTPS, www, and non-www versions of your site. 

I recommend setting up domain properties first. 

To set up a domain property in Search Console, enter your domain without the protocol (https://) or trailing slashes. 

google-search-console-seo-domain

After you hit continue, you can verify your ownership via a DNS TXT record. 

I recommend going this route as it is the easiest. 

You’ll need to log in to your hosting provider to add the TXT record. 

google-search-console-seo-domain-set-up

Another option is to verify through the CNAME. If you have technical support, this could be an easy alternative. 

google-search-console-seo-domain-set-up-cname

If you run an ecommerce site, Search Console lets you set shipping and return policies and connect to Merchant Center data. 

This pairs nicely with your schema markup: Product + Offer + shippingDetails + returnPolicy lets Google read your store like a label with price, availability, delivery speed, returns, etc. 
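
As a sketch of that markup, here is the nested Product > Offer > shippingDetails / returnPolicy structure emitted as JSON-LD from Python. The product values are hypothetical; validate real markup with Google's Rich Results Test:

```python
import json

# Hypothetical product values -- replace with your store's real data.
product_jsonld = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Trail Running Shoe",
    "offers": {
        "@type": "Offer",
        "price": "129.99",
        "priceCurrency": "USD",
        "availability": "https://schema.org/InStock",
        "shippingDetails": {
            "@type": "OfferShippingDetails",
            "deliveryTime": {
                "@type": "ShippingDeliveryTime",
                "transitTime": {
                    "@type": "QuantitativeValue",
                    "minValue": 2, "maxValue": 5,
                    "unitCode": "DAY",
                },
            },
        },
        "hasMerchantReturnPolicy": {
            "@type": "MerchantReturnPolicy",
            "merchantReturnDays": 30,
            "returnPolicyCategory":
                "https://schema.org/MerchantReturnFiniteReturnWindow",
        },
    },
}

# Embed in the page head as a JSON-LD script block.
print(f'<script type="application/ld+json">{json.dumps(product_jsonld)}</script>')
```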

URL prefix property allows you to dissect sections of a site 

A URL prefix property includes the HTTPS or HTTP protocol and path string. This means that if you want to really dive into a section of your website, like /blog/ subfolder or a blog.website.com subdomain, you can do this. 

After I set up my domain property, I created individual URL prefix properties for each subdomain, the HTTP versions, and the /blog/ subfolder. 

By having multiple URL prefix properties, I can dig deeper into sections of the website to help troubleshoot. 

I can also create reporting specific to the website’s sections that may be more relevant to my co-workers. 

For example, I work with customer support team members looking for data on how their Help Center content is performing. 

google-search-console-seo-prefix-url-setup

Key moments in history for Search Console

Some really crazy stuff has happened with Search Console over time. Among SEO professionals, it’s known as a prized delicacy, an incessant phantom of manual actions, and the key to a better understanding of our website health.

I’ve compiled a short history of my SEO bromance with Search Console over the years to give you a glimpse of how we got here. 

Was Google preparing us for AI through Search Console all along?

Alright. Zoom out with me for a second.

All of these updates are not random. They tell a very clear story.

Search Console is evolving from a technical reporting tool into a visibility intelligence tool for the AI era.

Google is moving from “Here are 1,000 queries” to “Here’s a topic cluster and how it’s performing.”

The weekly/monthly views and annotations encourage trend-level analysis. 

With the introduction of social reporting, Google recognizes that discovery journeys aren’t linear anymore. 

Breakdown of Search Console for SEOs 

While some SEO professionals may be waiting in the tunnels for Skynet and AIO to take over, there’s one thing we can all still depend on: Search Console. 

So before you join your freelance mission with SEAL Team 6, walk through the anatomy of Search Console. 

Overview

The Overview section in Search Console provides a bird’s-eye view of all data sets users can uncover in Search Console. 

overview-google-search-console

Search Console Insights

Search Console Insights shows which pages are popping off and which are dying in the corner. The Insights view is a digital equivalent of a snack tray.

In an AI era running wild like an overcaffeinated squirrel, I’ll take this over analyzing 50+ tabs. This is Google’s attempt to slide into your emails and whisper, “Hey, you might want to see this.”

insights-google-search-console

URL inspection

The URL inspection tool lets you see what Google sees for a given URL. 

The URL inspection tool is one of my favorite SEO tools.

Unfortunately, you can only inspect one URL at a time in the UI. However, if you use the Search Console URL Inspection API, you can inspect up to 2,000 URLs per day per property. 
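A sketch of what bulk inspection could look like with google-api-python-client (OAuth setup and service construction omitted; the quota-batching helper and the site/page URLs are assumptions for illustration):

```python
# Sketch: bulk URL inspection via the Search Console URL Inspection API.
# Assumes an authorized `service` built with googleapiclient.discovery.build.

DAILY_QUOTA = 2000  # the API allows roughly 2,000 inspections per day per property


def daily_batches(urls, quota=DAILY_QUOTA):
    """Split a URL list into day-sized batches that respect the quota."""
    return [urls[i:i + quota] for i in range(0, len(urls), quota)]


def inspect_url(service, site_url, page_url):
    """Run one inspection call and return the indexing verdict for the page."""
    body = {"inspectionUrl": page_url, "siteUrl": site_url}
    result = service.urlInspection().index().inspect(body=body).execute()
    return result["inspectionResult"]["indexStatusResult"]["verdict"]
```

With 4,500 URLs to audit, `daily_batches` would spread the work over three days of quota.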

The test will show if the URL is indexable and explain why it may or may not be indexed. 

You can also request a URL be indexed. 

google-search-console-url-inspection

Search results

The Search results report is every content marketer’s favorite in Search Console. It shows search traffic over the past 16 months (with comparisons), along with search queries, devices, countries, and search appearances. 

It will also show you which pages rank for specific queries. 

I use this report to show which pages are performing best and which are performing worst. It also helps troubleshoot any major drops or spikes in traffic. 

You can segment this report based on clicks, impressions, and CTR. 

google-search-console-seo-search-results
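The same data behind this report is also available programmatically. A minimal sketch using google-api-python-client (authentication omitted; the site URL, dates, and the CTR helper are hypothetical):

```python
# Sketch: pulling Search results data with the Search Analytics API.
# Assumes an authorized `service` object; values below are examples only.

def top_pages(service, site_url):
    """Return the top pages by clicks for a hypothetical date range."""
    body = {
        "startDate": "2026-01-01",
        "endDate": "2026-01-28",
        "dimensions": ["page"],
        "rowLimit": 25,
    }
    resp = service.searchanalytics().query(siteUrl=site_url, body=body).execute()
    return resp.get("rows", [])


def ctr(clicks, impressions):
    """CTR as shown in the UI: clicks divided by impressions."""
    return clicks / impressions if impressions else 0.0
```

Each returned row carries `clicks`, `impressions`, `ctr`, and `position`, so you can rebuild the report's segments in a spreadsheet or notebook.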

The AI-powered configuration (Experiment) inside the Performance report is where things get interesting.

Instead of manually stacking filters, comparisons, regex, device splits, country filters, and date ranges, you can now describe the analysis you want and let Google build the report for you.

ai-powered-configuration-search-console

You can ask it questions like: 

  • “Compare blog traffic month over month.”
  • “Show me queries containing ‘how to’.”
  • “What happened to USA traffic last week?”
  • “Compare mobile vs desktop performance in the last 28 days.”
  • “Show non-branded queries for the past 3 months.”
  • “What pages lost clicks this month?”
  • “Show changes for mobile users.”
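That non-branded ask can also be done manually with the Performance report's regex query filter. A sketch of the pattern, tested here in Python (the brand terms are hypothetical; Search Console's filter uses RE2 syntax, which this simple pattern is compatible with):

```python
import re

# The kind of pattern you'd paste into the "Custom (regex)" query filter,
# set to "Doesn't match regex" to isolate non-branded queries.
# "nike" and "swoosh" are hypothetical brand terms.
branded = re.compile(r"(?i)\b(nike|swoosh)\b")

queries = [
    "nike running shoes",
    "best running shoes for flat feet",
    "how to clean running shoes",
    "swoosh store near me",
]

non_branded = [q for q in queries if not branded.search(q)]
```

The `(?i)` flag keeps the filter case-insensitive, and the word boundaries stop “swooshing” from matching “swoosh.”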

Discover

The Discover report in Search Console shows how your content performs in Google Discover. 

You can filter by pages, countries, search appearances, and devices, as in the Search results report. 

google-search-console-seo-discover

Google News

The Google News report in Search Console tells you how your content performs on news.google.com and in the Google News app. 

You can filter the report by page and device. 

google-search-console-seo-google-news

Pages

The page indexing report in Search Console shows which pages Google can find (or not find) on your website. 

The pages report is valuable for every technical SEO and offers tons of quick wins. I always start with this section when auditing a website. 

If you see an increase in pages indexed or not indexed, you’ll want to investigate why it’s happening. 

google-search-console-seo-pages

Video pages

The video indexing report shows how many pages on your website are indexed with video content. 

Sitemaps

The sitemap report allows you to submit all your XML sitemaps to Search Console. Ideally, you have at least one XML sitemap to submit. 

You’ll need to submit all your XML sitemaps, including any video, image, or language-specific ones. 

google-search-console-seo-sitemaps

Removals

The removals tool in Search Console lets you temporarily block pages from Google. 

Remember, these must be pages that you own on your website. You cannot submit pages you do not own. 

This is the fastest way to remove a page from Google’s search results. However, I recommend working on a long-term solution (such as a noindex tag or a 404) if you want the page permanently removed. 

google-search-console-seo-removals

Core Web Vitals

The Core Web Vitals report uses real-world data to tell you how your pages perform. 

Again, this data is reported at the URL level. 

The report is grouped into mobile and desktop with segments of poor, needs improvement, and good. 

The report is based on LCP, INP, and CLS user data. 

Only indexed pages will be included in the Core Web Vitals report. 
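The good / needs improvement / poor buckets follow Google's published Core Web Vitals thresholds, applied to 75th-percentile real-user values. A small helper sketching that classification (the function and dictionary names are my own):

```python
# Google's published Core Web Vitals thresholds: (good ceiling, poor floor).
THRESHOLDS = {
    "LCP": (2.5, 4.0),   # Largest Contentful Paint, seconds
    "INP": (200, 500),   # Interaction to Next Paint, milliseconds
    "CLS": (0.1, 0.25),  # Cumulative Layout Shift, unitless score
}


def rate(metric, value):
    """Bucket a 75th-percentile metric value the way the CWV report does."""
    good, poor = THRESHOLDS[metric]
    if value <= good:
        return "good"
    if value <= poor:
        return "needs improvement"
    return "poor"
```

So a URL group with a 2.1-second LCP lands in "good," while a 0.3 CLS lands in "poor."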

google-search-console-seo-core-web-vitals

HTTPS

The HTTPS report tells you how many indexed pages on your website are HTTP or HTTPS. 

If you notice any HTTP pages on your website, you should convert them to HTTPS. Google prefers to index the HTTPS version to protect searchers’ security and privacy. 

google-search-console-seo-https

Product snippets

Product snippets are part of the structured data reporting in Search Console that showcases which products have product markup on the page. 

Currently, Google only supports product snippets for pages with one product. 

Be aware of Google’s algorithm updates, which can cause changes in impressions and clicks for product snippets. 

google-search-console-seo-product-snippets

Merchant snippets

Merchant snippets are also part of the rich result report in Search Console and serve as extensions of your Product snippet. 

Merchant snippets are like a golden ticket: they provide more enhanced features in the SERPs, like carousels or knowledge panels. 

google-search-console-seo-merchant-listings

Shopping tab listings

Shopping tab listings are also part of the rich result reports in Search Console and showcase the pages listed in the Shopping tab in Google search results. 

If you’re an ecommerce marketer, you’ll want to live inside this report. 

If you don’t see this information in Search Console, make sure your website’s structured data meets the Merchant listing structured data requirements. 

AMP

The AMP report in Search Console shows all the AMP pages on your website and potential issues you may need to troubleshoot. 

If AMP is a big part of your SEO strategy, you’ll want to ensure you reach zero in the critical errors section of the report so Google can detect your AMP pages. 

While AMP is considered legacy, it’s relevant for some publishers. 

google-search-console-seo-amp

Breadcrumbs

The breadcrumbs report is also part of the rich result report in Search Console, which tells you if your breadcrumb structured data is correct and readable by Google. 

Breadcrumbs are essential to maintain a healthy site architecture and user experience. If you see any errors in the breadcrumbs, I recommend prioritizing this quickly.

google-search-console-seo-breadcrumbs

FAQ

The FAQ report is also part of Search Console’s rich results report, which shares insights into which pages received the FAQ snippet. 

However, with Google’s changes to visibility of HowTo and FAQ rich results, you may see this fluctuate quite a bit. 

google-search-console-seo-faq

Profile page

The Profile page report reflects which pages are receiving the profile page markup. You’ll want to validate and clean up any markup you may be missing, because these pages can earn interesting SERP features.

It’s almost a card-style feature, similar to recipe results. 

google-search-console-seo-profile-page

Get the newsletter search marketers rely on.


Review snippets

The review snippets report shows whether the review markup on your pages is valid. 

You should check that all your markup is valid. If you notice any errors, work on updating those specific pages. 

With Google’s algorithm updates, I’ve seen significant fluctuations in review snippets. Always double-check if it’s a bug, an algorithm update, or a true markup error. 

google-search-console-seo-review-snippets

Sitelinks searchbox

The sitelinks search box report is part of the rich result reporting in Search Console and details any errors you may have with your sitelinks search box markup. 

google-search-console-seo-sitelinks-searchbox

Unparsable structured data

The unparsable structured data report in Search Console aggregates structured data syntax errors that prevent Google from identifying the specific structured data type. 

unparsable-structured-data-seo-google-search-console

Videos

The video indexing report in Search Console has expanded dramatically over the last few years, giving more detailed information on how your videos perform in search results. 

You can dissect whether the video is outside the viewport, too small, or too tall. If you’re building a video content strategy, it really helps to elevate your game with your UX team. 

google-search-console-seo-video

Manual actions

If you’re running your SEO strategy properly, you’ll hopefully never have to worry about the manual action report. 

But if you’re one of the unlucky ones who gets hit with a manual action, Google will tell you in this report in Search Console. 

A manual action occurs when a human reviewer at Google determines that a specific page or pages are not compliant with Google’s spam policy. 

Security issues

The Security issues report in Search Console tells you if your site has been hacked or contains harmful content. 

Google will also email you to notify you when a security issue is detected. 

Check out this beauty I received within the first week of starting to work on a new site. 

manual-action-hacked-content-google-search-console
google-search-console-seo-security-issues

Links

The Links report in Search Console allows you to view all your site’s internal and external links. You can view the top link pages, top linking sites, and top linking text. 

This is a legacy report, so I’d be cautious about relying on it in case Google decides to deprecate it. 

google-search-console-seo-links

Settings

If you need to verify ownership or add a new user, you should check the settings in Search Console. 

google-search-console-seo-settings

Two cool reports under Settings in Search Console often go undiscovered, but they’re two of my favorites.

Robots.txt: The robots.txt report tells us which pages Google can crawl or any potential issues preventing Google from crawling your site. 

google-search-console-seo-robots-txt

One of the challenges I run into when working with developers is that they often disallow pages in the robots.txt file instead of adding a noindex tag. Blocking crawling this way means Google may never see the noindex at all. 

This report will help audit any technical updates with your dev team. 

The robots.txt report is only available if you set up a domain property. 
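Python's standard library can reproduce this crawlability check locally. A sketch using `urllib.robotparser`, with a hypothetical ruleset illustrating the disallow-instead-of-noindex pattern described above:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt: a dev team blocks /drafts/ from crawling.
# Any noindex tags on those pages are now invisible to Googlebot,
# because it never crawls them in the first place.
rules = """
User-agent: *
Disallow: /drafts/
Allow: /
""".splitlines()

rp = RobotFileParser()
rp.parse(rules)

print(rp.can_fetch("Googlebot", "https://example.com/blog/post"))   # True
print(rp.can_fetch("Googlebot", "https://example.com/drafts/wip"))  # False
```

Running this against a copy of your live robots.txt before a deploy is a cheap way to catch accidental blocks before the report does.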

Crawl stats: The crawl stats report shows Google’s crawling history on your website. It can be sorted by how many requests were made and when, server response, and availability issues. 

It tells SEO professionals if Google is encountering problems when crawling your website. 

This report is only available if you have a domain property or a URL prefix at a root level. 

crawl-stats-google-search-console-seo

Search Console is like stepping onto a planet dedicated to SEO professionals

That’s a lot to unpack. But the gist is that Search Console is a place where you can get information about how your website is performing. 

All of the above is just part of the early phases of Search Console’s transformation. Google also hopes to add Google’s AI Overview data in the future. So, that seems like a worthwhile endeavor, seeing as there is no tool to support AIO data today. 

And I know you all must be hoping Google’s AI Overview doesn’t overtake your jobs. That would suck. It would likely mean the end of times. 

But in the insane event it does, at least you’re covered on how Search Console got here today. 

Until then, you’ll have to make do with luxe URL inspections, regex filters, and manual action surprises. 

8 tips for SEO newbies

23 February 2026 at 17:00

SEO is a fast-moving, marketing-centric industry that will always keep you on your toes. If you’re just getting started, it can feel overwhelming without a guide.

There are many facets and specializations in SEO that come later in a career — local, technical, content, digital PR, UX, ecommerce, media — the list goes on. But that level of specialization isn’t where junior professionals should begin.

Much like a liberal arts degree or an apprenticeship, newcomers should first develop a broad understanding of the discipline before choosing a focus. Here’s how to build that foundation in SEO.

1. Start with the business

Whether you’re in-house or at an agency, resist the urge to jump straight into “solution mode” when beginning an SEO project. 

Instead of immediately focusing on meta tags, keywords, backlinks, or URL structure, start by understanding the business itself.

Here are some key questions to consider as you browse the website:

  • What product or service is being sold?
  • Who is the target audience? (If you’re in-house, who is your company trying to sell to?)
  • Why does the company believe customers should choose them over competitors? (Common differentiators include price, unique features, or benefits.)

If you have the time or opportunity, dig deeper by asking your boss or client these business-focused questions:

  • What are the company’s goals and targets?
  • What is the three- to five-year plan for the business? (Are there plans to launch new products or expand into new markets?)
  • Who are the main competitors, and what are they doing?
A sample of onboarding business questions from Building a Business Brain by FLOQ Academy

Even without that level of detail, the first three questions provide a useful frame of reference for determining the best SEO approach.

2. Be curious, ask questions

SEO now touches nearly every aspect of digital marketing.

Because of that, SEOs often become social butterflies, regularly collaborating with other departments and specialties.

I’ve been in SEO for 15 years now (which makes me feel old), but I continue to ask my clients questions every day. 

This field encourages curiosity, so rather than feeling frustrated by what you don’t fully understand, embrace being the one to ask the “dumb questions.” 

There’s no such thing as a dumb question, by the way.

Dig deeper: How to become exceptional at SEO

3. Build from the foundations of SEO

As mentioned earlier, SEO has many specializations. Some, like video or local SEO, are referred to as “search verticals.”

If you’re new to the field, start with the basics: the website and how Google presents search results.

Once you understand the business, try a simple exercise to analyze your site’s optimization. 

Open a key product, category, or service page in one window. In another, search for a term you think users would enter to find that page. 

Compare what appears in the search results with your own page and the pages that rank for that term.

Nike website vs. Google search - running shoes

For example, in a search for “running shoes,” a few things stand out:

  • The intent is somewhat mismatched. Nike’s category page targets users who are researching with intent to buy or are already planning a purchase. However, the search results display articles comparing different running shoes.
  • Scrolling down, you might see an image carousel, a “Nearby Stores” section, and “People Also Ask” results.

If I were a new SEO at Nike and assumed the “running shoes” category page could rank for the “running shoes” query, I would rethink that after reviewing the search results. 

If ranking for that broad term were a priority, I would create a running shoe comparison article featuring high-quality images of real people using the shoes — maybe even a video, if budget allowed.

If your page aligns more closely with the search results, analyze the top-ranking pages and adapt successful elements to your own site. 

  • Do most of them have an on-page FAQ while yours doesn’t? 
  • A product video? Detailed specs? User reviews? 
  • How is the content itself structured? Are there jump links? Short paragraphs? Lots of lists, bulleted or numbered?

Be critical and specific about what you can improve. (Never copy content directly.)

At its core, SEO is about identifying what Google deems important for a given product or service, then doing it better than the competition. 

Many SEOs get caught up in tools and tactics and forget to examine the search results themselves. 

Break that habit early and make reviewing Google’s search results a key part of your research process.

4. Dabble in the technical side and build relationships with your developers

Technical SEO is one of the more complex specializations in the field and can seem intimidating. 

If you’re using a major CMS, your technical foundations are likely solid, so today, much of technical SEO focuses on refinements and enhancements.

While it’s important to develop technical knowledge, a great way to start is by building relationships with your development team and staying curious. 

Asking questions makes learning more interactive and immediately relevant to your work. 

Exploring coding courses or creating your own website can also help you develop technical skills gradually instead of all at once.

Some argue that you can be a good SEO without technical expertise — and I don’t disagree. 

However, understanding a website’s inner workings, how Google operates, and even how large language models (LLMs) function can help you prioritize your SEO efforts. 

Code is Google’s native language, and knowing how to interpret it can be invaluable when migrating a site, launching a new one, or diagnosing traffic drops.

Dig deeper: SEO prioritization: How to focus on what moves the needle


5. Learn the different types of information Google shows in search results

The way search results are presented today vastly differs from 10 or 15 years ago. 

Those who have been in the industry for a while have had the advantage of adapting gradually as Google has evolved. 

Newcomers, on the other hand, are thrown into the deep end, facing a wide range of search features all at once — some personalized, some not, and some appearing inconsistently. 

This can be challenging to grasp, even for experienced SEOs.

Google has invested heavily in understanding user intent and presenting search results in a way that best addresses it. 

As a result, search results may include:

  • Videos.
  • Images.
  • People Also Ask.
  • Related Searches.
  • AI Overviews.
  • AI-organized search.
  • Map results.
  • Nearby shopping options.
  • Product listings.
  • People Also Buy From.
  • News.

Building visibility for each of these features often requires a unique approach and specific considerations. 

These search result types are now industry jargon, so a glossary can help you learn SEO terminology.

6. Learn the different types of query intent classifications

Google’s mission is to “organize the world’s information and make it universally accessible and useful.” 

As part of this, Google works to understand why people search for something and provides the most relevant results to match that intent. 

To do this, they classify queries based on intent.

Search Quality Evaluator Guidelines - Understanding user intent

The Search Quality Evaluator Guidelines, a handbook Google provides to evaluators who manually assess website and search result quality, also touches on understanding user intent: 

“It can be helpful to think of queries as having one or more of the following intents. 

  • Know query, some of which are Know Simple queries.
  • Do query, when the user is trying to accomplish a goal or engage in an activity.
  • Website query, when the user is looking for a specific website or webpage.
  • Visit-in-person query, some of which are looking for a specific business or organization, some of which are looking for a category of businesses.”

When conducting keyword research, it’s helpful to analyze both your site and the queries you’re targeting through this lens.

Many SEO professionals also use these broader, traditional intent categories, though they don’t always align perfectly with Google’s classifications:

  • Informational: Who, what, when, where, how, why.
  • Commercial: Comparison, review, best, specific product.
  • Transactional: Buy, cheap, sale, register.
  • Navigational: Searching for a specific brand.
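As a toy illustration of those buckets (the keyword lists and brand terms are illustrative assumptions, not a real classifier), simple rules might look like:

```python
# Toy, rules-based intent classifier for the four traditional buckets.
# Signal words are examples only; real intent analysis should lean on
# the search results themselves.
INTENT_SIGNALS = {
    "transactional": ["buy", "cheap", "sale", "register", "coupon"],
    "commercial": ["best", "review", "vs", "comparison", "top"],
    "informational": ["who", "what", "when", "where", "how", "why"],
}


def classify(query, brands=("nike",)):  # hypothetical brand list
    words = query.lower().split()
    if any(b in words for b in brands):
        return "navigational"
    for intent, signals in INTENT_SIGNALS.items():
        if any(s in words for s in signals):
            return intent
    return "informational"  # default bucket for unmatched queries
```

Even this crude version shows why intent matters: “buy running shoes” and “how to clean running shoes” deserve different pages, not the same one.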

Rather than focusing solely on keywords, take a step back and consider the intent behind the search. Understanding intent is essential for SEO success.

Dig deeper: Why traditional keyword research is failing and how to fix it with search intent

7. Do the research yourself before finding ways to use LLMs

Your company may already have guidelines for using LLMs like ChatGPT or Claude for tasks such as keyword research, content creation, or competitor analysis. 

However, if you’re new to SEO, I strongly recommend completing at least one full project using tools like Google Search Console, Semrush, or Ahrefs without LLM support. 

While AI can speed up the process, relying on it too early has drawbacks:

  • Slower learning curve: If an LLM does the heavy lifting, you miss the experience of making strategic trade-offs, such as choosing a low-volume, mid-competition keyword over a high-volume, high-competition one.
  • Lack of instinct for accuracy: Without firsthand research experience, it’s harder to recognize when an LLM generates inaccurate information or pulls from an unreliable source.
  • Reduced impact: Google is increasingly sophisticated in detecting “repetitive content.” Relying too much on LLMs for mass content creation could hurt performance, whereas a more focused, strategic approach might yield better results.

While it may be tempting to jump straight into strategy rather than hands-on execution, senior SEOs develop their strategic mindset through years of practical work across different clients and industries. 

Skipping this foundational experience could make it harder to recognize large-scale patterns and trends.

Dig deeper: Why you need humans, not just AI, to run great SEO campaigns

8. Understand how GEO/AEO is different

While this channel represents a small percentage of market share compared to traditional Google search, the C-suite and other stakeholders are concerned with — and starting to pay attention to — their brand’s visibility in LLMs. 

There are difficult conversations around measurability, impact, and how much time we should invest in optimizing for a relatively small channel, but that’s a different article. As a newcomer to SEO, it’s important to understand how this type of search is different. A few things to look into include:

  • How LLMs actually work: Do they truly “know” information, or is something else happening? Short answer: yes, something else is happening. It’s important to understand what that is and how it works. When unsure, rely on the LLM’s own documentation. Industry experts to follow include Lily Ray and Dan Petrovic.
  • How LLMs train on data and how RAG impacts this: Develop a basic understanding of how these systems evaluate website content when generating answers.
  • How people claim they can influence LLM output: Some tactics are high risk, such as publishing large volumes of self-promotional listicles. Others are lower-risk, longer-term activities, like ensuring a site is crawlable in plain HTML, making sure LLM agents aren’t blocked by firewalls, and structuring HTML to be more bot-friendly. If many of these lower-risk tactics sound familiar, they should — they overlap with traditional SEO practices.

If you’re feeling advanced, explore concepts like query fan-out and MuVERA, or research what engineers at DeepSeek, OpenAI, Google, and Anthropic are currently developing.

Laying the groundwork for SEO success

SEO offers endless opportunities once you master the fundamentals. If you’re just starting out, focus on these core areas:

  • The business.
  • The search results.
  • User intent.

Keep it simple. Stay focused. Be business-led. 

Build your SEO expertise on a strong foundation, and your career will grow from there.

Google Search Console page indexing report missing data prior to December 15

23 February 2026 at 16:04
Screenshot of Google Search Console

Google’s page indexing report within Google Search Console is missing a block of data prior to December 15. It appears to be a reporting bug impacting all users.

Google has not yet commented on the reporting issue, but it is widespread and impacting everyone.

What it looks like. Here is a screenshot from Vijay on X but you can see it yourself by checking your page indexing report:

Why we care. I’d check back in a day or two to see if this data returns or if Google posts a notice about the issue. Right now, no one is able to access that data, so everyone is in the “same boat.”

Hopefully, Google will restore the data so you can run your reporting and analysis for those date ranges if you have not done so yet.

Update: John Mueller from Google replied saying, “This is a side-effect of the latency issue from early December. This isn’t a new or separate issue.”

The latest jobs in search marketing

27 February 2026 at 23:48
Search marketing jobs

Looking to take the next step in your search marketing career?

Below, you will find the latest SEO, PPC, and digital marketing jobs at brands and agencies. We also include positions from previous weeks that are still open.

Newest SEO Jobs

(Provided to Search Engine Land by SEOjobs.com)

  • About MedEquip Shop MedEquip Shop is a growing medical equipment provider with a strong presence in retail and rentals, offering a wide range of products for seniors and caregivers. We’re seeking a talented Digital Marketing Manager to help us scale our online and in-store sales and establish a larger footprint in the Houston area. We […]
  • Upgrow is seeking an organized, motivated, and creative SEO Director to lead our growing digital marketing agency in San Francisco, CA. You will oversee and manage SEO projects involving research, planning, project management, analytics, optimization, linkbuilding, and writing. This role works directly with clients and includes account management, as well as managing 2 direct reports. […]
  • About Yami: Founded in 2013, Yami’s mission is to bring the world closer for everyone to experience and enjoy. We make it easy to discover exciting flavors and trending products from Asia. Named Inc. Magazine’s fastest-growing start-up on the “Inc. 500 List,” we’re committed to connecting people with authentic food, beauty, home, and wellness […]
  • (un)Common Logic This is a hands-on, client-facing multi-channel performance role with primary emphasis on PPC and strategic involvement in SEO initiatives. (un)Common Logic is a digital marketing agency based in Austin, Texas, founded in 2008 originally as 360Partners. Our talented team of experts relentlessly strives for excellence in marketing performance and exceptional customer service. We tackle […]
  • Be Part Of A High-Performing Team: Join a corporate marketing team within a large, established organization supporting enterprise-wide brand and revenue initiatives. This team operates within a shared services environment, partnering closely with cross-functional stakeholders to drive digital performance, brand visibility, and customer engagement. The culture is professional, collaborative, and performance-driven, with a strong emphasis […]
  • The SEO Executive will be responsible for driving organic traffic and improving search engine rankings through strategic keyword research, content optimization, and local SEO tactics. This role focuses on creating and implementing keyword strategies specifically aligned with bus rental services, event transportation, city tours, and private group travel. Key Responsibilities: Conduct thorough keyword research related […]
  • Job Description Salary: $115K-$135K annual base salary for the initial six months, with transition to an attractive incentive-based compensation package designed to reward performance and contribution. Title: Director of Digital Marketing Reports To: President/CEO Location: Bellingham, WA or Waynesboro, TN (negotiable) Department: Marketing. About Us Seeking Health is a fast-growing nutritional supplement company with $50M […]
  • SEO Specialist – Dollar Loan Center (On-site – headquartered in Las Vegas, NV) We are seeking a talented SEO Specialist to join our team. As an SEO Specialist, you will be responsible for optimizing our website to increase organic traffic and improve search engine rankings. This candidate will be responsible for supporting organic search efforts […]
  • We’re Hiring: Strategist @ Masse How to Apply https://airtable.com/appOoyuwRuETnUmGj/shryIdrtrB9MSzXaA Masse is a fast-growing SEO agency, offering our highly-effective “Content at Scale” SEO campaigns to ambitious tech companies focused on the extreme cutting-edge of tech like AI and Robotics. We’ve refined SEO to an art, with a repeatable playbook for growth and ROI. We’re also an […]
  • Join one of the fastest-growing companies in America. Recognized for three years as an Inc. 5000 award-winning company, Silencer Central has achieved over 400% growth in the past three years. Since 2005, we’ve been passionate about compliance, education, and community engagement in firearm sound suppression—making the silencer-buying process simple and accessible. Apply today and be […]

Newest PPC and paid media jobs

(Provided to Search Engine Land by PPCjobs.com)

  • Company Description VERSANT is a leading force in news, sports and entertainment – home to iconic and trusted brands that inspire, inform, and delight audiences. Our unique combination of content, technology and services enriches the cultural fabric, igniting passions, sparking conversations, and connecting people to what they love most. As an independent, publicly traded company, […]
  • POSITION SUMMARY The Senior Manager / Assistant Director of Paid Media Advertising is a strategic, data-driven marketing leader responsible for developing, executing, and optimizing paid media programs that drive high-quality leads and accelerate occupancy growth across a large portfolio of senior living communities. This role manages the relationship with an external agency, ensuring performance excellence, […]
  • This is a remote position. We are seeking a strategic and results-driven B2B Performance Marketing Manager to lead and scale our paid acquisition and demand generation efforts. This role is responsible for driving qualified leads, pipeline growth, and revenue through data-backed performance marketing strategies. The ideal candidate is highly fluent in Google Ads and Meta […]
  • Sono Bello is America’s top cosmetic surgery specialist, with 185+ board-certified surgeons who have performed over 300,000 laser liposuction and body contouring procedures. A career at Sono Bello means being part of a dynamic and high-energy work environment where every team member can make a difference. We love what we do, and it shows! We […]
  • Company Description We’re part of Informa, a global business with a network of trusted brands in specialist markets across more than 30 countries, and a member of the FTSE 100. Our purpose is to connect our customers to information and people that help them know more, do more and be more. No other company in […]

Other roles you may be interested in

Demand Generation Manager, Shoplift (Remote)

  • Salary: $100,000 – $110,000
  • Design and execute inbound-led outbound campaigns—reaching prospects who’ve shown intent (visited pricing page, downloaded resources, engaged with content) at precisely the right moment
  • Build and optimize Apollo sequences, LinkedIn outreach, and multi-touch campaigns that book qualified demos for AEs

Search Engine Optimization Manager, Confidential (Hybrid, Miami-Fort Lauderdale Area)

  • Salary: $75,000 – $105,000
  • Serve as a strategic SEO partner for client accounts, translating business goals into actionable search initiatives
  • Communicate SEO insights, priorities, and performance clearly to clients and internal stakeholders

Meta Ads Manager, Cardone Ventures (Scottsdale, AZ)

  • Salary: $85,000 – $100,000
  • Develop, execute, and optimize cutting-edge digital campaigns from conception to launch
  • Provide ongoing actionable insights into campaign performance to relevant stakeholders

Senior Manager of Marketing (Paid, SEO, Affiliate), What Goes Around Comes Around (Jersey City, NJ)

  • Salary: $125,000
  • Develop and execute paid media strategies across channels (Google Ads, social media, display, retargeting)
  • Lead organic search strategy to improve rankings, traffic, and conversions

Search Engine Optimization Manager, Method Recruiting, a 3x Inc. 5000 company (Remote)

  • Salary: $95,000 – $105,000
  • Lead planning and execution of SEO and AEO initiatives across assigned digital properties
  • Conduct content audits to identify optimization, refresh, pruning, and gap opportunities

Senior Manager, SEO, Kennison & Associates (Hybrid, Boston MA)

  • Salary: $150,000 – $180,000
  • You’ll own high-visibility SEO and AI initiatives, architect strategies that drive explosive organic and social visibility, and push the boundaries of what’s possible with search-powered performance.
  • Every day, you’ll experiment, analyze, and optimize, elevating rankings, boosting conversions across the customer journey, and delivering insights that influence decisions at the highest level.

SEO and AI Search Optimization Manager, Big Think Capital (New York)

  • Salary: $100,000
  • Own and execute Big Think Capital’s SEO and AI search (GEO) strategy
  • Optimize website architecture, on-page SEO, and technical SEO

Paid Search Marketing Manager, LawnStarter (Remote)

  • Salary: $90,000 – $125,000
  • Manage and optimize large-scale, complex SEM campaigns across Google Ads, Bing Ads, Meta Ads and other search platforms
  • Activate, optimize and make efficient Local Services Ads (LSA) at scale

Senior Manager, SEO, Turo (Hybrid, San Francisco, CA)

  • Salary: $168,000 – $210,000
  • Define and execute the SEO strategy across technical SEO, content SEO, on-page optimization, internal linking, and authority building.
  • Own business and operations KPIs for organic growth and translate them into clear quarterly plans.

Search Engine Optimization Manager, NoGood (Remote)

  • Salary: $80,000 – $100,000
  • Act as the primary strategic lead for a portfolio of enterprise and scale-up clients.
  • Build and execute GEO/AEO strategies that maximize brand visibility across LLMs and AI search surfaces.

Note: We update this post weekly, so bookmark this page and check back.

Merchant Center flags feeds disruption

20 February 2026 at 20:04
Google Shopping Ads - Google Ads

Google Merchant Center is investigating an issue affecting Feeds, according to its public status dashboard.

The details:

  • Incident began: Feb. 4, 2026 at 14:00 UTC
  • Latest update (Feb. 20, 14:43 UTC): “We’re investigating reports of an issue with Feeds. We will provide more information shortly.”
  • Status: Service disruption

The alert appears on the official Merchant Center Status Dashboard, which tracks availability across Merchant Center services.

Why we care. Feeds power product listings across Shopping ads and free listings. Any disruption can impact product approvals, updates, or visibility in campaigns tied to retail inventory.

What to watch. Google has not yet shared scope, root cause, or estimated time to resolution. Advertisers experiencing feed processing delays or disapprovals may want to monitor the dashboard closely.

Bottom line. When feeds stall, ecommerce performance can follow. Retail advertisers should keep an eye on diagnostics and campaign delivery until more details emerge.

Dig deeper. Merchant Center Status Dashboard

What’s next for PPC: AI, visual creative and new ad surfaces

20 February 2026 at 20:00

PPC is evolving beyond traditional search. Those who adopt new ad formats, smarter creative strategies, and the right use of AI will gain a competitive edge.

Ginny Marvin, Google’s Ads Product Liaison, and Navah Hopkins, Microsoft’s Product Liaison, joined me for a conversation about what’s next for PPC. Here’s a recap of this special keynote from SMX Next.

Emerging ad formats and channels

When discussing what lies beyond search, both speakers expressed excitement about AI-driven ad formats.

Hopkins highlighted Microsoft’s innovation in AI-first formats, especially showroom ads:

  • “Showroom ads allow users to engage and interact with a showroom where the advertiser provides the content, and Copilot provides the brand security.”

She also pointed to gaming as a major emerging ad channel. As a gamer, she noted that many users “justifiably hate the ads that serve on gaming surfaces,” but suggested more immersive, intelligent formats are coming.

Marvin agreed that the landscape is shifting, driven by conversational AI and visual discovery tools. These changes “are redefining intent” and making conversion journeys “far more dynamic” than the traditional keyword-to-click model.

Both stressed that PPC marketers must prepare for a landscape where traditional search is only one of many ad surfaces.

Importance of visual content

A major theme throughout the discussion was the growing importance of visual content. Hopkins summed up the shift by saying:

  • “Most people are visual learners… visual content belongs in every stage of the funnel.”

She urged performance marketers to rethink the assumption that visuals belong only at the top of the funnel or in remarketing.

Marvin added that leading with brand-forward visuals is becoming essential, as creatives now play “a much more important role in how you tell your stories, how you drive discovery, and how you drive action.” Marketers who understand their brand’s positioning and reflect it consistently in their creative libraries will thrive across emerging channels.

Both noted that AI-driven ad platforms increasingly rely on strong creative libraries to assemble the right message at the right moment.

Myths about AI and creative

The conversation also addressed misconceptions about AI-generated creative.

Hopkins cautioned against overrelying on AI to build entire creative libraries, emphasizing:

  • “AI is not the replacement for our creativity… you should not be delegating full stop your creative to AI.”

Instead, she said marketers should focus on how AI can amplify their work. Campaigns must perform even when only a single asset appears, such as a headline or image. Creatives need to “stand alone” and clearly communicate the brand.

Marvin reinforced the need for a broader range of visual assets than most advertisers maintain. “You probably need more assets than you currently have,” she noted, especially as cross-channel campaigns like Demand Gen depend on testing multiple combinations.

Both positioned AI as an enabler, not a replacement, stressing that human creativity drives differentiation.

Strategic use of assets

Both liaisons emphasized the need for a diverse, adaptable asset library that works across formats and surfaces.

Marvin explained that AI systems now evaluate creative performance individually:

  • “Underperforming assets should be swapped out, and high-performing niche assets can tell you something about your audience.”

Hopkins added that distinct creative assets reduce what she called “AI chaos moments,” when the system struggles because assets overlap too closely. Distinctiveness—visual and textual—helps systems identify which combinations perform best.

Both urged marketers to rethink creative planning, treating assets as both brand-building and performance-driving rather than separating the two.

Partnering with AI for measurement

The conversation concluded with a deep dive into what it means to measure performance in an AI-first world.

Hopkins listed the key strategic inputs AI relies on:

  • “First-party data, creative assets, ad copy, website content, goals and targets, and budget. These are the things AI uses to optimize towards your business outcomes.”

She also highlighted that incrementality — understanding the true added value of ads — is becoming more important than ever.

Marvin acknowledged the challenges marketers face in letting go of old control patterns, especially as measurement shifts from granular data to privacy-protective models. However, she stressed that modern analytics still provide meaningful signals, just in a different form:

  • “It’s not about individual queries anymore… it’s about understanding the themes that matter to your audience.”

Both encouraged marketers to think more strategically and holistically in their analysis rather than getting stuck in granular metrics.

Google and Microsoft liaisons explain why dynamic ad surfaces, distinct assets and smarter AI inputs will define the next era of paid media.

How to vibe-code an SEO tool without losing control of your LLM

20 February 2026 at 19:00
How to vibe-code an SEO tool without losing control of your LLM

We all use LLMs daily. Most of us use them at work. Many of us use them heavily.

People in tech — yes, you — use LLMs at twice the rate of the general population. Many of us spend more than a full day each week using them — yes, me.

LLM usage amount

Even those of us who rely on LLMs regularly get frustrated when they don’t respond the way we want.

Here’s how to communicate with LLMs when you’re vibe coding. The same lessons apply if you find yourself in drawn-out “conversations” with an LLM UI like ChatGPT while trying to get real work done.

Choose your vibe-coding environment

Vibe coding is building software with AI assistants. You describe what you want, the model generates the code, and you decide whether it matches your intent.

That’s the idea. In practice, it’s often messier.

The first thing you’ll need to decide is which code editor to work in. This is where you’ll communicate with the LLM, generate code, view it, and run it.

I’m a big fan of Cursor and highly recommend it. I started on the free Hobby plan, and that’s more than enough for what we’re doing here. 

Fair warning – it took me about two months to move up two tiers and start paying for the Pro+ account. As I mentioned above, I’m firmly in the “over a day a week of LLM use” camp, and I’d welcome the company.

 A few options are:

  • Cursor: This is the one I use, as do most vibe coders. It has an awesome interface and is easily customized.
  • Windsurf: The main alternative to Cursor. It can run its own terminal commands and self-correct without hand-holding.
  • Google Antigravity: Unlike Cursor, it moves away from the file-tree view and focuses on letting you direct a fleet of agents to build and test features autonomously.

In my screenshots, I’ll be using Cursor, but the principles apply to any of them. They even apply when you’re simply communicating with LLMs in depth.

Why prompting alone isn’t enough

You might wonder why you need a tutorial at all. You tell the LLM what you want, and it builds it, right? That may work for a meta description or a superhero SEO image of yourself, but it won’t cut it for anything moderately complex — let alone a tool or agentic system spanning multiple files.

One key concept to understand is the context window. That’s the amount of content an LLM can hold in memory. It’s typically split across input and output tokens.

GPT-5.2 offers a 400,000-token context window, and Gemini 3 Pro comes in at 1 million. That’s roughly 50,000 lines of code or 1,500 pages of text.

The challenge isn’t just hitting the limit, especially with large codebases. It’s that the more content you stuff into the window, the worse models get at retrieving what’s inside it.

Attention mechanisms tend to favor the beginning and end of the window, not the middle. In general, the less cluttered the window, the better the model can focus on what matters.

If you want a deeper dive into context windows, Matt Pocock has a great YouTube video that explains it clearly. For now, it’s enough to understand placement and the cost of being verbose.
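As a rough sanity check before pasting a wall of text into a chat, you can estimate token usage with the common four-characters-per-token rule of thumb. This is a quick approximation, not the model's real tokenizer, so treat the numbers as directional:

```python
def estimate_tokens(text: str) -> int:
    """Rough rule of thumb: ~4 characters per token for English text.

    Real tokenizers vary by model; this is only a ballpark estimate
    to help you judge how much of a context window you're consuming.
    """
    return max(1, len(text) // 4)


prompt = "Extract the implied questions answered in this AI Overview."
print(estimate_tokens(prompt))
```

If the estimate for your planned input approaches a meaningful fraction of the window, that's your cue to split the work into stages and start a fresh chat.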

A few other tips:

  • One team, one dream. Break your project into logical stages, as we’ll do below, and clear the LLM’s memory between them.
  • Do your own research. You don’t need to become an expert in every implementation detail, but you should understand the directional options for how your project could be built. You’ll see why shortly.
  • When troubleshooting, trust but verify. Have the model explain what’s happening, review it carefully, and double-check critical details in another browser window.

Dig deeper: How vibe coding is changing search marketing workflows

Tutorial: Let’s vibe-code an AI Overview question extraction system

How do you create content that appears prominently in an AI Overview? Answer the questions the overview answers.

In this tutorial, we’ll build a tool that extracts questions from AI Overviews and stores them for later use. While I hope you find this use case valuable, the real goal is to walk through the stages of properly vibe coding a system. This isn’t a shortcut to winning an AI Overview spot, though it may help.

Step 1: Planning

Before you open Cursor — or your tool of choice — get clear on what you want to accomplish and what resources you’ll need. Think through your approach and what it’ll take to execute.

While I noted not to launch Cursor yet, this is a fine time to use a traditional search engine or a generative AI.

I tend to start with a simple sentence or two in Gemini or ChatGPT describing what I’m trying to accomplish, along with a list of the steps I think the system might need to go through. It’s OK to be wrong here. We’re not building anything yet.

For example, in this case, I might write:

I’m an SEO, and I want to use the current AI Overviews displayed by Google to inspire the content our authors will write. The goal is to extract the implied questions answered in the AI Overview. Steps might include:

1 – Select a query you want to rank for.
2 – Conduct a search and extract the AI Overview.
3 – Use an LLM to extract the implied questions answered in the AI Overview.
4 – Write the questions to a saveable location.

With this in hand, you can head to your LLM of choice. I prefer Gemini for UI chats, but any modern model with solid reasoning capabilities should work.

Start a new chat. Let the system know you’ll be building a project in Cursor and want to brainstorm ideas. Then paste in the planning prompt.

The system will immediately provide feedback, but not all of it will be good or in scope. For example, one response suggested tracking the AI Overview over time and running it in its own UI. That’s beyond what we’re doing here, though it may be worth noting.

It’s also worth noting that models don’t always suggest the simplest path. In one case, it proposed a complex method for extracting AI Overviews that would likely trigger Google’s bot detection. This is where we go back to the list we created above.

Step 1 will be easy. We just need a field to enter keywords.

Step 2 could use some refinement. What’s the most straightforward and reliable way to capture the content in an AI Overview? Let’s ask Gemini.

Reverse-engineering Google AI Overviews

I’m already familiar with these services and frequently use SerpAPI, so I’ll choose that one for this project. The first time I did this, I reviewed options, compared pricing, and asked a few peers. Making the wrong choice early can be costly.

Step 3 also needs a closer look. Which LLMs are best for question extraction?

Which LLMs are best for question extraction

That said, I don’t trust an LLM blindly, and for good reason. In one response, Claude 4.6 Opus, which had recently been released, wasn’t even considered.

After a couple of back-and-forth prompts, I told Gemini:

  • “Now, be critical of your suggestions and the benchmarks you’ve selected.”
  • “The text will be short, so cost isn’t an issue.”

We then came around to:

AI Mode - comparisons

For this project, we’re going with GPT-5.2, since you likely have API access or, at the very least, an OpenAI account, which makes setup easy. Call it a hunch. I won’t add an LLM judge in this tutorial, but in the real world, I strongly recommend it.

Now that we’ve done the back-and-forth, we have more clarity on what we need. Let’s refine the outline:

I’m an SEO, and I want to use the current AI Overviews displayed by Google to inspire the content our authors will write. The idea is to extract the implied questions answered in the AI Overview. Steps might include:

1 – Select a query you want to rank for.
2 – Conduct a search and extract the AI Overview using SerpAPI.
3 – Use GPT-5.2 Thinking to extract the implied questions answered in the AI Overview.
4 – Write the query, AI Overview, and questions to W&B Weave.
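The refined outline maps onto a small script skeleton. This is purely illustrative: `fetch_ai_overview`, `ask_gpt`, and `log_to_weave` are hypothetical placeholder names for the SerpAPI, OpenAI, and Weave calls we'll actually have Cursor generate; only the prompt builder is concrete here.

```python
def build_extraction_prompt(query: str, overview_text: str) -> str:
    """Step 3: ask the model for the implied questions, one per line."""
    return (
        f"Search query: {query}\n\n"
        f"AI Overview text:\n{overview_text}\n\n"
        "List the implied questions this overview answers, one per line."
    )


def run(query: str) -> list[str]:
    # Step 2: fetch the AI Overview via SerpAPI (placeholder).
    overview = fetch_ai_overview(query)
    # Step 3: extract implied questions with the LLM (placeholder).
    answer = ask_gpt(build_extraction_prompt(query, overview))
    questions = [line.strip() for line in answer.splitlines() if line.strip()]
    # Step 4: persist query, overview, and questions to W&B Weave (placeholder).
    log_to_weave(query, overview, questions)
    return questions
```

Holding a shape like this in your head makes it much easier to judge whether the plan the model proposes in Cursor matches your intent.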

Before we move on, make sure you have access to the three services you’ll need for this:

  • SerpAPI: The free plan will work.
  • OpenAI API: You’ll need to pay for this one, but $5 will go a long way for this use case. Think months. 
  • Weights & Biases: The free plan will work. (Disclosure: I’m the head of SEO at Weights & Biases.)

Now let’s move on to Cursor. I’ll assume you have it installed and a project set up. It’s quick, easy, and free. 

The screenshots that follow reflect my preferred layout in Editor Mode.

Cursor - Editor Mode

Step 2: Set the groundwork

If you haven’t used Cursor before, you’re in for a treat. One of its strengths is access to a range of models. You can choose the one that fits your needs or pick the “best” option based on leaderboards.

I tend to gravitate toward Gemini 3 Pro and Claude 4.6 Opus.

Cursor - LLM options

If you don’t have access to all of them, you can select the non-thinking models for this project. We also want to start in Plan mode.

Cursor - Plan mode

Let’s begin with the project prompt we defined above.

Cursor - project prompt

Note: You may be asked whether you want to allow Cursor to run queries on your behalf. You’ll want to allow that.

Cursor - project integrations

Now it’s time to go back and forth to refine the plan that the model developed from our initial prompt. Because this is a fairly straightforward task, you might think we could jump straight into building it. In practice, that’s a mistake, for both the tutorial and the result: humans like me don’t always communicate clearly or fully convey our intent. This planning stage is where we clarify that.

When I enter the instructions into the Cursor chat in Planning mode, using Sonnet 4.5, it kicks off a discussion. One of the great things about this stage is that the model often surfaces angles I hadn’t considered at the outset. Below are my replies, where I answer each question with the applicable letter. You can add context after the letter if needed.

An example of the model suggesting angles I hadn’t considered appears in question 4 above. It may be helpful to pass along the context snippets. I opted for B in this case. There are obvious cases for C, but for speed and token efficiency, I retrieve as little as possible. Intent and related considerations are outside the scope of this article and would add complexity, as they’d require a judge.

The system will output a plan. Read it carefully, as you’ll almost certainly catch issues in how it interpreted your instructions. Here’s one example.

Cursor - model selection

I’m told there is no GPT-5.2 Thinking. There is, and it’s noted in the announcement. I have the system double-check a few details I want to confirm, but otherwise, the plan looks good. Claude also noted the format the system will output to the screen, which is a nice touch and something I hadn’t specified. That’s what partners are for.

Cursor - output format

Finally, I always ask the model to think through edge cases where the system might fail. I did, and it returned a list. From that list, I selected the cases I wanted addressed. Others, like what to do if an AI Overview exceeds the context window, are so unlikely that I didn’t bother.

A few final tweaks addressed those items, along with one I added myself: what happens if there is no AI Overview?

Cursor - what happens if there is no AI Overview?

I have to give credit to Tarun Jain, whom I mentioned above, for this next step. I used to copy the outline manually, but he suggested simply asking the model to generate a file with the plan. So let’s direct it to create a markdown file, plan.md, with the following instruction:

Build a plan.md including the reviewed plan and plan of action for the implementation. 

Remember the context window issue I discussed above? If you start building from your current state in Cursor, the initial directives may end up in the middle of the window, where they’re least accessible, since your project brainstorming occupies the beginning.

To get around this, once the file is complete, review it and make sure it accurately reflects what you’ve brainstormed.

Step 3: Building

Now we get to build. Start a new chat by clicking the + in the top right corner. This opens a new context window.

This time, we’ll work in Agent mode, and I’m going with Gemini 3 Pro.

Cursor - Agent mode

Arguably, Claude 4.6 Opus might be a technically better choice, but I find I get more accurate responses from Gemini based on how I communicate. I work with far smarter developers who prefer Claude and GPT. I’m not sure whether I naturally communicate in a way that works better with Gemini or if Google has trained me over the years.

First, tell the system to load the plan. It immediately begins building the system, and as you’ll see, you may need to approve certain steps, so don’t step away just yet.

Cursor - Load the plan

Once it’s done, there are only a couple of steps left, hopefully. Thankfully, it tells you what they are.

First, install the required libraries. These include the packages needed to run SerpAPI, GPT, Weights & Biases, and others. The system has created a requirements.txt file, so you can install everything in one line.

Note: It’s best to create a virtual environment. Think of this as a container for the project, so downloaded dependencies don’t mix with those from other projects. This only matters if you plan to run multiple projects, but it’s simple to set up, so it’s worth doing.

Open a terminal:

Cursor - terminal

Then enter the following lines, one at a time:

  • python3 -m venv .venv
  • source .venv/bin/activate
  • pip install -r requirements.txt

You’re creating the environment, activating it, and installing the dependencies inside it. Keep the second command handy, since you’ll need it any time you reopen Cursor and want to run this project.

You’ll know you’re in the correct environment when you see (.venv) at the beginning of the terminal prompt.

When you run the requirements.txt installation, you’ll see the packages load.

Cursor - packages

Next, rename the .env.example file to .env and fill in the variables.

The system can’t create a .env file, and it won’t be included in GitHub uploads if you go that route, which I did and linked above. It’s a hidden file used to store your API keys and related credentials, meaning information you don’t want publicly exposed. By default, mine looks like this.

API keys and related credentials

I’ll fill in my API keys (sorry, can’t show that screen), and then all that’s left is to run the script.

To do that, enter this in the terminal:

python main.py "your search query"

If you forget the command, you can always ask Cursor.

Oh no … there’s a problem!

I’m building this as we go, so I can show you how to handle hiccups. When I ran it, I hit a critical one.

Cursor - no AI Overview found

It’s not finding an AI Overview, even though the phrase I entered clearly generates one.

Google - what is SEO

Thankfully, I have a wide-open context window, so I can paste:

  • An image showing that the output is clearly wrong.
  • The code output illustrating what the system is finding.
  • A link (or sometimes simply text) with additional information to direct the solution. 

Fortunately, it’s easy to add terminal output to the chat. Select everything from your command through the full error message, then click “Add to Chat.”

Cursor - Add to Chat.

It’s important not to rely solely on LLMs to find the information you need. A quick search took me to the AI Overview documentation from SerpAPI, which I included in my follow-up instructions to the model.

My troubleshooting comment looks like this.

Cursor - troubleshooting comment

Notice I tell Cursor not to make changes until I give the go-ahead. We don’t want to fill up the context window or train the model to assume its job is to make mistakes and try fixes in a loop. We reduce that risk by reviewing the approach before editing files.

Glad I did. I had a hunch it wasn’t retrieving the code blocks properly, so I added one to the chat for additional review. Keep in mind that LLMs and bots may not see everything you see in a browser. If something is important, paste it in as an example.
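The fix ultimately came down to parsing the `ai_overview` object SerpAPI returns. As a sketch of that parsing step: the `ai_overview`/`text_blocks` shape below is based on SerpAPI's documentation at the time of writing, so verify the exact field names against their current reference before relying on it.

```python
def extract_overview_text(serp_json: dict) -> str:
    """Pull plain text out of a SerpAPI-style ai_overview payload.

    Assumes the documented shape: an "ai_overview" object containing
    "text_blocks", where paragraph blocks carry a "snippet" and
    list-type blocks nest their snippets under "list".
    """
    overview = serp_json.get("ai_overview") or {}
    snippets = []
    for block in overview.get("text_blocks", []):
        if block.get("snippet"):
            snippets.append(block["snippet"])
        for item in block.get("list", []):
            if item.get("snippet"):
                snippets.append(item["snippet"])
    return "\n".join(snippets)


sample = {"ai_overview": {"text_blocks": [
    {"type": "paragraph", "snippet": "SEO is the practice of improving visibility."},
    {"type": "list", "list": [{"snippet": "Optimize titles."}]},
]}}
print(extract_overview_text(sample))
```

Writing the parser as a pure function like this also makes it easy to paste a real response block into the chat, as I did, and have the model reason about the structure directly.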

Now it’s time to try again.

Cursor - troubleshooting executed

Excellent, it’s working as we hoped.

Now we have a list of all the implied questions, along with the result chunks that answer them.

Dig deeper: Inspiring examples of responsible and realistic vibe coding for SEO

Logging and tracing your outputs

It’s a bit messy to rely solely on terminal output, and it isn’t saved once you close the session. That’s what I’m using Weave to address.

Weave is, among other things, a tool for logging prompt inputs and outputs. It gives us a permanent place to review our queries and extracted questions. At the bottom of the terminal output, you’ll find a link to Weave.

There are two traces to watch. The first is what this was all about: the analyze_query trace.

W&B Weave

In the inputs, you can see the query and model used. In the outputs, you’ll find the full AI Overview, along with all the extracted questions and the content each question came from. You can view the full trace here, if you’re interested.

Now, when we’re writing an article and want to make sure we’re answering the questions implied by the AI Overview, we have something concrete to reference.

The second trace logs the prompt sent to GPT-5.2 and the response.

W&B Weave second trace

This is an important part of the ongoing process. Here you can easily review the exact prompt sent to GPT-5.2 without digging through the code. If you start noticing issues in the extracted questions, you can trace the problem back to the prompt and get back to vibing with your new friend, Cursor.

Structure beats vibes

I’ve been vibe coding for a couple of years, and my approach has evolved. It gets more involved when I’m building multi-agent systems, but the fundamentals above are always in place.

It may feel faster to drop a line or two into Cursor or ChatGPT. Try that a few times, and you’ll see the choice: give up on vibe coding — or learn to do it with structure.

Keep the vibes good, my friends.

Emina Demiri talks surviving firing your biggest client

20 February 2026 at 18:14

On episode 352 of PPC Live The Podcast, I spoke to Emina Demiri Watson, Head of Digital at Brighton-based Vixen Digital, where she shared one of the most candid stories in agency life: deliberately firing a client that accounted for roughly 70% of their revenue — and what they learned the hard way in the process.

The decision to let go

The client relationship had been deteriorating for around three months before the leadership team made their move. The decision wasn’t about the client being difficult from day one — it was a relationship that had slowly soured over time. By the end, the toxic dynamic was affecting the entire team, and leadership decided culture had to come first.

The mistake they didn’t see coming

Here’s where it got painful. When Vixen sat down to run the numbers, they realized they had a serious customer concentration problem — one client holding a disproportionately large share of total revenue. It’s the kind of thing that gets lost when you’re busy and don’t have sophisticated financial systems. A quick Excel formula later, and the reality hit harder than expected.

Warning signs agencies should watch for

Emina outlined the signals that a client relationship is shifting — beyond the obvious drop in campaign performance. External factors inside the client’s business matter too: company restructuring, team changes, even a security breach that prevents leads from converting downstream. The lesson? Don’t just watch your Google Ads dashboard — understand what’s happening on the client’s side of the fence.

How they clawed back

Recovery came down to three things: tracking client concentration properly going forward, returning to their company values as a decision-making compass, and accepting that rebuilding revenue simply takes time. Losing the client freed up the mental bandwidth to pitch new business and re-engage with the industry community — things that had quietly fallen by the wayside.

Common account mistakes still haunting audits in 2026

When asked about errors she sees in audited accounts, Emina didn’t hold back. Broad match without proper audience guardrails remains a persistent problem, as does the absence of negative keyword lists entirely. Over-narrow targeting is another — particularly for clients chasing high-net-worth audiences, where the data pool becomes too thin for Smart Bidding to function.

The right way to think about AI

Emina’s take on AI is pragmatic: the biggest mistake is believing the hype. PPC practitioners are actually better positioned than most to navigate AI skeptically, given they’ve been working with automation and black-box systems for years. Her preferred approach — and the one she quietly enforces with junior team members via a robot emoji — is to treat Claude and other LLMs as a first stop for research, not a replacement for critical thinking.

The takeaway

If you’re sitting on a deteriorating client relationship and nervous about pulling the trigger, Emina’s advice is simple: go back to your values. If commercial survival sits at the top of the list, keep the client. If culture and team wellbeing matter more, it might be time.

AI agents in SEO: A practical workflow walkthrough

20 February 2026 at 18:00
AI agents in SEO: A practical workflow walkthrough

Automation has long been part of the discipline, helping teams structure data, streamline reporting, and reduce repetitive work. Now, AI agent platforms combine workflow orchestration with large language models to execute multi-step tasks across systems.

Among them, n8n stands out for its flexibility and control. Here’s how it works – and where it fits in modern SEO operations.

Understanding how n8n AI agents are deployed

If you think of modern AI agent platforms as an AI-powered Zapier, you’re not far off. The difference is that tools like n8n don’t just pass data between steps. They interpret it, transform it, and determine what happens next.

Getting started with n8n means choosing between cloud-hosted and self-hosted deployment. You can have n8n host your environment, but there are drawbacks:

  • The environment is more sandboxed.
  • You can’t customize the server to interact with n8n workflows in nonstandard ways, such as allowing certain file types to be saved to a database outside the sandbox.
  • You can’t install or use community nodes.
  • Costs tend to be higher.

There are advantages, too:

  • You don’t have to be as hands-on managing the n8n environment or applying patches after core engine updates.
  • Less technical expertise is required, and you don’t need a developer to set it up.
  • Although customization and control are reduced, maintenance is less frequent and less stressful.

There are also multiple license packages available. If you run n8n self-hosted, you can use it for free. However, that can be challenging for larger teams, as version control and change attribution are limited in the free tier.

How n8n workflows run in practice

Regardless of the package you choose, using AI models and LLMs isn’t free. You’ll need to set up API credentials with providers such as Google, OpenAI, and Anthropic.

Once n8n is installed, the interface presents a simple canvas for designing processes, similar to Zapier.

n8n workflow in practice

You can add nodes and pull in data from external sources. Webhook nodes can trigger workflows, whether on a schedule, through a contact form, or via another system.

Executed workflows can then deliver outputs to destinations such as Gmail, Microsoft Teams, or HTTP request nodes, which can trigger other n8n workflows or communicate with external APIs.

In the example above, a simple workflow scrapes RSS feeds from several search news publishers and generates a summary. It doesn’t produce a full news article or blog post, but it significantly reduces the time needed to recap key updates.
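The RSS-scraping step can be approximated outside n8n with a few lines of Python. The sketch below parses a basic RSS 2.0 feed with the standard library; the feed content and URLs are hypothetical stand-ins for real publisher endpoints (in a live workflow you would fetch each feed over HTTP first):

```python
import xml.etree.ElementTree as ET

def parse_rss(xml_text: str) -> list[dict]:
    """Extract title and link from each <item> in a basic RSS 2.0 feed."""
    root = ET.fromstring(xml_text)
    items = []
    for item in root.iter("item"):
        items.append({
            "title": item.findtext("title", default=""),
            "link": item.findtext("link", default=""),
        })
    return items

# Hypothetical sample feed standing in for a real publisher's RSS endpoint.
SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0"><channel>
  <title>Hypothetical Search News</title>
  <item><title>Algorithm update rolls out</title><link>https://example.com/a</link></item>
  <item><title>New ads feature launches</title><link>https://example.com/b</link></item>
</channel></rss>"""

headlines = parse_rss(SAMPLE_FEED)
for h in headlines:
    print(f"- {h['title']} ({h['link']})")
```

In n8n, the RSS Feed Read node handles this step for you; the sketch only shows what the node is doing with the feed data under the hood.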

Dig deeper: Are we ready for the agentic web?

Building AI agent workflows in n8n

Below, you can see the interior of a webhook trigger node. This node generates a webhook URL. When Microsoft Teams calls that URL through a configured “Outgoing webhook” app, the workflow in n8n is triggered.

Users can request a search news update directly within a specific Teams channel, and n8n handles the rest, including the response.

n8n webhook URL
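To make the trigger concrete: when Teams calls the generated URL, it POSTs a JSON payload to n8n. The sketch below builds such a request with the standard library — the webhook URL and payload shape are hypothetical and simplified, and the actual send is deliberately omitted:

```python
import json
import urllib.request

# Hypothetical n8n webhook URL generated by the trigger node.
N8N_WEBHOOK_URL = "https://n8n.example.com/webhook/search-news-update"

# Simplified shape of the payload a Teams outgoing webhook might POST.
payload = {"text": "@newsbot latest search news", "from": {"name": "Jane"}}

req = urllib.request.Request(
    N8N_WEBHOOK_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req) would fire the workflow; omitted here.
print(req.get_method(), req.full_url)
```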

Once you begin building AI agent nodes, which can communicate with LLMs from OpenAI, Google, Anthropic, and others, the platform’s capabilities become clearer.

 AI agent nodes communicating with LLMs

In the image above, the left side shows the prompt creation view. You can dynamically pass variables from previously executed nodes. On the right, you’ll see the prompt output for the current execution, which is then sent to the selected LLM. 

In this case, data from the scraping node, including content from multiple RSS feeds, is passed into the prompt to generate a summary of recent search news. The prompt is structured using Markdown formatting to make it easier for the LLM to interpret.
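The prompt-assembly step — folding scraped items into a Markdown-structured prompt — can be sketched as plain Python. Field names and wording here are hypothetical; in n8n this mapping happens via the node's variable expressions:

```python
def build_prompt(items: list[dict]) -> str:
    """Format scraped feed items as a Markdown prompt body for the LLM."""
    lines = ["## Recent search news items", ""]
    for item in items:
        lines.append(f"- **{item['title']}** ({item['source']})")
    lines += ["", "## Task", "Summarize the items above in under 150 words."]
    return "\n".join(lines)

# Hypothetical items as they might arrive from the RSS scraping node.
items = [
    {"title": "Core update finishes rolling out", "source": "Feed A"},
    {"title": "New PPC reporting feature", "source": "Feed B"},
]
prompt = build_prompt(items)
print(prompt)
```

The Markdown headings give the LLM clear section boundaries, which is what makes this structure easier to interpret than a wall of concatenated text.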

Returning to the main AI agent node view, you’ll see that two prompts are supported.

The user prompt defines the role and handles dynamic data mapping by inserting and labeling variables so the AI understands what it’s processing. The system prompt provides more detailed, structured instructions, including output requirements and formatting examples. Both prompts are extensive and formatted in Markdown.

On the right side of the interface, you can view sample output. Data moves between n8n nodes as JSON. In this example, the view has been switched to “Schema” mode to make it easier to read and debug. The raw JSON output is available in the “JSON” tab.

This project required two AI agent nodes.

n8n project nodes

The short news summary needed to be converted to HTML so it could be delivered via email and Microsoft Teams, both of which support HTML.

The first node handled summarizing the news. However, when the prompt became large enough to generate the summary and perform the HTML conversion in a single step, performance began to degrade, likely due to LLM memory constraints.

To address this, a second AI agent node converts the parsed JSON summary into HTML for delivery. In practice, a dual AI agent node structure often works well for smaller, focused tasks.

Finally, the news summary is delivered via Teams and Gmail. Let’s look inside the Gmail node:

n8n news summary delivered

The Gmail node constructs the email using the HTML output generated by the second AI agent node. Once executed, the email is sent automatically.

n8n news summary delivered via Gmail

The example shown is based on a news summary generated in November 2025.
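Outside n8n, this delivery step amounts to building an HTML MIME message. A sketch with Python's standard `email` library — addresses are hypothetical, and the actual SMTP send is omitted:

```python
from email.mime.text import MIMEText

# Hypothetical HTML fragment from the second AI agent node.
html_body = "<h2>Search news recap</h2><ul><li>Core update completed</li></ul>"

msg = MIMEText(html_body, "html")
msg["Subject"] = "Daily search news summary"
msg["From"] = "bot@example.com"   # hypothetical sender
msg["To"] = "team@example.com"    # hypothetical recipient

# smtplib.SMTP(...).send_message(msg) would deliver it; omitted here.
print(msg["Subject"], "->", msg["To"])
```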

Dig deeper: The AI gold rush is over: Why AI’s next era belongs to orchestrators

Get the newsletter search marketers rely on.


n8n SEO automations and other applications

In this article, we’ve outlined a relatively simple project. However, n8n has far broader SEO and digital applications, including:

  • Generating in-depth content and full articles, not just summaries.
  • Creating content snippets such as meta and Open Graph data.
  • Reviewing content and pages from a CRO or UX perspective.
  • Generating code.
  • Building simple one-page SEO scanners.
  • Creating schema validation tools.
  • Producing internal documents such as job descriptions.
  • Reviewing inbound CVs, or resumes, and applications.
  • Integrating with other platforms to support more complex, connected systems.
  • Connecting to platforms with API access that don’t have official or community n8n nodes, using custom HTTP request nodes.

The possibilities are extensive. As one colleague put it, “If I can think it, I can build it.” That may be slightly hyperbolic.

Like any platform, n8n has limitations. Still, n8n and competing tools such as MindStudio and Make are reshaping how some teams approach automation and workflow design.

How long that shift will last is unclear.

Some practitioners are exploring locally hosted tools such as Claude Code, Cursor, and others. Some are building their own AI “brains” that communicate with external LLMs directly from their laptops. Even so, platforms like n8n are likely to retain a place in the market, particularly for those who are moderately technical.

Drawbacks of n8n

There are several limitations to consider:

  • It’s still an immature platform, and core updates can break nodes, servers, or workflows.
  • That instability isn’t unique to n8n. AI remains an emerging space, and many related platforms are still evolving. For now, that means more maintenance and oversight, likely for the next couple of years.
  • Some teams may resist adoption due to concerns about redundancy or ethics.
  • n8n shouldn’t be positioned as a replacement for large portions of someone’s role. The technology is supplementary, and human oversight remains essential.
  • Although multiple LLMs can work together, n8n isn’t well-suited to thorough technical auditing across many data sources or large-scale data analysis.
  • Connected LLMs can run into memory limits or over-apply generic “best practice” guidance. For example, an AI might flag a missing meta description on a URL that turns out to be an image, which doesn’t support metadata.
  • The technology doesn’t yet have the memory or reasoning depth to handle tasks that are both highly subjective and highly complex.

It’s often best to start by identifying tasks your team finds repetitive or frustrating and position automation as a way to reduce that friction. Build around simple functions or design more complex systems that rely on constrained data inputs.

SEO’s shift toward automation and orchestration

AI agents and platforms like n8n aren’t a replacement for human expertise. They provide leverage. They reduce repetition, accelerate routine analysis, and give SEOs more time to focus on strategy and decision-making. This follows a familiar pattern in SEO, where automation shifts value rather than eliminating the discipline.

The biggest gains typically come from small, practical workflows rather than sweeping transformations. Simple automations that summarize data, structure outputs, or connect systems can deliver meaningful efficiency without adding unnecessary complexity. With proper human context and oversight, these tools become more reliable and more useful.

Looking ahead, the tools will evolve, but the direction is clear. SEO is increasingly intertwined with automation, engineering, and data orchestration. Learning how to build and collaborate with these systems is likely to become a core competency for SEOs in the years ahead.

Dig deeper: The future of SEO teams is human-led and agent-powered

Google now attributes app conversions to the install date

20 February 2026 at 17:46
Google Ads (Credit: Shutterstock)

Google is updating how it attributes conversions in app campaigns, shifting from the date of the ad click to the date of the actual install.

What’s changing. Previously, conversions were logged against the original ad interaction date. Now, they’re assigned to the day the app was actually installed — bringing Google’s methodology closer in line with how Mobile Measurement Partners (MMPs) like AppsFlyer and Adjust report data.

Why this helps:

  • It should meaningfully reduce discrepancies between Google Ads and MMP dashboards — a persistent headache for mobile marketers reconciling two different numbers.
  • Google’s default 30-day attribution window meant many conversions were being reported too late to be useful for campaign learning, effectively starving Smart Bidding of timely signals.
  • Tying conversions to install date gives the algorithm fresher, more accurate data — which should translate to faster optimization cycles and more stable performance.

Why we care. The change sounds technical, but its impact is significant. Attribution timing directly affects how Google’s machine learning optimizes campaigns — and a 30-day lag between ad click and conversion credit has long been a silent drag on performance. This change means Google’s machine learning will finally receive conversion signals at the right time — tied to when a user actually installed the app, not when they clicked an ad weeks earlier.

That shift should lead to smarter bidding decisions, faster campaign optimization, and fewer frustrating discrepancies between Google Ads and MMP reporting. If you’ve ever wondered why your Google numbers don’t match AppsFlyer or Adjust, this update is a direct response to that problem.

Between the lines. Most advertisers never touch their attribution window settings, leaving Google’s 30-day default in place. That default has quietly been working against them — delaying the conversion signals that machine learning depends on to make better bidding decisions.

The bottom line. A small change in attribution logic could have an outsized impact on app campaign performance. Mobile advertisers should monitor their data closely in the coming weeks for shifts in reported conversions and optimization behavior.

First spotted. This update was first spotted by David Vargas, who shared the in-product notification in a post on LinkedIn.

How to use GA4 and Looker Studio for smarter PPC reporting

20 February 2026 at 17:00
How to use GA4 and Looker Studio for smarter PPC reporting in 2026

Data isn’t just a report card. It’s your performance marketing roadmap. Following that roadmap means moving beyond Google Analytics 4’s default tools.

If you rely only on built-in GA4 reports, you’re stuck juggling interfaces and struggling to tell a clear story to stakeholders.

This is where Looker Studio becomes invaluable. It allows you to transform raw GA4 and advertising data into interactive dashboards that deliver decision-grade insights and drive real campaign improvements.

Here’s how GA4 and Looker Studio work together for PPC reporting. We’ll compare their roles, highlight recent updates, and walk through specific use cases, from budget pacing visualizations to waste-reduction audits.

GA4 vs. Looker Studio: How they differ for PPC reporting

GA4 is your source of truth for website and app interactions. It tracks user behavior, clicks, page views, and conversions with a flexible, event-based model. It even integrates with Google Ads to pull key ad metrics into its Advertising workspace. However, GA4 is primarily designed for data collection and analysis, not polished, client-facing reporting.

Looker Studio, on the other hand, serves as your one-stop shop for reporting. It connects to more than 800 data sources, allowing you to build interactive dashboards that bring everything together.

Here’s how they compare functionally in 2026.

Data sources

GA4 focuses on on-site analytics. In late 2025, Google finally rolled out native integration for Meta and TikTok, allowing automatic import of cost, clicks, and impressions without third-party tools. 

However, the feature is still rigid. It requires strict UTM matching and lacks the ability to clean campaign names or import platform-specific conversion values, such as Facebook Leads vs. GA4 Conversions. 

Looker Studio excels here, allowing you to blend these data sources more flexibly or connect to platforms GA4 still doesn’t support natively, such as LinkedIn or Microsoft Ads.

Metrics and calculations

GA4’s reporting UI has improved significantly, now allowing up to 50 custom metrics per standard property, up from the previous limit of five. However, these are often static. 

Looker Studio allows calculated fields, meaning you can perform calculations on your data in real time, such as calculating profit by subtracting cost from revenue, without altering the source data.

Data blending

Looker Studio lets you blend multiple data sources, essentially joining tables, to create richer insights. While enterprise users on Looker Studio Pro can now use LookML models for robust data governance, the standard free version still offers flexible data blending capabilities to match ad spend with downstream conversions.
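Conceptually, a blend is a join on a shared key. The sketch below joins ad spend to downstream conversions by campaign name, the way a Looker Studio blend would; the campaign names and figures are hypothetical:

```python
# Hypothetical rows from two data sources sharing a "campaign" key.
spend = [
    {"campaign": "brand_search", "cost": 1200.0},
    {"campaign": "generic_search", "cost": 3400.0},
]
conversions = [
    {"campaign": "brand_search", "conversions": 48},
    {"campaign": "generic_search", "conversions": 51},
]

# Index conversions by campaign, then join spend rows against it.
conv_by_campaign = {row["campaign"]: row["conversions"] for row in conversions}

blended = []
for row in spend:
    conv = conv_by_campaign.get(row["campaign"], 0)
    blended.append({
        "campaign": row["campaign"],
        "cost": row["cost"],
        "conversions": conv,
        "cpa": round(row["cost"] / conv, 2) if conv else None,
    })

for row in blended:
    print(row)
```

The blend only works when the join key matches exactly on both sides — the same reason strict UTM matching trips up GA4's native ad-platform imports.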

Sharing and collaboration

Sharing insights in GA4 often means granting property access or exporting static files. Looker Studio reports are live web links that update automatically. You can also schedule automatic email delivery of PDF reports for free.

Enterprise features in Looker Studio Pro add options for delivery to Google Chat or Slack, but standard email scheduling is available to everyone.

Dig deeper: How to use GA4 predictive metrics for smarter PPC targeting

Why you need Looker Studio

Here’s where Looker Studio moves from helpful to essential for PPC teams.

1. Unified, cross-channel view of PPC performance

You don’t rely on just one ad platform. A Looker Studio dashboard becomes your single source of truth, pulling in intent-based Google Ads data and blending it with awareness-based Meta and Instagram Ads for a holistic view.

Instead of just comparing clicks, use Looker Studio to normalize your data. For instance, you might discover that X Ads drove 17.9% of users, while Microsoft Ads drove 16.1%, allowing you to allocate budget based on actual blended performance.

2. Visualizing creative performance

In industries like real estate, the image sells the click. A spreadsheet saying “Ad_Group_B performed well” means nothing to a client.

Use the IMAGE function in Looker Studio. If you use a connector that pulls the Ad Image URL, you can display the actual photo of that luxury condo or HVAC promotion directly in the report table alongside the CTR. This lets clients see exactly which creative is driving results, without translation.

3. Deeper insight into post-click behavior

Reporting shouldn’t stop at the click. By bringing GA4 data into your Looker Studio report, you connect the ad to the subsequent action.

You might discover that a Cheap Furnace Repair campaign has a high CTR but a 100% bounce rate. Looker Studio lets you visualize engaged sessions per click alongside ad spend, proving lead quality matters more than volume.

4. Custom metrics for business goals

Every business has unique KPIs. A real estate company might track tour-to-close ratio, while an HVAC company focuses on seasonal efficiency. 

Looker Studio lets you build these formulas once and have them update automatically. You can even bridge data gaps to calculate return on ad spend (ROAS) by creating a formula that divides your CRM revenue by your Google Ads cost.
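The ROAS calculated field described above reduces to a simple ratio. A worked sketch, with hypothetical monthly figures:

```python
def roas(crm_revenue: float, ad_cost: float) -> float:
    """Return on ad spend: CRM-attributed revenue divided by ad cost."""
    if ad_cost == 0:
        raise ValueError("ad cost must be non-zero")
    return crm_revenue / ad_cost

# Hypothetical month: $18,000 CRM revenue on $4,500 of Google Ads spend.
print(f"ROAS: {roas(18000, 4500):.1f}x")
```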

5. Storytelling and narrative

Raw data needs context. Looker Studio allows you to add text boxes, dynamic date ranges, and annotations that turn numbers into narratives.

Use annotations to explain spikes or drops and highlight the “so what” behind the metrics. If cost per lead spiked in July, add a text note directly on the chart: “Seasonal demand surge + competitor aggression.” This preempts client questions and transforms a static report into a strategic tool.

Dig deeper: How to leverage Google Analytics 4 and Google Ads for better audience targeting


Use cases: PPC dashboards that drive real insights

These dashboards go beyond surface-level metrics to deliver insights you can act on immediately.

The budget pacing dashboard

Anxious about overspending? Standard reports show what you’ve spent, but not how it relates to your monthly cap.

Use bullet charts in Looker Studio. Set your target to the linear spend for the current day of the month. For example, if you’re 50% through the month, the target line is 50% of the budget.

This visual instantly shows stakeholders whether you’re overpacing and need to pull back, or underpacing and need to push harder, ensuring the month ends on budget.
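The pacing target and the over/under check can be sketched in a few lines of Python; the budget and spend figures are hypothetical:

```python
import calendar
from datetime import date

def pacing_target(budget: float, today: date) -> float:
    """Linear spend target: fraction of the month elapsed times the budget."""
    days_in_month = calendar.monthrange(today.year, today.month)[1]
    return budget * today.day / days_in_month

# Hypothetical: $10,000 monthly cap, checked on the 15th of a 30-day month.
target = pacing_target(10_000, date(2026, 6, 15))
actual = 5_600.0  # hypothetical spend to date
status = "overpacing" if actual > target else "on pace or under"
print(f"target ${target:,.0f}, actual ${actual:,.0f}: {status}")
```

In the bullet chart, `target` is the reference line and `actual` is the bar — the comparison above is exactly what the visual communicates at a glance.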

The zero-click audit report

High spend with zero conversions is the silent budget killer in service industries.

Create a dedicated table filtered for waste. Set it to show only keywords where conversions = 0 and cost > $50, or whatever threshold makes sense for you, sorted by cost in descending order.

This creates an immediate hit list of keywords to pause. Showing this to a client proves you’re actively managing their budget and cutting waste, or you can use it internally.
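The filter-and-sort logic behind that table can be sketched like this; the keyword data and the $50 threshold are hypothetical placeholders:

```python
# Hypothetical keyword rows as they might come from an ads report.
keywords = [
    {"keyword": "furnace repair near me", "cost": 412.50, "conversions": 3},
    {"keyword": "free furnace advice",    "cost": 188.20, "conversions": 0},
    {"keyword": "hvac diy forum",         "cost": 75.10,  "conversions": 0},
    {"keyword": "cheap thermostat",       "cost": 32.00,  "conversions": 0},
]

THRESHOLD = 50.0  # adjust to whatever spend level matters for the account

# Keep only zero-conversion spenders above the threshold, biggest waste first.
hit_list = sorted(
    (k for k in keywords if k["conversions"] == 0 and k["cost"] > THRESHOLD),
    key=lambda k: k["cost"],
    reverse=True,
)

for k in hit_list:
    print(f'{k["keyword"]}: ${k["cost"]:.2f}, 0 conversions')
```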

Geographic performance maps

For local services, location is everything. GA4 provides location reports, but Looker Studio visualizes them in ways that matter.

Build a geo performance page that shades regions by cost per lead rather than traffic volume.

You might find that while City A drives the most traffic, City B generates leads at half the cost. This allows you to adjust bid modifiers by ZIP code or city to maximize ROI.
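The City A vs. City B comparison above comes down to a cost-per-lead calculation per region. A minimal sketch with hypothetical figures:

```python
# Hypothetical regional performance rows.
regions = [
    {"city": "City A", "cost": 2400.0, "leads": 30},
    {"city": "City B", "cost": 1200.0, "leads": 30},
]

# Derive cost per lead for each region.
for r in regions:
    r["cpl"] = r["cost"] / r["leads"]

cheapest = min(regions, key=lambda r: r["cpl"])
print(f'{cheapest["city"]} has the lowest cost per lead: ${cheapest["cpl"]:.2f}')
```

Shading the geo map by this `cpl` value, rather than by sessions or clicks, is what surfaces the cheaper market despite its lower traffic volume.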

Dig deeper: 5 things your Google Looker Studio PPC Dashboard must have

Getting the most out of GA4 and Looker Studio in 2026

To ensure success with this combination, keep these final tips in mind.

Watch your API quotas

One of today’s biggest technical challenges is GA4 API quotas. If your dashboard has too many widgets or gets viewed by too many people at once, charts may break or fail to load.

If you have heavy reporting needs, consider extracting your GA4 data to Google BigQuery first, then connecting Looker Studio to BigQuery. This bypasses API limits and significantly speeds up your reports.

Enable optional metrics

Different clients have different needs. In your charts, enable the “optional metrics” feature. This adds a toggle that lets viewers swap metrics, for example, changing a chart from clicks to impressions, without editing the report each time.

Validate and iterate

When you first build a report, spot-check the numbers against the native GA4 interface. Make sure your attribution settings are correct.

Once you’ve established trust in the data, treat the dashboard as a living product, and keep iterating on the design based on what your stakeholders actually use and need.

From reactive reporting to proactive PPC strategy

Master Looker Studio to unlock GA4’s full potential for PPC reporting. GA4 gives you granular behavioral metrics; Looker Studio is where you combine, refine, and present them.

Move beyond basic metrics and use advanced visualizations — budget pacing, bullet charts, and ad creative tables — to deliver the transparency that builds real trust.

The result? You’ll shift from reactive reporting to proactive strategy, ensuring you’re always one step ahead in the data-driven landscape of 2026.

Dig deeper: Why click-based attribution shouldn’t anchor executive dashboards

Google Ads shows how landing page images power PMax ads

20 February 2026 at 16:36
In Google Ads automation, everything is a signal in 2026

Google Ads is now displaying examples of how “Landing Page Images” can be used inside Performance Max (PMax) campaigns — offering clearer visibility into how website visuals may automatically become ad creatives.

How it works. If advertisers opt in, Google can pull images directly from a brand’s landing pages and dynamically turn them into ads. Now, during campaign creation, Google Ads previews the automated creatives it plans to serve before you set the campaign live.

Why we care. For PMax campaigns your site is part of your asset library. Any banner, hero image, or product visual could surface across Search, Display, YouTube, or Discover placements — whether you designed it for ads or not. Google Ads is now showing clearer examples of how Landing Page Images may be used inside those PMax campaigns — giving much-needed visibility into what automated creatives could look like.

Instead of guessing how Google might transform site visuals into ads, brands can better anticipate, audit, and control what’s eligible to serve. That visibility makes it easier to refine landing pages proactively and avoid unwanted surprises in live campaigns.

Between the lines: Automation is expanding — but so is creative risk. That makes this a useful update: it keeps advertisers aware of what will serve before they hit the go-live button.

Bottom line: In PMax, your website is no longer just a landing page. It’s part of the ad engine.

First seen. This update was spotted by digital marketer Thomas Eccel, who shared an example on LinkedIn.

This press release strategy actually earns media coverage

20 February 2026 at 16:00
Press release evolution

I stopped using press releases several years ago. I thought they had lost most of their impact.

Then a conversation with a good friend and mentor changed my perspective.

She explained that the days of expecting organic features from simply publishing a press release were long gone. But she was still getting strong results by directly pitching relevant journalists once the release went live, using its key points and a link as added leverage.

I reluctantly tried her approach, and the results were phenomenal, earning my client multiple organic features.

My first thought was, “If it worked this well with a small tweak, I can make it even more effective with a comprehensive strategy.”

The strategy I’m about to share is the result of a year of experiments and refinements to maximize the impact of my press releases.

Yes, it requires more research, planning, and execution. But the results are exponentially greater, and well worth the extra effort.

Research phase

You already know what your client wants the world to know — that’s your starting point.

From there:

  • Map out tangential topics, such as its economic impact, related technology, legislation, and key industry players.
  • Find media coverage from the past three months on those topics in outlets where you want your client featured.
    • Your list should include a link to each piece, its key points, and the journalist’s contact information. Also include links to any related social media posts they’ve published.
  • Sort the list by relevance to your client’s message.

Planning phase

As you write your client’s press release, look for opportunities to cite articles from the list you compiled, including links to the pieces you reference.

Make sure each citation is highly relevant and adds data, clarity, or context to your message. Aim for three to five citations. More won’t add value and will dilute your client’s message.

At the same time, draft tailored pitches to the journalists whose articles you’re citing, aligned with their beat and prior coverage.

Mention their previous work subtly — one short quote they’ll recognize is enough. Include links to a few current social media threads that show active public interest in the topic. Close with a link to your press release (once it’s live) and a clear call to action.

The goal isn’t to win favor by citing them. It’s to show the connection between your client’s message and their previous coverage. Because they’ve already covered the topic, it’s an easy transition to approach it from a new angle — making a media feature far more likely.

Execution phase 

Start by engaging with the journalists on your list through social media for a few days. Comment on their recent posts, especially those covering topics from your list. This builds name recognition and begins the relationship.

Then publish your press release. As soon as it goes live, send the pitches you wrote earlier to the three to five journalists you cited. Include the live link to your press release. (I prefer linking to the most authoritative syndication rather than the wire service version.)

After that, pitch other relevant journalists.

As with the first group, tailor each pitch to the journalist. Reference relevant points from their previous articles that support your client’s message. The difference is that because you didn’t cite these journalists in your press release, the impact may be lower than with the first group.

Track all organic features you secure. You may earn some simply from publishing the press release, though that’s less common now. You’re more likely to earn them through direct pitches, and each one creates new opportunities.

Review each new feature for references to other articles, especially from the list you compiled earlier. Then pitch the journalist who wrote the original article, citing the new piece that references or reinforces their work.

The psychology behind why this works

This strategy leverages two powerful psychological principles:

  • We all have an ego, so when a journalist sees their work cited, it validates their perspective.
  • We look for ways to make life easier, and expanding on a topic they’ve already covered is far easier than starting from scratch.

Follow this framework for your next press release, and you’ll earn more media coverage, keep your clients happier, and create more impact with less effort — while looking like a rockstar.
