Today — 18 February 2026 — Search Engine Land

Google launches more visible links in AI Overviews and AI Mode

18 February 2026 at 01:14

Google is rolling out new, more visible links within AI Overviews and AI Mode. These new link cards appear in a pop-up window when you hover over them on desktop. They also show more prominent details about the website.

Google was testing these earlier and now this new style is live.

What it looks like. Here is a screenshot of these new link pop-up menus on hover:

What Google said. Google’s Robby Stein posted on X saying:

  • “New on Search: In AI Overviews and AI Mode, groups of links will automatically appear in a pop-up as you hover over them on desktop, so you can jump right into a website to learn more. And we’ll show more descriptive and prominent link icons within the response across both desktop and mobile.”
  • “Our testing shows this new UI is more engaging, making it easier to get to great content across the web.”

Why we care. This new style appears to encourage more clicks to websites, and I hope we’ll see more traffic from Google’s AI experiences as a result of these changes.

Of course, we still have no way to measure this in Search Console.

Airbnb says traffic from AI chatbots converts better than Google

17 February 2026 at 23:54

Traffic from AI chatbots converts at a higher rate than traffic from Google, according to Airbnb CEO Brian Chesky. He shared this tidbit on the company’s Q4 2025 earnings call:

  • “And what we see is that traffic that comes from chatbots convert at a higher rate than traffic that comes from Google,” Chesky said on Feb. 12.

Yes, but. He didn’t share specific conversion rates, and the company didn’t quantify chatbot traffic volume. But for Airbnb, early data suggests visitors arriving via AI chatbots may be further along in the booking process than those coming from traditional Google searches.

  • Airbnb also didn’t specify which chatbots are driving traffic. Chesky referenced OpenAI’s ChatGPT, Google’s Gemini, Anthropic’s Claude, and others in broader remarks about model availability.

Why we care. AI assistants are emerging as a top-of-funnel discovery layer. The quality of that traffic may outperform clicks from traditional search and align with past claims by Google and Microsoft that AI will drive more qualified traffic at lower volume.

AI search ambitions. Chesky described chatbots as “very similar to search” and positioned them as top-of-funnel discovery engines.

  • “I think these chatbot platforms are gonna be very similar to search. Gonna be really good top-of-funnel discoveries,” he said.

Rather than viewing them as disintermediators, Airbnb sees them as acquisition partners.

  • “We think they are gonna be positive for Airbnb,” Chesky added.

Chesky described the long-term goal as building an “AI-native experience” where the app “does not just search for you. It knows you”:

  • “So AI search is live to a very small percent of traffic right now. We are doing a lot of experimentation. The way we do things with AI is much more rapid iteration, not big launches. And over time, we are gonna be experimenting with making AI search more conversational, integrating it into more than trip, and, eventually, we will be looking at sponsor listings as result of that. But we want to first nail AI search.”

AI inside Airbnb. Airbnb isn’t just benefiting from external AI platforms. It’s embedding AI into its operations.

  • Its in-house AI customer service agent now resolves nearly one-third of North American support tickets without a human, according to Chesky. The tool is English-only for now but is slated for global, multilingual rollout, including voice support.
  • Chesky said the goal is for AI to handle “significantly more than 30%” of tickets within a year.
  • Airbnb is also testing AI-powered conversational search in its app. The feature is live for a small percentage of users and is being iterated quickly rather than launched as a major product release.

Sponsored listings on hold for now. Airbnb has long faced questions about launching sponsored listings. On the call, Chesky said traditional ad units may not translate directly into conversational AI environments. The company is prioritizing AI search before designing sponsored placements in that format.

Airbnb’s search shift. Airbnb began moving its budget to brand marketing just before the rise of generative AI and AI-powered search. Airbnb bet on broader marketing initiatives, slashing its search marketing spending.

TikTok launches AI-powered ad options for entertainment marketers

17 February 2026 at 22:45

TikTok is giving entertainment marketers in Europe new tools to reach audiences with precision, leveraging AI to drive engagement and conversions for streaming and ticketed content.

What’s happening. TikTok is introducing two new ad types for European campaigns:

  • Streaming Ads: AI-driven ads for streaming platforms that show personalized content based on user engagement. Formats include a four-title video carousel or a multi-title media card. With 80% of TikTok users saying the app influences their streaming choices, these ads can directly shape viewing decisions.
  • New Title Launch: Targets high-intent users using signals like genre preference and price sensitivity, helping marketers convert cultural moments into ticket sales, subscriptions, or event attendance.

Context. The rollout coincides with the 76th Berlinale International Film Festival, underscoring TikTok’s growing role in entertainment marketing. In 2025, an average of 6.5 million daily posts were shared about film and TV on TikTok, with 15 of the top 20 European box office films last year being viral hits on the platform.

Why we care. TikTok’s new AI-powered ad formats let streaming platforms and entertainment brands target users with highly personalized content, increasing the likelihood of engagement and conversions.

With 80% of users saying TikTok influences their viewing choices (according to TikTok data), these tools can directly shape audience behavior, helping marketers turn cultural moments into subscriptions, ticket sales, or higher viewership. It’s a chance to leverage TikTok’s viral influence for measurable campaign impact.

The bottom line. For entertainment marketers, TikTok’s AI-driven ad formats provide new ways to engage audiences, boost viewership, and turn trending content into measurable results.

Dig deeper. TikTok Adds New Ad Types for Entertainment Marketers

Yesterday — 17 February 2026 — Search Engine Land

Meta adds Manus AI tools into Ads Manager

17 February 2026 at 22:25

Meta Platforms is embedding newly acquired AI agent tech directly into Ads Manager, giving advertisers built-in automation tools for research and reporting as the company looks to show faster returns on its AI investments.

What’s happening. Some advertisers are seeing in-stream prompts to activate Manus AI inside Ads Manager.

  • Manus is now available to all advertisers via the Tools menu.
  • Select users are also getting pop-up alerts encouraging in-workflow adoption.
  • The feature rollout signals deeper integration ahead.

What is Manus. Manus AI is designed to power AI agents that can perform tasks like report building and audience research, effectively acting as an assistant within the ad workflow.

Why we care. Manus AI brings AI-powered automation directly into Meta Platforms Ads Manager, making tasks like report-building, audience research, and campaign analysis faster and more efficient.

Meta is currently prioritizing tying AI investment to measurable ad performance, giving advertisers new ways to optimize campaigns and potentially gain a competitive edge by testing workflow efficiencies early.

Between the lines. Meta is under pressure to demonstrate practical value from its aggressive AI spending. Advertising remains its clearest path to monetization, and embedding Manus into everyday ad tools offers a direct way to tie AI investment to performance gains.

Zoom out. The move aligns with CEO Mark Zuckerberg’s push to weave AI across Meta’s product stack. By positioning Manus as a performance tool for advertisers, Meta is betting that workflow efficiencies will translate into stronger ad results — and a clearer AI revenue story.

The bottom line. For advertisers, Manus adds another layer of built-in automation worth testing. Early adopters may uncover time savings and optimization gains as Meta continues expanding AI inside its ad ecosystem.

Google shifts Lookalike to AI signals in Demand Gen

17 February 2026 at 22:02

A core targeting lever in Google Demand Gen campaigns is changing. Starting March 2026, Lookalike audiences will act as optimization signals — not hard constraints — potentially widening reach and leaning more heavily on automation to drive conversions.

What’s happening. Per an update to Google’s Help documentation, Lookalike segments in Demand Gen are moving from strict similarity-based targeting to an AI-driven suggestion model.

  • Before: Advertisers selected a similarity tier (narrow, balanced, broad), and campaigns targeted users strictly within that Lookalike pool.
  • After: The same tiers act as signals. Google’s system can expand beyond the Lookalike list to reach users it predicts are likely to convert.

Between the lines. This effectively reframes Lookalikes from a fence to a compass. Instead of limiting delivery to a defined cohort, advertisers are feeding intent signals into Google’s automation and allowing it to search for performance outside preset boundaries.

How this interacts with Optimized Targeting. The new Lookalike-as-signal approach resembles Optimized Targeting — but it doesn’t replace it.

  • When advertisers layer Optimized Targeting on top, Google says the system may expand reach even further.
  • In practice, this stacks multiple automation signals, increasing the algorithm’s freedom to pursue lower CPA or higher conversion volume.

Opt-out option. Advertisers who want to preserve legacy behavior can request continued access to strict Lookalike targeting through a dedicated opt-out form. Without that request, campaigns will default to the new signal-based model.

Why we care. This update changes how much control advertisers will have over who their ads reach in Google Demand Gen campaigns. Lookalike audiences will no longer strictly limit targeting — they’ll guide AI expansion — which can significantly affect scale, CPA, and overall performance.

It also signals a broader shift toward automation, similar to trends driven by Meta Platforms. Advertisers will need to test carefully, rethink audience strategies, and decide whether to embrace the added reach or opt out to preserve tighter targeting.

Zoom out. The shift mirrors a broader industry trend toward AI-first audience expansion, similar to moves by Meta Platforms over the past few years. Platforms are steadily trading granular manual controls for machine-led optimization.

Why Google is doing this. Digital marketer Dario Zannoni offers two reasons why Google is doing this:

  • Strict Lookalike targeting can cap scale and constrain performance in conversion-focused campaigns.
  • Maintaining high-quality similarity models is increasingly complex, making broader automation more attractive.

The bottom line. For performance marketers, this is another step toward automation-centric buying. While reduced control may be uncomfortable, comparable platform changes have often produced performance gains in mainstream use cases. Expect a new testing cycle as advertisers measure how expanded Lookalike signals affect CPA, reach, and incremental conversions.

First seen. This update was spotted by Zannoni who shared his thoughts on LinkedIn.

Dig deeper. Use Lookalike segments to grow your audience

Google’s Jeff Dean: AI Search relies on classic ranking and retrieval

17 February 2026 at 21:52

Jeff Dean says Google’s AI Search still works like classic Search: narrow the web to relevant pages, rank them, then let a model generate the answer.

In an interview on Latent Space: The AI Engineer Podcast, Google’s chief AI scientist explained how Google’s AI systems work and how much they rely on traditional search infrastructure.

The architecture: filter first, reason last. Visibility still depends on clearing ranking thresholds. Content must enter the broad candidate pool, then survive deeper reranking before it can be used in an AI-generated response. Put simply, AI doesn’t replace ranking. It sits on top of it.

Dean said an LLM-powered system doesn’t read the entire web at once. It starts with Google’s full index, then uses lightweight methods to identify a large candidate pool — tens of thousands of documents. Dean said:

  • “You identify a subset of them that are relevant with very lightweight kinds of methods. You’re down to like 30,000 documents or something. And then you gradually refine that to apply more and more sophisticated algorithms and more and more sophisticated sort of signals of various kinds in order to get down to ultimately what you show, which is the final 10 results or 10 results plus other kinds of information.”

Stronger ranking systems narrow that set further. Only after multiple filtering rounds does the most capable model analyze a much smaller group of documents and generate an answer. Dean said:

  • “And I think an LLM-based system is not going to be that dissimilar, right? You’re going to attend to trillions of tokens, but you’re going to want to identify what are the 30,000-ish documents that are with the maybe 30 million interesting tokens. And then how do you go from that into what are the 117 documents I really should be paying attention to in order to carry out the tasks that the user has asked me to do?”

Dean called this the “illusion” of attending to trillions of tokens. In practice, it’s a staged pipeline: retrieve, rerank, synthesize. Dean said:

  • “Google search gives you … not the illusion, but you are searching the internet, but you’re finding a very small subset of things that are relevant.”
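The staged funnel Dean describes — a cheap first-pass filter over a huge index, then progressively more expensive reranking over the survivors — can be sketched in miniature. Everything here is a toy stand-in for illustration: the scoring functions are invented and bear no relation to Google's actual signals.

```python
# Hypothetical sketch of a staged retrieval pipeline:
# cheap filter -> stronger reranker -> small final set.
# All scoring logic is invented for illustration only.

def cheap_match(query: str, doc: str) -> float:
    """Lightweight first-pass score: fraction of query terms present."""
    q_terms = set(query.lower().split())
    d_terms = set(doc.lower().split())
    return len(q_terms & d_terms) / max(len(q_terms), 1)

def rerank(query: str, doc: str) -> float:
    """Placeholder for a more expensive relevance model."""
    base = cheap_match(query, doc)
    # e.g. reward documents that also contain the full phrase
    return base + (1.0 if query.lower() in doc.lower() else 0.0)

def retrieve(query: str, index: list[str], pool_size: int, final_k: int) -> list[str]:
    # Stage 1: lightweight filter down to a large candidate pool
    pool = sorted(index, key=lambda d: cheap_match(query, d), reverse=True)[:pool_size]
    # Stage 2: expensive reranking runs only on the survivors
    return sorted(pool, key=lambda d: rerank(query, d), reverse=True)[:final_k]

index = [
    "best space heaters reviewed",
    "space heater safety tips",
    "gardening in winter",
    "how to bake bread",
]
print(retrieve("space heater", index, pool_size=3, final_k=2))
# → ['space heater safety tips', 'best space heaters reviewed']
```

The point of the structure is cost: the expensive model only ever sees the small pool that survived the cheap pass — exactly the "30,000 documents down to 117" funnel Dean describes.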

Matching: from keywords to meaning. Nothing new here, but we heard another reminder that covering a topic clearly and comprehensively matters more than repeating exact-match phrases.

Dean explained how LLM-based representations changed how Google matches queries to content.

Older systems relied more on exact word overlap. With LLM representations, Google can move beyond the idea that particular words must appear on the page and instead evaluate whether a page — or even a paragraph — is topically relevant to a query. Dean said:

  • “Going to an LLM-based representation of text and words and so on enables you to get out of the explicit hard notion of particular words having to be on the page. But really getting at the notion of this topic of this page or this page paragraph is highly relevant to this query.”

That shift lets Search connect queries to answers even when wording differs. Relevance increasingly centers on intent and subject matter, not just keyword presence.
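The contrast between exact-word overlap and meaning-based matching can be shown with a toy example. The synonym table below is invented for illustration; real systems use learned LLM representations rather than a hand-built lookup.

```python
# Toy contrast: exact-word matching vs. meaning-based matching.
# The SYNONYMS table is a made-up stand-in for learned representations.

SYNONYMS = {
    "cafe": "restaurant",
    "bistro": "restaurant",
    "restaurants": "restaurant",
}

def normalize(term: str) -> str:
    return SYNONYMS.get(term, term)

def exact_match(query: str, page: str) -> bool:
    """Old-style: every query word must literally appear on the page."""
    return all(t in page.lower().split() for t in query.lower().split())

def semantic_match(query: str, page: str) -> bool:
    """Meaning-based: match at the concept level, not the literal word."""
    q = {normalize(t) for t in query.lower().split()}
    p = {normalize(t) for t in page.lower().split()}
    return q <= p

page = "the best bistro near the station"
print(exact_match("restaurant near station", page))    # False: no literal "restaurant"
print(semantic_match("restaurant near station", page)) # True: "bistro" maps to the same concept
```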

Query expansion didn’t start with AI. Dean pointed to 2001, when Google moved its index into memory across enough machines to make query expansion cheap and fast. Dean said:

  • “One of the things that really happened in 2001 was we were sort of working to scale the system in multiple dimensions. So one is we wanted to make our index bigger, so we could retrieve from a larger index, which always helps your quality in general. Because if you don’t have the page in your index, you’re going to not do well.
  • “And then we also needed to scale our capacity because we were, our traffic was growing quite extensively. So we had a sharded system where you have more and more shards as the index grows, you have like 30 shards. Then if you want to double the index size, you make 60 shards so that you can bound the latency by which you respond for any particular user query. And then as traffic grows, you add more and more replicas of each of those.
  • “And so we eventually did the math that realized that in a data center where we had say 60 shards and 20 copies of each shard, we now had 1,200 machines with disks. And we did the math and we’re like, Hey, one copy of that index would actually fit in memory across 1,200 machines. So in 2001, we … put our entire index in memory and what that enabled from a quality perspective was amazing.”

Before that, adding terms was expensive because it required disk access. Once the index lived in memory, Google could expand a short query into dozens of related terms — adding synonyms and variations to better capture meaning. Dean said:

  • “Before, you had to be really careful about how many different terms you looked at for a query, because every one of them would involve a disk seek.
  • “Once you have the whole index in memory, it’s totally fine to have 50 terms you throw into the query from the user’s original three- or four-word query. Because now you can add synonyms like restaurant and restaurants and cafe and bistro and all these things.
  • “And you can suddenly start … getting at the meaning of the word as opposed to the exact semantic form the user typed in. And that was … 2001, very much pre-LLM, but really it was about softening the strict definition of what the user typed in order to get at the meaning.”

That change pushed Search toward intent and semantic matching years before LLMs. AI Mode (and Google’s other AI experiences) continues Google’s ongoing shift toward meaning-based retrieval, enabled by better systems and more compute.
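The memory-era expansion Dean describes — fanning a short user query out into synonyms and variants once lookups became cheap — can be sketched as a simple fan-out. The expansion table is invented for illustration; Google's actual synonym systems are far richer.

```python
# Sketch of query expansion: each user term fans out into synonyms
# and variants, turning a 3-word query into a much wider one.
# The EXPANSIONS table is a made-up stand-in.

EXPANSIONS = {
    "restaurant": ["restaurant", "restaurants", "cafe", "bistro"],
    "cheap": ["cheap", "inexpensive", "affordable", "budget"],
}

def expand_query(query: str) -> list[str]:
    expanded = []
    for term in query.lower().split():
        # Unknown terms pass through unchanged
        expanded.extend(EXPANSIONS.get(term, [term]))
    return expanded

print(expand_query("cheap restaurant paris"))
# → ['cheap', 'inexpensive', 'affordable', 'budget',
#    'restaurant', 'restaurants', 'cafe', 'bistro', 'paris']
```

With an on-disk index, each of those nine terms would have cost a disk seek; in memory, the wider query is essentially free — which is why the 2001 change mattered.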

Freshness as a core advantage. Dean said one of Search’s biggest transformations was update speed. Early systems refreshed pages as rarely as once a month. Over time, Google built infrastructure that can update pages in under a minute. Dean said:

  • “In the early days of Google, we were growing the index quite extensively. We were growing the update rate of the index. So the update rate actually is the parameter that changed the most.”

That improved results for news queries and affected the main search experience. Users expect current information, and the system is designed to deliver it. Dean said:

  • “If you’ve got last month’s news index, it’s not actually that useful.”

Google uses systems to decide how often to crawl a page, balancing how likely it is to change with how valuable the latest version is. Even pages that change infrequently may be crawled often if they’re important enough. Dean said:

  • “There’s a whole … system behind the scenes that’s trying to decide update rates and importance of the pages. So, even if the update rate seems low, you might still want to recrawl important pages quite often because the likelihood they change might be low, but the value of having updated is high.”
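The trade-off Dean describes — likelihood of change weighed against the value of having a fresh copy — amounts to an expected-value calculation. A minimal sketch, with invented pages and numbers; Google's real scheduling system is far more elaborate:

```python
# Illustrative crawl-priority heuristic: expected value of recrawling now
# = P(page changed) * value of having the fresh version.
# Pages and numbers are invented for illustration.

def crawl_priority(change_prob: float, importance: float) -> float:
    return change_prob * importance

pages = {
    "breaking-news-homepage": crawl_priority(0.99, 1.0),
    "major-gov-portal":       crawl_priority(0.05, 0.9),  # rarely changes, but high value
    "dormant-blog-post":      crawl_priority(0.05, 0.1),  # rarely changes, low value
}

for url, score in sorted(pages.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{url}: {score:.3f}")
# → breaking-news-homepage: 0.990
#   major-gov-portal: 0.045
#   dormant-blog-post: 0.005
```

Note that the two rarely-changing pages end up with very different priorities — the high-importance one still earns frequent recrawls, matching Dean's point.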

Why we care. AI answers don’t bypass ranking, crawl prioritization, or relevance signals. They depend on them. Eligibility, quality, and freshness still determine which pages are retrieved and narrowed. LLMs change how content is synthesized and presented — but the competition to enter the underlying candidate set remains a search problem.

The interview. Owning the AI Pareto Frontier — Jeff Dean

Behind the AI interface, a staged system narrows tens of thousands of documents to a few, showing that visibility hinges on classic signals.

Why AI optimization is just long-tail SEO done right

17 February 2026 at 19:56

If you look at job postings on Indeed and LinkedIn, you’ll see a wave of acronyms added to the alphabet soup as companies try to hire people to boost visibility on large language models (LLMs).

Some people are calling it generative engine optimization (GEO). Others call it answer engine optimization (AEO). Still others call it artificial intelligence optimization (AIO). I prefer large model answer optimization (LMAO).

I find these new acronyms a bit ridiculous because while many like to think AI optimization is new, it isn’t. It’s just long-tail SEO — done the way it was always meant to be done.

Why LLMs still rely on search

Most LLMs (e.g., GPT-4o, Claude 4.5, Gemini 1.5, Grok-2) are transformers trained to do one thing: predict the next token given all previous tokens.

AI companies train them on massive datasets from public web crawls, such as:

  • Common Crawl.
  • Digitized books.
  • Wikipedia dumps.
  • Academic papers.
  • Code repositories.
  • News archives.
  • Forums.

The data is heavily filtered to remove spam, toxic content, and low-quality pages. Full pretraining is extremely expensive, so companies run major foundation training cycles only every few years and rely on lighter fine-tuning for more frequent updates.

So what happens when an LLM encounters a question it can’t answer with confidence, despite the massive amount of training data?

AI companies use real-time web search and retrieval-augmented generation (RAG) to keep responses fresh and accurate, bridging the limits of static training data. In other words, the LLM runs a web search.

To see this in real time, many LLMs let you click an icon or “Show details” to view the process. For example, when I use Grok to find highly rated domestically made space heaters, it converts my question into a standard search query.
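That rewrite-retrieve-generate loop is the essence of RAG. A minimal sketch, with placeholder functions standing in for a real search API and a real LLM call — none of these names or behaviors come from any actual product:

```python
# Minimal RAG loop. web_search and generate are stand-ins for a live
# search API (Bing, Brave, Google, ...) and an LLM completion call.

def web_search(query: str) -> list[str]:
    """Placeholder: a real implementation would call a search API."""
    return [
        "Review roundup: top-rated US-made space heaters",
        "Buying guide: safety features to look for in a space heater",
    ]

def generate(prompt: str) -> str:
    """Placeholder: a real implementation would call an LLM."""
    return f"Answer synthesized from prompt ({len(prompt)} chars)"

def answer_with_rag(question: str) -> str:
    # 1. Rewrite the conversational prompt into a search query
    query = question.lower().replace("can you find me ", "")
    # 2. Retrieve fresh documents the model was not trained on
    docs = web_search(query)
    # 3. Ground the generation in the retrieved context
    context = "\n".join(docs)
    return generate(f"Using these sources:\n{context}\n\nAnswer: {question}")

print(answer_with_rag("Can you find me highly rated domestically made space heaters?"))
```

The step that matters for SEO is step 2: your page has to be retrievable by an ordinary search query for the LLM to ever quote it.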

Dig deeper: AI search is booming, but SEO is still not dead

The long-tail SEO playbook is back

Many of us long-time SEO practitioners have praised the value of long-tail SEO for years. But one main reason it never took off for many brands: Google.

As long as Google’s interface was a single text box, users were conditioned to search with one- and two-word queries. Most SEO revenue came from these head terms, so priorities focused on competing for the No. 1 spot for each industry’s top phrase.

Many brands treated long-tail SEO as a distraction. Some cut content production and community management because they couldn’t see the ROI. Most saw more value in protecting a handful of head terms than in creating content to capture the long tail of search.

Fast forward to 2026. People typing LLM prompts do so conversationally, adding far more detail and nuance than they would in a traditional search engine. LLMs take these prompts and turn them into search queries. They won’t stop at a few words. They’ll construct a query that reflects whatever detail their human was looking for in the prompt.

Suddenly, the fat head of the search curve is being replaced with a fat tail. While humans continue to go to search engines for head terms, LLMs are sending these long-tail search queries to search engines for answers.

While AI companies are coy about disclosing exactly who they partner with, most public information points to the following search engines as the ones their LLMs use most often:

  • ChatGPT – Bing Search.
  • Claude – Brave Search.
  • Gemini – Google Search.
  • Grok – X Search and its own internal web search tool.
  • Perplexity – Uses its own hybrid index.

Right now, humans conduct billions of searches each month on traditional search engines. As more people turn to LLMs for answers, we’ll see exponential growth in LLMs sending search queries on their behalf.

SEO is being reborn.

Dig deeper: Why ‘it’s just SEO’ misses the mark in the era of AI SEO

How to do long-tail SEO with help from AI

The principles of long-tail SEO haven’t changed much. It’s best summed up by Baseball Hall of Famer Wee Willie Keeler: “Keep your eye on the ball and hit ’em where they ain’t.”

Success has always depended on understanding your audience’s deepest needs, knowing what truly differentiates your brand, and creating content at the intersection of the two.

As straightforward as this strategy has been, few have executed it well, for understandable reasons.

Reading your customers’ minds is hard. Keyword research is tedious. Content creation is hard. It’s easy to get lost in the weeds.

Happily, there’s someone to help: your favorite LLM.

Here are a few best practices I’ve used to create strong long-tail content over the years, with a twist. What once took days, weeks, or even months, you can now do in minutes with AI.

1. Ask your LLM what people search when looking for your product or service

The first rule of long-tail SEO has always been to get into your audience’s heads and understand their needs. This once required commissioning surveys and hiring research firms to figure out.

But for most brands and industries, an LLM can handle at least the basics. Here’s a sample prompt you can use.

Act as an SEO strategist and customer research analyst. You're helping with long-tail keyword discovery by modeling real customer questions.

I want to discover long-tail search questions real people might ask about my business, products, and industry. I’m not looking for mere keyword lists. Generate realistic search questions that reflect how people research, compare options, solve problems, and make decisions.

Company name: [COMPANY NAME]
Industry: [INDUSTRY]
Primary product/service: [PRIMARY PRODUCT OR SERVICE]
Target customer: [TARGET AUDIENCE]
Geography (if relevant): [LOCATION OR MARKET]

Generate a list of 75 – 100 realistic, natural-language search queries grouped into the following categories:

AWARENESS
• Beginner questions about the category
• Problem-based questions (pain points, frustrations, confusion)

CONSIDERATION
• Comparison questions (alternatives, competitors, approaches)
• “Best for” and use-case questions
• Cost and pricing questions

DECISION
• Implementation or getting-started questions
• Trust, credibility, and risk questions

POST-PURCHASE
• Troubleshooting questions
• Optimization and advanced/expert questions

EDGE CASES
• Niche scenarios
• Uncommon but realistic situations
• Advanced or expert questions

Guidelines:
• Write queries the way real people search in Google or ask AI assistants.
• Prioritize specificity over generic keywords.
• Include question formats, “how to” queries, and scenario-based searches.
• Avoid marketing language.
• Include emotional, situational, and practical context where relevant.
• Don't repeat the same query structure with minor variations.
• Each query should suggest a clear content angle.

Output as a clean bullet list grouped by category.

You can tweak this prompt for your brand and industry. The key is to force the LLM (and yourself) to think like a customer and avoid the trap of generating keyword lists that are just head-term variations dressed up as long-tail queries.

With a prompt like this, you move away from churning out “keyword ideas” and toward understanding real customer needs you can build useful content around.

Dig deeper: If SEO is rocket science, AI SEO is astrophysics

2. Use your LLM to analyze your search data

Most large brands and sites don’t realize they’ve been sitting on a treasure trove of user intelligence: on-site search data.

When customers type a query into your site’s search box, they’re looking for something they expect your brand to provide.

If you see the same searches repeatedly, it usually means one of two things:

  • You have the information, but users can’t find it.
  • You don’t have it at all.

In both cases, it’s a strong signal you need to improve your site’s UX, add meaningful content, or both.

There’s another advantage to mining on-site search data: it reveals the exact words your audience uses, not the terms your team assumes they use.

Historically, the challenge has been the time required to analyze it. I remember projects where I locked myself in a room for days, reviewing hundreds of thousands of queries line by line to find patterns — sorting, filtering, and clustering them by intent.

If you’ve done the same, you know the pattern. The first few dozen keywords represent unique concepts, but eventually you start seeing synonyms and variations.

All of this is buried treasure waiting to be explored. Your LLM can help. Here’s a sample prompt you can use:

You're an SEO strategist analyzing internal site search data.

My goal is to identify content opportunities from what users are searching for on my website – including both major themes and specific long-tail needs within those themes.

I have attached a list of site search queries exported from GA4. Please:

STEP 1 – Cluster by intent
Group the queries into logical intent-based themes.

STEP 2 – Identify long-tail signals inside each theme
Within each theme:
• Identify recurring modifiers (price, location, comparisons, troubleshooting, etc.)
• Identify specific entities mentioned (products, tools, features, audiences, problems)
• Call out rare but high-intent searches
• Highlight wording that suggests confusion or unmet expectations

STEP 3 – Generate content ideas
For each theme:
• Suggest 3 – 5 content ideas
• Include at least one long-tail content idea derived directly from the queries
• Include one “high-intent” content idea
• Include one “problem-solving” content idea

STEP 4 – Identify UX or navigation issues
Point out searches that suggest:
• Users cannot find existing content
• Misleading navigation labels
• Missing landing pages

Output format:
Theme:
Supporting queries:
Long-tail insights:
Content opportunities:
UX observations:

Again, customize this prompt based on what you know about your audience and how they search.

The detail matters. Many SEO practitioners stop at a prompt like “give me a list of topics for my clients,” but this pushes the LLM beyond simple clustering to understand the intent behind the searches.

I used on-site search data because it’s one of the richest, most transparent, and most actionable sources. But similar prompts can uncover hidden value in other keyword lists, such as “striking distance” terms from Google Search Console or competitive keywords from Semrush.

Even better, if your organization keeps detailed customer interaction records (e.g., sales call notes, support tickets, chat transcripts), those can be more valuable. Unlike keyword datasets, they capture problems in full sentences, in the customer’s own words, often revealing objections, confusion, and edge cases that never appear in traditional keyword research.

3. Create great content

The next step is to create great content.

Your goal is to create content so strong and authoritative that it’s picked up by sources like Common Crawl and survives the intense filtering AI companies apply when building LLM training sets. Realistically, only pioneering brands and recognized authorities can expect to operate in this rarefied space.

For the rest of us, the opportunity is creating high-quality long-tail content that ranks at the top across search engines — not just Google, but Bing, Brave, and even X.

This is one area where I wouldn’t rely on LLMs, at least not to generate content from scratch.

Why?

LLMs are sophisticated pattern matchers. They surface and remix information from across the internet, even obscure material. But they don’t produce genuinely original thought.

At best, LLMs synthesize. At worst, they hallucinate.

Many worry AI will take their jobs. And it will — for anyone who thinks “great content” means paraphrasing existing authority sources and competing with Wikipedia-level sites for broad head terms. Most brands will never be the primary authority on those terms. That’s OK.

The real opportunity is becoming the authority on specific, detailed, often overlooked questions your audience actually has. The long tail is still wide open for brands willing to create thoughtful, experience-driven content that doesn’t already exist everywhere else.

We need to face facts. The fat head is shrinking. The land rush is now for the “fat tail.” Here’s what brands need to do to succeed:

Dominate searches for your brand

Search your brand name in a keyword tool like Semrush and review the long-tail variations people type into Google. You’ll likely find more than misspellings. You’ll see detailed queries about pricing, alternatives, complaints, comparisons, and troubleshooting.

If you don’t create content that addresses these topics directly — the good and the bad — someone else will. It might be a Reddit thread from someone who barely knows your product, a competitor attacking your site, a negative Google Business Profile review, or a complaint on Trustpilot.

When people search your brand, your site should be the best place for honest, complete answers — even and especially when they aren’t flattering. If you don’t own the conversation, others will define it for you.

The time for “frequently asked questions” is over. You need to answer every question about your brand — frequent, infrequent, and everything in between.

Go long

Head terms in your industry have likely been dominated by top brands for years. That doesn’t mean the opportunity is gone.

Beneath those competitive terms is a vast layer of unbranded, long-tail searches that have likely been ignored. Your data will reveal them.

Review on-site search, Google Search Console queries, customer support questions, and forums like Reddit. These are real people asking real questions in their own words.

The challenge isn’t finding questions to write about. It’s delivering the best answers — not one-line responses to check a box, but clear explanations, practical examples, and content grounded in real experience that reflects what sets your brand apart.

Dig deeper: Timeless SEO rules AI can’t override: 11 unshakeable fundamentals

Expertise is now a commodity: Lean into experience, authority, and trust

Publishing expert content still matters, but its role has changed. Today, anyone can generate “expert-sounding” articles with an LLM.

Whether that content ranks in Google is increasingly beside the point, as many users go straight to AI tools for answers.

As the “expertise” in E-E-A-T becomes table stakes, differentiation comes from what AI and competitors can’t easily replicate: experience, authority, and trust.

That means publishing:

  • Original insights and genuine thought leadership from people inside your company.
  • Real customer stories with measurable outcomes.
  • Transparent reviews and testimonials.
  • Evidence that your brand delivers what it promises.

This isn’t just about blog content. These signals should appear across your site — from your About page to product pages to customer support content. Every page should reinforce why a real person should trust your brand.

Stop paywalling your best content

I’m seeing more brands put their strongest content behind logins or paywalls. I understand why. Many need to protect intellectual property and preserve monetization. But as a long-term strategy, this often backfires.

If your content is truly valuable, the ideas will spread anyway. A subscriber may paraphrase it. An AI system may summarize it. A crawler may access it through technical workarounds. In the end, your insights circulate without attribution or brand lift.

When your best content is publicly accessible, it can be cited, linked to, indexed, and discussed. That visibility builds authority and trust over time.

In a search- and AI-driven ecosystem, discoverability often outweighs modest direct content monetization.

This doesn’t mean content businesses can’t charge for anything. It means being strategic about what you charge for. A strong model is to make core knowledge and thought leadership open while monetizing things such as:

  • Tools.
  • Community access.
  • Premium analysis or data.
  • Courses or certifications.
  • Implementation support.
  • Early access or deeper insights.

In other words, let your ideas spread freely and monetize the experience, expertise, and outcomes around them.

Stop viewing content as a necessary evil

I still see brands hiding content behind CSS “read more” links or stuffing blocks of “SEO copy” at the bottom of pages, hoping users won’t notice but search engines will.

Spoiler alert: they see it. They just don’t care.

Content isn’t something you add to check an SEO box or please a robot. Every word on your site must serve your customers. When content genuinely helps users understand, compare, and decide, it becomes an asset that builds trust and drives conversions.

If you’d be embarrassed for users to read your content, you’re thinking about it the wrong way. There’s no such thing as content that’s “bad for users but good for search engines.” There never was.

Embrace user-generated content

No article on long-tail SEO is complete without discussing user-generated content. I covered forums and Q&A sites in a previous article (see: The reign of forums: How AI made conversation king), and they remain one of the most efficient ways to generate authentic, unique content.

The concept is simple. You have an audience that’s already passionate and knowledgeable. They likely have more hands-on experience with your brand and industry than many writers you hire. They may already be talking about your brand offline, in customer communities, or on forums like Reddit.

Your goal is to bring some of those conversations onto your site.

User-generated content naturally produces the long-tail language marketing teams rarely create on their own. Customers:

  • Describe problems differently.
  • Ask unexpected questions.
  • Compare products in ways you didn’t anticipate.
  • Surface edge cases, troubleshooting scenarios, and real-world use cases that rarely appear in polished marketing copy.

This is exactly the kind of content long-tail SEO thrives on.

It’s also the kind of content AI systems and search engines increasingly recognize as credible because it reflects real experience rather than brand messaging many dismiss as inauthentic.

Brands that do this well don’t just capture long-tail traffic. They build trust, reduce support costs, and dominate long-tail searches and prompts.

In the age of AI-generated content, real human experience is one of the strongest differentiators.

The new SEO playbook looks a lot like the old one

For years, SEO has been shaped by the limits of the search box. Short queries and head terms dominated strategy, and long-tail content was often treated as optional.

LLMs are changing that dynamic. AI is expanding search, not eliminating it.

AI systems encourage people to express what they actually want to know. Those detailed prompts still need answers, and those answers come from the web.

That means the SEO opportunity is shifting from competing over a small set of keywords to becoming the best source of answers to thousands of specific questions.

Brands that succeed will:

  • Deeply understand their audience.
  • Publish genuinely useful content.
  • Build trust through real engagement and experience.

That’s always been the recipe for SEO success. But our industry has a habit of inventing complex tactics to avoid doing the simple work well.

Most of us remember doorway pages, exact match domains, PageRank sculpting, LSI obsession, waves of auto-generated pages, and more. Each promised an edge. Few replaced the value of helping users.

We’re likely to see the same cycle repeat in the AI era.

The reality is simpler. AI systems aren’t the audience. They’re intermediaries helping humans find trustworthy answers.

If you focus on helping people understand, decide, and solve problems, you’re already optimizing for AI — whatever you call it.

Dig deeper: Is SEO a brand channel or a performance channel? Now it’s both

Google Search Console AI-powered configuration rolling out

17 February 2026 at 18:27

Over two months ago, Google began testing its AI-powered configuration tool, which lets you ask AI questions about the Google Search Console Performance reports and brings back answers. Google is now rolling out this tool to everyone.

Google said on LinkedIn, “The Search Console’s new AI-powered configuration is now available to everyone!”

AI-powered configuration. AI-powered configuration “lets you describe the analysis you want to see in natural language. Your inputs are then transformed into the appropriate filters and settings, instantly configuring the report for you,” Google said.

Rolling out now. If you log in to your Search Console account and click on the Performance report, you may see a note at the top that says “New! Customize your Performance report using AI.”

When you click it, the AI tool opens.

More details. As we reported earlier, Google said “The AI-powered configuration feature is designed to streamline your analysis by handling three key elements for you.”

  • Selecting metrics: Choose which of the four available metrics – Clicks, Impressions, Average CTR, and Average Position – to display based on your question.
  • Applying filters: Narrow down data by query, page, country, device, search appearance, or date range.
  • Configuring comparisons: Set up complex comparisons (like custom date ranges) without manual setup.

Why we care. This is only supported in the Performance report for Search results; it isn’t available for the Discover or News reports yet. And because it is AI, the answers may not be perfect. But it can be fun to play with and may get you thinking about things you hadn’t considered yet.

So give it a try.

Rand Fishkin proved AI recommendations are inconsistent – here’s why and how to fix it

17 February 2026 at 18:00

Rand Fishkin just published the most important piece of primary research the AI visibility industry has seen so far.

His conclusion – that AI tools produce wildly inconsistent brand recommendation lists, making “ranking position” a meaningless metric – is correct, well-evidenced, and long overdue.

But Fishkin stopped one step short of the answer that matters.

He didn’t explore why some brands appear consistently while others don’t, or what would move a brand from inconsistent to consistent visibility. That solution is already formalized, patent pending, and proven in production across 73 million brand profiles.

When I shared this with Fishkin directly, he agreed. The AI models are pulling from a semi-fixed set of options, and the consistency comes from the data. He just didn’t have the bandwidth to dig deeper, which is fair enough, but the digging has been done – I’ve been doing it for a decade.

Here’s what Fishkin found, what it actually means, and what the data proves about what to do about it.

Fishkin’s data killed the myth of AI ranking position

Fishkin and Patrick O’Donnell ran 2,961 prompts across ChatGPT, Claude, and Google AI, asking for brand recommendations across 12 categories. The findings were surprising for most.

Fewer than 1 in 100 runs produced the same list of brands, and fewer than 1 in 1,000 produced the same list in the same order. These are probability engines that generate unique answers every time. Treating them as deterministic ranking systems is – as Fishkin puts it – “provably nonsensical,” and I’ve been saying this since 2022. I’m grateful Fishkin finally proved it with data.

But Fishkin also found something he didn’t fully unpack. Visibility percentage – how often a brand appears across many runs of the same prompt – is statistically meaningful. Some brands showed up almost every time, while others barely appeared at all.

That variance is where the real story lies.

Fishkin acknowledged this but framed it as a better metric to track. The real question isn’t how to measure AI visibility. It’s why some brands achieve consistent visibility while others don’t, and what moves your brand from the inconsistent pile to the consistent pile.

That’s not a tracking problem. It’s a confidence problem.

AI systems are confidence engines, not recommendation engines

AI platforms – ChatGPT, Claude, Google AI, Perplexity, Gemini, all of them – generate every response by sampling from a probability distribution shaped by:

  • What the model knows.
  • How confidently it knows it.
  • What it retrieved at the moment of the query.

When the model is highly confident about an entity’s relevance, that entity appears consistently. When the model is uncertain, the entity sits at a low probability weight in the distribution – included in some samples, excluded in others – not because the selection is random but because the AI doesn’t have enough confidence to commit.

That’s the inconsistency Fishkin documented, and I recognized it immediately because I’ve been tracking exactly this pattern since 2015. 

  • City of Hope appearing in 97% of cancer care responses isn’t luck. It’s the result of deep, corroborated, multi-source presence in exactly the data these systems consume. 
  • The headphone brands at 55%-77% are in a middle zone – known, but not unambiguously dominant. 
  • The brands at 5%-10% have low confidence weight, and the AI includes them in some outputs and not others because it lacks the confidence to commit consistently. 

Confidence isn’t just about what a brand publishes or how it structures its content. It’s about where that brand stands relative to every other entity competing for the same query – a dimension I’ve recently formalized as Topical Position.

I’ve formalized this phenomenon as “cascading confidence” – the cumulative entity trust that builds or decays through every stage of the algorithmic pipeline, from the moment a bot discovers content to the moment an AI generates a recommendation. It’s the throughline concept in a framework I published this week.

Dig deeper: Search, answer, and assistive engine optimization: A 3-part approach

Every piece of content passes through 10 gates before influencing an AI recommendation

The pipeline is called DSCRI-ARGDW – discovered, selected, crawled, rendered, indexed, annotated, recruited, grounded, displayed, and won. That sounds complicated, but I can summarize it in a single question that repeats at every stage: How confident is the system in this content?

  • Is this URL worth crawling? 
  • Can it be rendered correctly? 
  • What entities and relationships does it contain? 
  • How sure is the system about those annotations? 
  • When the AI needs to answer a question, which annotated content gets pulled from the index? 

Confidence at each stage feeds the next. A URL from a well-structured, fast-rendering, semantically clean site arrives at the annotation stage with high accumulated confidence before a single word of content is analyzed. A URL from a slow, JavaScript-heavy site with inconsistent information arrives with low confidence, even if the actual content is excellent.

This is pipeline attenuation, and here’s where the math gets unforgiving. The relationship is multiplicative, not additive:

  • C_final = C_initial × ∏τᵢ

In plain English, the final confidence an AI system has in your brand equals the initial confidence from your entity home multiplied by the transfer coefficient at every stage of the pipeline. The entity home – the canonical web property that anchors your entity in every knowledge graph and every AI model – sets the starting confidence, and then each stage either preserves or erodes it. 

Maintain 90% confidence at each of 10 stages, and end-to-end confidence is 0.9¹⁰ = 35%. At 80% per stage, it’s 0.8¹⁰ = 11%. One weak stage – say 50% at rendering because of heavy JavaScript – drops the total from 35% to 19% even if every other stage is at 90%. One broken stage can undo the work of nine good ones.
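The attenuation arithmetic is easy to make concrete. In this minimal Python sketch, the stage names follow the DSCRI-ARGDW pipeline, but the per-stage transfer coefficients are hypothetical, reproducing the example of nine stages at 90% with rendering weakened to 50%:

```python
from math import prod

# Hypothetical per-stage transfer coefficients for the DSCRI-ARGDW pipeline:
# nine stages holding 90% confidence, with rendering weakened to 50%.
stages = {
    "discovered": 0.9, "selected": 0.9, "crawled": 0.9, "rendered": 0.5,
    "indexed": 0.9, "annotated": 0.9, "recruited": 0.9, "grounded": 0.9,
    "displayed": 0.9, "won": 0.9,
}

def end_to_end_confidence(coefficients):
    """Multiplicative attenuation: final confidence is the product
    of every stage's transfer coefficient."""
    return prod(coefficients.values())

def bottleneck(coefficients):
    """The stage leaking the most confidence (lowest coefficient)."""
    return min(coefficients, key=coefficients.get)

print(round(end_to_end_confidence(stages), 2))  # 0.19
print(bottleneck(stages))                       # rendered
```

Raising the rendering coefficient back to 0.9 lifts the product to roughly 0.35, which is why fixing the single weakest gate often beats polishing nine good ones.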

This multiplicative principle isn’t new, and it doesn’t belong to anyone. In 2019, I published an article, How Google Universal Search Ranking Works: Darwinism in Search, based on a direct explanation from Google’s Gary Illyes. He described how Google calculates ranking “bids” by multiplying individual factor scores rather than adding them. A zero on any factor kills the entire bid, no matter how strong the other factors are.

Google applies this multiplicative model to ranking factors within a single system, and nobody owns multiplication. But what the cascading confidence framework does is apply this principle across the full 10-stage pipeline, across all three knowledge graphs.

The system provides measurable transfer coefficients at every transition and bottleneck detection that identifies exactly where confidence is leaking. The math is universal, but the application to a multi-stage, multi-graph algorithmic pipeline is the invention.

This complete system is the subject of a patent application I filed with the INPI titled “Système et procédé d’optimisation de la confiance en cascade à travers un pipeline de traitement algorithmique multi-étapes et multi-graphes” (“System and method for optimizing cascading confidence across a multi-stage, multi-graph algorithmic processing pipeline”). It’s not a metaphor; it’s an engineered system with an intellectual lineage going back seven years to a principle a Google engineer confirmed to me in person.

Fishkin measured the output – the inconsistency of recommendation lists. But the output is a symptom, and the cause is confidence loss at specific stages of this pipeline, compounded across multiple knowledge representations.

You can’t fix inconsistency by measuring it more precisely. You can only fix it by building confidence at every stage.

The corroboration threshold is where AI shifts from hesitant to assertive

There’s a specific transition point where AI behavior changes. I call it the “corroboration threshold” – the minimum number of independent, high-confidence sources corroborating the same conclusion about your brand before the AI commits to including it consistently.

Below the threshold, the AI hedges. It says “claims to be” instead of “is,” it includes a brand in some outputs but not others, and the reason isn’t randomness but insufficient confidence.

The brand sits in the low-confidence zone, where inconsistency is the predictable outcome. Above the threshold, the AI asserts – stating relevance as fact, including the brand consistently, operating with the kind of certainty that produces City of Hope’s 97%.

My data across 73 million brand profiles places this threshold at approximately 2-3 independent, high-confidence sources corroborating the same claim as the entity home. That number is deceptively small because “high-confidence” is doing the heavy lifting – these are sources the algorithm already trusts deeply, including Wikipedia, industry databases, and authoritative media. 

Without those high-authority anchors, the threshold rises considerably because more sources are needed and each carries less individual weight. The threshold isn’t a one-time gate. Once crossed, the confidence compounds with every subsequent corroboration, which is why brands that cross it early pull further ahead over time, while brands that haven’t crossed it yet face an ever-widening gap.

Not identical wording, but equivalent conviction. The entity home states, “X is the leading authority on Y,” two or three independent, authoritative third-party sources confirm it with their own framing, and the AI encodes it as fact.

This fact is visible in my data, and it explains exactly why Fishkin’s experiment produced the results it did. In narrow categories like LA Volvo dealerships or SaaS cloud computing providers – where few brands exist and corroboration is dense – AI responses showed higher pairwise correlation. 

In broad categories like science fiction novels – where thousands of options exist and corroboration is thin – responses were wildly diverse. The corroboration threshold aligns with Fishkin’s findings.

Dig deeper: The three AI research modes redefining search – and why brand wins

Authoritas proved that fabricated entities can’t fool AI confidence systems

Authoritas published a study in December 2025 – “Can you fake it till you make it in the age of AI?” – that tested this directly, and the results confirm that Cascading Confidence isn’t just theory. Where Fishkin’s research shows the output problem – inconsistent lists – Authoritas shows the input side.

Authoritas investigated a real-world case where a UK company created 11 entirely fictional “experts” – made-up names, AI-generated headshots, faked credentials. They seeded these personas into more than 600 press articles across UK media, and the question was straightforward: Would AI models treat these fake entities as real experts?

The answer was absolute: Across nine AI models and 55 topic-based questions – “Who are the UK’s leading experts in X?” – zero fake experts appeared in any recommendation. Six hundred press articles, and not a single AI recommendation. That might seem to contradict a threshold of 2-3 sources, but it confirms it. 

The threshold requires independent, high-confidence sources, and 600 press articles from a single seeding campaign are neither independent (they all trace to the same origin) nor high-confidence (press mentions sit in the document graph only).

The AI models looked past the surface-level coverage and found no deep entity signals – no entity home, no knowledge graph presence, no conference history, no professional registration, no corroboration from the kind of authoritative sources that actually move the needle.

The fake personas had volume, they had mentions, but what they lacked was cascading confidence – the accumulated trust that builds through every stage of the pipeline. Volume without confidence means inconsistent appearance at best, while confidence without volume still produces recommendations.

AI evaluates confidence — it doesn’t count mentions. Confidence requires multi-source, multi-graph corroboration that fabricated entities fundamentally can’t build.

AI citability concentration increased 293% in under two months

Authoritas used the weighted citability score, or WCS, a metric that measures how much AI engines trust and cite entities, calculated across ChatGPT, Gemini, and Perplexity using cross-context questions.

I have no influence over their data collection or their results. Fishkin’s methodology and Authoritas’ aren’t identical. Fishkin pinged the same query repeatedly to measure variance, while Authoritas tracks varied queries on the same topic. That said, the directional finding is consistent.

Their dataset includes 143 recognized digital marketing experts, with full snapshots from the original study by Laurence O’Toole and Authoritas in December 2025 and their latest measurement on Feb. 2. The pattern across the entire dataset tells a story that goes far beyond individual scores.

  • The top 10 experts captured 30.9% of all citability in December. By February, they captured 59.5% – a 92% increase in concentration in under two months.
  • The HHI, or Herfindahl-Hirschman Index, the standard measure of market concentration, rose from 0.026 to 0.104 – a 293% increase in concentration. This happened while the total expert pool widened from 123 to 143 tracked entities.

More experts are being cited, the field is getting bigger, and the top is pulling away faster. Dominance is compounding while the long tail grows.
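For reference, HHI is simply the sum of squared fractional shares, and the top-10 figure is the combined share of the largest entities. A sketch with invented shares (not the Authoritas data):

```python
def hhi(shares):
    """Herfindahl-Hirschman Index: sum of squared fractional shares.
    Roughly 1/N for a perfectly even field, 1.0 for a monopoly."""
    return sum(s * s for s in shares)

def top_n_share(shares, n=10):
    """Combined share of the n largest entities."""
    return sum(sorted(shares, reverse=True)[:n])

# Invented citability shares for a 15-entity field (sums to 1.0)
shares = [0.20, 0.15, 0.10, 0.10, 0.05] + [0.04] * 10

print(round(hhi(shares), 3))             # 0.101
print(round(top_n_share(shares, 3), 2))  # 0.45
```

Note that a field can grow (more nonzero shares) while HHI rises, which is exactly the pattern described above: a longer tail and a more dominant head at the same time.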

This is cascading confidence at population scale. The experts who actively manage their digital footprint – clean entity home, corroborated claims, consistent narrative across the algorithmic trinity – aren’t just maintaining their position, they’re accelerating away from everyone else.

Each cycle of AI training and retrieval reinforces their advantage – confident entities generate confident AI outputs, which build user trust, which generate positive engagement signals, which further reinforce the AI’s confidence. It’s a flywheel, and once it’s spinning, it becomes very, very hard for competitors to catch up.

At the individual level, the data confirms the mechanism. I lead the dataset at a WCS of 23.50, up from 21.48 in December, a gain of +2.02. That’s not because I’m more famous than everyone else on the list.

It’s because we’ve been systematically building my cascading confidence for years – clean entity home, corroborated claims across the algorithmic trinity, consistent narrative, structured data, deep knowledge graph presence.

I’m the primary test case because I’m in control of all my variables – I have a huge head start. In a future article, I’ll dig into the details of the scores and why the experts have the scores they do.

The pattern across my client base mirrors the population data. Brands that systematically clean their digital footprint, anchor entity confidence through the entity home, and build corroboration across the algorithmic trinity don’t just appear in AI recommendations.

They appear consistently, their advantage compounds over time, and they exit the low-confidence zone to enter the self-reinforcing recommendation set.

Dig deeper: From SEO to algorithmic education: The roadmap for long-term brand authority

AI retrieves from three knowledge representations simultaneously, not one

AI systems pull from what I call the Three Graphs model – the algorithmic trinity – and understanding this explains why some brands achieve near-universal visibility while others appear sporadically.

  • The entity graph, or knowledge graph, contains explicit entities with binary verified edges and low fuzziness – either a brand is in, or it’s not.
  • The document graph, or search engine index, contains annotated URLs with scored and ranked edges and medium fuzziness.
  • The concept graph, or LLM parametric knowledge, contains learned associations with high fuzziness, and this is where the inconsistency Fishkin documented comes from.

When retrieval systems combine results from multiple sources – and they do, using mechanisms analogous to reciprocal rank fusion – entities present across all three graphs receive a disproportionate boost.

The effect is multiplicative, not additive. A brand that has a strong presence in the knowledge graph and the document index and the concept space gets chosen far more reliably than a brand present in only one.
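Reciprocal rank fusion itself is straightforward to sketch. In this illustration the brand names and per-graph rankings are invented; the point is that an entity present in all three lists accumulates score from each of them:

```python
from collections import defaultdict

def reciprocal_rank_fusion(rankings, k=60):
    """Standard RRF: each ranked list contributes 1 / (k + rank) for every
    item, so an item present in several lists accumulates score from each."""
    scores = defaultdict(float)
    for ranking in rankings:
        for rank, item in enumerate(ranking, start=1):
            scores[item] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Invented per-graph rankings for a single query.
# BrandB is absent from the entity graph; BrandC appears in all three.
entity_graph   = ["BrandA", "BrandC"]            # knowledge graph
document_graph = ["BrandB", "BrandA", "BrandC"]  # search index
concept_graph  = ["BrandA", "BrandC", "BrandB"]  # LLM associations

print(reciprocal_rank_fusion([entity_graph, document_graph, concept_graph]))
# ['BrandA', 'BrandC', 'BrandB']
```

BrandB is ranked first in the document graph, yet BrandC, present in all three graphs, outranks it in the fused list: presence everywhere beats prominence in one place.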

This explains a pattern Fishkin noticed but didn’t have the framework to interpret – why visibility percentages clustered differently across categories. The brands with near-universal visibility aren’t just “more famous,” they have dense, corroborated presence across all three knowledge representations. The brands in the inconsistent pool are typically present in only one or two. 

The Authoritas fake expert study confirms this from the negative side. The fake personas existed only in the document graph, press articles, with zero entity graph presence and negligible concept graph encoding. One graph out of three, and the AI treated them accordingly.

What I tell every brand after reading Fishkin’s data

Fishkin’s recommendations were cautious – visibility percentage is a reasonable metric, ranking position isn’t, and brands should demand transparent methodology from tracking vendors. All fair, but that’s analyst advice. What follows is practitioner advice, based on doing this work in production.

Stop optimizing outputs and start optimizing inputs

The entire AI tracking industry is fixated on measuring what AI says about you, which is like checking your blood pressure without treating the underlying condition. Measure if it helps, but the work is in building confidence at every stage of the pipeline, and that’s where I focus my clients’ attention from day one.

Start at the entity home

My experience clearly demonstrates that this single intervention produces the fastest measurable results. Your entity home is the canonical web property that should anchor your entity in every knowledge graph and every AI model. If it’s ambiguous, hedging, or contradictory with what third-party sources say about you, it is actively training AI to be uncertain. 

I’ve seen aligning the entity home with third-party corroboration produce measurable changes in bottom-of-funnel AI citation behavior within weeks, and it remains the highest ROI intervention I know.

Cross the corroboration threshold for the critical claims

I ask every client to identify the claims that matter most:

  • Who you are.
  • What you do.
  • Why you’re credible. 

Then, I work with them to ensure each claim is corroborated by at least 2-3 independent, high-authority sources. Not just mentioned, but confirmed with conviction. 

This is what flips AI from “sometimes includes” to “reliably includes,” and I’ve seen it happen often enough to know the threshold is real.

Dig deeper: SEO in the age of AI: Becoming the trusted answer

Build across all three graphs simultaneously

Knowledge graph presence (structured data, entity recognition), document graph presence (indexed, well-annotated content on authoritative sites), and concept graph presence (consistent narrative across the corpus AI trains on) all need attention. 

The Authoritas study showed exactly what happens when a brand exists in only one – the AI treats it accordingly.

Work the pipeline from Gate 1, not Gate 9

Most SEO and GEO advice operates at the display stage, optimizing what AI shows. But if your content is losing confidence at discovery, selection, rendering, or annotation, it will never reach display consistently enough to matter. 

I’ve watched brands spend months on display-stage optimization that produced nothing because the real bottleneck was three stages earlier, and I always start my diagnostic at the beginning of the pipeline, not the end.

Maintain it because the gap is widening

The WCS data across 143 tracked experts shows that AI citability concentration increased 293% in under two months. The experts who maintain their digital footprint are pulling away from everyone else at an accelerating rate. 

Starting now still means starting early, but waiting means competing against entities whose advantage compounds every cycle. This isn’t a one-time project. It’s an ongoing discipline, and the returns compound with every iteration.

Fishkin proved the problem exists. The solution has been in production for a decade.

Fishkin’s research is a gift to the industry. He killed the myth of AI ranking position with data, he validated that visibility percentage, while imperfect, correlates with something real, and he raised the right questions about methodology that the AI tracking vendors should have been answering all along.

But tracking AI visibility without understanding why visibility varies is like tracking a stock price without understanding the business. The price is a signal, and the business is the thing.

AI recommendations are inconsistent when AI systems lack confidence in a brand. They become consistent when that confidence is built deliberately, through:

  • The entity home.
  • Corroborated claims that cross the corroboration threshold.
  • Multi-graph presence.
  • Every stage of the pipeline that processes your content before AI ever generates a response.

This isn’t speculation, and the evidence comes from every direction.

The process behind this approach has been under development since 2015 and is formalized in a peer-review-track academic paper. Several related patent applications have been filed in France, covering entity data structuring, prompt assembly, multi-platform coherence measurement, algorithmic barrier construction, and cascading confidence optimization.

The dataset supporting the work spans 25 billion data points across 73 million brand profiles. In tracked populations, shifts in AI citability have been observed — including cases where the top 10 experts increased their share from 31% to 60% in under two months while the overall field expanded. Independent research from Authoritas reports findings that align with this mechanism.

Fishkin proved the problem exists. My focus over the past decade has been on implementing and refining practical responses to it.

This is the first article in a series. The second piece, “What the AI expert rankings actually tell us: 8 archetypes of AI visibility,” examines how the pipeline’s effects manifest across 57 tracked experts. The third, “The ten gates between your content and an AI recommendation,” opens the DSCRI-ARGDW pipeline itself.

Google Ads adds beta data source integrations to conversion settings

17 February 2026 at 17:06

Google Ads is rolling out a beta feature that lets advertisers connect external data sources directly inside conversion action settings, tightening the link between first-party data and campaign measurement.

How it works. A new section in conversion action details — labeled “Get deeper insights about your customers’ behavior to improve measurement” — prompts advertisers to connect external databases to their Google tag.

  • Supported integrations include platforms like BigQuery and MySQL
  • The goal is to enrich conversion metrics and improve performance signals
  • The feature appears in a highlighted prompt within data attribution settings
  • Rollout is gradual and currently marked as Beta
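
The integration itself is configured in the UI, but whichever database you connect, first-party identifiers generally need to be normalized and SHA-256 hashed before they are used for measurement, as Google’s enhanced conversions already require. Here is a minimal Python sketch of that preparation step, with illustrative field names rather than any documented schema for this beta:

```python
import hashlib

def normalize_and_hash(email: str) -> str:
    """Lowercase, trim, and SHA-256 hash an email address, the
    normalization enhanced conversions expect for user identifiers."""
    return hashlib.sha256(email.strip().lower().encode("utf-8")).hexdigest()

def prepare_conversion_rows(rows: list[dict]) -> list[dict]:
    """Map raw backend rows (e.g., pulled from MySQL or BigQuery)
    into hashed, upload-ready records. Field names are hypothetical."""
    return [
        {
            "hashed_email": normalize_and_hash(r["email"]),
            "conversion_value": round(float(r["value"]), 2),
            "currency": r.get("currency", "USD"),
        }
        for r in rows
    ]
```

Hashing before data leaves your systems keeps raw emails out of the ad platform while still letting conversions be matched.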

Why we care. Direct integrations could reduce friction in syncing offline or backend data with ad measurement. This beta from Google Ads makes it easier to connect first-party data directly to conversion tracking, which can improve measurement accuracy and campaign optimization.

By integrating sources like BigQuery or MySQL, brands can feed richer customer data into their signals, helping offset data loss from privacy changes. In practical terms, better data in means smarter bidding, clearer attribution, and potentially stronger ROI.

Between the lines. Embedding data connections inside conversion settings — rather than requiring separate pipelines — makes advanced measurement more accessible to everyday advertisers, not just enterprise teams.

Zoom out. As ad platforms compete on measurement accuracy, native data integrations are becoming a key differentiator, especially for brands investing heavily in proprietary customer data.

How to create a persona GPT for SEO audience research

17 February 2026 at 17:00

In a perfect world, you could call up a top customer to pick their brain about a piece of content. But in reality, it can be extremely difficult and time-consuming to conduct audience interviews every time you need to create a new topic or refresh an old piece. 

A few years ago, content marketing was simpler – keyword intent and quality content were enough to rank at the top of Google’s SERP and earn clicks. But in the new era of AI, expectations are different.

Audience research has become critical. However, some companies may not have the resources to perform it.

One way to better understand your target audience is to create a custom GPT in ChatGPT, configured with your persona research. These aren’t replacements for audience research or interviews, but they can help you quickly identify what might be missing or wrong in your content. 

Below, I’ll explain how GPTs work so you can use them for audience research.

Perform audience research

Now that the SEO landscape is evolving, audience research is one of your strongest tools to understand the “why” behind search intent. 

Here are several easy-to-use methods and tools to get you started on research. 

  • SparkToro: Search by website, interest, or specific URL to segment different audience types. Research can be in-depth or give an overview of your audience. 
  • Review mining: Create automations through various tools and scrape reviews of your company or competitors to see what users are saying, and then analyze them. What does your target customer like? Why did they like it? What didn’t they like? Why?
  • Listen to calls/review leads: Listen to sales team interactions with customers to hear questions in real time and what led up to a call with a particular client.
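
Once reviews are scraped, the analysis can start simple. Here is a rough sketch that surfaces the words customers repeat most (a crude frequency count, not sentiment analysis; the scraping itself is left to whichever tool you use):

```python
import re
from collections import Counter

def top_words(reviews: list[str], n: int = 5, min_len: int = 4) -> list[tuple[str, int]]:
    """Count recurring words across reviews to surface what customers
    mention most; short filler words are dropped via min_len."""
    words = []
    for text in reviews:
        words += [w for w in re.findall(r"[a-z']+", text.lower()) if len(w) >= min_len]
    return Counter(words).most_common(n)
```

Feed the top terms back into the “why” questions above: if “shipping” and “support” keep coming up, what does that imply for your content?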

Dig deeper: How to do audience research for SEO

Create a customer persona

After completing your research, create a persona – a representation of your target audience. Figma and FigJam are strong tools for building them.

Your persona should include: 

  • Name, bio, and trait slider.
  • Interests, influences, goals, pain points.
  • User stories.
  • The emotional journey during and after.
  • Content focus, trigger words, and calls to action (CTAs).
  • Full customer journey steps.
  • Reviews that support data.
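
It can also help to keep the persona as structured data alongside the visual version, so it is easy to paste into a GPT later and to update as research evolves. Here is a minimal sketch using hypothetical values for a “Hank” persona (the fields mirror the list above but are not a required schema):

```python
import json

# Hypothetical persona record; names and values are illustrative.
persona = {
    "name": "Hank",
    "bio": "Operations manager at a mid-sized logistics firm",
    "goals": ["Cut fleet downtime", "Justify budget to the CFO"],
    "pain_points": ["Vendor jargon", "Slow support responses"],
    "trigger_words": ["guaranteed uptime", "no long-term contract"],
    "preferred_ctas": ["Book a demo", "See pricing"],
}

def persona_to_instructions(p: dict) -> str:
    """Serialize the persona into a block you can paste into the
    GPT's configuration alongside screenshots of your research."""
    return (
        "You are " + p["name"] + ", a customer persona. "
        "Critique marketing copy strictly from this persona's view.\n"
        + json.dumps(p, indent=2)
    )
```

A structured record like this is also easier to diff and version as you learn more about your audience.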

Create a custom GPT of your persona

Now that you have all your research and your persona, it’s time to make a GPT. 

First, log in to ChatGPT, then go to Explore GPTs in the sidebar. 

In the upper right corner, click on Create.

ChatGPT - Create

Once there, prompt ChatGPT with your audience research data and persona information. You can paste in screenshots of your data to make it easier. 

ChatGPT - Hank persona

Once all your data is in and a GPT is created, you can start talking to it. Under the Configure tab, you can use conversation starters to ask it about changes, updates, and copy.

ChatGPT conversation starters

These GPTs, like all AI models, aren’t 100% accurate. They don’t replace a real audience survey or interview, but they can help you quickly identify issues with a piece of content and how it might not connect with your audience. 

Here’s an example of an optimized page. GPT “Hank” helped make sure the section above the fold did what was intended. 

GPT Hank 1
GPT Hank 2
GPT Hank 3

Hank has said what’s working, what isn’t working, and where to improve.

But should you take his advice 100% of the time? Of course not. 

But the GPT helps quickly identify issues you may have missed. That’s where the real benefit of using a GPT comes in. 

Dig deeper: 7 custom GPT ideas to automate SEO workflows

Ensure data from your GPT is accurate

Nothing analyzed or generated by AI is conclusive evidence. If you’re unsure your GPT is giving you accurate information, double-check by prompting it to provide evidence from the sources you gave it. 

GPT Hank - data accuracy

The GPT can correct itself if the information sounds off. When it does, again ask for evidence from the persona information you provided to double-check the new information. 
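
The same evidence-citing habit can be baked into every review request rather than asked for after the fact. Here is a sketch of a plain prompt builder that does this (no API call is made here; the function name and wording are my own):

```python
def build_review_prompt(persona_block: str, draft: str) -> str:
    """Assemble a prompt that asks the persona to critique a draft
    and to cite evidence from the research it was given, mirroring
    the double-check step described above."""
    return (
        persona_block
        + "\n\nReview the draft below. For every claim you make about "
        "this audience, quote the supporting line from the persona "
        "research; if no evidence exists, say so instead of guessing.\n\n"
        "DRAFT:\n" + draft
    )
```

Because the instruction to quote supporting lines is part of every prompt, you spend less time prodding the model for evidence afterward.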

Update your persona-based GPT

You can always add more information to your GPT to make it more robust. 

To do this, go back to Explore GPTs in ChatGPT. 

Instead of Create, go to My GPTs in the top right-hand corner. 

Click on your persona. 

GPT Hank Haul

Click on Configure to update, add, or delete your current information.

GPT Hank Haul configuration

Remember that a persona is never a one-and-done situation. The more you learn about your audience, the more information you should feed your GPT to keep it up to date. 

Leverage persona GPTs for SEO content

Personas aren’t absolute, and AI can hallucinate. 

But both tools can still help you optimize content. 

Once you’re comfortable creating personas, you can build them for your general audience, specific segments, and individual campaigns.

SEO and marketing are always changing, and you can’t just set it and forget it. As you gain audience insights or if audience intent shifts, update information or delete anything no longer relevant in your GPT. 

When leveraged correctly, these tools can work with SEO to drive traffic and gain more conversions.

Google Ads tool is automatically re-enabling paused keywords

16 February 2026 at 23:36

Some advertisers are reporting that a Google Ads system tool designed for low-activity bulk changes is automatically enabling paused keywords — a behavior many account managers say they haven’t seen before.

What advertisers are seeing. Activity logs show entries tied to Google’s “Low activity system bulk changes” tool that include actions enabling previously paused keywords. The log entries appear as automated bulk updates, with a visible “Undo” option.

Historically, the tool has been associated mainly with pausing inactive elements, not reactivating them.

What we don’t know. Google hasn’t publicly documented the behavior or clarified whether this is an intentional feature, a limited experiment, or a bug.

It’s also unclear what triggers the reactivation or how broadly the behavior is rolling out.

Why we care. Unexpected keyword reactivation can quietly alter campaign delivery, affecting budgets, pacing, and performance — especially in tightly controlled accounts where paused keywords are intentional.

For agencies and in-house teams, the change raises new concerns about automation overriding manual controls.

What advertisers should do now. Account managers may want to review change histories regularly, watch for unexpected keyword activations, and use undo functions quickly if unintended changes appear.

Until Google provides clarification, closer monitoring may be necessary for accounts relying heavily on paused keyword structures.
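
One rough way to automate that monitoring is to scan an exported change history for reactivations. Here is a sketch assuming a CSV export with hypothetical column names (match them to whatever your actual export uses):

```python
import csv
import io

def flag_reactivations(change_log_csv: str) -> list[dict]:
    """Scan an exported change-history CSV for rows where a keyword
    was switched from Paused to Enabled. Column names are illustrative."""
    reader = csv.DictReader(io.StringIO(change_log_csv))
    return [
        row for row in reader
        if row.get("Item type") == "Keyword"
        and row.get("Change") == "Paused -> Enabled"
    ]
```

Run it against each export and alert on any non-empty result.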

First seen. The issue was first flagged by performance marketing consultant Francesco Cifardi on LinkedIn.

Before yesterday — Search Engine Land

How to work with your SEO agency to drive better results, faster

16 February 2026 at 17:00

Hiring an SEO agency can be a game-changer for brands looking to outshine the competition in search results. 

That said, an SEO agency is only as good as its partnership with its clients. When that partnership is strong, SEO’s true value can be realized. 

In practice, this means working together toward shared goals and keeping momentum high. Sometimes that’s easier said than done. 

Here’s what you can do to ensure you get the most from your SEO agency partnership. 

Because when you’re aligned, you make progress faster and, in turn, can better prove ROI.  

The SEO agency-client partnership

Align SEO with what moves the business

Your company sets the business goals, and SEO’s job is to drive the traffic that helps you reach them. 

The more you align on goals with your agency, the more effective an SEO program will be. 

Before any campaign is launched, the business and the agency need to discuss how to align SEO with your business goals. 

This meeting is even more effective when you can get cross-departmental stakeholders to weigh in. 

Objectives can be anything – for instance, market expansion, revenue, building brand authority, enhancing the customer experience, or something else. When executed well, SEO can support nearly any business goal.  

This is also an excellent time to facilitate SEO training across teams. 

When departments are aligned on the foundational concepts of SEO, they can understand SEO’s function and their role in it. 

Dig deeper: SEO prioritization: How to focus on what moves the needle

Set the agenda for a productive kickoff 

What does a productive kickoff meeting look like? Here are some things that are important to cover:

  • Your pain points: Even if you already discussed your SEO pain points during the sales call, it’s important that your SEO team hears it directly from you and has an opportunity to ask questions.
  • The ins and outs of your business: Help the SEO team understand your business as best you can. You know your industry better than anyone, and the more the agency knows, the better your SEO program will be.
  • The program’s scope: Make sure you understand the scope and everyone’s role in the project. For example, how long is each phase of the project? Who is responsible on the agency side for which tasks? Who will move things forward at the client company? 
  • In-house capabilities: Update your SEO team on your current capabilities and resources, such as how many writers, developers or designers you have available for tackling tasks.
  • Common roadblocks: Discuss how to prepare for common roadblocks in SEO implementation. Your SEO agency is well-positioned to speak about these kinds of things, so you can be proactive on your end. 
  • Communication methods: You will want to know how to communicate with the agency (emails, Slack channels, Zoom meetings, etc.) and how often. The more communication, the better. Both parties benefit from staying top of mind; the last thing you want is to sign a contract and then things go dark.  
  • Reporting methods: Find out how the agency will report progress. Is it monthly? Quarterly? In what formats will the reports be delivered? Will the reporting structure meet your needs to show ROI to stakeholders?

Setting all these expectations early creates accountability that keeps the project moving and makes it easier to measure success later.

If needed: Shift your mindset from ‘SEO vendor’ to expert partner

If you’ve put in the research, vetted several agencies, and hired the best one, then there’s a bit of mindset work that may need to happen next to make the relationship as strong as it can be. 

While blind trust isn’t the goal, be prepared to receive and trust your SEO agency’s advice – after all, that’s why you hired them.

Give your agency the visibility it needs to perform

This is relatively simple.

Giving your SEO agency full visibility into historical and real-time performance sets your SEO team up for success on day one. 

Set up a protocol for agency access to:  

  • Google Search Console, GA4, Bing Webmaster Tools, your CMS, and relevant third-party analytics or reporting tools. 
  • CRM data or lead-quality feedback to help the agency align SEO efforts with revenue goals.
  • Any context on past search performance, campaign history, and prior SEO initiatives.
  • Revenue or operational data as needed, so you have another way to corroborate SEO performance.

Finally, ensure agency access is built into onboarding for any new tools and systems you adopt that may impact SEO. 

Dig deeper: How to onboard an SEO agency the right way

Make SEO a cross-functional effort

SEO often needs the cooperation of many teams.  

If you’ve done a good job of including the department leaders in the SEO planning phase, then you will likely get more done. 


To remain accountable, it’s advisable that all necessary team members attend key meetings with your SEO agency. 

The more they hear things firsthand, the smoother the implementation will go.

However, even the best plans can go awry when teams with competing priorities collide. This might be a question of culture, but that doesn’t mean you can’t make progress.

You might look into solutions like:

  • Cross-departmental team-building activities. 
  • Open communication about the purpose and goals of the SEO project.
  • Feedback loops to review the status of your interdepartmental collaboration and identify ways to improve. 

The more streamlined and responsive the collaboration, the faster your SEO efforts can gain traction. 

Create SEO content that’s powered by brand knowledge 

Your agency brings deep expertise in SEO, but only you know your brand, offerings, and customers on a deep level. 

This is why collaboration on content is necessary to create truly relevant, helpful content that search engines will rank. 

Rather than relegating SEO content to the agency with minimal involvement, commit to being an active partner in the process. 

This can include the following action items: 

Align on voice, brand, and messaging early

Brand guidelines, tone of voice documents, and existing messaging frameworks are all helpful to share. 

Your agency should work as an extension of your marketing team, and that starts with having some guidelines for how the brand communicates. 

Transfer institutional knowledge

Nothing can get a content team up to speed quicker than reviewing existing content assets or plugging into an internal calendar. 

Here are some ways to transfer knowledge with your SEO agency:

  • Provide access to internal resources like product documentation, customer FAQs, or sales enablement content. 
  • Keep the team updated on relevant marketing and sales activities, such as events and promotions. 
  • Give real-time access to an editorial calendar to help plan for future content for the site, whether it’s editing existing pages or creating new ones.

Bring subject matter experts into the process

Identify in-house subject matter experts who can provide input or be interviewed as needed for the content. 

You can’t satisfy the “experience” or “expertise” aspect of Google’s E-E-A-T framework for quality content without some firsthand knowledge. 

Collaborate before content is written

Work together on outlines or briefs to align on structure and intent before drafting begins. 

Content is much stronger when in-house and agency teams are aligned before the content creation process starts.

Review for relevance

Review drafts not just for accuracy, but for alignment with your customers’ needs and expectations. 

The most important thing in SEO content is ensuring it’s relevant to your customers. 

The best content will align with your brand and your customers and sound like it came from your company. 

Strong SEO content comes from brands that bring the knowledge only they can provide.

Remove the approval friction that slows SEO down 

One of the biggest bottlenecks to SEO progress is waiting. Waiting for approvals, feedback, access, answers – all of these hinder your ability to compete faster in the search results. 

For a more streamlined process, approve all deliverables and tasks promptly. 

Committing to moving the project forward could mean setting hard deadlines for turnaround times. 

To make this step more efficient, you can look into: 

  • Analyzing the approval workflow and identifying any bottlenecks upfront. 
  • Eliminating unnecessary approval steps or people to simplify the process. 
  • Establishing clear review/approval guidelines upfront to reduce confusion, which can slow the approval process. 
  • Using technology that helps make the process smoother (like Slack or others), where people can collaborate with ease. 
  • Leaning on your SEO agency to prioritize which tasks will yield the highest return and go from there.

In SEO, speed is often a competitive advantage. Streamlining the approval process is one way to keep momentum. 

Where SEO progress slows

Prioritize implementation above all else

Don’t be surprised by the results you didn’t get from the work you didn’t do.

Often, there are SEO tasks that need to be implemented on your end and require resources. 

Not implementing SEO agency recommendations is a prevalent challenge, and probably one of the biggest reasons clients end up leaving an agency.

The sole purpose of spending anything on marketing is to bring more money in than went out. 

When internal teams stall on implementing SEO tasks, it can halt the agency’s momentum, hinder your search progress, and waste the company’s budget. 

Technical SEO execution is often where SEO projects lose the most momentum. 

This is where bringing in IT and dev teams early in the SEO process is invaluable. When they understand why a change matters and how it impacts performance, they are more likely to get on board. 

Regardless, you can still build in some guardrails to be proactive: 

  • Prioritize SEO tickets in sprint planning.  
  • Involve IT or dev early in discussions that include technical implementation. 
  • Allow direct communication between your agency and development team to speed up resolution. 
  • Create a process for flagging and tracking outstanding technical tasks. 

Take the time to make the updates that your SEO team says are worth the effort.

If the task is difficult or time-consuming but will have a big impact, do everything you can to get it done.

Doing what your competitors are unwilling or unable to do is how you win.

The technical foundation is what enables SEO to scale. The faster you can clear any roadblocks here, the sooner your investment starts delivering results.

Dig deeper: Why governance maturity is a competitive advantage for SEO

Stay engaged long after the kickoff

SEO is a long game. It’s only natural that excitement and momentum are high early on, but after a while, engagement can taper off. 

Client-agency partnerships often find their groove over time. The brand trusts the agency, and the agency knows what to do.

But from this place, cracks can begin to form. Maybe you’re not communicating as much, and that lack of communication can lead to gaps in knowledge – on both sides. 

Here are some tips to help you stay just as engaged on Day 365 as you were on Day 1.

Be present in reviews and check-ins

Be sure to attend regular check-ins and reporting calls to review progress and surface questions. 

It’s easy to miss calls when things pull you away, but staying committed to SEO means showing up, even if it’s just for 15 minutes. 

Keep SEO connected to business changes

Share updates on business priorities, product launches, or changes in market strategy. 

It’s important that SEO remains at the strategy table so it can adjust as needed. 

Use performance data to drive conversations and decisions

Trust in your SEO agency also comes with accountability. So hold your SEO team accountable. 

If you see a decline in rankings, traffic, or revenue coming from search, you need to have a conversation if they haven’t already brought it up. 

Use these performance insights to adjust tactics, reallocate resources, or explore new opportunities. 

Strong SEO results start with strong partnerships

SEO works best when both sides do their part. 

The more aligned and collaborative you are with your SEO agency, the faster your SEO program can gain traction and deliver value.

When both sides bring their expertise, stay engaged, and remove friction from the process, SEO becomes a strategic business initiative.

Andrea Cruz talks about turning client pressure into growth

13 February 2026 at 23:46


On episode 341 of PPC Live The Podcast, I speak to Andrea Cruz, Head of B2B at Tinuiti, to unpack a mistake many senior marketers quietly struggle with: freezing when clients demand answers you don’t immediately have.

The conversation explored how communication missteps can escalate client tension — and how the right mindset, preparation, and culture can turn those moments into career-defining growth.

From hands-on marketer to team leader

As Cruz advanced in her career, she shifted from managing campaigns directly to leading teams running large, complex accounts. That transition introduced a new challenge: representing work she didn’t personally execute day to day.

When clients pushed back — questioning performance or expectations — Cruz sometimes froze. Saying “I don’t know” or delaying a response could quickly erode trust and escalate frustration.

Her key realization: senior leaders are expected to provide perspective in the moment. Even without every detail, they must guide the conversation confidently.

How to buy time without losing trust

Through mentorship and experience, Cruz developed a practical technique: asking clarifying questions to gain thinking time while deepening understanding.

Examples include:

  • Asking clients to clarify expectations or timelines
  • Requesting additional context around their concerns
  • Confirming what the client already knows about the situation

These questions serve two purposes: they slow down emotionally charged moments and ensure responses address the real issue, not just the surface complaint.

For Cruz, this approach was especially important as a non-native English speaker, giving her space to process complex conversations and respond clearly.

A solutions-first culture beats blame

Cruz emphasized that mistakes are inevitable — but how teams respond defines long-term success.

At Tinuiti, the focus is not on assigning blame but on answering two questions:

  1. Where are we now?
  2. How do we get to where we want to be?

This solutions-oriented mindset creates psychological safety. Teams can openly acknowledge errors, run post-mortems, and identify patterns without fear. Cruz argues that leaders must model this behavior by sharing their own mistakes, not just scrutinizing others’.

That transparency builds trust internally and with clients.

Proactive communication builds stronger client relationships

Rather than waiting for clients to surface problems, Cruz encourages teams to raise issues first. Acknowledging underperformance — even when clients haven’t noticed — demonstrates accountability and strengthens partnerships.

She also recommends tailoring communication styles to each client. Some prefer concise updates; others want detailed explanations. Documenting these preferences helps teams deliver information in ways that resonate.

Regular check-ins about business roadblocks — not just campaign metrics — position agencies as strategic partners, not just media operators.

Common agency mistakes in B2B advertising

Cruz didn’t hold back on recurring issues she sees in audits:

  • Budgets spread too thin: Running too many channels with insufficient spend leads to meaningless data and weak performance.
  • Underfunded campaigns: B2B CPCs are inherently high. Campaigns generating only a few clicks per day rarely produce actionable results.

Her advice is blunt: if the budget can’t support a channel properly, it’s better not to run it.

AI is more than a summarization tool

On AI, Cruz cautioned against shallow usage. Treating AI as a simple spreadsheet summarizer misses its broader potential.

Her team is experimenting with advanced applications — automated audits, workflow integrations, and operational efficiencies. She compares AI’s role to medical diagnostics: a powerful assistant that augments expert judgment, not a replacement for it.

For marketers, that means staying curious and continuously exploring new use cases.

The takeaway: preparation and passion drive resilience

Cruz’s central message is simple: mistakes will happen. What matters is preparation, adaptability, and maintaining a solutions-first mindset.

By anticipating client needs, personalizing communication, and embracing experimentation, marketers can transform stressful moments into opportunities to build credibility.

The latest jobs in search marketing

13 February 2026 at 23:26
Search marketing jobs

Looking to take the next step in your search marketing career?

Below, you will find the latest SEO, PPC, and digital marketing jobs at brands and agencies. We also include positions from previous weeks that are still open.

Newest SEO jobs

(Provided to Search Engine Land by SEOjobs.com)

  • This Role in One Sentence Turn competitor backlink lists into clean outreach campaigns—find contacts, send smart messages, track replies tightly, and keep decisions fast. Is This You? You like tidy spreadsheets and clear notes more than big talk. You follow up without getting weird about it. You notice patterns fast (and write them down). You […]
  • Position Overview: Alan Gray LLC is seeking a strategic, research-driven professional to join our company as a Research & Insights Strategist, reporting directly to the COO. In this role you will drive industry intelligence, thought leadership and competitive positioning across the insurance and reinsurance landscape, helping transform these insights into actionable go-to-market activities. The ideal […]
  • The Company Renewal by Andersen is the replacement division of the 120-year-old Andersen Corporation. Andersen is the oldest and largest window and door manufacturer in North America. We focus on doing one thing, and doing it well, building the best products in the industry. We build the only unique window offering available in […]
  • Who We Are DCG is an award-winning, full-service engagement, digital, research, and data company with over 15 years of experience supporting the military, Veterans, and the American public. DCG strategically researches, plans, executes, and evaluates large-scale, multi-platform outreach initiatives across a wide range of mission-driven issues including human trafficking awareness, mental health stigma reduction, suicide […]
  • Job Description Principal Digital Strategist Location: Austin, TX (Hybrid. In office Monday, Wednesday, Friday) Reports To: Chief Digital Officer Team: Digital Strategy The Opportunity Intellibright is entering its next phase of growth. Our clients are larger. Their expectations are higher. And the work is no longer about managing channels or reporting metrics. It is about […]
  • Job Description Ingram Content Group (ICG) is searching for a Manager, Online Sales & Marketing to join our team in New York. In this role, you will lead metadata optimization and marketing initiatives and best practices for Ingram Publishing Services (IPS) publishers. You will also lead the marketing strategy for IndiePubs.com, Ingram’s direct to consumer e-commerce platform. […]
  • Job Description DeepSee delivers an open and flexible agentic platform to accelerate AI adoption for financial services in front, middle, and back-office operations. Our cloud-based platform seamlessly integrates with existing bank architectures, whether they’re just starting their AI transformation journey or looking to enhance existing in-house capabilities with Agentic AI solutions. With DeepSee’s pre-trained & […]
  • Digital Marketing Manager The Digital Marketing Manager will be expected to lead a team that effectively crafts and implements digital marketing initiatives including search marketing, social media, email marketing and lead management for clients in a variety of industries. Candidates should expect to be engaged in managing multiple team members, clients and simultaneous projects, assisting […]
  • About Us HighLevel is a cloud-based, all-in-one white-label marketing and sales platform that empowers marketing agencies, entrepreneurs, and businesses to elevate their digital presence and drive growth. We are proud to support a global and growing community of over 2 million businesses, from marketing agencies to entrepreneurs to small businesses and beyond. Our platform empowers […]
  • Omniscient Digital is an organic growth agency that partners with ambitious B2B SaaS companies like SAP, Adobe, Loom, and Hotjar to turn SEO and content into growth engines. About this role We’re hiring an SEO Outreach Specialist to partner with high-authority brands and build high-quality backlinks to support our clients’ growth and authority. You will […]

Newest PPC and paid media jobs

(Provided to Search Engine Land by PPCjobs.com)

  • About GenScript GenScript Biotech Corporation (Stock Code: 1548.HK) is a global biotechnology group. Founded in 2002, GenScript has an established global presence across North America, Europe, the Greater China, and Asia Pacific. GenScript’s businesses encompass four major categories based on its leading gene synthesis technology, including operation as a Life Science CRO, enzyme and synthetic […]
  • Senior PPC Manager Wilshire Law Firm is a distinguished, award-winning legal practice with over 18 years of experience, specializing in Personal Injury, Employee Rights, and Consumer Class Action lawsuits. We are dedicated to upholding the highest standards of Excellence and Justice and are united in our commitment to achieve the best outcome for our clients. […]
  • Job Description Job Description This is a remote position. Marketing Manager – Social Media, Email, SEM/SEO Type: Full-Time, Exempt Location: Remote (U.S.-based) Salary: $90,000 – $110,000 annually plus benefits Application Window: Open until filled About Outdoor Afro Outdoor Afro celebrates and inspires Black connections and leadership in nature. Our network connects Black people with our […]
  • Description The Performance Marketing Project Manager supports Mad Fish Digital’s growing client portfolio, managing paid media, SEO, and other performance marketing projects. This role blends strong project management leadership with a deep understanding of digital marketing channels & strategy. You will ensure high-volume, complex, and fast-moving workstreams are delivered efficiently, on time, within scope, and […]
  • Job Description Maxwood Furniture, a rapidly growing furniture company with over two decades of success, is home to an expanding portfolio of brands, including Max & Lily, Plank + Beam, Maxtrix, and more. With thriving direct-to-consumer (DTC) websites, we’re seeking a Google Ads Strategist to join our e-Commerce team. If you’re passionate about driving high-impact […]

Other roles you may be interested in

Senior Manager, SEO, Kennison & Associates (Hybrid, Boston MA)

  • Salary: $150,000 – $180,000
  • You’ll own high-visibility SEO and AI initiatives, architect strategies that drive explosive organic and social visibility, and push the boundaries of what’s possible with search-powered performance.
  • Every day, you’ll experiment, analyze, and optimize, elevating rankings, boosting conversions across the customer journey, and delivering insights that influence decisions at the highest level.

Backlink Manager (SEO Agency), SEOforEcommerce (Remote)

  • Salary: $60,000
  • Managing and overseeing backlink production across multiple campaigns
  • Reviewing and approving backlink opportunities (guest posts, niche edits, outreach-based links, etc.)

Senior Content Marketing Manager / Director, ClarityPay (Hybrid, New York, NY)

  • Salary: $95,000 – $135,000
  • Create high-quality content for core channels: website, LinkedIn, email, SMS, and internal communications
  • Write clear, compelling, and on-brand copy—from lifecycle messaging and short-form updates to long-form pages and narratives

PPC Specialist, BrixxMedia (Remote)

  • Salary: $80,000 – $115,000
  • Manage day-to-day PPC execution, including campaign builds, bid strategies, budgets, and creative rotation across platforms
  • Develop and refine audience strategies, remarketing programs, and lookalike segments to maximize efficiency and scale

Performance Marketing Manager, Mailgun, Sinch (Remote)

  • Salary: $100,000 – $125,000
  • Manage and optimize paid campaigns across various channels, including YouTube, Google Ads, Meta, Display, LinkedIn, and Connected TV (CTV).
  • Drive scalable growth through continuous testing and optimization while maintaining efficiency targets (CAC, ROAS, LTV)

SEO and AI Search Optimization Manager, Big Think Capital (New York)

  • Salary: $100,000
  • Own and execute Big Think Capital’s SEO and AI search (GEO) strategy
  • Optimize website architecture, on-page SEO, and technical SEO

Senior Copywriter, Viking (Hybrid, Los Angeles Metropolitan Area)

  • Salary: $95,000 – $110,000
  • Editorial features and travel articles for onboard magazines
  • Seasonal web campaigns and themed microsites

Digital Marketing Manager, DEPLOY (Hybrid, Tuscaloosa, AL)

  • Salary: $80,000
  • Strong knowledge of digital marketing tools, analytics platforms (e.g., Google Analytics), and content management systems (CMS).
  • Experience Managing Google Ads and Meta ad campaigns.

Paid Search Marketing Manager, LawnStarter (Remote)

  • Salary: $90,000 – $125,000
  • Manage and optimize large-scale, complex SEM campaigns across Google Ads, Bing Ads, Meta Ads and other search platforms
  • Activate, optimize and make efficient Local Services Ads (LSA) at scale

Senior Manager, SEO, Turo (Hybrid, San Francisco, CA)

  • Salary: $168,000 – $210,000
  • Define and execute the SEO strategy across technical SEO, content SEO, on-page optimization, internal linking, and authority building.
  • Own business and operations KPIs for organic growth and translate them into clear quarterly plans.

Search Engine Optimization Manager, NoGood (Remote)

  • Salary: $80,000 – $100,000
  • Act as the primary strategic lead for a portfolio of enterprise and scale-up clients.
  • Build and execute GEO/AEO strategies that maximize brand visibility across LLMs and AI search surfaces.

Senior Content Manager, TrustedTech (Irvine, CA)

  • Salary: $110,000 – $130,000
  • Develop and manage a content strategy aligned with business and brand goals across blog, web, email, paid media, and social channels.
  • Create and edit compelling copy that supports demand generation and sales enablement programs.

Note: We update this post weekly. So make sure to bookmark this page and check back.

Cloudflare’s Markdown for Agents AI feature has SEOs on alert

13 February 2026 at 22:25

Cloudflare yesterday announced its new Markdown for Agents feature, which serves machine-friendly versions of web content alongside traditional human-facing pages.

  • Cloudflare described the update as a response to the rise of AI crawlers and agentic browsing.
  • When a client requests text/markdown, Cloudflare fetches the HTML from the origin server, converts it at the edge, and returns a Markdown version.
  • The response also includes a token estimate header intended to help developers manage context windows.
  • Early reactions focused on the efficiency gains, as well as the broader implications of serving alternate representations of web content.

What’s happening. Cloudflare, which powers roughly 20% of the web, said Markdown for Agents uses standard HTTP content negotiation. If a client sends an Accept: text/markdown header, Cloudflare converts the HTML response on the fly and returns Markdown. The response includes Vary: accept, so caches store separate variants.

  • Cloudflare positioned the opt-in feature as part of a shift in how content is discovered and consumed, with AI crawlers and agents benefiting from structured, lower-overhead text.
  • Markdown can cut token usage by up to 80% compared to HTML, Cloudflare said.
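The negotiation flow described above hinges on the client's `Accept` header and a `Vary: accept` response header. Here's a minimal sketch of that logic; the naive HTML-to-Markdown conversion and the token-estimate header name are illustrative assumptions, not Cloudflare's actual implementation:

```python
import re

def negotiate(accept_header: str, origin_html: str) -> dict:
    """Pick a response variant via HTTP content negotiation."""
    if "text/markdown" in accept_header:
        # Naive HTML-to-Markdown conversion, for illustration only.
        body = re.sub(r"<h1>(.*?)</h1>", r"# \1\n", origin_html)
        body = re.sub(r"</?p>", "\n", body)
        body = re.sub(r"<[^>]+>", "", body).strip()
        return {
            "Content-Type": "text/markdown",
            "Vary": "accept",  # tells caches to store separate variants per Accept
            "x-token-estimate": str(len(body) // 4),  # assumed header name; rough estimate
            "body": body,
        }
    # Default: pass the origin HTML through unchanged.
    return {"Content-Type": "text/html", "Vary": "accept", "body": origin_html}

resp = negotiate("text/markdown", "<h1>Title</h1><p>Hello world</p>")
```

Because the response varies by `Accept`, the `Vary: accept` header is what keeps intermediate caches from serving the Markdown variant to a browser, or vice versa.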

Security concern. SEO consultant David McSweeney said Cloudflare’s Markdown for Agents feature could make AI cloaking trivial because the Accept: text/markdown header is forwarded to origin servers, effectively signaling that the request is from an AI agent.

  • A standard request returns normal content, while a Markdown request can trigger a different HTML response that Cloudflare then converts and delivers to the AI, McSweeney showed on LinkedIn.
  • The concern: sites could inject hidden instructions, altered product data, or other machine-only content, creating a “shadow web” for bots unless the header is stripped before reaching the origin.
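The mechanism behind the concern is simple to sketch: because the `Accept: text/markdown` header is forwarded to the origin, an origin server can branch on it and return different source HTML for AI-bound requests. A hypothetical, deliberately simplistic origin handler illustrating the risk (not a recommendation):

```python
def origin_response(request_headers: dict) -> str:
    """Return origin HTML; branches on the forwarded Accept header."""
    accept = request_headers.get("Accept", "")
    if "text/markdown" in accept:
        # Machine-only variant: content no human visitor would ever see.
        # This is the "shadow web" cloaking risk described above.
        return "<p>Top-rated product. Every reviewer recommends it.</p>"
    return "<p>A solid product with mixed reviews.</p>"

human = origin_response({"Accept": "text/html"})
agent = origin_response({"Accept": "text/markdown"})
```

Stripping or normalizing the header before it reaches the origin would close this particular branch point, which is why McSweeney flags the header forwarding specifically.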

Google and Bing’s markdown smackdown. Recent comments from Google and Microsoft representatives discourage publishers from creating separate markdown pages for large language models. Google’s John Mueller said:

  • “In my POV, LLMs have trained on – read & parsed – normal web pages since the beginning, it seems a given that they have no problems dealing with HTML. Why would they want to see a page that no user sees? And, if they check for equivalence, why not use HTML?”

And Microsoft’s Fabrice Canel said:

  • “Really want to double crawl load? We’ll crawl anyway to check similarity. Non-user versions (crawlable AJAX and like) are often neglected, broken. Humans eyes help fixing people and bot-viewed content. We like Schema in pages. AI makes us great at understanding web pages. Less is more in SEO !”
  • Cloudflare’s feature doesn’t create a second URL. However, it generates different representations based on request headers.

The case against markdown. Technical SEO consultant Jono Alderson said that once a machine-specific representation exists, platforms must decide whether to trust it, verify it against the human-facing version, or ignore it:

  • “When you flatten a page into markdown, you don’t just remove clutter. You remove judgment, and you remove context.”
  • “The moment you publish a machine-only representation of a page, you’ve created a second candidate version of reality. It doesn’t matter if you promise it’s generated from the same source or swear that it’s ‘the same content’. From the outside, a system now sees two representations and has to decide which one actually reflects the page.”

Dig deeper. Why LLM-only pages aren’t the answer to AI search

Why we care. Cloudflare’s move could make AI ingestion cheaper and cleaner. But could it be considered cloaking if you’re serving different content to humans and crawlers? To be continued…

Google Ads adds ROAS-based tool for valuing new customers

13 February 2026 at 22:08

Google Ads is rolling out a feature that lets advertisers calculate conversion value for new customers based on a target return on ad spend (ROAS), automatically generating a suggested value instead of relying on manual estimates.

The update is designed for campaigns using new customer acquisition goals, where advertisers want to bid more aggressively to attract first-time buyers.

How it works. Advertisers enter their desired ROAS target for new customers, and Google Ads proposes a conversion value aligned with that goal. The system removes some of the guesswork involved in estimating how much a new customer should be worth in bidding models.

The feature doesn’t yet adjust dynamically at the auction, campaign, or product level. Advertisers still apply the value at a broader setting rather than letting the system vary bids based on context.

Why we care. Assigning the right value to a new customer is a weak spot in performance bidding. Many advertisers manually set a flat value that doesn’t always reflect profitability or long-term goals.

By tying suggested conversion values to a target ROAS, advertisers can now optimize toward more strategy-driven bidding, potentially improving how acquisition campaigns balance growth and efficiency.

What advertisers are saying. Early reactions suggest the feature is a meaningful improvement over static manual inputs. Savvy Revenue founder Andrew Lolk argues the next step would be auction-level intelligence that adjusts values depending on campaign or product performance.

What to watch. If Google expands the feature to support more granular adjustments, it could further reshape how advertisers structure acquisition strategies and value lifetime customer growth.

For now, the tool offers a more structured way to calculate new customer value.

First seen. This update was first spotted by Savvy Revenue founder Andrew Lolk, who showed the new setting on LinkedIn.

SEO leaders: stop chasing rankings, start building visibility systems

13 February 2026 at 19:00
AI search is forcing SEO to become organizational infrastructure

SEO is moving out of the marketing silo into organizational design. Visibility now depends on how information is structured, validated, and aligned across the business.

When information is fragmented or contradictory, visibility becomes unstable. The risk isn’t just ranking volatility – it’s losing control of how your brand is interpreted and cited.

For SEO leaders, the choice is unavoidable: remain a channel optimizer or shape the systems that govern how your organization is understood and cited. That shift isn’t happening in a vacuum. AI systems now interpret, reconcile, and assemble information at scale.

The visibility shift beyond rankings

The future of organic search will be shaped by LLMs alongside traditional algorithms. Optimizing for rankings alone is no longer enough. Brands must optimize for how they are interpreted, cited, and synthesized across AI systems.

Clicks may fluctuate and traffic patterns may shift, but the larger change is this: visibility is becoming an interpretation problem, not just a positioning problem. AI systems assemble answers from structured data, brand narratives, third-party mentions, and product signals. When those inputs conflict, inconsistency becomes the output.

In the AI era, collaboration can’t be informal or personality-driven. LLMs reflect the clarity, consistency, and structure of the information they ingest. When messaging, entity signals, or product data are fragmented, visibility fragments with them.

This is a leadership challenge. Visibility can’t be achieved in a silo. It requires redesigning the systems that govern how information is created, validated, and distributed across the organization. That’s how visibility becomes structural, not situational.

If visibility is structural, it needs a system.

Building the visibility supply chain

Collaboration shouldn’t depend on whether the SEO manager and PR manager get along. It must be built into the content supply chain.

To move from a marketing silo to an operational design, we must treat content like an industrial product that requires specific refinement before it’s released into the ecosystem.

This is where visibility gates come in: a series of nonnegotiable checkpoints that filter brand data for machine consumption.

Implementing visibility gates

Think of your content moving through a high-pressure pipe. At each joint, a gate filters out noise and ensures the output is pure:

  • The technical gate (parsing)
    • The filter: Does the new product page template use valid schema.org markup (product, FAQ, review)?
    • The goal: Ensuring the raw material is structured so LLMs can ingest the data without friction.
  • The brand signal gate (clustering)
    • The filter: Does the PR copy align with our core entities? Are we using terminology that helps LLMs cluster our brand correctly?
    • The goal: Removing linguistic drift that confuses an LLM’s understanding of who we are.
  • The accessibility/readability gate (chunking)
    • The filter: Is the content structured for RAG (retrieval-augmented generation) systems?
    • The goal: Moving away from fluff and towards high-information-density prose that can be easily chunked and retrieved by an AI.
  • The authority and de-duplication gate (governance)
    • The filter: Does this asset create “knowledge cannibalization” or internal noise?
    • The goal: Acting as a final sieve to remove conflicting information, ensuring the LLM sees only one single source of truth.
  • The localization gate (verification)
    • The filter: Is the entity information consistent across global regions?
    • The goal: Ensuring cross-referenced data points align perfectly to build model trust.
The visibility supply chain
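The technical gate lends itself to automation in a CI or CMS pipeline. A minimal sketch, assuming the gate requires Product and FAQPage JSON-LD on a page template; the required types and the regex-based extraction are illustrative assumptions:

```python
import json
import re

REQUIRED_TYPES = {"Product", "FAQPage"}  # assumed gate policy

def passes_technical_gate(html: str) -> bool:
    """Fail if JSON-LD is invalid or required schema.org types are missing."""
    found = set()
    pattern = r'<script type="application/ld\+json">(.*?)</script>'
    for block in re.findall(pattern, html, re.DOTALL):
        try:
            data = json.loads(block)
        except json.JSONDecodeError:
            return False  # invalid markup fails the gate outright
        types = data.get("@type")
        found.update([types] if isinstance(types, str) else types or [])
    return REQUIRED_TYPES <= found

page = (
    '<script type="application/ld+json">'
    '{"@context": "https://schema.org", "@type": "Product", "name": "X"}'
    '</script>'
    '<script type="application/ld+json">'
    '{"@context": "https://schema.org", "@type": "FAQPage"}'
    '</script>'
)
```

Wired into a publish workflow, a check like this makes the gate nonnegotiable in practice: a template that fails validation never ships, regardless of deadline pressure.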

If gates protect what enters the ecosystem, accountability ensures that behavior changes.

Embedding visibility into cross-functional OKRs

But alignment without visibility into results won’t sustain change.

The most sophisticated infrastructure will fail if it relies on the SEO team’s influence alone.

To move beyond polite collaboration, visibility must be codified into the organization’s performance DNA.

We need to shift from SEO-specific goals to shared visibility OKRs.

When a product owner is measured on the machine-readability of a new feature, or a PR lead is incentivized by entity citation growth, SEO requirements suddenly migrate from the bottom of the backlog to the top of the sprint.

What shared OKRs look like in an operational design:

  • For product teams: “Achieve 100% schema validation and <100ms time-to-first-byte for all top-tier entity pages.”
  • For PR and communications: “Increase ‘brand-as-a-source’ citations in LLM responses by 15% through high-authority, entity-aligned placements.”
  • For content teams: “Ensure 90% of new assets meet the ‘high information density’ threshold for RAG retrieval.”

When stakeholders’ KPIs are tied to the brand’s digital footprint, visibility is no longer “the SEO team’s job.” Instead, it becomes a collective business imperative. 

This is where the magic happens: the organizational structure finally aligns with the way modern search engines actually work.

Measuring visibility across the organization

The gates ensure the quality of what we put into the digital ecosystem; the unified visibility dashboard measures what we get out. Breaking down silos starts with transparent data.

If the PR team can see which mentions drive AI citations and source links in AI Overviews, they’re more likely to shift toward high-authority, contextually relevant publications instead of chasing any media outlet.

We need to shift from reporting rankings to reporting entity health and Share of Model (SoM). This dashboard is the organization’s single source of truth, showing that when we pass the visibility gates correctly, our brand authority grows with humans and machines.

Systems and incentives matter, but they don’t operate on their own.

Dig deeper: Why most SEO failures are organizational, not technical

Hiring for AI-era visibility

Having the right infrastructure isn’t enough. We need a specific set of qualities in the workforce to drive this model. To navigate the visibility transformation, we need to move away from hiring generalists and start hiring for the two distinct pillars of an operational search strategy.

In my experience, this requires a strategic duo: the hacker and the convincer.

Feature | The hacker (technical architect) | The convincer (visibility advocate)
Core mission | Ensuring the brand is discoverable by machines. | Ensuring the brand is supported by humans.
Primary domain | RAG architecture, schema, vector databases, and LLM testing. | Cross-departmental OKRs, C-suite buy-in, and PR/brand alignment.
Success metric | Share of model (SoM) and information density. | Resource allocation and budget growth.
The gate focus | Technical, accessibility, and authority gates. | Brand signal and localization gates.

The hacker: The engine room

Deeply technical, driven, and a relentless early adopter. They don’t just “do SEO.” They reverse-engineer how Perplexity attributes trust and how Google’s knowledge vault weighs brand entities. 

They find the “how.” They aren’t just optimizing for a search bar, but are optimizing for agentic discovery, ensuring your brand is the path of least resistance for an LLM’s reasoning engine.

The convincer: The social butterfly of data

This is the visionary who brings people together and talks the language of business results. They act as the social glue, ensuring the hacker’s technical insights are actually implemented by the brand, tech, and PR teams. They translate schema validation into executive visibility, ensuring that the budget flows where it’s needed most.

Hacker vs. convincer


How AI visibility reshapes in-house and agency roles

As roles evolve, the brand-agency relationship shifts with them. If you’re an in-house SEO manager today, you’re likely evolving into a chief visibility officer, focusing on the “convincer” role of internal politics and resource allocation.

Historically, agencies were the training ground for talent, and brands hired them for execution. That dynamic may flip. In this new era, brands could become training grounds for junior specialists who need to understand a single entity deeply and manage its internal gates. 

Meanwhile, agencies may evolve into elite strategic partners staffed by seasoned visibility hackers who help brands navigate high-level visibility transformation that in-house teams are often too siloed or time-constrained to see.

Dig deeper: Why governance maturity is a competitive advantage for SEO

Leading the transition in the first 90 days

To prepare your team for the shift to SEO as an operational approach, take these steps:

  • Set the vision: Do you want to be part of the change? Define what visibility-first looks like for your business.
  • Take stock of talent: Do you have hackers and convincers? Audit your team not just for skills, but for mindset.
  • Audit the gaps: Where does communication break down? Find friction points between SEO and PR, or SEO and product, and fix them quickly.
  • Shift the KPIs: Move away from rankings and toward channel authority, impressions, sentiment share, and, most importantly, revenue and leads.
  • Be radically transparent: Clarity is key. You’ll need new templates, job descriptions, and responsibilities. Data should be shared in real time. There’s no room for siloed thinking.

What the first 90 days should look like:

  • Days 1-30 (Audit): Map your brand’s entity footprint. Where does your brand data live, and where is it conflicting?
  • Days 31-60 (Infrastructure): Embed visibility gates into your CMS or project management tool, such as Jira or Asana.
  • Days 61-90 (Incentives): Tie 10% of the PR and product teams’ bonuses to information integrity or AI citation growth.

The SEO leader as a systems architect

As we move further into the age of AI, the successful SEO leader will no longer be the person who simply moves a page from position four to position one. They’ll be the systems architect who builds the infrastructure that allows a brand to be seen, understood, and recommended by machines and humans alike.

This transition is messy. It requires challenging old thought patterns and communicating transparently and directly to secure buy-in. But by redesigning the structures that create silos, we don’t just “do SEO.” We build a resilient organization that is visible by default, regardless of what the next algorithm or LLM brings.

The future of search isn’t just about keywords. It’s about how your organization’s information flows through the digital ecosystem. It’s time to stop optimizing pages and start optimizing organizations.

Dig deeper: AI governance in SEO: Balancing automation and oversight

Why creative, not bidding, is limiting PPC performance

13 February 2026 at 18:00

For a long time, PPC performance conversations inside agencies have centered on bidding – manual versus automated, Target CPA versus Maximize Conversions, incrementality debates, budget pacing and efficiency thresholds.

But in 2026, that focus is increasingly misplaced. Across Google Ads, Meta Ads, and other major platforms, bidding has largely been solved by automation. 

What’s now holding performance back in most accounts isn’t how bids are set, but the quality, volume, and diversity of creative being fed into those systems. Recent platform updates, particularly Meta’s Andromeda system, make this shift impossible to ignore.

Bidding has been commoditized by automation

Most advertisers today are using broadly similar bidding frameworks.

Google Smart Bidding uses real-time signals across device, location, behavior, and intent that humans can’t practically manage at scale. Meta’s delivery system works in much the same way, optimizing toward predicted outcomes rather than static audience definitions.

In practice, this means most advertisers are now competing with broadly the same optimization engines.

Google has been clear that Smart Bidding evaluates millions of contextual signals per auction to optimize toward conversion outcomes. Meta has likewise stated that its ad system prioritizes predicted action rates and ad quality over manual bid manipulation.

The implication is simple. If most advertisers are using the same optimization engines, bidding is no longer a sustainable competitive advantage. It’s table stakes.

What differentiates performance now is what you give those algorithms to work with – and the most influential input is creative.

Andromeda makes creative a delivery gate

Meta’s Andromeda update is the clearest evidence yet that creative is no longer just a performance lever. It’s now a delivery prerequisite. This matters because it changes what gets shown, not just what performs best once shown.

Meta published a technical deep dive explaining Andromeda, its next-generation ads retrieval and ranking system, which fundamentally changes how ads are selected.

Instead of evaluating every eligible ad equally, Meta now filters and ranks ads earlier in the process using AI models trained heavily on creative signals, improving ad quality by more than 8% while increasing retrieval efficiency.

What this means in practice is critical for marketers. Ads that don’t generate strong engagement signals may never meaningfully enter the auction, regardless of targeting, budget, or bid strategy.

If your creative doesn’t perform, the platform doesn’t just charge you more. It limits your reach altogether.

Dig deeper: Inside Meta’s AI-driven advertising system: How Andromeda and GEM work together

Creative is now the primary optimization input on Meta

Meta has repeatedly stated that creative quality is one of the strongest drivers of auction outcomes.

In its own advertiser guidance, Meta highlights creative as a core factor in delivery efficiency and cost control. Independent analysis has reached the same conclusion.

A widely cited Meta-partnered study showed that campaigns using a higher volume of creative variants saw a 34% reduction in cost per acquisition, despite lower impression volume.

The reason is straightforward. More creative gives the system more signals. More signals improve matching. Better matching improves outcomes.

Andromeda accelerates this effect by learning faster and filtering harder. This is why many advertisers are experiencing plateaus even with stable bidding and budgets. Their creative inputs are not keeping pace with the system’s learning requirements.

Google Ads is quietly making the same shift

While Google has not branded its changes as dramatically as Meta, the direction is the same. Performance Max, Demand Gen, Responsive Search Ads, and YouTube Shorts all rely heavily on creative assets to unlock inventory.

Google has explicitly stated that asset quality and diversity influence campaign performance. Accounts with limited creative assets consistently underperform those with strong asset coverage, even when bidding strategies and budgets are otherwise identical.

Google has reinforced this by introducing creative-focused tools such as Asset Studio and Performance Max experiments that allow advertisers to test creative variants directly. As with Meta, the algorithm can only optimize what it is given.

Strong creative expands reach and efficiency. Weak creative constrains both.

Dig deeper: A quiet Google Ads setting could change your creative

The plateau problem agencies keep hitting

Many agencies are seeing the same pattern across accounts. Performance improves after structural fixes or bidding changes. Then it flattens.

Scaling spend leads to diminishing returns. The instinct is often to revisit bids or efficiency targets. But in most cases, the real constraint is creative fatigue.

Audiences have seen the same hooks, visuals, and messages too many times. Engagement drops. Estimated action rates fall. Delivery becomes more expensive.

This isn’t a platform issue. It’s a creative cadence issue. Creative testing is the missing optimization lever in mature accounts.


The agency bottleneck: Creative production

Most agencies are structurally set up to optimize bids, budgets, and structure faster than they can produce new creative.

Creative takes time. It requires strategy, copy, design, video, approvals, and iteration. Many retainers still treat creative as a one-off or an add-on rather than a core performance input. The result is predictable. Accounts are technically sound but creatively starved.

If your account has had the same core ads running for three months or more, performance is almost certainly being limited by creative volume, not optimization skill.

High-performing accounts today look messy on the surface with dozens of ads, multiple hooks, frequent refreshes, and constant testing. That isn’t inefficiency. That’s how modern PPC works.

Creative testing is a process, not a campaign

One of the biggest mistakes agencies make is treating creative testing as episodic. Launch new ads. Wait four weeks. Review results. Declare winners and losers. That approach is too slow for how fast platforms learn and audiences fatigue.

High-performing teams treat creative like a product roadmap. There’s always something new in development. Always something learning. Always something being retired.

Effective creative testing focuses on one variable at a time: hook, opening line, visual style, offer framing, social proof, or call to action.

It’s not about finding “the best ad.” It’s about building a library of messages the algorithm can deploy to the right people at the right time.

Dig deeper: Your ads are dying: How to spot and stop creative fatigue before it tanks performance

What agencies should do differently

Once you accept that creative is the constraint, the operational implications are unavoidable. If creative is the main constraint, agency processes need to change.

Creative should be planned alongside media, not after it. Retainers should include ongoing creative production, not just optimization time. Testing frameworks should be explicit and documented.

At a minimum, agencies should be asking:

  • How often are we refreshing creative by platform?
  • Are we testing new hooks or just new designs?
  • Do we have enough volume for the algorithm to learn?
  • Are we feeding performance insights back into creative strategy?

The best agencies now operate closer to content studios than optimization factories. That’s where the value is.

Creative is the performance lever

Bidding, tracking, and structure still matter. But in 2026, those are table stakes.

If your PPC performance is stuck, the answer is rarely another bidding tweak. It’s almost always better creative. More of it. Faster iteration. Smarter testing.

The platforms have told us this. The data supports it. The accounts prove it.

Creative is no longer a nice-to-have. It’s the performance lever. The agencies that recognize that will be the ones that continue to grow.

Dig deeper: Cross-platform, not copy-paste: Smarter Meta, TikTok, and Pinterest ad creative

How to optimize news content for today’s social-first Google SERP

13 February 2026 at 17:00

We’re in a new era where web content visibility is fragmenting across a wide range of search and social platforms.

While still a dominant force, Google is no longer the default search experience. Video-based social media platforms like TikTok and community-based sites like Reddit are becoming popular search engines with dedicated audiences. 

This trend is impacting how news content is consumed. Google’s current news SERP evolution is directly influenced by the personalization of query responses offered by LLMs and the rise in influencer authority enabled by social media platforms. 

Google has responded by creating its own AI-powered SERP features, such as AI Overviews and AI Mode, and surfacing more content from social media platforms that provide the “helpful, reliable, people-first content” that Google’s ranking systems prioritize.

Now that search and social are more intertwined than ever, a new paradigm is needed – one in which newsroom audience teams made up of social media, SEO, and AI specialists work holistically on a daily basis toward a cohesive content visibility goal. 

When optimizing news content for social platforms, publishers should also consider how those posts may perform in the Google SERP. I’ll cover optimizing for specific SERP features below, but first, you’ll want to think about making your news content social-friendly.

Optimize news content for social media platforms

First, a dose of sanity. Publishers should resist the temptation to optimize content for every social media platform.

It’s better to pick one or two social platforms – where an audience is already established and that offer the best opportunity for growth – than to create accounts on every social platform and let them languish.

Review analytics and conduct audience surveys to gain insights into which platforms your audience already consumes news content. 

Here’s a breakdown by platform of which content types work best and how content from each platform can appear on Google.

YouTube

If you’re producing YouTube video content, make sure to follow video SEO best practices. This comprehensive YouTube SEO guide will help you develop a successful video strategy and ensure video titles align with your content.

Per Google, YouTube’s search ranking system prioritizes three elements: 

  • Relevance: Metadata needs to accurately represent video content to be surfaced as relevant for a search query.
  • Engagement: Includes factors such as a video’s watch time for a specific user query.
  • Quality: Video content should show topic expertise, authoritativeness, and trustworthiness.

One trend I’ve noticed in YouTube videos on the Google SERP is that older event content can continue to rank long after the event has ended, well after the related article has faded in search.

Explainer videos also demonstrate longevity on the Google SERP. In this government shutdown explainer video, Yahoo Finance includes the expert’s credentials in the description box, further emphasizing the topic expertise element that YouTube’s ranking system prioritizes. 

YouTube can also help your visibility in AI Overviews. Nearly 30% of Google AI Overviews cite YouTube, according to BrightEdge. YouTube was cited most often for tutorials, reviews, and shopping-related queries.

Dig deeper: YouTube is no longer optional for SEO in the age of AI Overviews

Facebook

While Facebook may not be the cool kid on the block anymore, the social platform has served a diverse set of users over its long history, from its initial audience of college students to today’s older, majority-female audience, per Pew Research Center data.

Community-based content and entertainment news that sparks conversation is key to engagement success on Facebook. 

Meta removed the dedicated news tab on Facebook in 2023-2024, cratering Facebook referrals for news publishers. Still, Facebook posts have been gaining Google SERP visibility over the last year, so it may be time to reconsider the platform from a search perspective.

In my review of Google search visibility, Facebook posts about holidays and the full moon appear consistently, and the short-form video format is popular. 

X

Since Elon Musk took over the platform in 2022, the audience has shifted to the political right. While the left’s exodus made headlines, usage of X for news is stable or increasing, especially in the U.S., according to the 2025 Digital News Report from the Reuters Institute. 

Breaking news, live updates, and political news dominate X feeds and Google visibility, but don’t overlook sports content, where X posts perform well on both the Google SERPs and Discover. 

Instagram

This platform emphasizes stylish, visually driven stories and topics, such as red-carpet fashion at award shows. Health topics, especially nutrition and self-care, are also popular. 

Sports posts from Instagram, especially game highlights, often surface on the Google SERP as part of a dedicated publisher carousel or in “What people are saying.” 

Reddit

A unique aspect of Reddit is that its user base is often not on other social platforms. For news publishers, this can mean a golden opportunity for niche community engagement, but also requires a dedicated strategy that may not translate well to other platforms.

A wide range of news content can perform well on Reddit, from trending topics to health explainers to live sports coverage, but having a deep understanding of the platform’s audience is critical, as is following the Reddit rules of conduct.

Publishers should spend time studying the types of news articles and conversations that drive strong engagement on subreddits before posting anything. Per Reddit, the platform’s largest audiences gravitate toward the following topics:

  • Technology.
  • Health.
  • Direct to consumer (DTC).
  • Gaming.
  • Parenting.

Reddit’s community discussion forum content makes it a natural fit for the “What people are saying” carousel on the Google SERP. The Reddit posts I see most often surfaced by Google relate to sports, entertainment, and business.

Dig deeper: A smarter Reddit strategy for organic and AI search visibility

TikTok

The TikTok user base leans female and has a greater share of people of color. Approximately half of 18- to 29-year-olds in the U.S. self-report going on TikTok at least once daily, per Pew Research data.

Visual, conversational, and opinion-based content for younger audiences performs best on TikTok. Niche community content also works well; think fashion, #BookTok, etc.

Remember that short-form video requires a dedicated strategy to maximize engagement and reach, and it’s important to keep in mind that TikTok audiences value authenticity over the polish of a professional newsroom production.

Entertainment and shopping content (sales, product reviews) are the categories in which TikTok demonstrates the most Google visibility.

Pinterest

While Pinterest may feel like an old-school social platform, Gen Z is its fastest-growing audience. That being said, Pinterest attracts users from across a wide range of age groups. According to Pinterest’s global data, its audience is 70% women and 30% men.

Don’t overlook the power of Pinterest for lifestyle content niches. Trends around fashion, home decor, DIY, crafts, recipes, and celebrity content are top performers on this visual social platform. 

News publishers interested in this platform should have robust lifestyle content that is actionable and delivered with a motivational tone.

How-to and before/after formats are popular. Excellent quality visuals in a vertical format with a 2:3 aspect ratio and text overlays are recommended. Pinterest supports a more relaxed posting schedule compared to other social platforms. Weekly posting is ideal, since much of the content on Pinterest is evergreen.

Similar to Google Trends, Pinterest Trends can help news publishers stay on top of trending topics on the platform. 


Social content opportunities by Google SERP feature

If you’re looking to appear in a particular SERP feature, it’s helpful to know how social platform content appears in each type.

Top Stories (or News Box)

The crown jewel of the Google SERP for news publishers, this feature is dedicated to breaking news and developing news stories as well as capturing updates for the big news stories and trends of the moment.

Thumbnail selection is critical for Top Stories. Publishers should pay close attention to the News Box descriptive labels to ensure content is optimized to match the specific intent or angle Google is seeking.

While historically a SERP feature that showcased traditional news publishers, Top Stories now includes relevant social media content in the mix. The post in Top Stories below is an Instagram Reel from the Detroit Free Press.

Top stories - 2026 Detroit Auto Show

Live update articles are often featured in the News Box and are a great format to embed social media posts.

Embedded posts help break up walls of text and showcase a news publisher’s live, original reporting from the scene, eyewitness accounts, and related social content that demonstrates subject expertise.

What people are saying

This Google SERP feature is ideal for capturing audience reaction and user-generated content from a variety of social platforms. Short-form video is often featured in this space.

It’s a showcase for any story or topic that drives emotional engagement, including reactions to everything from a celebrity death to a sporting event outcome to a viral trend. Severe weather is also a recurring topic.

What people are saying

Knowledge Panel

There’s a growing interest in this Google SERP feature among news publishers, especially those publishers who produce entertainment content.

Depending on the configuration, publishers have the opportunity to earn a ranking for an image, social post, or article, such as a celebrity biography.

While content opportunities are limited in the Knowledge Panel, they offer more exclusivity, which can increase CTR. YouTube and Instagram are commonly cited here, but X and TikTok have also been growing in visibility.

Google knowledge panel - Tom Holland

Google Discover

This social-search hybrid product, which features trending, emotionally engaging content based on a user’s web and app activity, requires a separate optimization strategy.

The keys to Discover visibility are identifying topics that spark curiosity and ensuring articles are formatted for frictionless consumption. 

Discover has been considered a “black box” when it comes to content optimization, but there are several basic elements to implement that can increase visibility.

Viral hits may spike a news publisher’s Discover performance temporarily, but as Harry Clarkson-Bennett outlines, publishers need to analyze their Discover performance over time at the entity level to build a smart optimization strategy.

Google’s official Discover optimization tips discourage clickbait practices that actually work quite well on the platform, such as salacious quotes in headlines and content about controversial topics and strong opinion perspectives.

I would never recommend a publisher produce clickbait, but for tabloid publishers, content with a strong, contentious perspective overperforms on Discover, regardless of the official Google guidance.

Headlines and images require serious consideration. While Google is running an experiment in which its AI tool rewrites headlines for Discover, direct, action-oriented, and emotion-driven headlines traditionally perform best. There’s no specific character count recommendation, but at a certain point (typically 100+ characters), the headline will get truncated and an ellipsis will be used.

Images must be formatted to Discover specifications (at least 1,200 pixels wide) and should be eye-catching to make people stop and click. Keep articles short or include a summary box at the top of longer articles. Format articles for scanability.

This Forbes X post featured on my Discover feed nails the elements essential for inclusion.

Politics, sports, and entertainment topics that favor an opinion-driven perspective can drive strong engagement on Discover. For YMYL (Your Money Your Life) content, which can also perform well on Discover, focus on accuracy, expert sources, and lean into the curiosity gap.

YouTube and X are the dominant social platforms featured on Discover, according to a Marfeel study.

This was further confirmed by Clara Soteras, who shared insights from Andy Almeida of Google’s Trust and Safety team as presented at Google Search Central Live in Zurich in December 2025.

Almeida noted that Discover’s algorithm has been updated to “include content from YouTube, Instagram, TikTok, or X published by content creators.”

Threat or opportunity?

Instead of feeling dismayed by the increased competition from social media platform content appearing on Google’s SERPs and Discover, news publishers should welcome the additional opportunities for their content to be seen.

In a social and AI-powered search landscape, brand visibility is the key metric. Whether that visibility comes from a news publisher article, video, or social post, it still counts toward brand engagement.

While search strategies have long focused on algorithms, optimizing content for a social-forward SERP requires a different focus. The merging of social and search will spark a holistic audience team revolution in newsrooms, reduce redundant practices, and inspire a content strategy powered by people over algorithms.

The real story behind the 53% drop in SaaS AI traffic

12 February 2026 at 22:30

As the SaaS market reels from a sell-off sparked by autonomous AI agents like Claude Cowork, new data shows a 53% drop in AI-driven discovery sessions. Wall Street dubbed it the “SaaSpocalypse.”

Whether AI agents will replace SaaS products is a bigger question than this dataset can answer. But the panic is already distorting interpretation, and this data cuts through the noise to show what SEO teams should actually watch.

Copilot went from 0.3% to 9.6% of SaaS AI traffic in 14 months

From November 2024 to December 2025, SaaS sites logged 774,331 LLM sessions. ChatGPT drove 82.3% of that traffic, but Copilot’s growth tells a different story:

SaaS AI Traffic by Source (Nov 2024 – Dec 2025)

Source        Sessions    Share
ChatGPT        637,551    82.3%
Copilot         74,625     9.6%
Claude          40,363     5.2%
Gemini          15,759     2.0%
Perplexity       6,033     0.8%

Starting with just 148 sessions in late 2024, Copilot grew more than 20x by May 2025. From May through December, it averaged 3,822 sessions per month, making it the second-largest AI referrer to SaaS sites by year-end 2025.

Investors erased $300 billion from SaaS market caps over fears that AI agents will replace enterprise software. But this data points to a less dramatic force: proximity.

Copilot thrives because it captures intent inside the workflow. Standalone tools saw a 53% traffic drop while workplace-embedded AI grew 20x.

Software evaluation is work, and Copilot sits where that work happens.

When someone asks, “What CRM should we use for a 20-person sales team?” while building a business case in Excel, that moment is captured—one ChatGPT never sees. The May surge reflects that activation: Microsoft 365 users realizing they could research software without opening a new tab.

41.4% of SaaS AI traffic lands on internal search pages

SaaS AI discovery sends users to internal search results first, not product pages.

Top SaaS Landing Pages by LLM Volume

Page Type    LLM Sessions    % of AI Traffic    Penetration vs Site Avg
Search            320,615              41.4%                       8.7x
Blog              127,291              16.4%                       8.1x
Pricing            40,503               5.2%                       3.2x
Product            39,864               5.1%                       2.0x
Support            34,599               4.5%                       2.1x

Search pages captured 320,615 sessions, more than blog, pricing, and product pages combined. That dominance likely reflects LLM limitations, not superior content: LLMs route users to search when they lack a specific answer.

For SaaS companies watching their stock crater, that’s useful news: there’s a concrete technical fix. The 41.4% isn’t an existential threat. It’s a crawlability problem.

When an LLM can’t find a direct answer, it defaults to the site’s internal search. The AI treats your search bar as a trusted backup, assuming the search schema will generate a relevant page even if a specific product page isn’t indexed.

At 1.22%, search page penetration is 8.7x the site average. The cause is a “safety net” effect, not optimization.

When more specific pages — like Product or Pricing — lack the data an LLM needs, it falls back to broader search results. LLMs recognize the search URL structure and trust it will return something relevant, even if they can’t predict what.

Blog pages follow with 127,291 sessions and 1.13% penetration. These are structured comparison posts — “best CRM for small teams” or “Salesforce alternatives” — that LLMs cite when they have specific recommendations.

Pricing pages show 0.45% penetration; product pages, 0.28%. When users ask about software selection, LLMs route to comparison surfaces — search and blog — first. Direct product or pricing pages get cited only when the query is already vendor-specific.

The July peak and Q4 decline reflect corporate work cycles

SaaS AI traffic peaked in July at 146,512 sessions, then declined steadily through Q4:

Month             Sessions    Change
July 2025          146,512    Peak
August 2025        120,802    -17.5%
September 2025     134,162    +11.1%
October 2025       135,397    +0.9%
November 2025      107,257    -20.8%
December 2025       68,896    -35.8%

Every platform declined. ChatGPT’s volume fell by more than half, dropping from 127,510 sessions in July to 56,786 by year-end. Copilot fell from 4,737 to 2,351. Perplexity dropped from 7,475 to 3,752.

Two factors drove the slide:

  • People weren’t working. August is vacation season, November includes Thanksgiving, and December is the holidays. Software research happens during work hours; when offices close, discovery drops.
  • Q4 ends the fiscal “buying window.” Most teams have spent their annual budgets or are deferring contracts until Q1 funding opens. Even teams still working aren’t evaluating tools because there’s no budget left until the new fiscal year.

The July peak reflects midyear momentum: people are working, and Q3 budgets are still available. The Q4 decline reflects both fewer researchers and fewer active buying cycles.

This is where the sell-off narrative breaks down.

Investors treat a 53% traffic drop as proof that AI discovery is stalling. But the data aligns with standard B2B fiscal cycles.

AI isn’t failing as a discovery channel. It’s settling into the same seasonal rhythms as every other B2B buying behavior.

What this data means for SEO teams

Raw traffic numbers don’t show where to invest. Penetration rates and landing page distribution reveal what matters.

Track penetration by page type, not site-wide averages

SaaS shows 0.41% sitewide AI penetration, but that average hides concentration. Search pages reach 1.22%—8.7x higher. Blog pages hit 1.13%. Pricing pages are at 0.45%. Product pages lag at 0.28%.

If you’re only tracking total AI sessions, you’re measuring the wrong metric. AI traffic could grow 50% while penetration on high-value pages declines. Volume hides what matters: where AI users concentrate when they arrive with intent.

Action:

  • Segment AI traffic by page type in GA4 or your analytics platform.
  • Track penetration (AI sessions ÷ total sessions) by page category monthly.
  • Identify pages with elevated concentration, then optimize those surfaces first.
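The penetration metric in the second bullet is simple to compute. Here is a minimal sketch, assuming you have already exported AI and total session counts per page category; the session totals below are illustrative placeholders, not figures from this dataset:

```python
# Illustrative sketch: AI-traffic penetration by page category.
# Session counts are placeholders, not the article's dataset.
page_stats = {
    # page_type: (ai_sessions, total_sessions)
    "search":  (320_615, 26_280_000),
    "blog":    (127_291, 11_260_000),
    "pricing": (40_503, 9_000_000),
}

def penetration(ai_sessions: int, total_sessions: int) -> float:
    """AI sessions as a share of all sessions for a page category."""
    return ai_sessions / total_sessions

for page_type, (ai, total) in sorted(page_stats.items()):
    print(f"{page_type:8s} {penetration(ai, total):.2%}")
```

Tracking this ratio monthly per category shows where AI users concentrate even when total AI session volume is flat.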

Search results pages are now a primary discovery surface

Internal search captures 41.4% of SaaS AI traffic. If those results aren’t crawlable, indexable, or structured for comparison, you’re invisible to the largest segment of AI-driven buyers.

Most SaaS sites treat internal search as navigation, not content. Results return paginated lists with minimal product detail, no filter signals in URLs, and JavaScript-rendered content LLMs can’t parse.

Action:

  • With 41.4% of traffic hitting internal search, treat your search bar as an API for AI agents.
  • Make search pages crawlable (check robots.txt and indexability).
  • Add structured data using SoftwareApplication or Product schema.
  • Surface comparison data — pricing, key features, user count — directly in results, not just product names.
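As a sketch of the structured data bullet, here is a minimal SoftwareApplication JSON-LD payload that could be rendered alongside each internal search result. The product name, price, and every field beyond the schema.org types are placeholder assumptions:

```python
import json

# Hypothetical example: SoftwareApplication JSON-LD for one search result.
# Product name and pricing are placeholders.
item = {
    "@context": "https://schema.org",
    "@type": "SoftwareApplication",
    "name": "ExampleCRM",
    "applicationCategory": "BusinessApplication",
    "operatingSystem": "Web",
    "offers": {
        "@type": "Offer",
        "price": "49.00",
        "priceCurrency": "USD",
    },
}

# Server-side, this would be embedded in a <script type="application/ld+json"> tag.
print(json.dumps(item, indent=2))
```

Emitting this per result gives an LLM parsable pricing and category data instead of a bare list of product names.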

Make your data legible to LLMs — pricing and content both

The sell-off is pricing in obsolescence, but for most SaaS companies the real risk is invisibility. Pricing pages show 0.45% AI penetration—below the 0.46% cross-industry average. Blog pages captured 127,291 sessions at 1.13% penetration, but only when content directly answered selection queries. The pattern is clear: LLMs cite what they can read and parse. They skip what they can’t.

Many SaaS sites still gate pricing behind contact forms. If pricing requires a sales conversation, AI won’t recommend you for “tools under $100/month” queries. The same applies to blog content. When someone asks, “What CRM should I use?” the LLM looks for posts that compare options, define criteria, and explain tradeoffs. Generic thought leadership on CRM trends doesn’t get cited.

Action:

  • Publish pricing on a dedicated, crawlable page. Include representative examples, seat minimums, contract terms, and exclusions.
  • Keep pricing transparent. Transparent pages get cited; gated pages don’t.
  • Replace generic blog posts with structured comparison pages. Use tables and clear data points.
  • Remove fluff. Provide grounding data that lets AI verify compliance and integration capabilities in seconds, not minutes.

Workplace-embedded AI is growing 10x faster than standalone LLMs

Copilot grew 15.89x year over year. Claude grew 7.79x. ChatGPT grew 1.42x. The fastest growth is in tools embedded in existing workflows.

Workplace AI shifts discovery context. In ChatGPT, users are explicitly researching. In Copilot, they’re asking questions mid-task—drafting a proposal, building a comparison spreadsheet, or reviewing vendor options with their team.

Action:

  • Track Copilot and Claude referrals separately from ChatGPT. Monitor which pages these sources favor.
  • Recognize intent: these users aren’t browsing — they’re mid-task, deeper in evaluation, and closer to a purchase decision.
  • Show up in workplace AI discovery to support real-time purchase justification.

Survival favors the findable

The 53% drop from July to December reflects AI usage settling into the software buying process. Buyers are learning which decisions benefit from AI synthesis and which don’t. The remaining traffic is more deliberate, concentrated on complex evaluations where comparison matters.

For SaaS companies, the window for early positioning is closing. The $300 billion sell-off is hitting the sector broadly, but the companies that survive the repricing will be those buyers can find when they ask an AI agent, “Should we renew this contract?”

Teams investing now in transparent pricing, crawlable data, and comparison-focused content are building that findability while competitors debate whether AI discovery matters.

If SEO is rocket science, AI SEO is astrophysics

12 February 2026 at 19:00

In Google AI Overviews and LLM-driven retrieval, credibility isn’t enough. Content must be structured, reinforced, and clear enough for machines to evaluate and reuse confidently.

Many SEO strategies still optimize for recognition. But AI systems prioritize utility. If your authority can’t be located, verified, and extracted within a semantic system, it won’t shape retrieval.

This article explains how authority works in AI search, why familiar SEO practices fall short, and what it takes to build entity strength that drives visibility.

Why traditional authority signals worked – until they didn’t

For years, SEOs liked to believe that “doing E-E-A-T” would make sites authoritative.

Author bios were optimized, credentials showcased, outbound links added, and About pages polished, all in hopes that those signals would translate into authority.

In practice, we all knew what actually moved the needle: links.

E-E-A-T never really replaced external validation. Authority was still conferred primarily through links and third-party references.

E-E-A-T helped sites appear coherent as entities, while links supplied the real gravitas behind the scenes. That arrangement worked as long as authority could be vague and still be rewarded.

It stops working when systems need to use authority, not just acknowledge it. In AI-driven retrieval, being recognized as authoritative isn’t enough. Authority still has to be specific, independently reinforced, and machine-verifiable, or it doesn’t get used.

Being authoritative but not used is like being “paid” with experience. It doesn’t pay the bills.

How AI systems calculate authority

Search no longer operates on a flat plane of keywords and pages. AI-driven systems rely on a multi-dimensional semantic space that models entities, relationships, and topical proximity.

In that semantic space, entities function much like celestial bodies in physical space: discrete objects whose influence is defined by mass, distance, and interaction with others.

E-E-A-T still matters, but the framework version is no longer a differentiator. Authority is now evaluated in a broader context that can’t be optimized with a handful of on-page tasks.

In AI Overviews, ChatGPT, Claude, and similar systems, visibility doesn’t hinge on prestige or brand recognition. Those are symptoms of entity strength, not its source.

What matters is whether a model can locate your entity within its semantic environment and whether that entity has accumulated enough mass to exert influence.

That mass isn’t decorative. It’s built through third-party citations, mentions, and corroboration, then made machine-legible through consistent authorship, structure, and explicit entity relationships.

Models don’t trust authority. They calculate it by measuring how densely and consistently an entity is reinforced across the broader corpus.

Smaller brands don’t need to shine like legacy publishers. In a semantic system, apparent size and visibility don’t determine influence. Density does.

In astrophysics, some planets appear enormous yet exert surprisingly weak surface gravity because their mass is spread thinly. Others are much smaller but dense enough to exert a stronger pull.

AI visibility works the same way. What matters isn’t how large your brand appears to humans, but how concentrated and reinforced your authority is in machine-readable form.

Dig deeper: From SEO to algorithmic education: The roadmap for long-term brand authority

The E-E-A-T misinterpretation problem

The problem with E-E-A-T was never the concept itself. It was the assumption that trustworthiness could be meaningfully demonstrated in isolation, primarily through signals a site applied to itself.

Over time, E-E-A-T became operationalized as visible, on-page indicators: author bios, credentials, About pages, and lightweight citations.

These signals were easy to implement and easy to audit, which made them attractive. They created the appearance of rigor, even when they did little to change how authority was actually conferred.

That compromise held when search systems were willing to infer authority from proxies. It breaks down in AI-driven retrieval, where authority must be explicitly reinforced, independently corroborated, and machine-verifiable to carry weight.

Surface-level trust markers don’t fail because models ignore them. They fail because they don’t supply the external reinforcement required to give an entity real mass.

In a semantic system, entities gain influence through repeated confirmation across the broader corpus. On-site signals can help make an entity legible, but they don’t generate density on their own. Compliance isn’t comprehension, and E-E-A-T as a checklist doesn’t create gravitational pull.

In human-centered search, these visible trust cues acted as reasonable stand-ins. In LLM retrieval, they don’t translate. Models aren’t evaluating presentation or intent. They’re evaluating semantic consistency, entity alignment, and whether claims can be cross-verified elsewhere.

E-E-A-T isn’t outdated. It’s incomplete. It explains why humans might trust you.

Applying E-E-A-T principles only within your own site won’t create the mass that machines need to recognize, align with, and prioritize your entity in a retrieval system.

AI doesn’t trust, it calculates

Human trust is emotional. Machine trust is statistical.

In practice:

  • LLMs prioritize clarity. Ambiguous writing reduces confidence.
  • They reward clean extraction. Lists, tables, and focused paragraphs are easiest to reuse.
  • They cross-verify facts. Redundant, consistent statements across multiple sources appear more reliable than a single sprawling narrative.

Retrieval models evaluate confidence, not charisma. Structural decisions such as headings, paragraph boundaries, markup, and lists directly affect how accurately a model can map content to a query.

This is why ChatGPT and AI Overview citations often come from unfamiliar brands.

It’s also why brand-specific queries behave differently. When a query explicitly names a brand or entity, the model isn’t navigating the galaxy broadly. It’s plotting a short, precise trajectory to a known body. 

With intent tightly constrained and only one plausible source of truth, there’s far less risk of drifting toward adjacent entities.

In those cases, the system can rely directly on the entity’s own content because the destination is already fixed. The models aren’t “discovering” hidden experts. They’re rewarding content whose structure reduces uncertainty.

The semantic galaxy: How entities behave like bodies

LLMs don’t experience topics, entities, or websites. They model relationships between representations in a high-dimensional semantic space.

That’s why AI retrieval is better understood as plotting a course through a system of interacting gravitational bodies rather than “finding” an answer. Influence comes from mass, not intention.

In embedding-based retrieval (the dense passage retrieval approach Karpukhin et al. introduced in their 2020 EMNLP paper), entities behave like bodies in space.

Over time, citations, mentions, and third-party reinforcement increase an entity’s semantic mass. Each independent reference adds weight, making that entity increasingly difficult for the system to ignore.

Queries move through this space as vectors shaped by intent. As they pass near sufficiently massive entities, they bend. The strongest entities exert the greatest gravitational pull, not because they are trusted in a human sense, but because they are repeatedly reinforced across the broader corpus.

Extractability doesn’t create that gravity. It determines what happens after attraction occurs. An entity can be massive enough to warp trajectories and still be unusable if its signals aren’t machine-legible, like a planet with enough gravity to draw a spacecraft in but no viable way to land.

Authority, in this context, isn’t belief. It’s gravity, the cumulative pull created by repeated, independent reinforcement across the wider semantic system.

Entity strength vs. extractability

Classic SEO emphasized backlinks and brand reputation. AI search rewards entity strength for discovery but demands clarity and semantic extractability for inclusion.

Entity strength – your connections across the Knowledge Graph, Wikidata, and trusted domains – still matters and arguably matters more now. Unfortunately, no amount of entity strength helps if your content isn’t machine-parsable.

Consider two sites featuring recognized experts:

  • One uses clean headings, explicit definitions, and consistent links to verified profiles.
  • The other buries its expertise inside dense, unstructured paragraphs.

Only one will earn citations.

LLMs need:

  • One entity per paragraph or section.
  • Explicit, unambiguous mentions.
  • Repetition that reinforces relationships (“Dr. Jane Smith, cardiologist at XYZ Clinic”).

Precision makes authority extractable. Extractability determines whether existing gravitational pull can be acted on once attraction has occurred, not whether that pull exists in the first place.


Structure like you mean it: Abstract first, then detail

LLM retrieval is constrained by context windows and truncation limits, as outlined by Lewis et al. in their 2020 NeurIPS paper on retrieval-augmented generation. Models rarely process or reuse long-form content in its entirety.

If you want to be cited, you can’t bury the lede.

LLMs read the beginning, but then they skim. After a certain number of tokens, they truncate. Basically, if your core insight is buried in paragraph 12, it’s invisible.

To optimize for retrieval:

  • Open with a paragraph that functions as its own TL;DR.
  • State your stance, the core insight, and what follows.
  • Expand below the fold with depth and nuance.

Don’t save your best material for the finale. Neither users nor models will reach it.

Dig deeper: Organizing content for AI search: A 3-level framework

Stop ‘linking out,’ start citing like a researcher

The difference between a citation and a link isn’t subtle, but it’s routinely misunderstood. Part of that confusion comes from how E-E-A-T was operationalized in practice.

In many traditional E-E-A-T playbooks, adding outbound links became a checkbox, a visible, easy-to-execute task that stood in for the harder work of substantiating claims. Over time, “cite sources” quietly degraded into “link out a few times.”

A bad citation looks like this:

A generic outbound link to a blog post or company homepage offered as vague “support,” often with language like “according to industry experts” or “SEO best practices say.”

The source may be tangentially related, self-promotional, or simply restating opinion, but it does nothing to reinforce your entity’s factual position in the broader semantic system.

A good citation behaves more like academic referencing. It points to:

  • Primary research.
  • Original reporting.
  • Standards bodies.
  • Widely recognized authorities in that domain.

It’s also tied directly to a specific claim in your content. The model can independently verify the statement, cross-reference it elsewhere, and reinforce the association.

The point was never to just “link out.” The point was to cite sources.

Engineering retrieval authority without falling back into a checklist

The patterns below aren’t tasks to complete or boxes to tick. They describe the recurring structural signals that, over time, allow an entity to accumulate mass and express gravity across systems.

This is where many SEOs slip back into old habits. Once you say “E-E-A-T isn’t a checklist,” the instinct is to immediately ask, “Okay, so what’s the checklist?”

But engineering retrieval authority isn’t a list of tasks. It’s a way of structuring your entire semantic footprint so your entity gains mass in the galaxy the models navigate.

Authority isn’t something you sprinkle into content. It’s something you construct systematically across everything tied to your entity.

  • Make authorship machine-legible: Use consistent naming. Link to canonical profiles. Add author and sameAs schema. Inconsistent bylines fragment your entity mass.
  • Strengthen your internal entity web: Use descriptive anchor text. Connect related topics the way a knowledge graph would. Strong internal linking increases gravitational coherence.
  • Write with semantic clarity: One idea per paragraph. Minimize rhetorical detours. LLMs reward explicitness, not flourish.
  • Use schema and llms.txt as amplifiers: They don’t create authority. They expose it.
  • Audit your “invisible” content: If critical information is hidden in pop-ups, accordions, or rendered outside the DOM, the model can’t see it. Invisible authority is no authority.
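The first point above, making authorship machine-legible with consistent naming and sameAs schema, can be sketched as a small script that emits schema.org Person JSON-LD. The author name, job title, and URLs below are hypothetical placeholders; swap in your own canonical profiles.

```python
# Hedged sketch: emit a schema.org Person JSON-LD block so authorship is
# machine-legible. All names and URLs here are hypothetical examples.
import json

def author_jsonld(name: str, job_title: str, same_as: list[str]) -> str:
    """Build a Person block for embedding in a page's <head> via a
    <script type="application/ld+json"> tag."""
    data = {
        "@context": "https://schema.org",
        "@type": "Person",
        "name": name,
        "jobTitle": job_title,
        # sameAs ties scattered profiles back to one canonical entity,
        # countering the byline fragmentation described above.
        "sameAs": same_as,
    }
    return json.dumps(data, indent=2)

snippet = author_jsonld(
    "Jane Smith",      # hypothetical author
    "Cardiologist",
    [
        "https://example.com/authors/jane-smith",  # placeholder canonical profile
        "https://www.linkedin.com/in/example",     # placeholder social profile
    ],
)
print(snippet)
```

Using the exact same name string everywhere this block appears is the point: one spelling, one canonical profile URL, one entity.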

From rocket science to astrophysics

E-E-A-T taught us to signal trust to humans. AI search demands more: understanding the forces that determine how information is pulled into view.

Rocket science gets something into orbit. Astrophysics navigates and understands the systems it moves through once there.

Traditional SEO focused on launching pages—optimizing, publishing, promoting. AI SEO is about mass, gravity, and interaction: how often your entity is cited, corroborated, and reinforced across the broader semantic system, and how strongly that accumulated mass influences retrieval.

The brands that win won’t shine brightest or claim authority loudest, nor will they be no-name sites simulating credibility with artificial corroboration and junk links.

They’ll be entities that are dense, coherent, and repeatedly confirmed by independent sources—entities with enough gravity to bend queries toward them.

In an AI-driven search landscape, authority isn’t declared. It’s built, reinforced, and made impossible for machines to ignore.

Dig deeper: User-first E-E-A-T: What actually drives SEO and GEO

How social discovery shapes AI search visibility in beauty

12 February 2026 at 18:00

AI search visibility in beauty is increasingly shaped before a prompt is ever entered.

Brands that appear in generative answers are often those already discussed, validated, and reinforced across social platforms. By the time a user turns to AI search, much of the groundwork has been laid.

Using the beauty category as a lens, this article examines how social discovery influences brand visibility – and why AI search ultimately reflects those signals.

Discovery didn’t move to AI – it fragmented

Brand discovery has fragmented across platforms. AI tools influence mid-funnel consideration, but much discovery happens before a user enters a prompt.

The signals that determine AI visibility are formed upstream. By the time a user reaches generative search, preferences and perceptions may already be set. If brands wait until AI search to influence demand, the window to shape consideration has narrowed.

That upstream influence is increasingly social. Roughly two-thirds of U.S. consumers now use social platforms as search engines, per eMarketer research. 

This shift extends beyond Gen Z and reflects how people validate information and discover brands. These same platforms consistently appear among the top citation sources in AI results. The dynamic is especially visible in the beauty category.

In a study our agency conducted with a beauty brand partner, we found that Reddit, YouTube, and Facebook ranked among the top cited domains in both AI Overviews and ChatGPT.

[Image: Stella beauty prompt study]

While Reddit is often viewed as an anti-brand environment, YouTube appears nearly as frequently in citation data, making it a logical and underutilized target for citation optimization.

Dig deeper: Social and UGC: The trust engines powering search everywhere

The volume reality: Social behavior still outpaces AI

It’s easy to focus on headline figures around AI usage, including the billions of prompts processed daily. But when measured against business outcomes such as traffic and transactions, the scale looks different.

Social platforms are already embedded in mainstream search behavior. For many users, search-like activity on platforms such as TikTok and YouTube is habitual. Nearly 40% of TikTok users search the platform multiple times per day, and 73% search at least once daily.

Referral data reinforces the contrast. ChatGPT referral traffic accounted for roughly 0.2% of total sessions in a 12-month analysis of 973 ecommerce sites, a University of Hamburg and Frankfurt School working paper found. In the same dataset, Google’s organic search traffic was approximately 200 times larger than organic LLM referrals.

AI search is growing and strategically important. But in terms of repeat behavior, measurable sessions, and downstream transactions, social platforms and traditional search continue to operate at a substantially larger scale.

The validation loop: Why AI needs social

The most critical contrarian point for 2026 is that optimizing for social is also optimizing for AI. Large language models are not primary sources of truth. They function as mirrors, reflecting the consensus formed through human conversations in the data they are trained on.

AI systems also demonstrate skepticism toward brand-owned properties. One study found that only 25% of sources cited in AI-generated answers were brand-managed websites.

At the same time, AI engines prioritize third-party validation. Up to 6.4% of citation links in AI responses originated from Reddit, an analysis by OtterlyAI found. This outpaces many traditional publishers.

There’s also a measurable relationship between sentiment and visibility. Research shows a moderate positive correlation between positive brand sentiment on social media and visibility in AI search results.

Dig deeper: The social-to-search halo effect: Why social content drives branded search

Video and expert authority shape AI visibility

Treating video as a “brand channel” or a social-first effort rather than a search surface is a strategic failure.

On platforms such as TikTok and YouTube, ranking signals are shaped by spoken language, on-screen text, and captions – signals AI crawlers increasingly use to “triangulate trust.”

In the beauty category, for example, ChatGPT accounts for about 4.3% of search activity, a small share next to the roughly 14 billion searches Google processes each day. However, for “how-to” and technique-based queries, consumers favor the detailed, personalized guidance of social-first video content.

At the same time, the beauty sector has fractured into two universes, according to Yotpo’s GEO for Beauty Brands analysis.

Science-backed brands such as Paula’s Choice and CeraVe dominate AI-generated results because they publish deep, structured educational content. Meanwhile, more traditional marketing-led brands are significantly less visible.

The phrase “dermatologist recommended” correlates with high visibility in AI results because large language models treat expert social proof as a primary ranking signal, according to the same report.

Breaking the high-production barrier: Creating content at scale

One of the biggest hurdles brands cite is budget. Many believe they need a Hollywood production crew to compete in video environments. That is a legacy mindset. 

In today’s environment, high-gloss production can be a deterrent. The current landscape rewards authenticity over polish. Consumers are looking for real people with real skin concerns, not highly filtered commercials.

Optimizing for video discovery doesn’t require filmmaking expertise. Brands can leverage internal talent without adding headcount.

  • Partner with creator platforms: Platforms such as Billow or Social Native allow brands to work with creators for as little as $500 per video. When mapped to a high-intent query, that investment can drive measurable search visibility outcomes.
  • Leverage social natives on staff: Often, the strongest asset is internal. Identify team members who are active on platforms such as TikTok and understand platform dynamics. Creating internal incentives or challenges to produce content can generate a steady stream of authentic assets while contributing to culture.
  • Make strategy the differentiator: A large following is not a prerequisite for visibility. In one case, a TikTok profile built from scratch with one part-time creator at $2,500 per month generated hundreds of thousands of views within 90 days. The focus was not on viral trends, but on meaningful transactional terms that drive revenue.

If a new profile can reach more than 100,000 views per video within three months on a limited budget, the barrier isn’t equipment. It’s clarity on the business case and disciplined execution.

Dig deeper: How to optimize video for AI-powered search

The new beauty SEO playbook for 2026

The data is clear. Brands can’t win the generative engine if they’re losing the social conversation.

AI models function as mirrors, reflecting web consensus. If real users on Reddit, YouTube, and TikTok aren’t discussing a brand, AI systems have little to surface.

If marketers wait until a user reaches a ChatGPT prompt to shape perception, the opportunity has already narrowed.

Discovery happens upstream. Validation occurs in the loop between social proof and algorithmic citation.

Translating this into action requires rethinking team structure and priorities:

  • Stop the silos: Your SEO and social teams shouldn’t speak different languages. Both must focus on search surfaces.
  • Prioritize the “why” before the “what”: Don’t just fix a technical tag. Build the business case for how social sentiment and expert validation drive market share.
  • Embrace scrappy execution: Whether through $500 creator partnerships or internal social-native talent, start building authentic assets now.

We’re witnessing a shift from algorithm-driven discovery to community-driven discovery.

It’s agile and multidisciplinary, and when executed well, it can meaningfully impact the bottom line.
