

Paid search click share doubles as organic clicks fall: Study

19 February 2026 at 01:01

Organic search clicks are shrinking across major verticals — and it’s not just because of Google’s AI Overviews.

  • Classic organic click share fell sharply across headphones, jeans, greeting cards, and online games queries in the U.S., new Similarweb data comparing January 2025 to January 2026 shows.
  • The biggest winner: text ads.

Why we care. You aren’t just competing with AI Overviews. You’re competing with Google’s aggressive expansion of paid search real estate. Across every vertical analyzed, text ads gained more click share than any other measurable surface. In product categories, paid listings now capture roughly one-third of all clicks. As a result, several brands that are losing organic visibility are increasing their paid investment.

By the numbers. Across four verticals, text ads showed the most consistent, measurable click-share gains.

  • Classic organic lost 11 to 23 percentage points of click share year over year.
  • Text ads gained 7 to 13 percentage points in every case.
  • Paid click share doubled in major product categories.
  • AI Overviews SERP presence rose ~10 to ~30 percentage points, depending on the vertical.

Classic organic is down everywhere. Year-over-year classic organic click share declined across all four verticals. Headphones saw the steepest drop. Even online games — historically organic-heavy — lost double digits. In two verticals (headphones, jeans), total clicks also fell.

  • Headphones: Down from 73% to 50%
  • Jeans: Down from 73% to 56%
  • Greeting cards: Down from 88% to 75%
  • Online games: Down from 95% to 84%

Text ads are the biggest winner. Text ads gained share in every vertical; no other surface showed this level of consistent growth:

  • Headphones: Up from 3% to 16%
  • Online games: Up from 3% to 13%
  • Jeans: Up from 7% to 16%
  • Greeting cards: Up from 9% to 16%

In product categories, PLAs compounded the shift:

  • Headphones: Up from 16% to 36%
  • Jeans: Up from 18% to 34%
  • Greeting cards: Up from 10% to 19%

AI Overviews surged unevenly. The presence of Google AI Overviews expanded sharply, but varied by vertical:

  • Headphones: 2.28% → 32.76%
  • Online games: 0.38% → 29.80%
  • Greeting cards: 0.94% → 21.97%
  • Jeans: 2.28% → 12.06%

Zero-click searches are high — and mostly stable. Except for online games, zero-click rates didn’t change dramatically:

  • Headphones: 63% (flat)
  • Jeans: Down from 65% to 61%
  • Online games: Up from 43% to 50%
  • Greeting cards: Up from 51% to 53%

Brands losing organic traffic are buying it back. In headphones:

  • Amazon increased paid clicks 35% while losing organic volume.
  • Walmart nearly 6x’d paid clicks.
  • Bose boosted paid 49%.

In jeans:

  • Gap grew paid clicks 137% to become the top paid player.
  • True Religion entered the paid top tier without top-10 organic presence.

In online games:

  • CrazyGames quadrupled paid clicks while organic declined.
  • Arkadium entered paid after losing 68% of organic clicks.

The result? We’re seeing a self-reinforcing cycle, according to the study’s author, Aleyda Solis:

  • Organic share declines.
  • Competition intensifies.
  • More brands increase paid budgets.
  • Paid surfaces capture more clicks.

About the data. This analysis used Similarweb data to examine SERP composition and click distribution for the top 5,000 U.S. queries in headphones, jeans, and online games, and the top 956 queries in greeting cards and ecards. It compares January 2025 to January 2026, tracking how clicks shifted across classic organic results, organic SERP features, text ads, PLAs, zero-click searches, and AI Overviews.

The study. Search Isn’t Just Turning to AI, it’s being Re-Monetized: Text Ads Are Taking a Bigger Share of Google SERP Clicks (Data)


44% of ChatGPT citations come from the first third of content: Study

18 February 2026 at 21:47

ChatGPT heavily favors the top of content when selecting citations, according to an analysis of 1.2 million AI answers and 18,012 verified citations by Kevin Indig, Growth Advisor.

Why we care. Traditional search rewarded depth and delayed payoff. AI favors immediate classification — clear entities and direct answers up front. If your substance isn’t surfaced early, it’s less likely to appear in AI answers.

By the numbers. Indig’s team found a consistent “ski ramp” citation pattern that held across randomized validation batches. He called the results statistically indisputable:

  • 44.2% of citations come from the first 30% of content.
  • 31.1% come from the middle (30–70%).
  • 24.7% come from the final third, with a sharp drop near the footer.

At the paragraph level, AI reads more deeply:

  • 53% of citations come from the middle of paragraphs.
  • 24.5% come from first sentences.
  • 22.5% come from last sentences.
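The position bands above can be reproduced mechanically on your own content. A minimal sketch (the study's exact bucketing methodology isn't public; `position_band` and the 30/70 cutoffs simply mirror the reported bands):

```python
from collections import Counter

def position_band(offset: int, doc_length: int) -> str:
    """Classify a citation's character offset into the study's bands:
    first 30%, middle 30-70%, or final 30% of the document."""
    if doc_length <= 0:
        raise ValueError("doc_length must be positive")
    ratio = offset / doc_length
    if ratio < 0.30:
        return "first"
    if ratio < 0.70:
        return "middle"
    return "final"

def band_distribution(citations):
    """Tally band shares for (offset, doc_length) citation records."""
    counts = Counter(position_band(o, n) for o, n in citations)
    total = sum(counts.values())
    return {band: count / total for band, count in counts.items()}
```

Feed it the character offsets of your cited passages and compare your own distribution against the 44/31/25 "ski ramp."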

The big takeaway. Front-load key insights at the article level. Within paragraphs, prioritize clarity and information density over forced first sentences.

Why this happens. Large language models are trained on journalism and academic writing that follow a “bottom line up front” structure. The model appears to weight early framing more heavily, then interpret the rest through that lens.

  • Modern models can process massive token windows, but they prioritize efficiency and establish context quickly.

What gets cited. Indig identified five traits of highly cited content:

  • Definitive language: Cited passages were nearly twice as likely to use clear definitions (“X is,” “X refers to”). Direct subject-verb-object statements outperform vague framing.
  • Conversational Q&A structure: Cited content was 2x more likely to include a question mark. 78.4% of citations tied to questions came from headings. AI often treats H2s as prompts and the following paragraph as the answer.
  • Entity richness: Typical English text contains 5% to 8% proper nouns. Heavily cited text averaged 20.6%. Specific brands, tools, and people anchor answers and reduce ambiguity.
  • Balanced sentiment: Cited text clustered around a subjectivity score of 0.47 — neither dry fact nor emotional opinion. The preferred tone resembles analyst commentary: fact plus interpretation.
  • Business-grade clarity: Winning content averaged a Flesch-Kincaid grade level of 16 versus 19.1 for lower-performing content. Shorter sentences and plain structure beat dense academic prose.
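Entity richness is the easiest of these traits to measure yourself. A rough sketch (the study presumably used real named-entity recognition; this capitalization heuristic is a crude, assumption-laden stand-in):

```python
import re

def entity_density(text: str) -> float:
    """Approximate proper-noun share: capitalized words that do not
    start a sentence, divided by total words. A crude stand-in for
    real named-entity recognition."""
    words = re.findall(r"[A-Za-z][A-Za-z0-9'-]*", text)
    if not words:
        return 0.0
    # Words that open a sentence are capitalized regardless, so exclude them.
    sentence_starts = {m.group(1) for m in re.finditer(r"(?:^|[.!?]\s+)(\w+)", text)}
    proper = [w for w in words if w[0].isupper() and w not in sentence_starts]
    return len(proper) / len(words)
```

Against the study's benchmarks, typical prose lands around 0.05 to 0.08 and heavily cited text around 0.20.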

About the data. Indig analyzed 3 million ChatGPT responses and 30 million citations, isolating 18,012 verified citations to examine where and why AI pulls content. His team used sentence-transformer embeddings to match responses to specific source sentences, then measured their page position and linguistic traits such as definitions, entity density, and sentiment.

Bottom line. Narrative “ultimate guide” writing may underperform in AI retrieval. Structured, briefing-style content performs better.

  • Indig argues this creates a “clarity tax.” Writers must surface definitions, entities, and conclusions early—not save them for the end.

The report. The science of how AI pays attention

Perplexity stops testing advertising

18 February 2026 at 18:16

Perplexity is abandoning advertising, for now at least. The company believes sponsored placements — even labeled ones — risk undermining the trust on which its AI answer engine depends.

  • Perplexity phased out the ads it began testing in 2024 and has no plans to bring them back, the Financial Times reported.
  • The AI search company could revisit advertising or “never ever need to do ads,” the report said.

Why we care. If Perplexity remains ad-free, brands lose paid access to a fast-growing audience. The company previously reported that it gets 780 million monthly queries. With sponsored placements gone, brands have no way to get visibility inside Perplexity’s answers other than via organic citations.

What changed. Perplexity was one of the first AI search companies to test ads, placing sponsored answers beneath chatbot responses. It said at the time that ads were clearly labeled and didn’t influence outputs. Executives now say perception matters as much as policy.

  • “A user needs to believe this is the best possible answer,” one executive said, adding that once ads appear, users may second-guess response integrity.

Meanwhile. Perplexity’s exit comes as other AI platforms experiment with ads.

Perplexity says subscriptions are its core business. It offers a free tier and paid plans from $20 to $200 per month. It has more than 100 million users and about $200 million in annualized revenue, according to executives.

  • Perplexity also introduced shopping features, but doesn’t take a cut of transactions, another indication it’s cautious about revenue models that could create conflicts of interest.
  • “We are in the accuracy business, and the business is giving the truth, the right answers,” one executive said.

The report. Perplexity drops advertising as it warns it will hurt trust in AI (subscription required)

Airbnb says traffic from AI chatbots converts better than Google

17 February 2026 at 23:54

Traffic from AI chatbots converts at a higher rate than traffic from Google, according to Airbnb CEO Brian Chesky. He shared this tidbit on the company’s Q4 2025 earnings call:

  • “And what we see is that traffic that comes from chatbots convert at a higher rate than traffic that comes from Google,” Chesky said on Feb. 12.

Yes, but. He didn’t share specific conversion rates, and the company didn’t quantify chatbot traffic volume. But for Airbnb, early data suggests visitors arriving via AI chatbots may be further along in the booking process than those coming from traditional Google searches.

  • Airbnb also didn’t specify which chatbots are driving traffic. Chesky referenced OpenAI’s ChatGPT, Google’s Gemini, Anthropic’s Claude, and others in broader remarks about model availability.

Why we care. AI assistants are emerging as a top-of-funnel discovery layer. The quality of that traffic may outperform clicks from traditional search, aligning with past claims by Google and Microsoft that AI will drive more qualified traffic at lower volume.

AI search ambitions. Chesky described chatbots as “very similar to search” and positioned them as top-of-funnel discovery engines.

  • “I think these chatbot platforms are gonna be very similar to search. Gonna be really good top-of-funnel discoveries,” he said.

Rather than viewing them as disintermediators, Airbnb sees them as acquisition partners.

  • “We think they are gonna be positive for Airbnb,” Chesky added.

Chesky described the long-term goal as building an “AI-native experience” where the app “does not just search for you. It knows you”:

  • “So AI search is live to a very small percent of traffic right now. We are doing a lot of experimentation. The way we do things with AI is much more rapid iteration, not big launches. And over time, we are gonna be experimenting with making AI search more conversational, integrating it into more than trip, and, eventually, we will be looking at sponsor listings as result of that. But we want to first nail AI search.”

AI inside Airbnb. Airbnb isn’t just benefiting from external AI platforms. It’s embedding AI into its operations.

  • Its in-house AI customer service agent now resolves nearly one-third of North American support tickets without a human, according to Chesky. The tool is English-only for now but is slated for global, multilingual rollout, including voice support.
  • Chesky said the goal is for AI to handle “significantly more than 30%” of tickets within a year.
  • Airbnb is also testing AI-powered conversational search in its app. The feature is live for a small percentage of users and is being iterated quickly rather than launched as a major product release.

Sponsored listings on hold for now. Airbnb has long faced questions about launching sponsored listings. On the call, Chesky said traditional ad units may not translate directly into conversational AI environments. The company is prioritizing AI search before designing sponsored placements in that format.

Airbnb’s search shift. Airbnb began shifting budget to brand marketing just before the rise of generative AI and AI-powered search, betting on broader initiatives while slashing its search marketing spend.


Google’s Jeff Dean: AI Search relies on classic ranking and retrieval

17 February 2026 at 21:52

Jeff Dean says Google’s AI Search still works like classic Search: narrow the web to relevant pages, rank them, then let a model generate the answer.

In an interview on Latent Space: The AI Engineer Podcast, Google’s chief AI scientist explained how Google’s AI systems work and how much they rely on traditional search infrastructure.

The architecture: filter first, reason last. Visibility still depends on clearing ranking thresholds. Content must enter the broad candidate pool, then survive deeper reranking before it can be used in an AI-generated response. Put simply, AI doesn’t replace ranking. It sits on top of it.

Dean said an LLM-powered system doesn’t read the entire web at once. It starts with Google’s full index, then uses lightweight methods to identify a large candidate pool — tens of thousands of documents. Dean said:

  • “You identify a subset of them that are relevant with very lightweight kinds of methods. You’re down to like 30,000 documents or something. And then you gradually refine that to apply more and more sophisticated algorithms and more and more sophisticated sort of signals of various kinds in order to get down to ultimately what you show, which is the final 10 results or 10 results plus other kinds of information.”

Stronger ranking systems narrow that set further. Only after multiple filtering rounds does the most capable model analyze a much smaller group of documents and generate an answer. Dean said:

  • “And I think an LLM-based system is not going to be that dissimilar, right? You’re going to attend to trillions of tokens, but you’re going to want to identify what are the 30,000-ish documents that are with the maybe 30 million interesting tokens. And then how do you go from that into what are the 117 documents I really should be paying attention to in order to carry out the tasks that the user has asked me to do?”

Dean called this the “illusion” of attending to trillions of tokens. In practice, it’s a staged pipeline: retrieve, rerank, synthesize. Dean said:

  • “Google search gives you … not the illusion, but you are searching the internet, but you’re finding a very small subset of things that are relevant.”
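The retrieve-rerank-synthesize funnel Dean describes can be sketched as two passes of decreasing breadth and increasing cost. The scoring functions below are placeholders, not Google's actual signals:

```python
def staged_retrieval(query, index, cheap_score, strong_score,
                     pool_size=30_000, final_size=10):
    """Two-stage funnel: a lightweight scorer narrows the full index
    to a large candidate pool, then a more expensive scorer reranks
    that pool down to the handful of documents worth synthesizing."""
    # Stage 1: cheap filter over the whole index.
    pool = sorted(index, key=lambda d: cheap_score(query, d),
                  reverse=True)[:pool_size]
    # Stage 2: expensive reranking over the surviving pool only.
    return sorted(pool, key=lambda d: strong_score(query, d),
                  reverse=True)[:final_size]
```

The design point is that the expensive scorer never touches documents the cheap scorer rejected, which is why entering the candidate pool at all is the gate for AI visibility.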

Matching: from keywords to meaning. Nothing new here, but we heard another reminder that covering a topic clearly and comprehensively matters more than repeating exact-match phrases.

Dean explained how LLM-based representations changed how Google matches queries to content.

Older systems relied more on exact word overlap. With LLM representations, Google can move beyond the idea that particular words must appear on the page and instead evaluate whether a page — or even a paragraph — is topically relevant to a query. Dean said:

  • “Going to an LLM-based representation of text and words and so on enables you to get out of the explicit hard notion of particular words having to be on the page. But really getting at the notion of this topic of this page or this page paragraph is highly relevant to this query.”

That shift lets Search connect queries to answers even when wording differs. Relevance increasingly centers on intent and subject matter, not just keyword presence.
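The difference between exact word overlap and meaning-based matching can be shown with toy vectors. The embeddings below are invented for illustration and bear no relation to Google's actual representations:

```python
import math

def keyword_match(query: str, page: str) -> bool:
    """Old-style matching: every query term must literally appear on the page."""
    page_words = page.lower().split()
    return all(term in page_words for term in query.lower().split())

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Toy embeddings: "restaurant" and "bistro" share no characters but
# sit close together in a learned vector space; "jeans" sits far away.
toy_embeddings = {
    "restaurant": [0.9, 0.1, 0.2],
    "bistro": [0.85, 0.15, 0.25],
    "jeans": [0.1, 0.9, 0.0],
}
```

With keyword matching, a page about "a cozy bistro downtown" fails a "restaurant" query; with embeddings, the two terms score as near neighbors.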

Query expansion didn’t start with AI. Dean pointed to 2001, when Google moved its index into memory across enough machines to make query expansion cheap and fast. Dean said:

  • “One of the things that really happened in 2001 was we were sort of working to scale the system in multiple dimensions. So one is we wanted to make our index bigger, so we could retrieve from a larger index, which always helps your quality in general. Because if you don’t have the page in your index, you’re going to not do well.
  • “And then we also needed to scale our capacity because our traffic was growing quite extensively. So we had a sharded system where you have more and more shards as the index grows, you have like 30 shards. Then if you want to double the index size, you make 60 shards so that you can bound the latency by which you respond for any particular user query. And then as traffic grows, you add more and more replicas of each of those.
  • “And so we eventually did the math that realized that in a data center where we had say 60 shards and 20 copies of each shard, we now had 1,200 machines with disks. And we did the math and we’re like, hey, one copy of that index would actually fit in memory across 1,200 machines. So in 2001, we … put our entire index in memory and what that enabled from a quality perspective was amazing.”

Before that, adding terms was expensive because it required disk access. Once the index lived in memory, Google could expand a short query into dozens of related terms — adding synonyms and variations to better capture meaning. Dean said:

  • “Before, you had to be really careful about how many different terms you looked at for a query, because every one of them would involve a disk seek.
  • “Once you have the whole index in memory, it’s totally fine to have 50 terms you throw into the query from the user’s original three- or four-word query. Because now you can add synonyms like restaurant and restaurants and cafe and bistro and all these things.
  • “And you can suddenly start … getting at the meaning of the word as opposed to the exact semantic form the user typed in. And that was … 2001, very much pre-LLM, but really it was about softening the strict definition of what the user typed in order to get at the meaning.”
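Expansion of this kind reduces to cheap lookups once the index is in memory. A minimal sketch; the synonym table below is hypothetical, not Google's expansion data:

```python
# Hypothetical synonym table; real expansion data is proprietary.
SYNONYMS = {
    "restaurant": ["restaurants", "cafe", "bistro", "eatery"],
    "cheap": ["inexpensive", "affordable", "budget"],
}

def expand_query(query: str) -> list[str]:
    """Fan a short user query out into dozens of related terms,
    mirroring the in-memory expansion Dean describes."""
    terms = []
    for word in query.lower().split():
        terms.append(word)
        terms.extend(SYNONYMS.get(word, []))
    return terms
```

A two-word query like "cheap restaurant" fans out to nine terms here; with disk seeks per term, that fan-out would have been prohibitively expensive.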

That change pushed Search toward intent and semantic matching years before LLMs. AI Mode and Google’s other AI experiences continue that shift toward meaning-based retrieval, enabled by better systems and more compute.

Freshness as a core advantage. Dean said one of Search’s biggest transformations was update speed. Early systems refreshed pages as rarely as once a month. Over time, Google built infrastructure that can update pages in under a minute. Dean said:

  • “In the early days of Google, we were growing the index quite extensively. We were growing the update rate of the index. So the update rate actually is the parameter that changed the most.”

That improved results for news queries and affected the main search experience. Users expect current information, and the system is designed to deliver it. Dean said:

  • “If you’ve got last month’s news index, it’s not actually that useful.”

Google uses systems to decide how often to crawl a page, balancing how likely it is to change with how valuable the latest version is. Even pages that change infrequently may be crawled often if they’re important enough. Dean said:

  • “There’s a whole … system behind the scenes that’s trying to decide update rates and importance of the pages. So, even if the update rate seems low, you might still want to recrawl important pages quite often because the likelihood they change might be low, but the value of having updated is high.”
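The tradeoff Dean describes reduces to an expected-value calculation. A toy scorer for illustration only, not Google's scheduler:

```python
def recrawl_priority(change_probability: float, page_value: float) -> float:
    """Expected value of a recrawl: the chance the page changed times
    the value of holding the fresh copy. Real systems weigh many more
    signals; this shows only the tradeoff Dean describes."""
    return change_probability * page_value
```

This captures his point directly: a rarely changing but high-value page (say, 0.05 × 100) still outranks a churning low-value page (0.9 × 1) in the crawl queue.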

Why we care. AI answers don’t bypass ranking, crawl prioritization, or relevance signals. They depend on them. Eligibility, quality, and freshness still determine which pages are retrieved and narrowed. LLMs change how content is synthesized and presented — but the competition to enter the underlying candidate set remains a search problem.

The interview. Owning the AI Pareto Frontier — Jeff Dean


Cloudflare’s Markdown for Agents AI feature has SEOs on alert

13 February 2026 at 22:25

Cloudflare yesterday announced its new Markdown for Agents feature, which serves machine-friendly versions of web content alongside traditional human-facing pages.

  • Cloudflare described the update as a response to the rise of AI crawlers and agentic browsing.
  • When a client requests text/markdown, Cloudflare fetches the HTML from the origin server, converts it at the edge, and returns a Markdown version.
  • The response also includes a token estimate header intended to help developers manage context windows.
  • Early reactions focused on the efficiency gains, as well as the broader implications of serving alternate representations of web content.

What’s happening. Cloudflare, which powers roughly 20% of the web, said Markdown for Agents uses standard HTTP content negotiation. If a client sends an Accept: text/markdown header, Cloudflare converts the HTML response on the fly and returns Markdown. The response includes Vary: accept, so caches store separate variants.

  • Cloudflare positioned the opt-in feature as part of a shift in how content is discovered and consumed, with AI crawlers and agents benefiting from structured, lower-overhead text.
  • Markdown can cut token usage by up to 80% compared to HTML, Cloudflare said.
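The negotiation flow Cloudflare describes can be sketched in a few lines. `html_to_markdown` is a placeholder converter, not Cloudflare's implementation:

```python
import re

def html_to_markdown(html: str) -> str:
    """Minimal stand-in converter: strip tags. A real converter
    preserves headings, links, and list structure."""
    return re.sub(r"<[^>]+>", "", html).strip()

def negotiate(accept_header: str, html: str):
    """Edge-side content negotiation as described: if the client asks
    for text/markdown, convert the origin's HTML; either way, set
    Vary: accept so caches store the variants separately."""
    if "text/markdown" in accept_header:
        return html_to_markdown(html), {
            "Content-Type": "text/markdown", "Vary": "accept"}
    return html, {"Content-Type": "text/html", "Vary": "accept"}
```

The `Vary: accept` header on both branches is what keeps a cache from serving the Markdown variant to a browser, or vice versa.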

Security concern. SEO consultant David McSweeney said Cloudflare’s Markdown for Agents feature could make AI cloaking trivial because the Accept: text/markdown header is forwarded to origin servers, effectively signaling that the request is from an AI agent.

  • A standard request returns normal content, while a Markdown request can trigger a different HTML response that Cloudflare then converts and delivers to the AI, McSweeney showed on LinkedIn.
  • The concern: sites could inject hidden instructions, altered product data, or other machine-only content, creating a “shadow web” for bots unless the header is stripped before reaching the origin.
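The mechanics of the concern are simple to demonstrate: because the header reaches the origin, a site can branch on it. A sketch of the behavior McSweeney warns about (an illustration of the risk, not a recommendation):

```python
def origin_response(request_headers: dict) -> str:
    """Because the Accept header is forwarded to the origin, a site
    can return different HTML when the requester is likely an AI
    agent. This is the cloaking risk, shown deliberately."""
    accept = request_headers.get("Accept", "")
    if "text/markdown" in accept:
        return "<p>Machine-only claims a human auditor may never see.</p>"
    return "<p>The page humans see.</p>"
```

Stripping or normalizing the header before it reaches the origin closes this branch, which is the mitigation the critics point to.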

Google and Bing’s markdown smackdown. Recent comments from Google and Microsoft representatives discourage publishers from creating separate markdown pages for large language models. Google’s John Mueller said:

  • “In my POV, LLMs have trained on – read & parsed – normal web pages since the beginning, it seems a given that they have no problems dealing with HTML. Why would they want to see a page that no user sees? And, if they check for equivalence, why not use HTML?”

And Microsoft’s Fabrice Canel said:

  • “Really want to double crawl load? We’ll crawl anyway to check similarity. Non-user versions (crawlable AJAX and like) are often neglected, broken. Humans eyes help fixing people and bot-viewed content. We like Schema in pages. AI makes us great at understanding web pages. Less is more in SEO !”
  • Cloudflare’s feature doesn’t create a second URL. However, it generates different representations based on request headers.

The case against markdown. Technical SEO consultant Jono Alderson said that once a machine-specific representation exists, platforms must decide whether to trust it, verify it against the human-facing version, or ignore it:

  • “When you flatten a page into markdown, you don’t just remove clutter. You remove judgment, and you remove context.”
  • “The moment you publish a machine-only representation of a page, you’ve created a second candidate version of reality. It doesn’t matter if you promise it’s generated from the same source or swear that it’s ‘the same content’. From the outside, a system now sees two representations and has to decide which one actually reflects the page.”

Dig deeper. Why LLM-only pages aren’t the answer to AI search

Why we care. Cloudflare’s move could make AI ingestion cheaper and cleaner. But could it be considered cloaking if you’re serving different content to humans and crawlers? To be continued…
