
Airbnb says traffic from AI chatbots converts better than Google

17 February 2026 at 23:54

Traffic from AI chatbots converts at a higher rate than traffic from Google, according to Airbnb CEO Brian Chesky. He shared this tidbit on the company’s Q4 2025 earnings call:

  • “And what we see is that traffic that comes from chatbots convert at a higher rate than traffic that comes from Google,” Chesky said on Feb. 12.

Yes, but. He didn’t share specific conversion rates, and the company didn’t quantify chatbot traffic volume. Still, for Airbnb, the early data suggests visitors arriving via AI chatbots may be further along in the booking process than those coming from traditional Google searches.

  • Airbnb also didn’t specify which chatbots are driving traffic. Chesky referenced OpenAI’s ChatGPT, Google’s Gemini, Anthropic’s Claude, and others in broader remarks about model availability.

Why we care. AI assistants are emerging as a top-of-funnel discovery layer. The traffic they send may convert better than clicks from traditional search, which aligns with past claims by Google and Microsoft that AI will drive more qualified traffic at lower volume.

AI search ambitions. Chesky described chatbots as “very similar to search” and positioned them as top-of-funnel discovery engines.

  • “I think these chatbot platforms are gonna be very similar to search. Gonna be really good top-of-funnel discoveries,” he said.

Rather than viewing them as disintermediators, Airbnb sees them as acquisition partners.

  • “We think they are gonna be positive for Airbnb,” Chesky added.

Chesky described the long-term goal as building an “AI-native experience” where the app “does not just search for you. It knows you”:

  • “So AI search is live to a very small percent of traffic right now. We are doing a lot of experimentation. The way we do things with AI is much more rapid iteration, not big launches. And over time, we are gonna be experimenting with making AI search more conversational, integrating it into more than trip, and, eventually, we will be looking at sponsor listings as result of that. But we want to first nail AI search.”

AI inside Airbnb. Airbnb isn’t just benefiting from external AI platforms. It’s embedding AI into its operations.

  • Its in-house AI customer service agent now resolves nearly one-third of North American support tickets without a human, according to Chesky. The tool is English-only for now but is slated for global, multilingual rollout, including voice support.
  • Chesky said the goal is for AI to handle “significantly more than 30%” of tickets within a year.
  • Airbnb is also testing AI-powered conversational search in its app. The feature is live for a small percentage of users and is being iterated quickly rather than launched as a major product release.

Sponsored listings on hold for now. Airbnb has long faced questions about launching sponsored listings. On the call, Chesky said traditional ad units may not translate directly into conversational AI environments. The company is prioritizing AI search before designing sponsored placements in that format.

Airbnb’s search shift. Airbnb began moving its budget toward brand marketing just before the rise of generative AI and AI-powered search, betting on broader campaigns and slashing its search marketing spend.


Google’s Jeff Dean: AI Search relies on classic ranking and retrieval

17 February 2026 at 21:52

Jeff Dean says Google’s AI Search still works like classic Search: narrow the web to relevant pages, rank them, then let a model generate the answer.

In an interview on Latent Space: The AI Engineer Podcast, Google’s chief AI scientist explained how Google’s AI systems work and how much they rely on traditional search infrastructure.

The architecture: filter first, reason last. Visibility still depends on clearing ranking thresholds. Content must enter the broad candidate pool, then survive deeper reranking before it can be used in an AI-generated response. Put simply, AI doesn’t replace ranking. It sits on top of it.

Dean said an LLM-powered system doesn’t read the entire web at once. It starts with Google’s full index, then uses lightweight methods to identify a large candidate pool — tens of thousands of documents. Dean said:

  • “You identify a subset of them that are relevant with very lightweight kinds of methods. You’re down to like 30,000 documents or something. And then you gradually refine that to apply more and more sophisticated algorithms and more and more sophisticated sort of signals of various kinds in order to get down to ultimately what you show, which is the final 10 results or 10 results plus other kinds of information.”

Stronger ranking systems narrow that set further. Only after multiple filtering rounds does the most capable model analyze a much smaller group of documents and generate an answer. Dean said:

  • “And I think an LLM-based system is not going to be that dissimilar, right? You’re going to attend to trillions of tokens, but you’re going to want to identify what are the 30,000-ish documents that are with the maybe 30 million interesting tokens. And then how do you go from that into what are the 117 documents I really should be paying attention to in order to carry out the tasks that the user has asked me to do?”

What looks like attending to trillions of tokens is, in practice, a staged pipeline: retrieve, rerank, synthesize (sketched in code below). Dean said:

  • “Google search gives you … not the illusion, but you are searching the internet, but you’re finding a very small subset of things that are relevant.”
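
For the technically curious, here is a minimal sketch of that staged pipeline in Python. Every function, score, and cutoff in it (including the 30,000-document pool and the 100 finalists) is an illustrative placeholder, not Google’s actual system.

```python
def cheap_score(query: str, doc: str) -> float:
    """Stage 1 signal: lightweight term overlap, cheap enough to run at index scale."""
    q_terms = set(query.lower().split())
    return len(q_terms & set(doc.lower().split())) / max(len(q_terms), 1)

def expensive_score(query: str, doc: str) -> float:
    """Stage 2 stand-in for the 'more sophisticated algorithms' Dean mentions."""
    return cheap_score(query, doc) * (len(doc) ** 0.5)  # toy proxy only

def answer_query(query: str, index: list[str], generate) -> str:
    # Stage 1: lightweight filtering narrows the full index to a large
    # candidate pool (Dean's "like 30,000 documents or something").
    pool = sorted(index, key=lambda d: cheap_score(query, d), reverse=True)[:30_000]
    # Stage 2: progressively more expensive signals cut the pool down to
    # the handful of documents worth the big model's attention.
    finalists = sorted(pool, key=lambda d: expensive_score(query, d), reverse=True)[:100]
    # Stage 3: only now does the most capable model read the survivors
    # and synthesize the answer.
    return generate(query, finalists)
```

The ordering is the point: the expensive model never reads the whole index, only what survives the cheap filters.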

Matching: from keywords to meaning. Nothing new here, but we heard another reminder that covering a topic clearly and comprehensively matters more than repeating exact-match phrases.

Dean explained how LLM-based representations changed how Google matches queries to content.

Older systems relied more on exact word overlap. With LLM representations, Google can move beyond the idea that particular words must appear on the page and instead evaluate whether a page — or even a paragraph — is topically relevant to a query. Dean said:

  • “Going to an LLM-based representation of text and words and so on enables you to get out of the explicit hard notion of particular words having to be on the page. But really getting at the notion of this topic of this page or this page paragraph is highly relevant to this query.”

That shift lets Search connect queries to answers even when wording differs. Relevance increasingly centers on intent and subject matter, not just keyword presence.
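
A toy contrast makes the shift visible. The three-dimensional “embeddings” below are hand-made for illustration; real systems learn dense vectors with far more dimensions.

```python
import math

def keyword_match(query: str, text: str) -> bool:
    """Old-style matching: every query term must literally appear in the text."""
    return all(term in text.lower().split() for term in query.lower().split())

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

# Hand-made vectors: topically similar texts get nearby coordinates.
query_text, query_vec = "cheap flights to tokyo", [0.9, 0.1, 0.4]
pages = {
    "low-cost airfare for japan": [0.85, 0.15, 0.45],  # same topic, zero shared words
    "tokyo drift movie review":   [0.20, 0.90, 0.10],  # shares a word, different topic
}

for text, vec in pages.items():
    print(f"{text!r}: keyword={keyword_match(query_text, text)}, "
          f"similarity={cosine(query_vec, vec):.2f}")
# 'low-cost airfare for japan': keyword=False, similarity=1.00
# 'tokyo drift movie review':   keyword=False, similarity=0.34
```

The zero-overlap page scores as a near-perfect topical match, which is exactly the move away from “particular words having to be on the page.”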

Query expansion didn’t start with AI. Dean pointed to 2001, when Google moved its index into memory across enough machines to make query expansion cheap and fast. Dean said:

  • “One of the things that really happened in 2001 was we were sort of working to scale the system in multiple dimensions. So one is we wanted to make our index bigger, so we could retrieve from a larger index, which always helps your quality in general. Because if you don’t have the page in your index, you’re going to not do well.
  • “And then we also needed to scale our capacity because we were, our traffic was growing quite extensively. So we had a sharded system where you have more and more shards as the index grows, you have like 30 shards. Then if you want to double the index size, you make 60 shards so that you can bound the latency by which you respond for any particular user query. And then as traffic grows, you add more and more replicas of each of those.
  • “And so we eventually did the math that realized that in a data center where we had say 60 shards and 20 copies of each shard, we now had 1,200 machines with disks. And we did the math and we’re like, Hey, one copy of that index would actually fit in memory across 1,200 machines. So in 2001, we … put our entire index in memory and what that enabled from a quality perspective was amazing.”

Before that, adding terms was expensive because it required disk access. Once the index lived in memory, Google could expand a short query into dozens of related terms — adding synonyms and variations to better capture meaning. Dean said:

  • “Before, you had to be really careful about how many different terms you looked at for a query, because every one of them would involve a disk seek.
  • “Once you have the whole index in memory, it’s totally fine to have 50 terms you throw into the query from the user’s original three- or four-word query. Because now you can add synonyms like restaurant and restaurants and cafe and bistro and all these things.
  • “And you can suddenly start … getting at the meaning of the word as opposed to the exact semantic form the user typed in. And that was … 2001, very much pre-LLM, but really it was about softening the strict definition of what the user typed in order to get at the meaning.”

That change pushed Search toward intent and semantic matching years before LLMs. AI Mode (and Google’s other AI experiences) continues that shift toward meaning-based retrieval, enabled by better systems and more compute.
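
Here is what that expansion looks like in miniature. The synonym table is invented for illustration; Google’s real expansion systems are learned, not hand-written.

```python
# Invented synonym table; real term expansion is learned at scale.
SYNONYMS = {
    "restaurant": ["restaurants", "cafe", "bistro", "diner", "eatery"],
    "cheap": ["inexpensive", "affordable", "budget", "low-cost"],
}

def expand_query(query: str) -> list[str]:
    """Soften the strict definition of what the user typed to get at the meaning."""
    expanded: list[str] = []
    for term in query.lower().split():
        expanded.append(term)
        # With a disk-backed index, every extra term cost a disk seek; once
        # the index lives in memory, dozens of extra lookups are nearly free.
        expanded.extend(SYNONYMS.get(term, []))
    return expanded

print(expand_query("cheap restaurant nearby"))
# ['cheap', 'inexpensive', 'affordable', 'budget', 'low-cost',
#  'restaurant', 'restaurants', 'cafe', 'bistro', 'diner', 'eatery', 'nearby']
```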

Freshness as a core advantage. Dean said one of Search’s biggest transformations was update speed. Early systems refreshed pages as rarely as once a month. Over time, Google built infrastructure that can update pages in under a minute. Dean said:

  • “In the early days of Google, we were growing the index quite extensively. We were growing the update rate of the index. So the update rate actually is the parameter that changed the most.”

That improved results for news queries and affected the main search experience. Users expect current information, and the system is designed to deliver it. Dean said:

  • “If you’ve got last month’s news index, it’s not actually that useful.”

Google uses systems to decide how often to crawl a page, balancing how likely it is to change with how valuable the latest version is. Even pages that change infrequently may be crawled often if they’re important enough. Dean said:

  • “There’s a whole … system behind the scenes that’s trying to decide update rates and importance of the pages. So, even if the update rate seems low, you might still want to recrawl important pages quite often because the likelihood they change might be low, but the value of having updated is high.”
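
In expected-value terms the trade-off is simple to sketch. The fields and numbers below are invented; they only show why a high-value page can earn frequent recrawls even when it rarely changes.

```python
from dataclasses import dataclass

@dataclass
class Page:
    url: str
    change_prob: float      # estimated chance the page changed since the last crawl
    freshness_value: float  # how much a stale copy would hurt result quality

def recrawl_priority(page: Page) -> float:
    # Expected payoff of a recrawl: likelihood of change times the value
    # of catching that change.
    return page.change_prob * page.freshness_value

pages = [
    Page("news-homepage", change_prob=0.99, freshness_value=0.9),
    Page("major-gov-advisory", change_prob=0.05, freshness_value=10.0),
    Page("dormant-blog-post", change_prob=0.05, freshness_value=0.1),
]
for p in sorted(pages, key=recrawl_priority, reverse=True):
    print(f"{p.url}: {recrawl_priority(p):.3f}")
# The advisory far outranks the dormant post despite identical change odds.
```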

Why we care. AI answers don’t bypass ranking, crawl prioritization, or relevance signals. They depend on them. Eligibility, quality, and freshness still determine which pages are retrieved and narrowed. LLMs change how content is synthesized and presented — but the competition to enter the underlying candidate set remains a search problem.

The interview. Owning the AI Pareto Frontier — Jeff Dean

Behind the AI interface, a staged system narrows tens of thousands of documents to a few, showing that visibility hinges on classic signals.

Cloudflare’s Markdown for Agents AI feature has SEOs on alert

13 February 2026 at 22:25

Cloudflare yesterday announced its new Markdown for Agents feature, which serves machine-friendly versions of web content alongside traditional human-facing pages.

  • Cloudflare described the update as a response to the rise of AI crawlers and agentic browsing.
  • When a client requests text/markdown, Cloudflare fetches the HTML from the origin server, converts it at the edge, and returns a Markdown version.
  • The response also includes a token estimate header intended to help developers manage context windows.
  • Early reactions focused on the efficiency gains, as well as the broader implications of serving alternate representations of web content.

What’s happening. Cloudflare, which powers roughly 20% of the web, said Markdown for Agents uses standard HTTP content negotiation. If a client sends an Accept: text/markdown header, Cloudflare converts the HTML response on the fly and returns Markdown. The response includes Vary: accept, so caches store separate variants.

  • Cloudflare positioned the opt-in feature as part of a shift in how content is discovered and consumed, with AI crawlers and agents benefiting from structured, lower-overhead text.
  • Markdown can cut token usage by up to 80% compared to HTML, Cloudflare said.
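
Here is what that negotiation looks like from the client side, as a sketch. The URL is a placeholder, and because the article doesn’t name the token-estimate header, the code prints all response headers rather than guess at it.

```python
import urllib.request

URL = "https://example.com/article"  # placeholder for an opted-in, Cloudflare-served page

def fetch(accept: str):
    """Request the same URL with a given Accept header."""
    req = urllib.request.Request(URL, headers={"Accept": accept})
    with urllib.request.urlopen(req) as resp:
        return dict(resp.headers), resp.read()

html_headers, _ = fetch("text/html")          # the normal, human-facing HTML
md_headers, md_body = fetch("text/markdown")  # opted-in sites return Markdown here

print(md_headers.get("Content-Type"))   # expect a text/markdown variant
print(md_headers.get("Vary"))           # "accept", so caches store both versions
for name, value in md_headers.items():  # look for the token-estimate header here
    print(name, value)
```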

Security concern. SEO consultant David McSweeney said Cloudflare’s Markdown for Agents feature could make AI cloaking trivial because the Accept: text/markdown header is forwarded to origin servers, effectively signaling that the request is from an AI agent.

  • A standard request returns normal content, while a Markdown request can trigger a different HTML response that Cloudflare then converts and delivers to the AI, McSweeney showed on LinkedIn.
  • The concern: sites could inject hidden instructions, altered product data, or other machine-only content, creating a “shadow web” for bots unless the header is stripped before reaching the origin.
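
To see why that’s trivial, here is a deliberately bare-bones origin server, written as plain WSGI with hypothetical content, that branches on the forwarded header, the pattern McSweeney demonstrated.

```python
from wsgiref.simple_server import make_server

def app(environ, start_response):
    accept = environ.get("HTTP_ACCEPT", "")
    if "text/markdown" in accept:
        # Agent-only branch: Cloudflare converts this HTML to Markdown at
        # the edge, and no human visitor ever sees it. Nothing forces it
        # to match the human-facing page below.
        body = b"<html><body>Machine-only version of reality.</body></html>"
    else:
        body = b"<html><body>What human visitors see.</body></html>"
    start_response("200 OK", [("Content-Type", "text/html")])
    return [body]

if __name__ == "__main__":
    make_server("", 8000, app).serve_forever()
```

Stripping or normalizing the Accept header before it reaches the origin, as noted above, would close off that signal.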

Google and Bing’s markdown smackdown. Recent comments from Google and Microsoft representatives discourage publishers from creating separate markdown pages for large language models. Google’s John Mueller said:

  • “In my POV, LLMs have trained on – read & parsed – normal web pages since the beginning, it seems a given that they have no problems dealing with HTML. Why would they want to see a page that no user sees? And, if they check for equivalence, why not use HTML?”

And Microsoft’s Fabrice Canel said:

  • “Really want to double crawl load? We’ll crawl anyway to check similarity. Non-user versions (crawlable AJAX and like) are often neglected, broken. Humans eyes help fixing people and bot-viewed content. We like Schema in pages. AI makes us great at understanding web pages. Less is more in SEO !”
  • Cloudflare’s feature doesn’t create a second URL, but it does generate different representations of the same page based on request headers.

The case against markdown. Technical SEO consultant Jono Alderson said that once a machine-specific representation exists, platforms must decide whether to trust it, verify it against the human-facing version, or ignore it:

  • “When you flatten a page into markdown, you don’t just remove clutter. You remove judgment, and you remove context.”
  • “The moment you publish a machine-only representation of a page, you’ve created a second candidate version of reality. It doesn’t matter if you promise it’s generated from the same source or swear that it’s ‘the same content’. From the outside, a system now sees two representations and has to decide which one actually reflects the page.”

Dig deeper. Why LLM-only pages aren’t the answer to AI search

Why we care. Cloudflare’s move could make AI ingestion cheaper and cleaner. But could it be considered cloaking if you’re serving different content to humans and crawlers? To be continued…
