AI citations favor listicles, articles, product pages: Study

AI search citations favor a small set of formats. Listicles, articles, and product pages drive over half of all mentions across major LLMs, according to new Wix Studio AI Search Lab research analyzing 75,000 AI answers and more than 1 million citations across ChatGPT, Google AI Mode, and Perplexity.

The findings. Listicles led at 21.9% of citations, followed by articles (16.7%) and product pages (13.7%). Together, these three formats made up 52% of all AI citations.

  • Articles dominated informational queries, cited 2.7x more than other formats.
  • Listicles captured 40% of commercial-intent citations, nearly double any other type.

Why intent wins. Query intent — not industry or model — most strongly predicts which content gets cited. This pattern held across industries, from SaaS to health.

  • Informational queries skewed heavily toward articles (45.5%) and listicles (21.7%).
  • Commercial queries were led by listicles (40.9%).
  • Transactional and navigational queries favored product and category pages (around 40% combined).

Why we care. This research suggests mapping content types to user goals rather than simply producing more content. Articles educate, listicles drive comparison, and product pages convert. Aligning content format with user intent could help you capture more AI citations and increase visibility.

Not all listicles perform equally. Third-party listicles accounted for 80.9% of citations in professional services, compared to 19.1% for self-promotional lists. That seems to indicate LLMs prefer neutral, editorial comparisons over brand-led rankings.

Model differences. All models favored listicles, but diverged after that.

  • ChatGPT leaned heavily into articles and informational content.
  • Google AI Mode showed the most balanced distribution.
  • Perplexity stood out, with 17% of citations coming from discussions like Reddit and forums.

Industry patterns. Content preferences shifted slightly by vertical:

  • SaaS and professional services over-indexed on listicles.
  • Health favored authoritative articles.
  • Ecommerce spread citations across listicles, articles, and category pages.
  • Home repair showed the most even distribution across formats.

The research. The content types most cited by LLMs

ChatGPT citations favor a small group of domains: Study

AI citations in ChatGPT are highly concentrated: roughly 30 domains capture 67% of citations within a topic.

  • That’s according to Kevin Indig’s latest study, which also found that broad topical coverage, long-form pages, and cluster-based models outperform the old “one keyword, one page” approach.

The details. Citation visibility wasn’t evenly distributed. In product comparison topics, the top 10 domains accounted for 46% of citations; the top 30, 67%.

  • AI visibility was slightly less concentrated than classic organic search, but still highly centralized.
  • Indig’s conclusion: you’re effectively shut out unless you build enough authority to win one of a limited number of citation “seats.”

What changed. Ranking No. 1 in Google still matters, but it’s not enough. Of pages ranking No. 1, 43.2% were cited by ChatGPT — 3.5x more often than pages beyond the top 20.

  • ChatGPT retrieved far more pages than it cited: per AirOps, it retrieved ~6x as many pages as it cited, and 85% of the retrieved pages were never cited.
  • A third of the cited pages came from fan-out queries, and 95% of those had zero search volume.

Why we care. Publishing the “best answer” for one keyword isn’t enough. ChatGPT rewards domains that cover a topic from multiple angles, not pages optimized for isolated terms. And discovery often happens outside the keyword universe you track.

The patterns. Longer pages generally earned more citations, with variation by vertical. The biggest lift appeared between 5,000 and 10,000 characters. Pages above 20,000 characters averaged 10.18 citations vs. 2.39 for pages under 500.

  • This pattern broke in Finance, where shorter, denser pages often outperformed long guides. In Education, Crypto, and Product Analytics, longer pages continued to gain citation value with little drop-off.
  • 58% of cited URLs were cited only once. Pages that recurred across prompts were usually category roundups, comparison pages, or broad guides answering multiple related questions.

On-page behavior. ChatGPT cited heavily from the upper part of a page. The 10% to 20% section performed best across all industries.

  • The bottom 10% earned just 2.4% to 4.4% of citations. Conclusions were largely ignored.
  • Finance had the steepest ramp, with 43.7% of citations in the first 30%.
  • Healthcare and HR Tech were flatter.
  • Education peaked later, around 30% to 40%.

About the data. Indig analyzed ~98,000 citation rows from ~1.2 million ChatGPT responses (Gauge), isolating seven verticals. The study used structural page parsing, positional mapping, and entity and sentiment analysis to identify which pages earned citations and where on the page they came from.

The study. The science of how AI picks its sources

Bing Webmaster Tools now links AI queries to cited pages

Microsoft added query-to-page mapping to its AI Performance report in Bing Webmaster Tools, letting you connect AI grounding queries directly to cited URLs.

Why we care. The original dashboard showed queries and pages separately, limiting optimization. Now you can tie specific AI-triggering queries to the exact cited pages, so you can prioritize updates based on real AI-driven demand — not guesses.

The details. The new Grounding Query–Page Mapping feature links two existing views in the AI Performance dashboard:

  • Click a grounding query to see which pages are cited
  • Click a page to see which grounding queries drive its citations
  • Mapping is many-to-many: one query can map to multiple pages, and vice versa
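In data terms, the two dashboard views are inverse projections of one many-to-many relation. A minimal Python sketch of deriving the page view from the query view (the queries and URLs here are hypothetical, and this is not Microsoft's API — just the underlying idea):

```python
from collections import defaultdict

# Hypothetical grounding data: each AI query maps to the pages
# reported as cited for it. One page can appear under many queries.
query_to_pages = {
    "best crm for startups": ["/blog/crm-guide", "/products/crm"],
    "crm pricing comparison": ["/products/crm", "/pricing"],
}

def invert(mapping):
    """Build the reverse view: page -> queries whose answers cited it."""
    page_to_queries = defaultdict(list)
    for query, pages in mapping.items():
        for page in pages:
            page_to_queries[page].append(query)
    return dict(page_to_queries)

page_to_queries = invert(query_to_pages)
print(page_to_queries["/products/crm"])
# ['best crm for startups', 'crm pricing comparison']
```

A page that shows up under many grounding queries (like /products/crm above) is the kind of URL the new mapping surfaces as a priority for updates.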

Catch up quick. Microsoft launched the AI Performance report in Bing Webmaster Tools in February as its first GEO-focused dashboard. It:

  • Tracks where and how often your content is cited in AI answers across Bing, Copilot, and partners.
  • Shows grounding queries, cited URLs, and visibility trends over time.
  • Focuses on citation visibility — not clicks, rankings, or traffic.

What they’re saying. Microsoft said the update responds to “strong positive customer feedback and numerous requests.”

The announcement. Microsoft announced the query-to-page mapping update in a blog post: The AI Performance dashboard: Your view into where your brand appears across the AI web

Google Business Profile tests AI-generated replies to reviews

Google is testing AI-generated review replies in Google Business Profile.

Why we care. Responding to reviews can impact conversions and trust. But generic AI replies could be risky and erode trust, especially on negative reviews where authenticity matters most. Response quality matters more than whether a business replies to reviews.

What it looks like. Here’s a screenshot:

The details. Google appears to be rolling out a limited test of “Reply to reviews with AI” inside Google Business Profile.

  • The feature generates suggested responses to customer reviews.
  • Users can review, edit, and manually submit replies.
  • Availability is inconsistent across accounts and reviews.
  • The feature has been spotted in the U.S., Brazil, and India, but not widely in Europe.

Early behavior. Some users report prompts focused on older, unanswered negative reviews.

  • In at least one test, users could trigger AI responses in bulk.
  • There are conflicting reports on automation — some users say bulk responses still require review; others report fully automated replies can be published without edits.

First seen. The feature was first shared on LinkedIn by Chandan Mishra, a freelance local SEO specialist, and amplified by Darren Shaw, founder of Whitespark.

Google confirms AI headline rewrites test in Search results

Google is testing AI-generated headline rewrites in Search results, describing it as a small, narrow experiment for now.

What’s happening. Google confirmed to The Verge (subscription required) that it’s testing AI-generated titles in traditional Search results, not just Discover.

  • The test is “small” and “narrow,” and not approved for broader rollout.
  • It impacts news sites but isn’t limited to them.
  • The goal is to better match titles to queries and improve engagement, Google said.

One example showed Google replacing original headlines with shorter or reworded versions, sometimes changing tone or intent (e.g., reducing “I used the ‘cheat on everything’ AI tool and it didn’t help me cheat on anything” to “‘Cheat on everything’ AI tool”).

Why we care. Google Search is already sending fewer clicks. Now you also have to contend with Google generating entirely new headlines with AI, risking changes to meaning, brand voice, and click-through rates.

Dig deeper. Google changed 76% of title tags in Q1 2025 – Here’s what that means

What they’re saying. Sean Hollister, senior editor at The Verge, wrote:

  • “This is like a bookstore ripping the covers off the books it puts on display and changing their titles. We spend a lot of time trying to write headlines that are true, interesting, fun, and worthy of your attention without resorting to clickbait, but Google seems to believe we don’t have an inherent right to market our own work that way.”

Title links. According to the Google Search Central section on title links, originally published in 2021:

Google’s generation of title links on the Google Search results page is completely automated and takes into account both the content of a page and references to it that appear on the web. The goal of the title link is to best represent and describe each result.

Google said it uses these sources to “automatically determine title links”:

  • Content in <title> elements
  • Main visual title shown on the page
  • Heading elements, such as <h1> elements
  • Content in og:title meta tags
  • Other content that’s large and prominent through the use of style treatments
  • Other text contained in the page
  • Anchor text on the page
  • Text within links that point to the page
  • WebSite structured data
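Several of those signals can live on a single page. For illustration only, here is a hypothetical HTML fragment (invented site and values) showing where the main ones sit:

```html
<html>
<head>
  <!-- Content in <title> elements -->
  <title>Acme CRM Review: Pricing, Features, Verdict</title>
  <!-- Content in og:title meta tags -->
  <meta property="og:title" content="Acme CRM Review (2025)">
  <!-- WebSite structured data -->
  <script type="application/ld+json">
  {"@context": "https://schema.org", "@type": "WebSite", "name": "Example Blog"}
  </script>
</head>
<body>
  <!-- Main visual title / heading elements such as <h1> -->
  <h1>Acme CRM Review: Is It Worth It?</h1>
</body>
</html>
```

When these signals disagree, Google’s documentation says it picks whichever best represents the result, which is why title links sometimes differ from the `<title>` element.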

What to watch. Google called this one of many routine experiments, but that’s no guarantee it stays small. The Verge noted a similar “experiment” in Discover later became a full feature.

  • Any future launch may not rely on generative AI, but Google didn’t explain how that would work.

Reaction. After seeing this news, Louisa Frahm, SEO director at ESPN, wrote on LinkedIn:

  • “After 10+ years in news SEO, I’ve come to find that a headline is the most prominent element for attracting readers in timely windows, to provide a targeted synopsis that elevates your brand voice. If that vision gets altered and facts are misrepresented, long-term audience trust will be compromised.”

Cloudflare CEO: Bots could overtake human web usage by 2027

Bot traffic could exceed human traffic on the web by 2027, according to Cloudflare CEO Matthew Prince, as agent-driven browsing explodes alongside generative AI adoption.

  • Prince made the prediction at SXSW, warning that bots are already reshaping how the internet is used — and how it’s monetized.

Why we care. Search is shifting from human clicks to AI-generated answers. If bots become the web’s primary “users,” you’ll need to reshape your strategy to ensure AI systems can access, trust, and use your content.

The details. Prince said AI agents generate far more web activity than humans because they gather information differently. A person shopping might visit five sites. An AI agent could hit thousands.

  • “If a human were doing a task… you might go to five websites. Your agent… will often go to a thousand times the number of sites.”
  • “So it might go to 5,000 sites. And that’s real traffic, and that’s real load.”

He also noted the web’s baseline is shifting fast.

  • “For a long time, the internet was about 20% bot traffic.”
  • “We suspect that in 2027 the amount of bot traffic online will exceed the amount of human traffic.”

Prince said this growth isn’t spiking like COVID-era traffic. It’s rising steadily with no end in sight.

Between the lines. Prince compared AI to past shifts like mobile and social. The difference: users may no longer visit websites directly. Instead, they rely on AI interfaces that aggregate and answer.

  • “The business model of the internet was… create content, drive traffic, and then sell things… That was the business model.”
  • “That breaks down because… bots don’t click on ads.”
  • “Customers are trusting the output from the helpful robot. They’re not clicking through the footnotes.”

AI sandboxes. AI agents also change how computing works behind the scenes. Prince described a future where “sandboxes” — temporary environments for AI agents — spin up and shut down instantly, potentially millions of times per second.

  • “You can… as easily as you open a new tab in your browser… spin up new code which can then run and service the agents.”
  • “We think that there will be literally millions of times a second these sort of sandboxes… being created… and then torn back down.”

The result: sustained pressure on internet infrastructure.

  • “We’re seeing internet traffic grow and grow and grow. And we don’t see anything that’s going to slow it down or stop it.”

The business impact. Companies are already split on how to respond to AI agents. Prince pointed to diverging strategies across major retailers.

  • “There are three radically different strategies about how they are going to interact with the bots.”

At the core is a bigger risk: losing the customer relationship.

  • “The nature of bots is going to be that it disintermediates the relationship between you and your customer.”
  • “Agents… don’t care about brand.”

For publishers. Prince argued AI could both hurt and help media. While AI reduces direct traffic and breaks ad-based models, AI companies need unique, original data — especially local and hard-to-replicate information — and may pay for it.

  • “Traffic has always been a really bad proxy for value.”
  • “What they actually want is… unique local interesting information they can’t get elsewhere.”

He pointed to local media as an example.

  • “If you don’t have the Park Record, then you don’t get that information.”
  • “We may make more off licensing our content to AI companies than we do off digital advertising.”

For small businesses. Prince was more blunt. AI agents optimize for price, quality and efficiency — not brand loyalty or proximity.

  • “My bot doesn’t care.”
  • “My bot is going to figure out actually who is the best… and route that traffic.”

That could erode traditional advantages.

  • “The shortcuts of trust that small business had in the past… are going to be much more difficult.”
  • “The natural tendency of AI is towards that level of aggregation.”

What to watch. The next phase of the web will hinge on control and compensation. Prince said:

  • “There has to be some exchange of value.”
  • “We’ve got to figure out… what’s going to pay for it.”

Prince said the core question is still unresolved:

  • “What is the future business model of the internet?… I don’t know what it’s going to be, but it’s going to change.”

The SXSW interview. The Internet After Search
