

ChatGPT citations reward ranking and precision over length: Study

16 April 2026 at 21:02

ChatGPT citations favor pages that rank well, match the query in their headings, and stay tightly focused, according to an AirOps study of 16,851 queries. The top retrieval result was cited 58% of the time, and pages that answered the main query more narrowly outperformed broader, more comprehensive guides.

Why we care. The study points to a clear path for earning ChatGPT citations: win retrieval, mirror the query in your headings, and answer one question extremely well. In this dataset, focus mattered more than breadth.

The findings. Retrieval rank was the strongest signal. Pages in the top search position were cited 58.4% of the time, versus 14.2% for pages in position 10.

  • Heading relevance was the strongest on-page factor. Pages with the strongest heading-query match were cited 41.0% of the time, compared with roughly 30% for weaker matches (a rough scoring sketch follows this list).
  • Focused pages also beat comprehensive ones. Narrow, direct answers to the main query outperformed broad guides, undercutting the usual “ultimate guide” approach.
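
AirOps didn’t publish how it scored heading-query match, so treat the sketch below as a naive illustration only: the best word-overlap (Jaccard) score between the query and any single heading. The function and examples are hypothetical, not from the study.

```python
import re

def heading_query_overlap(query: str, headings: list[str]) -> float:
    """Best Jaccard word-overlap between the query and any one heading.

    A naive stand-in for "heading-query match"; the study's actual
    scoring method is unpublished.
    """
    def words(text: str) -> set[str]:
        return set(re.findall(r"[a-z0-9]+", text.lower()))

    q = words(query)
    best = 0.0
    for heading in headings:
        h = words(heading)
        if q and h:
            best = max(best, len(q & h) / len(q | h))
    return best

# A heading that mirrors the query scores far higher than a generic one.
query = "best running shoes for flat feet"
print(heading_query_overlap(query, ["Best running shoes for flat feet in 2026"]))  # 0.75
print(heading_query_overlap(query, ["The ultimate footwear guide"]))               # 0.0
```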

What drove ChatGPT citations. In this study, pages that won citations usually ranked well, used headings that closely matched the query, and stayed focused on answering it.

  • Structure helped, but only slightly: Pages with JSON-LD markup posted a 38.5% citation rate versus 32.0% for pages without it (a minimal markup sketch follows this list), and articles with 4 to 10 subheadings performed best.
  • Beyond a certain point, length hurt performance: Pages between 500 and 2,000 words performed best, but pages longer than 5,000 words were cited less often than pages under 500 words.
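
The study doesn’t say which schema.org types produced the JSON-LD lift. As a generic illustration only, here is a minimal Article object built in Python; every field value is a placeholder.

```python
import json

# Minimal schema.org Article markup. All values are placeholders;
# the study does not say which schema types drove the gap.
article = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "How ChatGPT chooses citations",            # hypothetical
    "datePublished": "2026-04-16",
    "dateModified": "2026-04-16",
    "author": {"@type": "Person", "name": "Jane Example"},  # hypothetical
}

# Embed the output in a <script type="application/ld+json"> tag
# in the page's <head>.
print(json.dumps(article, indent=2))
```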

Freshness helps, up to a point. Pages published 30 to 89 days earlier performed best, while pages newer than 30 days performed worse. This suggests new content may need time to build retrieval signals.

  • Pages more than 2 years old were cited less often, which suggests that content refreshes could help if you’re already ranking for the right queries.

About the data. AirOps said it scraped ChatGPT’s interface, not the API, and analyzed 50,553 responses generated from 16,851 unique queries run three times each. The dataset included 353,799 pages and more than 1.5 million fan-out detail rows across 10 verticals and four query types.

The study. The Fan-Out Effect: What Happens Between a Query and a Citation


March 2026 Google core update more volatile than December — here’s what changed

15 April 2026 at 21:48

The March 2026 Google core update drove far higher ranking volatility than the December 2025 core update. Nearly 80% of top-three results shifted, and almost one in four top-10 pages fell out of the top 100, according to SE Ranking data shared exclusively with Search Engine Land.

The data. Volatility increased across every ranking tier.

  • In the top 3, 79.5% of URLs changed positions, up from 66.8% in December. In the top 10, 90.7% shifted, compared to 83.1%.
  • Stability dropped sharply. Only 20.5% of top 3 URLs held their exact position, down from 33.1%. In the top 10, that fell to 9.3%, from 16.9%.
  • Churn intensified at the top. About 24.1% of pages ranking in the top 10 fell out of the top 100 entirely, versus 14.7% after the December update.

It’s (sort of) complicated. The March 2026 core update began rolling out a day after the March 2026 spam update completed. This complicated attribution, according to SE Ranking:

  • Based on historical patterns and the scale of movement, most volatility was likely driven by the core update, with the spam update amplifying disruption.
  • That overlap likely skews direct comparisons to December, though March still appeared more volatile.

More core update analysis. Separately, Aleyda Solis analyzed Sistrix data from March 26 to April 11 and found a consistent shift in where visibility concentrates. Rankings appeared to move from intermediary sites toward stronger destination sources. Website types gaining search visibility:

  • Official and institutional.
  • Specialist and niche.
  • Established brands.
  • Dominant platforms.

Losses were more common among aggregators, directories, and comparison-driven sites.

Winners and losers. Among the vertical shifts Solis highlighted:

  • Dictionary and language reference sites declined, while larger reference platforms and major destinations gained visibility.
  • Job aggregators like ZipRecruiter and Glassdoor lost ground, while employer sites and specialized platforms like USAJobs and Amazon.jobs surged.
  • Government and institutional domains, including Census.gov and BLS.gov, saw strong gains on fact-driven queries.
  • Travel and real estate visibility shifted away from broad discovery platforms toward stronger brands and primary destinations.
  • Health results were re-sorted. Broad consumer health sites declined, while clinical, research-driven, and specialist sources gained.
  • One exception: YouTube, itself a dominant platform, had the largest visibility loss in the dataset.

Why we care. The data suggests Google’s March 2026 core update raised the bar for ranking. Strong brands, owned data, and direct query value won. Intermediaries now look increasingly exposed.

Agentic engine optimization: Google AI director outlines new content playbook

15 April 2026 at 18:28

Addy Osmani, Google Cloud AI’s director of engineering, published an article April 11 on Agentic Engine Optimization (AEO). In it, Osmani said sites should restructure content for AI agents that fetch, parse, and act on pages differently than humans do.

He compared this AEO (not to be confused with Answer Engine Optimization) to SEO, but aimed at a different consumer: AI agents rather than human readers.

What is AEO. He defined it as the practice of structuring and serving technical content so AI agents can use it, not just render it. That includes discoverability, parsability, token efficiency, capability signaling, and access control.

The token problem. Osmani said long, bloated pages can be truncated, skipped, or chunked poorly by agents working within limited context windows, raising the odds of incomplete answers or hallucinated implementations.

How content needs to change. Token count is now a core optimization factor, Osmani said. His advice:

  • Keep quick starts under roughly 15,000 tokens, conceptual guides under 20,000, and individual API references under 25,000 when possible (a rough budget check follows this list).
  • Pages should front-load the answer within the first 500 tokens because agents have “limited patience for preamble.”
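
Osmani’s budgets are in tokens, and his article doesn’t pin down a tokenizer, so exact counts will vary. A minimal sketch, assuming the common rough heuristic of about four characters per token; the file path and page-type labels are hypothetical.

```python
# Rough token-budget check using the common ~4 characters-per-token
# heuristic; real counts vary by tokenizer, which Osmani's article
# leaves unspecified. Budgets are his suggested ceilings.
BUDGETS = {
    "quickstart": 15_000,
    "conceptual_guide": 20_000,
    "api_reference": 25_000,
}

def estimate_tokens(text: str) -> int:
    return max(1, len(text) // 4)

def check_budget(text: str, page_type: str) -> None:
    tokens = estimate_tokens(text)
    budget = BUDGETS[page_type]
    status = "OK" if tokens <= budget else "OVER"
    print(f"{page_type}: ~{tokens:,} tokens vs {budget:,} budget [{status}]")

# Hypothetical path; point this at any docs page you want to audit.
with open("docs/quickstart.md", encoding="utf-8") as f:
    check_budget(f.read(), "quickstart")
```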

Markdown over HTML. Osmani also pushed for serving clean markdown, exposing token counts, creating llms.txt as a discovery layer, and using skill.md or AGENTS.md files to help agents understand capabilities, constraints, and key docs before spending context budget on full pages (a minimal llms.txt sketch follows below).

  • He released an open-source audit tool, agentic-seo, to check for some of those signals.
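
Osmani’s article doesn’t include a full llms.txt example, so the sketch below follows the general shape of the llmstxt.org proposal: an H1 title, a one-line summary, then sections linking to markdown versions of key pages. Every name and URL is a placeholder.

```python
from pathlib import Path

# A minimal llms.txt in the shape of the llmstxt.org proposal: an H1
# title, a one-line blockquote summary, then sections linking to
# markdown versions of key pages. All names and URLs are placeholders.
LLMS_TXT = """\
# Example Project

> One-line, plain-language summary of what this project does.

## Docs

- [Quick start](https://example.com/docs/quickstart.md): install and first run
- [API reference](https://example.com/docs/api.md): endpoints and parameters

## Optional

- [Changelog](https://example.com/changelog.md): release history
"""

Path("llms.txt").write_text(LLMS_TXT, encoding="utf-8")
```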

Why we care. Osmani’s recommendations align with what many SEOs are already testing for AI retrieval: shorter, cleaner pages, clearer semantic signals, machine-readable formats, and content that gets to the point fast. These all affect whether your content appears in AI-driven responses.

Between the lines. To be clear, the type of AEO Osmani discussed in his article is unrelated to Google Search or organic search rankings. What his article highlights is that content may now need to work for two audiences at once: humans reading pages and agents extracting them.

The article. Agentic Engine Optimization (AEO)
