Google expands Personal Intelligence to AI Mode, Gemini, Chrome

Google is expanding Personal Intelligence across AI Mode, Gemini, and Chrome in the U.S., moving it beyond beta into broader consumer use.

Why we care. Personal Intelligence pushes Google further into fully personalized search, using first-party data like Gmail and Photos. That makes results harder to replicate, rank against, or track — especially in AI Mode, where outputs may vary based on user history, purchases, and behavior.

The details. Personal Intelligence now works across:

  • AI Mode in Google Search (available now in the U.S.)
  • Gemini app (rolling out to free users)
  • Gemini in Chrome (rolling out)

How it works. Users can connect apps like Gmail and Google Photos so Google can tailor responses using personal context. Examples Google shared include:

  • Shopping recommendations based on past purchases and brand preferences.
  • Tech troubleshooting using receipt data to identify exact devices.
  • Travel suggestions using flight details, timing, and past trips.
  • Personalized itineraries and local recommendations.
  • Hobby suggestions inferred from user interests.

Availability. These features are available only for personal accounts, not Workspace users, Google said.

Dig deeper. Google says AI Mode stays ad-free for Personal Intelligence users

Catch-up quick. Google introduced Personal Intelligence as a U.S.-only beta for Gemini subscribers in January. At the time:

  • It was limited to AI Pro and Ultra users.
  • It focused on Gemini, with Search integration “coming soon.”
  • The feature was opt-in and off by default.

This update delivers on that roadmap by bringing Personal Intelligence to AI Mode in Search, expanding access to free users, and extending it to Chrome.

Privacy and control. Google emphasized:

  • Users must opt in to connect apps.
  • Connections can be turned on or off at any time.
  • Models do not train directly on Gmail or Photos content.
  • Limited data, such as prompts and responses, may be used to improve systems.

Google’s blog post. Bringing the power of Personal Intelligence to more people

Yahoo CEO: Google AI Mode is the biggest threat to web traffic

Yahoo CEO Jim Lanzone said AI-powered search — especially Google’s AI Mode — is putting the open web’s core traffic model at risk and argues AI search engines must send users back to publishers.

  • “I think that the LLMs are one big reason that they’re under threat, with AI Mode in Google being the biggest challenge.”
  • “Those publishers deserve [traffic], and we’re not going to have the content to consume to give great answers if publishers aren’t healthy.”

Why we care. Many websites are seeing less traffic from answer engines run by Google and OpenAI — and I think it’ll only get worse. So it’s encouraging to see Yahoo trying to preserve the “search sends traffic” model. As Lanzone said: “We have very purposefully highlighted and linked very explicitly and bent over backwards to try to send more traffic downstream to the people who created the content.”

Yahoo’s AI stance. Yahoo is deliberately avoiding a chatbot-style interface, Lanzone said on the Decoder podcast, adding that Yahoo isn’t trying to compete as a full AI assistant:

  • “Ours looks a lot more like traditional search and it is more paragraph-driven. It’s not a chatbot that’s trying to act like it’s a person and be your friend.”
  • “We’re not a large language model. We’re not going to be the place you come to code. We’ve really launched Scout as an answer engine.”

What’s next: Personalization + agentic actions. Yahoo plans to expand Scout beyond basic answers and is embedding AI across its ecosystem:

  • “You are very shortly going to see us get into very personalized results. You’re going to see us get into very agentic actions that you can take.”
  • “There’s a button in Yahoo Finance that does analysis of a given stock on the fly… It is in Yahoo Mail to help summarize and process emails.”

Yahoo vs. Google isn’t a thing. Yahoo isn’t trying to win by converting Google users directly. Instead, Yahoo is prioritizing its existing audience and increasing usage frequency over immediate market share gains:

  • “Nobody chooses, you will not be surprised, Yahoo over Google or somewhere else to search. The way that we get our search volume is because we have 250 million US users and 700 million global users in the Yahoo network at any given time. There’s a search box there. And infrequently, they use it.”

A warning. Companies — including publishers — should be cautious about relying too heavily on AI platforms as intermediaries. Lanzone compared today’s AI partnerships to Yahoo’s past reliance on Google:

  • “You are tempting fate by opening up a way for consumers to access your product within a large language model.”
  • “The big bad wolf will come to your door and say everything’s cool.”

The interview. Yahoo CEO Jim Lanzone on reviving the web’s homepage

LinkedIn updates feed algorithm with LLM-powered ranking and retrieval

LinkedIn is launching a new AI-powered feed ranking system that uses large language models and GPUs to analyze post content and surface more relevant updates to its 1.3 billion members.

Why we care. Understanding how LinkedIn surfaces content is critical if you want your posts — or your brand’s — to be discovered. The new system prioritizes topical relevance and engagement patterns, LinkedIn said. Posts that demonstrate expertise and align with emerging professional conversations may travel farther across the network — even without existing connections.

The details. LinkedIn rebuilt much of its feed recommendation system using large language models, transformer models, and GPU infrastructure. The overhaul centers on two systems: retrieving relevant posts and ranking them in the feed.

Unified retrieval system. LinkedIn replaced several separate discovery systems with a single LLM-powered retrieval model.

  • Previously, feed candidates came from multiple sources, including network activity, trending posts, collaborative filtering, and topic-based systems.
  • The new approach uses LLM-generated embeddings to understand what posts are about and how they connect to your professional interests.
  • Now, LinkedIn can link related topics even when they use different terminology. For example, engagement with posts about small modular reactors could surface content about electrical grid infrastructure or renewable energy.
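LinkedIn hasn’t published its retrieval code, but the general technique it describes — comparing LLM-generated embeddings of posts against a member’s interest profile — can be sketched with cosine similarity. Everything below (the toy vectors, the `cosine` helper, the topic labels) is illustrative, not LinkedIn’s actual model:

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy embeddings standing in for LLM-generated ones. In a real system
# these would come from an embedding model, not hand-written numbers.
posts = {
    "small modular reactors": [0.9, 0.8, 0.1],
    "grid infrastructure":    [0.8, 0.7, 0.2],
    "celebrity gossip":       [0.1, 0.0, 0.9],
}
member_interests = [0.85, 0.75, 0.15]  # built from past engagement

# Retrieve posts ranked by similarity to the member's interest vector:
# semantically related topics score close together even though their
# labels share no words.
ranked = sorted(posts, key=lambda p: cosine(posts[p], member_interests),
                reverse=True)
print(ranked)
```

Because similarity is measured in embedding space rather than by keyword overlap, the two energy-related posts land next to each other and the unrelated post falls to the bottom — the same property that lets reactor posts surface grid-infrastructure content.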

Ranking that follows your interests. After retrieval, LinkedIn ranks posts using a transformer-based sequential model. Instead of evaluating posts independently, the model analyzes patterns across your past interactions — including likes, comments, dwell time, and other signals.

  • This helps LinkedIn detect how your professional interests evolve and recommend content that reflects those shifts.

System performance and infrastructure. The system runs on GPU infrastructure designed to process millions of posts while keeping feeds fresh.

  • The architecture can update content embeddings within minutes and retrieve candidates in under 50 milliseconds, LinkedIn said.

Improving feed quality and authenticity. LinkedIn also announced updates to improve content quality:

  • Cracking down on automated engagement. LinkedIn is taking action against comment automation tools, browser extensions, and engagement pods that create inauthentic conversations. These tools violate platform rules and undermine real professional discussions, LinkedIn said.
  • Reducing engagement bait and generic posts. LinkedIn plans to show less content designed purely to drive comments or clicks — including posts asking people to comment “Yes” to boost reach, posts pairing unrelated videos with text to game distribution, and recycled thought-leadership with little substance.
  • Helping new members personalize their feeds faster. LinkedIn is testing an “Interest Picker” during signup that lets new users choose topics such as leadership, job search skills, or career growth, helping deliver relevant content from day one.

SerpApi asks court to throw out Reddit scraping complaint

SerpApi is asking a federal court to dismiss Reddit’s lawsuit over alleged scraping of Reddit content from Google Search, saying Reddit is trying to use copyright law to control user posts and public search results.

  • The motion follows Reddit’s amended complaint filed in February.
  • SerpApi says the filing still fails to show copyright ownership, circumvention of technical protections, or concrete harm.

SerpApi’s argument. SerpApi CEO Julien Khaleghy, in a blog post today, argued the lawsuit fails for several reasons:

  • Reddit doesn’t own most of the content at issue. Its user agreement states that users retain ownership.
  • Reddit holds only a non-exclusive license to user posts.
  • The snippets cited in the complaint (e.g., dates, addresses, short fragments) aren’t copyrightable.
  • SerpApi accessed Google Search pages, not Reddit itself.

DMCA. Khaleghy said Reddit claims SerpApi violated the Digital Millennium Copyright Act (DMCA) by circumventing technical protections. SerpApi disputes that claim, saying it retrieves the same search results visible to anyone who enters a query in Google. Khaleghy argued that:

  • SerpApi doesn’t break encryption or bypass authentication.
  • Accessing public webpages isn’t “circumvention” under the DMCA.
  • Reddit is trying to enforce copyright protections it doesn’t own.
  • Reddit’s privacy policy states that public posts may appear in search results.

Catch up quick. Legal fights over search scraping and AI data have intensified in recent months.

Why we care. The case tests whether companies can extract information from Google’s search results without violating copyright or the DMCA. The outcome could affect SEO tools and AI training data.

What’s next. The court must decide whether Reddit’s amended complaint can proceed. If the judge dismisses the case with prejudice, Reddit’s claims against SerpApi in this lawsuit would end.

SerpApi’s blog post. Reddit’s Lawsuit is a Dangerous Attempt to Expand Platform Power

Only 15% of pages retrieved by ChatGPT appear in final answers: Report

ChatGPT retrieves far more webpages than it cites. A new AirOps analysis found that 85% of discovered sources never appear in the final answer.

Why we care. If you want your content cited in AI-generated answers, discovery isn’t enough. Most retrieved pages never become visible to users.

Key finding. In AI answers, retrieval doesn’t equal citation. Your page can rank and be retrieved yet still lose the citation to a source that better matches the prompt or supporting context.

  • This shifts optimization toward earning selection inside the AI synthesis process — not just appearing in search results, per the report.

By the numbers.

  • 82,108 citations appeared in final responses.
  • Only 15% of retrieved pages were cited.
  • 85% of pages surfaced during research never appeared in answers.

Citation rates also varied by query type:

  • 18.3% for product discovery queries
  • 16.9% for how-to queries
  • 11.3% for validation searches

Fan-out queries. ChatGPT often expands prompts with additional internal searches while generating an answer, creating what the report calls a “second citation surface.” Across the dataset:

  • 89.6% of prompts triggered two or more follow-up searches.
  • Fan-out searches expanded 15,000 prompts into 43,233 queries.
  • 32.9% of cited pages appeared only in fan-out results — not the original prompt.
  • 95% of fan-out queries had zero traditional search volume.

Google ranking correlation. High Google rankings strongly correlated with citations:

  • 55.8% of cited pages ranked in Google’s top 20.
  • Pages ranking in Position 1 were cited 3.5 times more often than pages outside the top 20.

About the data. AirOps analyzed 548,534 pages retrieved across 15,000 prompts to examine how ChatGPT expands queries and selects citations.
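The headline percentages are straightforward to recompute from the raw counts the report gives (the counts are from the report; the derived figures are just arithmetic):

```python
retrieved = 548_534       # pages retrieved across all prompts
cited = 82_108            # citations appearing in final responses
prompts = 15_000          # prompts in the dataset
fanout_queries = 43_233   # queries after fan-out expansion

citation_rate = cited / retrieved
print(f"{citation_rate:.0%} of retrieved pages were cited")      # ~15%
print(f"{1 - citation_rate:.0%} never appeared in answers")      # ~85%
print(f"{fanout_queries / prompts:.1f} searches per prompt")     # ~2.9
```

That last figure — nearly three internal searches per prompt on average — is what makes the fan-out layer a meaningful second citation surface.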

The study. The Influence of Retrieval, Fan-out, and Google SERPs on ChatGPT Citations

Google AI Overviews cut search clicks 42%: Report

Google traffic redistribution

Google’s AI Overviews may be reducing traditional search clicks, but publishers still have meaningful growth opportunities in breaking news and Google Discover, according to new data from Define Media Group.

  • Organic search clicks have fallen 42% since AI Overviews began expanding in Google Search, according to Define Media Group’s analysis of Google Search Console data across its portfolio of 64 sites.

Why we care. AI-generated answers are reshaping search traffic. Evergreen content is losing clicks, while real-time news coverage and Discover distribution are emerging as stronger traffic channels for publishers.

By the numbers. Across Google Search, Discover, and Google News, breaking news traffic grew 103% from November 2024 through early 2026 in the company’s dataset. Losses were concentrated in informational and evergreen content:

  • Organic search traffic averaged 1.7 billion clicks per quarter from Q1 2023 through Q1 2024.
  • After AI Overviews launched, traffic fell 16% immediately and never recovered.
  • As Google expanded AI Overviews in May 2025, declines accelerated.
  • By Q4 2025, search traffic was down 42% from the pre-AI Overviews baseline.
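Combining the stated baseline with the 42% decline gives a rough sense of current quarterly volume (a back-of-envelope calculation from the figures above, not a number reported by Define Media Group):

```python
baseline = 1.7e9   # avg quarterly organic clicks, Q1 2023 through Q1 2024
decline = 0.42     # drop by Q4 2025 vs. the pre-AI Overviews baseline

q4_2025 = baseline * (1 - decline)
print(f"~{q4_2025 / 1e9:.2f}B clicks per quarter")  # just under 1B
```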

Discover’s role. Google Discover, which grew 30% across the portfolio, is now the main growth engine for breaking news distribution. Discover traffic rose steadily as web search traffic fell. For the first time in the dataset, Discover and web search now drive roughly equal traffic.

Why is this happening? AI Overviews appear less often for news queries than for other topics. They showed for about 15% of news queries, roughly one-third the rate in categories such as health and science, according to Ahrefs data cited in the report.

  • News queries often trigger the Top Stories carousel, which links directly to publisher articles. Searches for major developing events, such as international conflicts, typically show Top Stories rather than AI summaries.
  • Define Media Group suggests Google may be avoiding AI-generated summaries for breaking news because events change rapidly, accuracy stakes are high, and generative systems can still hallucinate.

The report. BREAKING! News Thrives in the Age of AI
