
Today — 26 March 2026

Google PMax gets new exclusions, expanded reporting features

26 March 2026 at 22:40
Ads control dashboard

Google is launching new Performance Max controls and reporting: audience exclusions, expanded reporting, and budget forecasting tools.

What’s new. Google announced a mix of “steering updates” and “actionable insights” for PMax:

  • First-party audience exclusions: You can exclude customer lists to shift spend toward net-new customer acquisition instead of repeat conversions.
  • Budget reporting: A new in-platform report projects end-of-month spend and shows how daily budget changes impact performance.
  • Full audience reporting: You get detailed breakdowns by demographics, including age and gender.
  • Network segmentation: You can segment placement reports by network, now under When and where ads showed.

Why we care. These updates help address concerns about PMax’s lack of control and transparency. Exclusions help you avoid wasting spend on existing customers, while improved reporting gives you clearer signals for optimization, budgeting, and brand safety decisions.

Google’s announcement. New Performance Max steering and reporting updates coming in 2026

Automated traffic is growing 8x faster than human traffic: Report

26 March 2026 at 21:41
Human vs AI traffic

Automated traffic grew 23.5% year over year in 2025 — about eight times faster than human traffic, which rose 3.1%, according to HUMAN Security’s State of AI Traffic report.

  • AI-driven traffic appears to be a major contributor to that growth, with average monthly volume increasing 187% year over year, while traffic from AI agents and agentic browsers (e.g., OpenAI’s Atlas, Perplexity’s Comet) grew nearly 8,000% year over year.
  • Automated traffic is defined in the report as: “All internet traffic generated by software systems rather than human users, including traditional automation such as search engine crawlers, monitoring bots, and conventional scraping tools, as well as AI-driven traffic.”
  • This report follows Cloudflare CEO Matthew Prince’s prediction that bots could overtake human web usage by 2027.

Why we care. Search is increasingly shaped by more than human queries, crawling, and indexing. AI agents now participate in discovery, comparison, and transactions — within Google’s evolving results and across AI-driven interfaces.

The details. HUMAN groups AI-driven traffic into three broad categories:

  • Training crawlers collecting data for models. They still dominate at 67.5% of AI traffic, but their share is declining as scrapers and agents scale.
  • Real-time scrapers that feed AI search and answers. Scraper traffic grew nearly 600% in 2025, driven by AI-powered search and real-time answer engines.
  • Agentic AI systems that execute tasks autonomously. Smaller in share, but growing fastest and most disruptive.

AI agents behave more like users. These systems aren’t limited to reading content. They increasingly navigate funnels, log in, and transact. In 2025:

  • 77% of observed agent activity (requests) occurred on product and search pages.
  • Nearly 9% touched account-level interactions.
  • More than 2% reached checkout flows.

About the data. HUMAN analyzed more than one quadrillion interactions (requests/events) across its customer base in 2025, with aggregated, anonymized data from 2022 to 2025. It classified AI-driven traffic into training crawlers, AI scrapers, and agentic AI using user-agent strings, infrastructure signals, and observed behavior, noting limits in self-declared bot identity, which may undercount or misclassify some AI-driven activity.
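HUMAN's full classifier layers infrastructure and behavioral signals on top, but a rough first pass over the same three categories can be done on user-agent strings alone. A minimal sketch; the token lists are illustrative examples of well-known self-declared bots, not HUMAN's actual rules:

```python
# Rough first-pass classifier over the report's three categories.
# Token lists are illustrative examples, not HUMAN's actual rules.
CATEGORY_TOKENS = {
    "training_crawler": ["gptbot", "claudebot", "ccbot", "bytespider"],
    "ai_scraper": ["oai-searchbot", "perplexitybot"],
    "agentic_ai": ["chatgpt-user", "google-agent"],
}

def classify_ai_traffic(user_agent: str) -> str:
    """Return a coarse AI-traffic category for a user-agent string."""
    ua = user_agent.lower()
    for category, tokens in CATEGORY_TOKENS.items():
        if any(token in ua for token in tokens):
            return category
    return "other"
```

Anything that self-declares is easy to bucket this way; the report's caveat about self-declared bot identity is exactly why substring matching alone undercounts, and why HUMAN adds infrastructure and behavioral signals.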

Bottom line. Traffic is becoming less purely human, and discovery is no longer confined to search engines. Optimization now means deciding which machines can access, interpret, and act on your content.

The report. The 2026 State of AI Traffic & Cyberthreat Benchmark Report

Google-Agent user agent identifies AI agent traffic in server logs

26 March 2026 at 20:38
Google-Agent

Google introduced a new user agent, called Google-Agent, that signals when AI agents act on users’ behalf, marking an early shift toward agent-driven web interactions.

What happened. Google added Google-Agent to its list of user-triggered fetchers on March 20 and has begun a gradual rollout.

  • The Google-Agent user agent identifies requests made by AI agents running on Google infrastructure, including experimental tools like Project Mariner.

How it works. Google-Agent appears in HTTP requests when an AI agent visits a site to complete a user-initiated task.

  • Example use cases include browsing pages, evaluating content, or taking actions such as submitting forms.
  • This differs from Googlebot and other crawlers, which run continuously in the background without direct user prompts.

User agent strings. Google shared the user agent string for its desktop agent:

Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; Google-Agent; +https://developers.google.com/crawling/docs/crawlers-fetchers/google-agent) Chrome/W.X.Y.Z Safari/537.36

And the user agent string for its mobile agent:

Mozilla/5.0 (Linux; Android 6.0.1; Nexus 5X Build/MMB29P) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/W.X.Y.Z Mobile Safari/537.36 (compatible; Google-Agent; +https://developers.google.com/crawling/docs/crawlers-fetchers/google-agent)

Why we care. This lets you identify agent-driven traffic in server logs. You can now distinguish traditional crawl activity from visits triggered by real users through AI agents. That should help you track agent-assisted conversions, understand emerging user behavior, and prepare for agentic search.

What they’re saying. According to Google’s announcement:

  • “The Google-Agent user agent is rolling out over the next few weeks, and will be used by Google agents hosted on Google infrastructure to navigate the web and perform actions upon user request.”

What to watch. Early volumes will be low as the rollout continues, but now is the time to establish a baseline. What to do:

  • Monitor logs for Google-Agent activity.
  • Make sure CDNs and WAFs aren’t blocking the published IP ranges.
  • Validate that key site actions, including forms and flows, work for automated agents.
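As a starting point for that baseline, Google-Agent requests can be pulled straight out of an access log. A minimal sketch, assuming the standard combined log format (adjust the pattern if your server logs differently):

```python
import re
from collections import Counter

# Assumes combined log format: ... "METHOD PATH PROTO" STATUS BYTES "REFERER" "UA"
LOG_LINE = re.compile(
    r'"(?P<method>\S+) (?P<path>\S+) [^"]*" \d{3} \S+ "[^"]*" "(?P<ua>[^"]*)"'
)

def google_agent_hits(log_lines):
    """Count requests per path whose user agent declares Google-Agent."""
    hits = Counter()
    for line in log_lines:
        m = LOG_LINE.search(line)
        if m and "Google-Agent" in m.group("ua"):
            hits[m.group("path")] += 1
    return hits
```

Because user-agent strings can be spoofed, validating source IPs against Google's published ranges still matters before treating these hits as genuine agent traffic.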

Dig deeper. Google’s releasing Google-Agent: Here’s what to know

SMX Now: Learn how brands must adapt for AI-driven search

26 March 2026 at 19:11
AI Search Picks Winners: Here's the GEO Strategy Behind It

Visibility is no longer just about ranking. It depends on whether your content is discovered, evaluated, and selected in AI-driven search experiences.

We’re kicking off our new monthly SMX Now webinar series on April 1 at 1 p.m. ET with iPullRank’s Zach Chahalis, Patrick Schofield, and Garrett Sussman on how you must adapt.

The session introduces iPullRank’s Relevance Engineering (r19g) framework for executing Generative Engine Optimization (GEO) through an omnichannel content strategy. You’ll learn how AI search uses query fan-outs to discover and select sources, and how to structure content so it’s retrieved, surfaced, and cited.

It also emphasizes that GEO success isn’t universal. It requires testing, tailored strategies, and a three-tier measurement model spanning discovery, selection, and citation impact.

Save your spot

Search Engine Land is proud to be a media partner for iPullRank’s upcoming SEO Week event.

Report: Clickout Media turned news sites into AI gambling hubs

26 March 2026 at 18:36
Parasite SEO

A company called Clickout Media is being called out for buying trusted news and niche sites, replacing them with AI-generated gambling content, and abandoning them after Google penalties. Some call this “parasite SEO,” but to me it sounds more like large-scale search spam.

What’s happening. The company acquired sports, gaming, and tech sites, then rapidly shifted them from editorial coverage to casino and crypto content, PressGazette reported.

  • Sites were stripped of original reporting, filled with AI-written articles, and used to push offshore gambling links, according to former employees.

How it works. The strategy relies on buying domains with existing authority, then exploiting their ability to rank in Google. Content typically followed a pattern:

  • Legitimate coverage continues briefly to preserve credibility
  • Gambling content is introduced and scaled
  • AI-generated articles and fake author profiles replace human writers
  • Revenue comes from affiliate deals with casino operators, sometimes tied to player losses

The impact. Several previously active publications now appear deindexed, with layoffs and closures following. In some cases, even charity websites were repurposed to host gambling content.

What they’re saying. Google prohibits publishing content at scale for the primary purpose of manipulating rankings. It refers to extreme cases like this as “site reputation abuse,” a violation that can trigger manual actions and removal from Google’s index and search results.

  • “While we aren’t able to comment on a specific site’s ranking on Search, our policies prohibit publishing content at scale for the primary purpose of manipulating search rankings,” Google said about this case.

Why we care. This isn’t SEO in any meaningful sense. It’s reputation abuse designed to game rankings at scale.

The report. The SEO parasites buying, exploiting and ultimately killing online newsbrands by Rob Waugh at PressGazette.

Google updates structured data for forum and Q&A content

26 March 2026 at 17:21
Q&A forum content cards

Google expanded its structured data support for forum and Q&A pages, adding properties that help you signal reply threads, quoted content, and whether content is human- or machine-generated. The update aims to reduce how often Google misreads discussion and Q&A content.

What changed. Google’s QAPage docs now support commentCount and digitalSourceType. DiscussionForumPosting docs now support sharedContent plus the same commentCount and digitalSourceType.

The details. In Q&A markup, you can use commentCount on questions, answers, and comments to show total comments even if not fully marked up. answerCount + commentCount should equal total replies of any type.

How it works. digitalSourceType lets you flag whether content comes from a trained model or simpler automation. Use TrainedAlgorithmicMediaDigitalSource for LLM-style output and AlgorithmicMediaDigitalSource for simpler bots. If omitted, Google assumes human-generated content.
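Put together, a QAPage item using these properties might look like the sketch below, generated here as JSON-LD from Python. The question, counts, and answer text are hypothetical, and the exact required fields should be checked against Google's documentation:

```python
import json

# Hypothetical QAPage item using the properties described above.
qa_page = {
    "@context": "https://schema.org",
    "@type": "QAPage",
    "mainEntity": {
        "@type": "Question",
        "name": "Can I see agent traffic in my server logs?",
        "answerCount": 2,   # answers of any kind in the thread
        "commentCount": 3,  # replies that are not answers
        "acceptedAnswer": {
            "@type": "Answer",
            "text": "Yes, check the request's user-agent string.",
            # Flags machine-generated content; omitting it means
            # Google assumes the content is human-generated.
            "digitalSourceType": "TrainedAlgorithmicMediaDigitalSource",
        },
    },
}
print(json.dumps(qa_page, indent=2))
```

Per the rule above, answerCount (2) plus commentCount (3) should equal the thread's five total replies, even when only one answer is fully marked up.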

What’s new for forums. sharedContent lets you mark the primary item shared in a post. Google accepts WebPage, ImageObject, VideoObject, and referenced DiscussionForumPosting or Comment, including quotes or reposts.
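A forum post whose main payload is a shared link could be marked up along these lines; a minimal sketch with hypothetical author, text, and URL:

```python
import json

# Hypothetical forum post whose primary content is a shared web page.
forum_post = {
    "@context": "https://schema.org",
    "@type": "DiscussionForumPosting",
    "author": {"@type": "Person", "name": "example_user"},
    "text": "Worth reading before the rollout finishes.",
    "commentCount": 12,
    # sharedContent marks the primary item being shared; per the docs,
    # ImageObject, VideoObject, or a quoted posting/comment also work here.
    "sharedContent": {
        "@type": "WebPage",
        "url": "https://example.com/some-article",
    },
}
print(json.dumps(forum_post, indent=2))
```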

Why we care. This gives you more precise control over how Google reads modern community content — especially forum-heavy sites, support communities, UGC platforms, and Q&A sections. Google can better distinguish answers from comments, count partial threads across pagination, and identify when a post mainly shares a link, image, video, or quoted reply.

The documentation. It was updated March 24.

Before yesterday

AI citations favor listicles, articles, product pages: Study

24 March 2026 at 20:28
AI citation engine

AI search citations favor a small set of formats. Listicles, articles, and product pages drive over half of all mentions across major LLMs, according to new Wix Studio AI Search Lab research analyzing 75,000 AI answers and more than 1 million citations across ChatGPT, Google AI Mode, and Perplexity.

The findings. Listicles led at 21.9% of citations, followed by articles (16.7%) and product pages (13.7%). Together, these three formats made up 52% of all AI citations.

  • Articles dominated informational queries, cited 2.7x more than other formats.
  • Listicles captured 40% of commercial-intent citations, nearly double any other type.

Why intent wins. Query intent — not industry or model — most strongly predicts which content gets cited. This pattern held across industries, from SaaS to health.

  • Informational queries skewed heavily toward articles (45.5%) and listicles (21.7%).
  • Commercial queries were led by listicles (40.9%).
  • Transactional and navigational queries favored product and category pages (around 40% combined).

Why we care. This research indicates that you want to map content types to user goals rather than just creating more content. Articles educate, listicles drive comparison, and product pages convert. Aligning content format with user intent could help you capture more AI citations and increase visibility.

Not all listicles perform equally. Third-party listicles accounted for 80.9% of citations in professional services, compared to 19.1% for self-promotional lists. That seems to indicate LLMs prefer neutral, editorial comparisons over brand-led rankings.

Model differences. All models favored listicles, but diverged after that.

  • ChatGPT leaned heavily into articles and informational content.
  • Google AI Mode showed the most balanced distribution.
  • Perplexity stood out, with 17% of citations coming from discussions like Reddit and forums.

Industry patterns. Content preferences shifted slightly by vertical:

  • SaaS and professional services over-indexed on listicles.
  • Health favored authoritative articles.
  • Ecommerce spread citations across listicles, articles, and category pages.
  • Home repair showed the most even distribution across formats.

The research. The content types most cited by LLMs

ChatGPT citations favor a small group of domains: Study

24 March 2026 at 19:58
AI retrieval vs citations

AI citations in ChatGPT are far more concentrated than citation distributions in traditional search. Roughly 30 domains capture 67% of citations within a topic.

  • That’s according to Kevin Indig’s latest study, which also found that broad topical coverage, long-form pages, and cluster-based models outperform the old “one keyword, one page” approach.

The details. Citation visibility wasn’t evenly distributed. In product comparison topics, the top 10 domains accounted for 46% of citations; the top 30, 67%.

  • AI visibility was slightly less concentrated than classic organic search, but still highly centralized.
  • Indig’s conclusion: you’re effectively shut out unless you build enough authority to win one of a limited number of citation “seats.”

What changed. Ranking No. 1 in Google still matters, but it’s not enough. Of pages ranking No. 1, 43.2% were cited by ChatGPT — 3.5x more often than pages beyond the top 20.

  • ChatGPT retrieved far more pages than it cited. AirOps found that it retrieved ~6x as many pages as it cited, and 85% of the retrieved pages were never cited.
  • A third of the cited pages came from fan-out queries, and 95% of those had zero search volume.

Why we care. Publishing the “best answer” for one keyword isn’t enough. ChatGPT rewards domains that cover a topic from multiple angles, not pages optimized for isolated terms. And discovery often happens outside the keyword universe you track.

The patterns. Longer pages generally earned more citations, with variation by vertical. The biggest lift appeared between 5,000 and 10,000 characters. Pages above 20,000 characters averaged 10.18 citations vs. 2.39 for pages under 500.

  • This pattern broke in Finance, where shorter, denser pages often outperformed long guides. In Education, Crypto, and Product Analytics, longer pages continued to gain citation value with little drop-off.
  • 58% of cited URLs were cited only once. Pages that recurred across prompts were usually category roundups, comparison pages, or broad guides answering multiple related questions.

On-page behavior. ChatGPT cited heavily from the upper part of a page. The 10% to 20% section performed best across all industries.

  • The bottom 10% earned just 2.4% to 4.4% of citations. Conclusions were largely ignored.
  • Finance had the steepest ramp, with 43.7% of citations in the first 30%.
  • Healthcare and HR Tech were flatter.
  • Education peaked later, around 30% to 40%.

About the data. Indig analyzed ~98,000 citation rows from ~1.2 million ChatGPT responses (Gauge), isolating seven verticals. The study used structural page parsing, positional mapping, and entity and sentiment analysis to identify which pages earned citations and where on the page they came from.

The study. The science of how AI picks its sources

Bing Webmaster Tools now links AI queries to cited pages

24 March 2026 at 18:03
AI connection map

Microsoft added query-to-page mapping to its AI Performance report in Bing Webmaster Tools, letting you connect AI grounding queries directly to cited URLs.

Why we care. The original dashboard showed queries and pages separately, limiting optimization. Now you can tie specific AI-triggering queries to the exact cited pages, so you can prioritize updates based on real AI-driven demand — not guesses.

The details. The new Grounding Query–Page Mapping feature links two existing views in the AI Performance dashboard:

  • Click a grounding query to see which pages are cited
  • Click a page to see which grounding queries drive its citations
  • Mapping is many-to-many: one query can map to multiple pages, and vice versa
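If you export the report's rows, that many-to-many mapping amounts to a pair of inverted indexes; a small sketch with hypothetical query and URL data:

```python
from collections import defaultdict

# Hypothetical exported rows: (grounding query, cited URL) pairs.
rows = [
    ("best crm for small business", "/blog/crm-comparison"),
    ("best crm for small business", "/products/crm"),
    ("crm pricing", "/products/crm"),
]

pages_by_query = defaultdict(set)   # query -> pages it cites
queries_by_page = defaultdict(set)  # page -> queries that cite it
for query, page in rows:
    pages_by_query[query].add(page)
    queries_by_page[page].add(query)
```

One query maps to two pages here, and /products/crm is cited for two different queries, which is exactly the many-to-many shape the dashboard now exposes in both directions.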

Catch up quick. Microsoft launched the AI Performance report in Bing Webmaster Tools in February as its first GEO-focused dashboard. It:

  • Tracks where and how often your content is cited in AI answers across Bing, Copilot, and partners.
  • Shows grounding queries, cited URLs, and visibility trends over time.
  • Focuses on citation visibility — not clicks, rankings, or traffic.

What they’re saying. Microsoft said the update responds to “strong positive customer feedback and numerous requests.”

The announcement. The addition of query-to-page mapping to Bing Webmaster Tools appeared in a Microsoft Advertising blog post: The AI Performance dashboard: Your view into where your brand appears across the AI web
