Today, Google released Google Search Live globally where AI Mode is available, for these languages and regions. This brings Search Live to more than 200 countries and territories.
Google credits its new audio and voice model, Gemini 3.1 Flash Live, which it says “delivers even more natural and intuitive conversations.” The “new model is also inherently multilingual, which means that people around the world can now speak with Search in their preferred language,” Google added.
How it works. To use Search Live, open the Google app on Android or iOS and tap the Live icon under the Search bar. From there, you can ask your question out loud to get a helpful audio response, then continue the conversation with follow-up questions or dive deeper with helpful web links. If you want to ask about something in front of you, like how to install a new shelving unit, you can enable your camera to add visual context. This way, Search can see what your camera sees and offer helpful suggestions, plus links to more information on the web.
You can also access Search Live if you’re already pointing your camera with Google Lens — just tap the Live option at the bottom of the screen to have a real-time, back-and-forth conversation about what you see in the real world.
Why we care. This is another way users can have conversations with Google’s AI instead of typing queries. Answers could increasingly bypass traditional clicks, and further erode traffic to websites. The inclusion of links (citations at the bottom) means publishers and brands could still see some benefits, but most searchers likely will have little need or desire to click on those links or dig deeper after getting their answer.
Google is launching new Performance Max controls and reporting: audience exclusions, expanded reporting, and budget forecasting tools.
What’s new. Google announced a mix of “steering updates” and “actionable insights” for PMax:
First-party audience exclusions: You can exclude customer lists to shift spend toward net-new customer acquisition instead of repeat conversions.
Budget reporting: A new in-platform report projects end-of-month spend and shows how daily budget changes impact performance.
Full audience reporting: You get detailed breakdowns by demographics, including age and gender.
Network segmentation: You can segment placement reports by network, now under When and where ads showed.
Why we care. These updates help address concerns about PMax’s lack of control and transparency. Exclusions help you avoid wasting spend on existing customers, while improved reporting gives you clearer signals for optimization, budgeting, and brand safety decisions.
Automated traffic grew 23.5% year over year in 2025 — about eight times faster than human traffic, which rose 3.1%, according to HUMAN Security’s State of AI Traffic report.
AI-driven traffic appears to be a major contributor to that growth, with average monthly volume increasing 187% year over year, while traffic from AI agents and agentic browsers (e.g., OpenAI’s Atlas, Perplexity’s Comet) grew nearly 8,000% year over year.
Automated traffic is defined in the report as: “All internet traffic generated by software systems rather than human users, including traditional automation such as search engine crawlers, monitoring bots, and conventional scraping tools, as well as AI-driven traffic.”
Why we care. Search is increasingly shaped by more than human queries, crawling, and indexing. AI agents now participate in discovery, comparison, and transactions — within Google’s evolving results and across AI-driven interfaces.
The details. HUMAN groups AI-driven traffic into three broad categories:
Training crawlers collecting data for models. They still dominate at 67.5% of AI traffic, but their share is declining as scrapers and agents scale.
Real-time scrapers that feed AI search and answers. Scraper traffic grew nearly 600% in 2025, driven by AI-powered search and real-time answer engines.
Agentic AI systems that execute tasks autonomously. Smaller in share, but growing fastest and most disruptive.
AI agents behave more like users. These systems aren’t limited to reading content. They increasingly navigate funnels, log in, and transact. In 2025:
77% of observed agent activity (requests) occurred on product and search pages.
Nearly 9% touched account-level interactions.
More than 2% reached checkout flows.
About the data. HUMAN analyzed more than one quadrillion interactions (requests/events) across its customer base in 2025, with aggregated, anonymized data from 2022 to 2025. It classified AI-driven traffic into training crawlers, AI scrapers, and agentic AI using user-agent strings, infrastructure signals, and observed behavior, noting limits in self-declared bot identity, which may undercount or misclassify some AI-driven activity.
Bottom line. Traffic is becoming less purely human, and discovery is no longer confined to search engines. Optimization now means deciding which machines can access, interpret, and act on your content.
Google introduced a new user agent, called Google-Agent, that signals when AI agents act on users’ behalf, marking an early shift toward agent-driven web interactions.
What happened. Google added Google-Agent to its list of user-triggered fetchers on March 20 and has begun a gradual rollout.
The Google-Agent user agent identifies requests made by AI agents running on Google infrastructure, including experimental tools like Project Mariner.
How it works. Google-Agent appears in HTTP requests when an AI agent visits a site to complete a user-initiated task.
Example use cases include browsing pages, evaluating content, or taking actions such as submitting forms.
This differs from Googlebot and other crawlers, which run continuously in the background without direct user prompts.
IP ranges. Google shared the IP ranges for its desktop agent:
Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; Google-Agent; +https://developers.google.com/crawling/docs/crawlers-fetchers/google-agent) Chrome/W.X.Y.Z Safari/537.36
And the IP ranges for its mobile agent:
Mozilla/5.0 (Linux; Android 6.0.1; Nexus 5X Build/MMB29P) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/W.X.Y.Z Mobile Safari/537.36 (compatible; Google-Agent; +https://developers.google.com/crawling/docs/crawlers-fetchers/google-agent)
Why we care. This lets you identify agent-driven traffic in server logs. You can now distinguish traditional crawl activity from visits triggered by real users through AI agents. That should help you track agent-assisted conversions, understand emerging user behavior, and prepare for agentic search.
“The Google-Agent user agent is rolling out over the next few weeks, and will be used by Google agents hosted on Google infrastructure to navigate the web and perform actions upon user request.”
What to watch. Early volumes will be low as the rollout continues, but now is the time to establish a baseline. What to do:
Monitor logs for Google-Agent activity.
Make sure CDNs and WAFs aren’t blocking the published IP ranges.
Validate that key site actions, including forms and flows, work for automated agents.
Visibility is no longer just about ranking. It depends on whether your content is discovered, evaluated, and selected in AI-driven search experiences.
We’re kicking off our new monthly SMX Now webinar series on April 1 at 1 p.m. ET with iPullRank’s Zach Chahalis, Patrick Schofield, and Garrett Sussman on how you must adapt.
The session introduces iPullRank’s Relevance Engineering (r19g) framework for executing Generative Engine Optimization (GEO) through an omnichannel content strategy. You’ll learn how AI search uses query fan-outs to discover and select sources, and how to structure content so it’s retrieved, surfaced, and cited.
It also emphasizes that GEO success isn’t universal. It requires testing, tailored strategies, and a three-tier measurement model spanning discovery, selection, and citation impact.
While initially criticized as a black box, Performance Max has evolved into a fairly critical campaign type. With each passing quarter, Google has introduced more functionality and visibility.
Additional reporting is helpful, but what matters is what you can actually act on. While you can’t control everything in Performance Max, there are specific levers that can have a meaningful impact on performance. Here are the parts of PMax you can control and how to use them effectively.
Control what you can: Search terms and placements
One of the most exciting updates in the last year to Performance Max has been the ability to add these campaign-level negative keywords.
In the past, you could contact Google to add these in. It was somewhat cumbersome and involved filling out an Excel doc, forwarding it to Google, and giving them permission to implement.
With the inclusion of the search terms report, we’re now able to select a keyword and quickly add it to the campaign-level negative keyword list, just as we can with a search or shopping campaign.
Another way to optimize PMax is to review and monitor the placements report. Most recently, Google has moved the Performance Max placements report out of the reporting section of the Google Ads account and into the Where ads have shown section at the campaign level. While this makes analysis easier by removing additional steps, we still only have impression-level reporting on placements.
We can use this information to decide whether to add these placements as negative placements at the account level. This is found in Tools > Content suitability > Advanced settings > Excluded placements.
While this isn’t ideal, there’s still useful insight we can glean from this report, such as ads appearing in kids’ programming or driving a high number of impressions from mobile apps.
Also located in the When and where ads showed section is the ad schedule. Even if you hadn’t selected an ad schedule when creating the campaign, Google automatically dayparts performance hourly.
Google typically recommends an open ad schedule, but if you have a limited budget, restricting your ad schedule during off-peak or non-converting hours is an excellent way to increase efficiency.
You can do this by creating a campaign-level ad schedule within Campaigns > Audiences, keywords, and content > Ad schedule. Make sure your Performance Max campaign is selected in the top left dropdown menu.
Demographic exclusions are a relatively new feature at the campaign settings level for Performance Max. Unfortunately, reports for these campaigns are hard to obtain, limiting informed decisions on demographic exclusions.
This functionality is helpful if you’re aware of specific demographics that aren’t actively in the market for specific products or services. To make adjustments, go to Campaign-level settings > Other settings > Demographic exclusions. From here, you can turn on age or gender exclusions:
While PMax initially didn’t even provide device-level reporting, a new feature lets you opt out of serving on certain devices.
If you opt into all device targeting when launching a PMax campaign, you should periodically review device performance and adjust accordingly. This is best done by segmenting at the campaign or asset group level by device. Device-level data is extremely helpful for determining which device is better suited to reach your goal.
Likewise, if you almost always opt out of certain devices when launching a campaign, this data makes it easier to either launch with all device targeting enabled and monitor performance, or add a device you hadn’t initially added to see how it impacts performance. Device-level targeting is also available at the campaign level, under Other settings.
Improve inputs: Creative and AI assets
Ad assets play a large role in the display, YouTube, and Discover network performance of a PMax campaign. For many, there’s still a gap in producing high volumes of quality image and video creative.
While still evolving, AI assets are getting closer to filling these gaps — enabling us to more effectively target these additional networks. As newer iterations of LLMs emerge, this will become a primary way to generate video content and professional-looking images.
Google already offers generative AI image assets from shopping feed products that look relatively impressive. But we’re still a ways out from seeing high-quality AI-generated videos without the well-known glitches we typically see in this type of content.
Understand the limits of control in Performance Max
The channel controls report gave more insight into where ads were serving. I have an unpopular opinion on this report. While helpful, there’s little we can do within the campaign to improve performance. Because of this, the report is frustrating.
We’ll likely see channel controls available within Performance Max in the near future — similar to what we already have in Demand Gen campaigns. For now, adjust creative and bids to sway volume within certain networks. To opt out of certain networks completely and focus on shopping, then a feed-only Performance Max campaign will do just that.
Performance Max is evolving from a black box to a critical asset in a marketer’s toolkit. The steady stream of new functionality, from campaign-level negative keywords to detailed placement and ad schedule reports, shows Google’s commitment to providing greater control.
Use these levers — strategic exclusions, device adjustments, and budget-aware scheduling — to move beyond set-it-and-forget-it and run Performance Max campaigns with precision and efficiency.
A company called Clickout Media is being called out for buying trusted news and niche sites, replacing them with AI-generated gambling content, and abandoning them after Google penalties. Some call this “parasite SEO,” but to me it sounds more like large-scale search spam.
What’s happening. The company acquired sports, gaming, and tech sites, then rapidly shifted them from editorial coverage to casino and crypto content, PressGazette reported.
Sites were stripped of original reporting, filled with AI-written articles, and used to push offshore gambling links, according to former employees.
How it works. The strategy relies on buying domains with existing authority, then exploiting their ability to rank in Google. Content typically followed a pattern:
Legitimate coverage continues briefly to preserve credibility
Gambling content is introduced and scaled
AI-generated articles and fake author profiles replace human writers
Revenue comes from affiliate deals with casino operators, sometimes tied to player losses
The impact. Several previously active publications now appear deindexed, with layoffs and closures following. In some cases, even charity websites were repurposed to host gambling content.
What they’re saying. Google prohibits publishing content at scale for the primary purpose of manipulating rankings. It refers to extreme cases like this as “site reputation abuse,” a violation that can trigger manual actions and removal from Google’s index and search results.
“While we aren’t able to comment on a specific site’s ranking on Search, our policies prohibit publishing content at scale for the primary purpose of manipulating search rankings,” Google said about this case.
Why we care. This isn’t SEO in any meaningful sense. It’s reputation abuse designed to game rankings at scale.
Like it or not, everyone is fishing in the same pond. As content marketers and SEO practitioners, we all have the same subscriptions to Semrush and other SEO tools, giving us access to the same data as our competitors.
If we all have the same tools, aren’t we just writing the same content?
There’s a better way.
You may be sitting on a wealth of data about your target audience and your existing customers, and you don’t even know it. These insights are invisible to your competitors, yet they’re unread, unanalyzed, and underutilized by the marketing team.
The problem: Third-party tools can create an over-commoditized content echo chamber
While SEO toolsets are invaluable (and I’ll always be using one, pretty much daily, for the rest of my career), they aren’t a failsafe way to ensure you’re creating the best content for your audience. These tools measure existing search demand through their own data, giving the best estimate of keyword traffic and search results.
However, when these aren’t viewed through the lens of your own customers, the result can be content that’s oversaturated in your market, overwhelming anyone looking for help or answers online.
When your content isn’t unique to your current or target audience, your organization and its offerings may get lost in the sea of SEOs and content strategists at your competitor organizations, who are trying to follow the same best practices and strategy.
It’s time to better utilize your own data to implement content campaigns that drive interest from the very audience that’s already shown a proven interest.
For the purposes of this article and marketing content creation, first-party data is any data from current, potential, or past customers that’s only accessible internally. The top “5 goldmines” where I’ve consistently found nuggets of content foundations and insight are:
Internal site search queries: What visitors couldn’t find on your site, but keep searching for.
Sales call transcripts: The exact language and questions prospects say before they buy.
CRM data: Spotting patterns in deal stages, objections, and lost deals.
Support tickets: The issues and questions your product or service keeps failing to answer, leading to frustrated customers.
Email replies and metrics: What the audience actually responds to versus what they ignore.
These five areas are a great place to start collecting and utilizing first-party data to its full potential.
This data is key to better, more-targeted content marketing for three reasons.
It’s proprietary
This data is confidential and only available to your internal team. Often, it’s not even accessible to everyone and may require favors from data analysts or web developers to pull. That’s what makes it so unique. Competitors can’t find or replicate it, no matter what SEO tools they have.
It reflects real buyer language
This relates to the “curse of knowledge” cognitive bias, where you know so much about a topic that you assume others do as well. One of my favorite examples is the “facial tissue” market. You may know facial tissue as “Kleenex,” even though that’s technically a brand name for a type of facial tissue.
With many consumers using a competitor’s brand name colloquially, how do competitors refer to their own product? Because most people likely aren’t searching “facial tissue” with the intent to buy, it’s up to manufacturers to determine the language their audience uses to find alternatives.
Even though employees at XYZ Tissue Co. know the product is technically “facial tissue,” that doesn’t mean their customers do.
It maps to your full marketing funnel
While third-party keyword data usually skews to the top of the funnel, first-party data captures mid- and bottom-funnel content gaps that drive conversions and brand loyalty, not just traffic.
How to get content ideas from first-party data: The specifics
We know these data sources are valuable. So, how do we use them? Let’s break it down.
Internal site search
Site search is one of my favorite sources of insight and inspiration. It’s active, ongoing, real-time data showing how your target audience is trying to interact and engage with you through internal site search. No matter what the data looks like, it can hold a wealth of information about what content your users expect to find on your website.
If you don’t have site search on your website, you can create it using Google’s programmable site search feature. While it will provide internal site search data, it may also display ads or external results on users’ results pages.
To use site search effectively, export the queries monthly, clean the data to remove spam, then cluster by theme (such as product collections or service offerings). Finally, run it through keyword research tools to flag anything with high keyword volume and low competition that’s missing from your site.
Bonus: For products or services your customers are searching for that don’t exist, it might be useful to send that data to the R&D department for potential new offerings to consider.
Use a service like Gong, Chorus, or manual transcriptions from sales calls and CRM data to look for recurring needs, questions, and objections across customers from all stages of the purchasing funnel.
If, for instance, you see continued resistance to your enterprise SaaS analytics platform due to the long onboarding process, consider creating a time-bound, step-by-step guide that makes it painless for anyone to switch analytics platforms. This can be great collateral for the sales team to address popular objections.
In the CRM, you can also filter lost deals by reason. For instance, finding “went with competitor” + common objection could lead to a comparison or differentiation article that highlights your features vs. the competitors you keep losing deals to.
Besides reviewing the data, ask the sales team directly on a call or email about their most common objections. Because they’re constantly in communication with potential customers, they’ll likely know immediately the top objections they receive regularly.
Support tickets
The support team can also be an invaluable resource. In addition to asking the support team directly what problems they solve for customers on a daily basis, look in your customer support ticket queue and dashboard to find old and new tickets with recurring issues (your top 10 most common complaints are probably content gaps you need to address ASAP).
An explainer blog post, knowledge base article, or PDF guide that tackles the issue from an actionable angle can not only give you more content to promote, but also help the support team with materials to share with your customers.
Email replies and metrics
Depending on the industry, your email lists’ reply inboxes may be exploding with valuable customer data. At a supplements company I worked at, we regularly received customer responses to our email marketing campaigns. They asked questions about products, gave suggestions, and even offered enthusiastic reviews we could feature on our website.
You can also look at the metrics.
If your monthly newsletter is the highest-performing email, should you increase it to a biweekly newsletter?
If your product features never get high conversions, is that because of the content, or are they more interested in value-focused blog posts and videos?
Don’t take your first-party data for granted. Build automated pipelines for report generation, conversation follow-ups, and content creation from these sources to build momentum around the topics your audience most wants to hear.
While competitors can copy your articles, they can never copy your customer conversations. Try it out this week: audit a first-party data source and see what content ideas you can find.
Google expanded its structured data support for forum and Q&A pages, adding properties that help you signal reply threads, quoted content, and whether content is human- or machine-generated. The update aims to reduce how Google misreads discussion and Q&A content.
What changed. Google’s QAPage docs now support commentCount and digitalSourceType. DiscussionForumPosting docs now support sharedContent plus the same commentCount and digitalSourceType.
The details. In Q&A markup, you can use commentCount on questions, answers, and comments to show total comments even if not fully marked up. answerCount + commentCount should equal total replies of any type.
How it works. digitalSourceType lets you flag whether content comes from a trained model or simpler automation. Use TrainedAlgorithmicMediaDigitalSource for LLM-style output and AlgorithmicMediaDigitalSource for simpler bots. If omitted, Google assumes human-generated content.
What’s new for forums. sharedContent lets you mark the primary item shared in a post. Google accepts WebPage, ImageObject, VideoObject, and referenced DiscussionForumPosting or Comment, including quotes or reposts.
Why we care. This gives you more precise control over how Google reads modern community content — especially forum-heavy sites, support communities, UGC platforms, and Q&A sections. Google can better distinguish answers from comments, count partial threads across pagination, and identify when a post mainly shares a link, image, video, or quoted reply.
For years, we’ve relied on regular expression (regex) filters, custom dashboards like Looker Studio, or third-party tools — approaches that were often inconsistent and difficult to maintain. Now, GSC’s branded query filter brings that capability natively into one of the most widely used organic reporting platforms.
With this shift, a key gap in SEO reporting becomes easier to address — along with some of the assumptions behind it. Brand demand and discovery can now be evaluated independently, improving performance interpretation and enabling clearer, more defensible reporting grounded in first-party data.
How GSC’s branded query filter works
At its core, the feature does exactly what it promises. It automatically filters queries into:
Why branded vs. non-branded reporting has been inconsistent
Separating branded from non-branded search performance isn’t new. What’s changed is how practical it is to do consistently.
Historically, we’ve built this segmentation manually using:
Regex rules in GSC performance reports.
Keyword tagging in third-party rank-tracking tools.
Custom dashboards pulling from GA4 or BigQuery.
Query classification via exports.
These approaches worked, but they were fragile and difficult to maintain at scale. Common challenges included:
Character limits on regexes.
International sites with language variants.
Misspellings that would slip through.
No shared standard for what counts as a branded term.
Without a consistent framework, segmentation varied by team, tool, and implementation — making it difficult to rely on as a repeatable reporting practice. When data is difficult to access, it doesn’t shape everyday decisions.
GSC’s branded query filter doesn’t make third-party tools obsolete. They remain valuable for competitor brand analysis. GSC becomes the authoritative source for first-party branded performance, while cross-tool comparison shifts from a workaround to a validation step.
The center of gravity shifts back to GSC — right where we want it.
Why SEO performance looks different when you split the data
Branded traffic is both a signal of brand awareness and a high-converting traffic source. It also skews performance when blended with non-branded data.
Without segmentation, reporting often leads to misleading narratives:
“Our organic CTR is improving” (driven mostly by branded growth).
“I’m seeing rankings as stable” (while non-branded discovery is declining or vice versa).
“Traffic was flat year-over-year” (masking rising/declining brand demand).
These patterns make it difficult to understand what’s actually driving performance.
Separating branded and non-branded data allows you to distinguish between brand demand and discovery and evaluate each on its own terms. It also makes it easier to answer key questions:
Are we growing brand demand or non-branded reach?
Is our content strategy increasing non-branded visibility?
If nothing else, is the current strategy working as it should be?
How branded vs. non-branded data reveals what’s really happening
Measuring brand health
Branded search trends are among the clearest signals of brand awareness and trust. Monitoring organic performance for branded terms can surface gaps and opportunities across other channels.
For example, using a regex filter to isolate branded performance, this ecommerce property shows clear year-over-year declines over the last three months. That raises important questions:
Has search demand for the primary branded term increased or decreased?
Was paid search spend for branded terms adjusted?
Are there social, video, or PR opportunities that aren’t being fully leveraged?
In this case, further analysis using tools like Keyword Planner (via Google Ads), Google Trends, and third-party keyword platforms showed a 12% year-over-year decline in branded search demand. That contributed to a 32% decrease in branded clicks.
There are additional factors worth exploring — including paid spend and brand sentiment — but isolating branded performance helps pinpoint where to investigate next.
Non-branded queries typically drive the majority of organic traffic, while branded queries make up a smaller share but convert at significantly higher rates. These differences reflect user intent.
Searches that include a brand name are usually navigational or transactional, while non-branded queries signal discovery.
As a result, impressions, clicks, CTR, and conversions behave differently across branded and non-branded segments.
Searches that include a brand name often indicate intent to visit that brand’s website (see the ecommerce property CTR comparison chart below). Because of this, branded queries are considered bottom-of-funnel and more likely to convert.
Efficiency, strategy, and measuring discovery
Non-branded performance remains the clearest proxy for:
Topical authority.
Content effectiveness.
Organic discovery and reach.
Tracking non-branded visibility separately allows teams to answer:
Are we reaching new users?
Is our content strategy expanding keyword footprints?
Did recent core algorithm updates, which typically create keyword volatility, impact non-branded traffic?
In the ecommerce example above, non-branded impressions dropped sharply around Sept. 12, 2025 — a period when performance should have been trending upward heading into back-to-school, Halloween, and the holiday season.
In this case, the decline was not tied to SEO strategy. Instead, non-branded impressions dipped following Google’s retirement of the &num=100 parameter in Search Console reporting in mid-September 2025.
Because branded queries typically rank higher, they were less affected by this change, making the issue harder to detect in blended data.
Most SEO teams already separate branded and non-branded performance, but consistency has been the challenge.
With native segmentation now built into GSC, achieving that consistency becomes far easier. What once required workarounds can now be done directly within the primary reporting interface.
It’s easy to view the branded query filter as just another GSC feature. In reality, it represents something larger:
Standardized brand classification.
Native segmentation inside first-party data.
More consistent and reliable SEO reporting.
Stronger ties between SEO and broader marketing performance.
This shift changes how SEO work gets done. Teams gain clearer visibility into brand demand trends and discovery performance, and can spend less time reconciling discrepancies across tools and more time interpreting results.
As adoption grows, branded versus non-branded reporting will likely become the default rather than an advanced, custom setup. Reporting becomes more consistent, and performance narratives are easier to support with shared data.
If you’re focused on driving impact, the opportunity is to move beyond reconciling data and toward more confident, consistent interpretation and communication.
LinkedIn Ads consistently delivers some of the highest-quality B2B leads in paid media. But it also has a reputation for being very expensive — for both cost-per-click (CPC) and cost-per-lead (CPL) metrics.
Because of that reputation, I wanted to test a theory: that I could get low CPCs and low-cost qualified leads from LinkedIn Ads by creating a highly valuable, audience-specific piece of content.
As an agency, we usually run LinkedIn Ads campaigns for our clients. We don’t really run many paid ads for ourselves. However, to have the most control over this test, I decided that Saltbox Solutions would be the guinea pig. (Disclosure: I’m the director of strategy at Saltbox Solutions, a B2B-focused PPC and SEO agency.)
The results were impressive.
We spent less than $1,000 and generated a significant volume of leads at a sub-$10 CPL. For advertisers on a shoestring budget, LinkedIn Ads may not be out of reach as previously thought. It just requires a solid strategy.
Here’s what I did, why it worked, and how you can apply the same framework to your own campaigns — regardless of your advertising budget.
The campaign setup
The goal of this campaign was to get our target audience to download our 2026 B2B Demand Gen Playbook — a hefty, 23-page guide created specifically for B2B marketing decision-makers. The timing was key because many marketing leaders were already planning for 2026 in Q4 2025.
For this LinkedIn Ads campaign, I used a document ad format + a lead generation objective. The document ad lets the audience flip through and preview the content before downloading, with four pages available to preview before requiring a download to access more.
I also used a lead gen form for contact capture, since it’s fairly frictionless — the form lives within the LinkedIn platform and autofills most of the contact information from a user’s profile. There was just one campaign for this test, with three ad copy variations for the document ad.
In terms of budget and bid strategy, the campaign used a $600 lifetime budget and a $15 manual bid.
This is what allowed for such low CPLs. Before writing a single word, I did deep audience research to figure out what they really cared about and what would be useful to them.
I knew exactly who I wanted to talk to (and who would be a good fit for the agency): B2B marketing decision-makers at larger companies with a dedicated marketing team. They worked mostly in a demand generation capacity and needed help prioritizing the channels that would make sense for their 2026 goals.
From there, the research focused on understanding what they would actually need in that planning process. It involved:
Mining client meeting notes and calls for recurring questions, common pain points, and frequent requests that kept coming up during planning season.
Using SparkToro to plug in my ideal customer profile (ICP) details and explore the questions, topics, and channels the audience was already engaging with.
Scanning LinkedIn, where I’m active and where a majority of my network is in B2B marketing, for real-time insight into what people were worried about.
Reviewing Reddit threads and B2B marketing communities I’m part of, which were super helpful for getting at the questions marketing leaders had.
The main question throughout this process was, “If I were in my audience’s shoes, what resource would actually be helpful right now?”
One big advantage I had: My audience is me. I’m a B2B marketer talking to other B2B marketers. Being plugged into the same communities and conversations made it much easier to put a personal spin on the content and write like a human.
Once I had a clear picture of what my audience needed, the focus shifted to going deep. The goal was to create a genuinely useful resource, not a thinly veiled sales pitch disguised as a playbook.
That took time to get right. But that depth is likely what drove the 76% lead form completion rate. When people could preview the document in their feed and see that it was substantive, they trusted it was worth downloading.
A few other notes on creating the playbook:
Timeliness: It was created to address a very timely and important marketing activity – annual planning. Because of that, 2026 became the focal point of the cover, and the content was framed around the moment the audience was already in.
Contextual CTAs: Calls to action to get a free audit were sprinkled into sections that dealt with PPC and SEO/GEO, which are the services we actually provide. The CTAs felt earned rather than forced because they were relevant to the surrounding content.
Cover design: A lot of effort went into how the guide looked. Knowing it would be promoted as an ad, the goal was to make it pop in the LinkedIn feed and grab the audience’s attention.
The targeting strategy
For audience targeting, I used a few different layers:
I also excluded a few attributes deliberately after viewing the audience insights:
The resulting audience was about 54,000 people. It could’ve been smaller and still delivered great results.
Job title targeting would also be worth testing. The leads were qualified as-is, but it would be interesting to see what the results would look like with more specific role targeting.
Three ad variations were used to test different copy angles. All three used the same document ad format and lead gen form. The only variable was the copy.
Here are the variations.
Version 1:
Version 2:
Version 3:
A few principles guided the ad copy process:
Each variation led with a strong hook. The first sentence had to grab attention and make people want to keep reading.
The copy ran longer than you typically see in ads to give a clearer sense of the guide’s tone and value before the click.
Common fears and questions the audience already had were addressed, such as translating high-level strategy into execution and staying visible in AI search results.
The tone leaned into a “we’ve got you” approach rather than being overhyped or promotional. B2B buyers are skeptical and respond to guidance and valuable information, not pressure.
The copy also had some personality, with a slightly cheeky edge while staying professional. For example, it called out common situations, such as having a beautiful strategy deck but never executing the plan.
Campaign and ad results
Recapping the campaign’s overall performance from Jan. 5 to Jan. 31:
One interesting note is that while the CPC bid was set at $15, the average CPC actually came in way under that at $5.41.
The average CTR was also above LinkedIn’s typical benchmark of 0.50%, and the lead form completion rate was over 75%.
LinkedIn lead gen campaigns have delivered strong results across many client engagements. But even by those standards, this performance was pretty good.
And for the specific ads, V2 was the winner by far:
The LinkedIn Ads algorithm zeroed in on that one and gave it pretty much all the airtime. It makes sense — that had the most eye-catching hook, “Steal our best demand gen ideas.”
The campaign was intentionally stopped at 60 leads. We’re a small, boutique agency, and the goal was to be thoughtful about nurturing the leads generated rather than flooding the funnel with volume that couldn’t be followed up on well.
Of the 60 leads, roughly 56 were qualified — a remarkable outcome for a prospecting campaign.
Our approach to working these leads has been organic LinkedIn engagement rather than a hard sell. No cold pitch sequences. Just showing up in their world as a familiar, credible presence.
As the person who wrote the playbook, I’m also personally reaching out to downloaders to ask for feedback on what they found useful and what they were hoping to see that wasn’t there. That insight will directly shape the next version of the guide and any future content assets created.
The campaign is still in the nurture phase. The primary goal of this test was to validate the model, not generate an immediate pipeline. On that measure, it exceeded expectations.
What made this work and what could be done differently
Looking back at the campaign as a whole, a few things stand out as the real drivers of performance:
Audience research came first. The target audience was clearly defined before anything was created. The content, the targeting, and the copy all flowed from that. As a result, it was very specific.
The content was timely. Releasing a 2026 planning guide early in the year, when everyone was back from the holidays, really worked in this campaign’s favor.
Depth built trust before the form appeared. The preview paired with substantive ad copy had a positive impact on lead form completion rate.
The copy sounded like a person, not a brand.
What could be done differently next time:
Despite the high conversion rates, adding a bit more friction to the form completion process may help. The fact that it was so easy to fill out the form means that the audience may not remember actually downloading it.
Following up with the leads faster after downloading would be a priority. The same approach of asking for feedback would still apply, rather than a sales pitch.
Running it longer and getting more leads would provide a larger data set to learn from.
Testing more ad copy variations against the winner.
How to do this yourself
Whether you’re running lead gen for a client or testing it on your own business, here are some tips to make it work:
Do your audience research before you create the asset: Reddit, SparkToro, community forums, and your own client conversations are all underutilized sources of real audience pain points, and you get pointers on the language they use.
Build something genuinely useful: If it’s a thinly veiled promotion, you’re wasting your audience’s time.
Match your content topic to a timely moment your audience is already in: What season, event, or planning cycle are they navigating right now?
Give your ad copy some personality: Test a hook that stands out, or at least is something that sounds like it was written by a real person.
Start small intentionally: Validate CPL and lead quality before scaling. A $500 test can tell you a lot.
Let the winner run: Early creative testing gives you the signal you need to spend efficiently at scale.
Align your content and your targeting precisely: If you wrote the guide for marketing decision-makers, make sure the campaign isn’t picking up sales roles.
We plan to relaunch this campaign once we’ve gathered enough feedback from the first wave of downloaders. The playbook itself is a living document. It will be updated as the industry shifts, particularly with the wave of ads in AI Overviews and responses.
This was one content asset and one campaign. More are in the works, and this test gave a lot of confidence in the approach.
The platform isn’t the problem. The strategy and offering might be what is driving up the cost.
If you’re willing to put the work into research, producing a quality asset, and getting the messaging right, LinkedIn Ads can be one of the most efficient B2B lead generation channels available.
WordStream by LocaliQ’s 2025 benchmarks show nearly 87% of industries saw year-over-year CPC increases. The cross-industry Google Ads average reached $5.26 per click. High-intent verticals are higher: legal services average $8.58, and the most competitive B2B categories approach or exceed $8 to $9 per click.
These increases reflect structural shifts in how search results pages are designed, how auctions are optimized, and how inefficiencies compound across paid search accounts. Many remain invisible until a structured PPC audit uncovers them. Protecting the budget you already have — starting with your branded terms — is where recovery begins.
Here are the five trends every advertiser needs to understand right now.
What’s driving your CPC
More advertisers are chasing the same finite inventory
Search advertising is, at its core, an auction. When more advertisers compete for the same keywords, prices rise. Global PPC spend continues to surge (Quantumrun Research), while available click slots on results pages haven’t grown at the same rate. More money chasing the same inventory yields higher prices.
The pandemic permanently accelerated this shift—brands that hadn’t invested seriously in paid search entered Google’s auction and didn’t leave.
Google’s AI Overviews are squeezing in
One of the most consequential structural changes in paid search over the past decade is the SERP itself. Google’s AI Overviews now occupy prominent space for informational and exploratory queries. As they expand through 2024 and 2025, they reduce the number of organic and paid listings visible above the fold.
A late-2025 Seer Interactive analysis of 3,119 search terms across 42 organizations found paid CTR on queries with AI Overviews dropped 68%—from 19.7% to 6.34%.
The mechanism is straightforward: as AI Overviews take more real estate (Skai), fewer paid placements appear above the fold. Impression share tightens. Automated bidding competes more aggressively for what remains, and prices rise.
The nuance: users who click past an AI Overview tend to be further along in the buying journey. WordStream’s data shows roughly 65% of industries saw higher conversion rates despite rising CPCs. The implication is clear: shift budget toward high-intent transactional queries where AI Overviews are less likely, and away from informational queries where they dominate.
Smart bidding is making the whole auction more expensive
Modern Google Ads campaigns increasingly rely on automated bidding strategies, such as maximizing conversions or target CPA. Per Google’s Smart Bidding documentation, the system sets a precise bid for each auction based on predicted conversion likelihood — prioritizing performance over cost control.
When nearly every competitor uses the same logic, it creates a self-reinforcing loop of rising bid pressure. This is a market-wide dynamic you can’t reverse — only adapt to.
Unauthorized brand bidding is inflating your costs from the inside
While you can’t control platform algorithms or the macroeconomy, one major driver of CPC inflation is within your control.
When affiliates, partners, or competitors bid on your trademarked keywords, they enter an auction that should be nearly uncontested. Each additional bidder drives your branded CPC up, and you pay twice: once to create the demand, and again when a third party captures that same searcher at the bottom of the funnel.
The effects compound. AI Overviews have already compressed available click inventory; unauthorized brand bidding then inflates the cost of the inventory you win.
Detecting violations requires more than manual SERP checks. Unauthorized bidders often use cloaking—geotargeting away from your headquarters or dayparting outside business hours—to evade detection. With a self-service platform like Bluepear, you can run automated 24/7 monitoring across search engines, geographies, and devices—capturing ad copy and landing page evidence to dispute invalid affiliate commissions and enforce trademark guidelines at scale. Fewer bidders on your branded terms mean less auction pressure and lower CPCs on traffic you already own. It’s one of the few paid search levers that doesn’t require a broader strategy overhaul to move.
What to do about it: three priorities for advertisers
The data points to three clear priorities as you navigate this environment:
Protect your branded baseline. Branded keywords reflect demand you already created. Systematically monitor who else is in that auction and remove unauthorized bidders with automated brand protection tools — one of the highest-leverage actions available right now.
Anchor optimization to cost per acquisition. WordStream’s 2025 benchmarks show a higher CPC can deliver a higher-quality, further-down-funnel user and a lower CPA. The headline CPC number is increasingly a poor proxy for campaign health.
Build first-party data infrastructure. You’re best insulated from continued CPC inflation when your bidding algorithms use high-quality, proprietary conversion signals — reducing reliance on the platform’s broad audience approximations.
Average CPCs are at their highest levels in years, and that trend is unlikely to reverse. Advertisers who manage costs most effectively have adapted their strategies accordingly.
Not sure how many unauthorized bidders are in your branded auction right now? Register with promo code BRANDAUDIT: Bluepear team will deliver a customized audit of your branded search landscape within 48 hours!
For the latest insights on branded search and paid search protection, follow Bluepear on LinkedIn.
Once upon a time, in the delightfully chaotic 1990s, web copywriting was all about exact-match keywords and relentless meta tag stuffing. As algorithms matured, so did SEOcopywriting.
Now, with proposition-based retrieval systems, writing like you’re in the business of tricking a crawler into seeing relevance through keyword repetition is no longer a viable strategy.
Below is a playbook for generative AI-friendly copywriting, broken down into self-contained, high-density concepts.
The ‘grounding budget’: Quality over quantity
Large language models (LLMs) don’t seek less information. They seek higher information density. Google’s Gemini operates on a limited budget of retrieved information, according to research by DEJAN AI, which analyzed over 7,000 queries.
The grounding budget is roughly 1,900 words per query, split across multiple sources. For an individual webpage, your typical allocation is around 380 words. You’re competing for a tiny slice of a fixed pie, so being precise helps the AI’s matching process.
If Schema.org is the external scaffolding of a building, structured language is the load-bearing internal frame. Language itself is the structure we provide machines, such as “semantic triplets” (subject → predicate → object). When a copywriter moves structure inside the language, the sentences become inherently machine-readable.
Google’s passage ranking, AI Overviews, and third-party LLMs like ChatGPT all evaluate content at the passage level using similar retrieval infrastructure. A sentence that works for one works for all of them.
A properly structured sentence fulfills four strict data criteria:
Names the entities: Explicitly identifies subjects and objects (e.g., “Notion Team Plan”).
States the relationships: Defines how entities interact using clear verbs (e.g., “costs”).
Preserves the conditions: Includes context that makes the statement true (e.g., “$10 per user per month”).
Includes specifics: Provides verifiable details rather than marketing fluff (e.g., “includes 30-day version history”).
Feature
The marketing fluff
Structured language (GEO-friendly)
Example
“Our revolutionary platform makes managing your team easier than ever. It is affordable and comes with great support.”
“The Asana Enterprise Plan [Entity] streamlines [Relationship] cross-functional project tracking [Specifics] for teams over 100 people [Condition], starting at $24.99 per user [Data].”
Machine utility
Low (Vague, hard to extract)
High (Decomposable into atomic claims)
Best practices for AI-friendly copywriting
Traditional copywriting flows like a row of dominoes. When an AI “chunks” your page, it snaps those dominoes apart. If your sentences aren’t load-bearing on their own, the logic collapses.
Rule 1: Every sentence must survive in isolation
Ensure every single sentence explicitly names its subject. Vague pronouns like “this,” “it,” or “the above” become dead bits when extracted.
Broken: “It also includes unlimited cloud storage.”
Anchorable: “The Dropbox Business Standard Plan includes 5TB of encrypted cloud storage.”
Rule 2: State relationships, don’t just list entities
Keyword stuffing introduces inference errors. Effective structured language explicitly states the relationship between nodes.
The keyword dump: “We offer SEO, PPC, and content marketing services.”
The structured relationship: “Our agency integrates PPC data into SEO strategies to lower the cost per acquisition (CPA) by an average of 15% within the first 90 days.”
“Ramon Eijkemans is a freelance SEO specialist at Eikhart.com, specializing in enterprise SEO for platforms with 100,000 or more pages. He developed the LLM Utility Analysis framework, a five-lens content scoring system that measures the likelihood of content being selected and cited by AI systems, covering structural fitness, selection criteria, extractability, entity and propositional completeness, and natural language quality, based on research into passage retrieval architectures, Google patent evidence, and proposition-based extraction systems. The framework is the subject of this Search Engine Land article.”
The AI inverted pyramid: Engineering ‘citation bait’
Research shows LLMs reliably extract claims near the beginning or end of a text. Adding more content often dilutes your coverage.
“Pages under 5,000 characters get about 66% of their content used. Pages over 20,000 characters? 12%. Adding more content dilutes your coverage.”
Here’s the four-step formula for citation bait.
The direct answer: Open with a dense, 40-60 word declarative statement answering the “who, what, why, or how.”
Context and detail: Follow up with nuance, maintaining high semantic density.
Structured evidence: Use bulleted lists, tables, or numbered steps (extractable data).
Follow-up alignment: Anticipate the next logical prompt in clearly labeled H2 or H3 subheadings.
Clear headings above a paragraph can improve its mathematical relevance (cosine similarity) to AI systems by up to 17.54%.
To ensure your high-value pages are programmatically extractable, run these four stress tests on your mid-page copy.
The isolation test
The action: Select a single sentence completely at random from the middle of a webpage and read it in total isolation.
The goal: If the sentence relies on preceding paragraphs to make sense or uses vague pronouns (e.g., “This allows for…”), the page has a utility gap. Every sentence should be self-contained.
The context test (‘Scroll twice and read’)
The action: Scroll down twice on a homepage so the hero banner and primary H1 disappear, then start reading from wherever your eyes land.
The goal: If a reader (or a machine “chunking” that section) can’t immediately identify the product or service without the top visual layout, the mid-page text fails the context test.
The disambiguation test
The action: Read a mid-page sentence out loud and ask: Could this apply to the deforestation of the Amazon or a steamy romance novel?
The goal: If a sentence is wildly generic (e.g., “We empower our clients to achieve more”), an LLM will struggle to map it to your specific entity. Specifics prevent misinterpretation.
The URL accessibility test
The action: Run the live URL through an LLM agent or NotebookLM.
The goal: If convoluted JavaScript, heavy code bloat, or aggressive bot protection prevents an agent from “seeing” the raw text, generative search engines may skip the content entirely.
AI search content optimization FAQs
Here are answers to common questions about optimizing content for AI search.
Is generative engine optimization (GEO) a legitimate discipline?
Traditional SEO relies on bolt-on machine-readable code to make human narratives SEO-worthy. AI search optimization requires embedding explicit entity relationships and structure directly inside your copy.
What is the ideal section length for chunking?
Open with a dense 40-60-word declarative statement. Information buried deep in long paragraphs is rarely retrieved.
Does copywriting for AI search help traditional SEO?
Yes. Because Google uses vector embeddings to evaluate content at the passage level, structuring language for an LLM improves traditional visibility.
Is longer content better?
No. Density beats length. Pages under 5,000 characters see a 66% extraction rate, while pages over 20,000 characters plummet to 12%.
What is the inverted pyramid for AI copywriting?
The AI inverted pyramid means abandoning the slow, conversational introduction and placing your core entities, exact claims, and specific conditions in the very first sentence to guarantee flawless machine extraction.
The content creator is now a machine-readability engineer. Our job is to build narratives that are persuasive to humans while being programmatically extractable for neural networks.
If your content lacks explicit entity relationships, perfectly self-contained sentences, and highly “anchorable” citable claims, the machines will simply look right through you.
Google released the March 2026 spam update less than 24 hours ago and it is already done rolling out. The update finished today at 10:40 a.m. ET.
This update was released yesterday (March 24) at 3:20 p.m. It took 19 hours and 30 minutes to fully roll out, which is super fast.
Why we care. This is the second Google algorithm update announced in 2026. It’s unclear what spam it targeted, but if you see ranking or traffic changes in the next few days, the Google March 2026 spam update could be the cause.
“While Google’s automated systems to detect search spam are constantly operating, we occasionally make notable improvements to how they work. When we do, we refer to this as a spam update and share when they happen on our list of Google Search ranking updates.
For example, SpamBrain is our AI-based spam-prevention system. From time-to-time, we improve that system to make it better at spotting spam and to help ensure it catches new types of spam.
Sites that see a change after a spam update should review our spam policies to ensure they are complying with those. Sites that violate our policies may rank lower in results or not appear in results at all. Making changes may help a site improve if our automated systems learn over a period of months that the site complies with our spam policies.
In the case of a link spam update (an update that specifically deals with link spam), making changes might not generate an improvement. This is because when our systems remove the effects spammy links may have, any ranking benefit the links may have previously generated for your site is lost. Any potential ranking benefits generated by those links cannot be regained.”
Impact. This update should only impact sites spamming Google Search, so hopefully you didn’t see any major negative impact.
Influencer content isn’t just a brand awareness play. It’s showing up in Google SERPs, Google AI Overviews, and AI answers, making keyword strategy an essential part of every influencer brief.
When we brief an influencer, we assign them a keyword. Not as a nice-to-have, but as a required part of the strategy, usually woven into the script, the caption, the on-screen text, and the hashtags.
That might sound like an SEO team overreaching into an influencer team’s lane. But in 2026, the lane lines don’t exist.
Social content is search inventory. If your influencer marketing program isn’t built around that reality, you’re leaving a significant and measurable share of voice on the table.
Search journeys now span platforms, formats, and sources
For most of search’s history, optimization meant ranking on Google. That’s still important, but it’s no longer the full story.
Over a third of consumers now prefer to start their search journey with AI tools like ChatGPT over Google. Platforms like YouTube, Instagram, and Pinterest have also become primary discovery engines for product research, how-to queries, and purchase decisions.
A user searches “best lightweight running shoes” on TikTok and watches three creator videos.
Then they ask ChatGPT for a comparison.
Next, they Google the brand's reviews, scanning Reddit commentary and What people are saying content.
Then they navigate to a brand’s site.
Each of these touchpoints is a search moment, and there’s a strong chance they involve influencer content. The brands showing up at every step are the ones treating influencer marketing content as search content from the beginning.
Ross Simmonds, CEO of Foundation Marketing, shared with me:
“Influencers exist on practically every platform, whether we’re talking about LinkedIn, Reddit, Instagram, or TikTok. They’re creating content every day. When people search, whether through Google or directly on these platforms through things like Ask Reddit or TikTok search, they’re coming across content that influencers have created.”
“If those influencers understand best practices around search and discoverability, they’re more likely to create content that ranks not only on native platforms, but also directly in the SERP. That’s a marketer’s dream.”
What people are saying SERP feature for “best skin care for moms”
Google’s What people are saying SERP feature is a carousel that appears directly in search results and surfaces user-generated and creator content from platforms like YouTube, TikTok, LinkedIn, Instagram, and Reddit for relevant queries.
It’s now a default feature in U.S. search results and consistently shows up for mid- to bottom-of-funnel keywords, exactly where purchase decisions are made. A brand can appear in this SERP feature (either directly or indirectly via an influencer) without ranking in the traditional Top 10 results.
“Short videos” SERP feature for “skin routine for moms”
Additionally, the Short videos SERP feature is another prime spot for your influencer content to take up shelf space on Google. This means an influencer video optimized with the right SEO keyword can surface in multiple spots on Google for a commercial query your brand’s own site might never rank for.
It’s not theoretical. It’s happening now.
Google AI Mode referencing TikTok and Instagram content for a hair curling prompt
Meanwhile, AI answers are pulling from social content at scale. An analysis of 40 million AI search results found Reddit to be the single most-cited domain across ChatGPT, Copilot, and Perplexity. Ahrefs research confirms that YouTube mentions and branded web mentions are among the top factors correlating with AI brand visibility in ChatGPT, AI Mode, and AI Overviews.
“YouTube is the No. 1 cited domain for Gemini. And 35% of the channels getting cited have under 10K subscribers. We checked the correlation between views and citations. It’s basically zero.
“What actually correlates? How well the creator describes the topic in their video description. So if an influencer makes a video about your product and writes a lazy two-line description, you’re leaving AI visibility on the table.”
The more creators talk about your product with consistent language, the more confident AI becomes in recommending you. So if your influencer content doesn’t contain the SEO keywords your audience is actually searching for, it won’t be surfaced in all the places that matter.
Sample influencer brief with keyword included as a standard
Keyword research should be a standard step in every influencer campaign. Start by identifying your target keyword from data across three sources:
Existing keyword targets shared by the organic strategists.
In-platform searches for what’s trending and/or suggested autocompletes.
AnswerThePublic searches for both brand and non-brand terms related to the campaign theme.
Once the keyword is identified, embed it into every element of the creator’s content:
Script: Spoken naturally, ideally in the first half of the video, where TikTok’s algorithm is most attentive to audio signals.
Caption: Written to open with or include the keyword, supporting both platform and Google indexing.
On-screen text: Reinforcing the keyword visually for accessibility and algorithm legibility.
Hashtags: Used to connect the content to the broader topic the keyword lives in.
Don’t confuse this with keyword stuffing. It’s modern content architecture.
There’s a big difference between a creator naturally saying, “If you’re searching for the best running shoes right now…” versus a brand clunkily forcing a phrase into otherwise natural content. The influencer brief sets the requirement, yes, but the creator’s job is to incorporate their unique voice.
Ashley Liddell, co-founder and Search Everywhere director at Deviation, shared:
“We assign keywords to influencers based on real search behaviour across platforms, not just brand messaging, and map demand from TikTok, YouTube, Reddit, and Google, then align specific queries to creators whose content style and audience best fit that intent.
“Each brief gives a clear search-led direction, including topic, angles, and format, while leaving room for the creator’s own creativity. The goal is to make influencer content discoverable in-platform search while ensuring it remains engaging in-feed.”
Once the content is live, track whether the creator’s post is surfacing for the target keyword across:
The native platforms (TikTok, Instagram, etc.)
Google SERP features
Videos and Short videos carousel
What people are saying
Standard organic results
Screenshot and log positions immediately (because rankings can quickly shift). This data tells a story clients aren’t used to seeing from an influencer program.
Influencers extend your search everywhere footprint
Our search everywhere optimization framework
There’s a reason this matters beyond any individual campaign. Google organic CTRs have declined dramatically, by as much as 61% on queries where AI Overviews appear.
With Google SERP features increasingly highlighting video and social content, traditional web content is losing surface area on the SERPs. Social content, conversely, is gaining traction, and we cannot ignore this.
For brands, influencer content has taken on a much stronger value: scalable, authentic, human-first search inventory distributed across platforms where their audiences spend time. It doesn’t replace a traditional SEO program, but it extends reach into channels where creator voices tend to outperform brand-owned content.
Younger audiences search socially first. In some categories, a meaningful share of consideration-stage audiences see creator content before they ever search for your brand. If your influencers don’t use the language your audience searches, you’re invisible in the moments that matter most.
Search everywhere optimization comes down to one thing: showing up where your audience actually searches with content worth stopping for.
The operational reality: Putting things into practice
The biggest barrier to building keyword optimization into influencer programs is structural. SEO and influencer teams often sit within different parts of an organization, owned by different teams with different KPIs, and little reason to collaborate.
Even when those teams are close, a common hesitation remains: adding a keyword requirement to a creator brief may make the content feel scripted or inauthentic. That concern is valid, but somewhat misplaced. A keyword isn’t a constraint on creativity — it’s a topic signal.
Creators integrate talking points, product messaging, and brand language into their content all the time. A search term is no different, as long as the brief gives them room to use it in their own voice.
Closing that gap requires a few concrete changes.
SEO and influencer strategy should share a brief template. The target keyword, along with guidance on how to integrate it naturally, should be a standard field, not an afterthought. If the influencer lead and the SEO lead aren’t in the same briefing conversation, that’s the first thing to fix.
Keyword selection should be platform-specific. What users search on TikTok differs from what they search on Google. TikTok search is more conversational and trend-based. Pull keywords from TikTok’s own autocomplete, not just a traditional keyword tool, then validate on AnswerThePublic, and cross-reference with existing organic targets to find terms that work across surfaces.
Approval workflows should include keyword checks. When reviewing a script, a caption, or a live post, include a keyword compliance check. If the keyword is missing, ask the influencer for a revision before the content goes live. This sounds small, but it’s the difference between content that ranks and content that doesn’t.
Reporting should include search metrics. Did the post surface on TikTok for the target keyword? Did it appear in one of Google’s video sections or “What People Are Saying”? These are trackable, reportable metrics, and they belong in campaign reports alongside reach, engagement, and conversions.
Influencer content has always shaped brand perception. Today, it also shapes search visibility across social platforms, Google’s evolving SERP features, and AI-generated answers.
Brands that recognize this apply a search strategy to a channel that, until recently, operated without it. That means treating every influencer video as search content — briefing keywords and reporting on search performance as you would for other organic channels.
Influencer content is search inventory. The only question is whether you’re optimizing it.
Does schema markup really benefit AI search optimization? Some suggest it can 3x your citations or dramatically boost AI visibility. But when you dig into the evidence, the picture is far more nuanced.
Let’s separate what’s known from what’s assumed, and look at how schema actually fits into an AI search strategy.
How schema fits into AI search now
Search is shifting from surfacing a SERP with blue links to AI Overviews, generative answers, and chat‑style summaries that collate content in addition to links.
To get your content to appear in this model, your site has to be understood as entities — singular, unique things or concepts, such as a person, place, or event — and the relationships between them, not just strings of text.
Schema markup is one of the few tools SEOs have to make those entities and relationships explicit and understandable for an AI: This is a person, they work for this organization, this product is offered at this price, this article is authored by that person, etc.
For AI, three elements matter the most:
Entity definition: Which brands, authors, services, or SKUs exist on the page.
Attribute clarity: Which properties belong to which entity (e.g., prices, availability, ratings, job titles).
Entity relationships: How entities connect (e.g., the offeredBy, worksFor, author, and sameAs properties).
When schema is implemented with stable identifiers (@id) and a connected structure (@graph), it starts to behave like a small internal knowledge graph.
AI systems won’t have to guess who you are and how your content fits together, and will be able to follow explicit connections between your brand, your authors, and your topics.
Two major platforms, Microsoft Bing and Google (for AI Overviews), have confirmed that schema markup helps their AIs understand content. For these platforms, it is confirmed infrastructure, not speculation.
What about ChatGPT, Perplexity, and other AI search platforms?
We don’t know how these platforms use schema yet. They haven’t publicly confirmed whether they preserve schema during web crawling or use it for extraction. The technical capability exists for LLMs to process structured data, but that doesn’t mean their search systems do.
This doesn’t mean schema is useless; it means schema alone doesn’t drive citations. LLM systems appear to prioritize relevance, topical authority, and semantic clarity over whether content has structured markup.
Put differently, LLMs perform best when you give them a structured form to fill out, not a blank canvas. When models are asked to extract into predefined fields, they make fewer errors than when told to simply “pull out what matters.”
Schema markup on a page is the web equivalent of that form: a set of explicit entity, brand, product, price, author, and topic fields that a system can map to, rather than inferring everything from unstructured prose.
What the research tells us
Research on structured extraction tells us that LLMs have the technical capability to process structured data more accurately than unstructured text.
However, this doesn’t tell us whether AI search systems preserve schema markup during web crawling, whether they use it to guide extraction from web pages, or whether this results in better visibility.
The leap from “LLMs can process structured data” to “web schema markup improves AI search visibility” requires assumptions we can’t verify for most platforms.
For Microsoft Bing and Google AI Overviews, schema likely improves extraction accuracy, since they’ve confirmed they use it. For other platforms, we don’t have confirmation of actual implementation.
There are significant gaps in what we can verify.
To date, there are no peer-reviewed studies on schema’s impact on AI search visibility, and no controlled experiments on LLM citation behavior and schema markup.
OpenAI, Anthropic, Perplexity, and other platforms besides Microsoft and Google haven’t published their indexing methods.
This gap exists because AI search is genuinely new (ChatGPT search only launched in October 2024), companies don’t disclose their indexing methods, and measurement is difficult with non-deterministic AI responses.
How schema builds an entity graph
In traditional SEO, many implementations stop at adding Article or Organization markup in isolation. For AI search, the more useful pattern is to connect nodes into a coherent graph using @id. For example:
An Organization node with a stable @id that represents your brand.
A Person node for the author who works for your organization.
An Article node whose author is that person and whose publisher is that organization, with about properties that declare the main topics.
That connected pattern turns your schema from a set of disconnected hints into a reusable entity graph. For any AI system that preserves the JSON‑LD, it becomes much clearer which brand owns the content, which human is responsible for it, and what high‑level topics it is about, regardless of how the page layout or copy changes over time.
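A minimal sketch of that pattern in JSON-LD, with placeholder names and URLs (the @id values and domain are illustrative, not prescribed):

```json
{
  "@context": "https://schema.org",
  "@graph": [
    {
      "@type": "Organization",
      "@id": "https://example.com/#org",
      "name": "Example Co",
      "url": "https://example.com/",
      "sameAs": ["https://www.linkedin.com/company/example-co"]
    },
    {
      "@type": "Person",
      "@id": "https://example.com/about/#jane-doe",
      "name": "Jane Doe",
      "worksFor": { "@id": "https://example.com/#org" }
    },
    {
      "@type": "Article",
      "@id": "https://example.com/blog/post/#article",
      "headline": "Example article",
      "author": { "@id": "https://example.com/about/#jane-doe" },
      "publisher": { "@id": "https://example.com/#org" },
      "about": [{ "@type": "Thing", "name": "Schema markup" }]
    }
  ]
}
```

Because each node carries a stable @id, any page on the site can reference the same Organization or Person node instead of redeclaring it anonymously.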
| Aspect | Traditional SEO schema | Entity graph schema |
| --- | --- | --- |
| Structure | Single @type object per page | @graph array of interconnected nodes |
| Entity ID | None (anonymous) | Stable @id URLs for reuse across the site |
| Relationships | Nested, one-way (author: "name") | Bidirectional via @id references (worksFor, author) |
| Primary benefit | Rich snippets, SERP CTR | Entity disambiguation, extraction accuracy for AI |
| AI impact | Minimal (tokenization often strips it) | Makes the site a unified knowledge graph source, if preserved |
Recommendations for implementing schema for AI search
For AI search, the best way to position schema right now is to:
Make entities and relationships machine-readable for platforms that preserve and use structured data (confirmed for Bing Copilot and Google AI Overviews).
Reduce ambiguity around brand, author, and product identity so that extraction, when it happens, is cleaner and more consistent.
Complement topical depth, authority, and clear brand signals, not replace them.
Use schema markup for:
Improving visibility in Bing Copilot.
Supporting inclusion in Google AI Overviews.
Enhancing traditional SEO.
Making content easier to parse (good practice regardless of AI).
Maintaining a low-cost implementation with potential upside as platforms evolve.
However, don’t expect:
Guaranteed citations in ChatGPT or Perplexity.
A dramatic visibility lift from schema alone.
Schema to compensate for weak content or low authority.
Priority schema types (based on platform guidance) include:
Organization (brand entity identity).
Article or BlogPosting (content attribution and authorship).
Schema markup is infrastructure, not a magic bullet. It won’t necessarily get you cited more, but it’s one of the few things you can control that platforms such as Bing and Google AI Overviews explicitly use.
The real opportunity isn’t schema in isolation. It’s the combination of structured data with proper entity relationships, high-quality, topically authoritative content, clear entity identity and brand signals, and the strategic use of @graph and @id to build entity connections.
You launch a new TikTok ad. Early metrics look great — low CPCs, high engagement, and a ROAS that makes you look like a pro. Then, a few days later, performance slips.
Ad frequency creeps up, the hook rate drops, and you’re suddenly back at the drawing board.
Some call it creative fatigue. On TikTok, it’s closer to creative exhaustion.
A TikTok ad’s “half-life” is shorter than any other platform. If you’re still treating it like a Meta ad campaign, you’ll lose.
To win, treat creative like a supply chain, not a campaign asset.
Why TikTok creative decays so quickly
On intent-based platforms like Google, Amazon, or Pinterest, people search for things. On social platforms, people look for family, friends, and other people. On TikTok, above all, people go for entertainment (though they still discover things and people).
TikTok’s algorithm favors variety, and you consume content at lightning speed. The moment something feels repetitive or stale, you swipe.
Your creative decays faster because the platform runs on high-velocity novelty. You’re competing with thousands of creators and brands.
If your process relies on long feedback loops — from storyboarding to shooting to editing — you’ll fall behind. By the time your ad goes live, the trend has shifted, the audio is dated, the hooks are stale, and your audience has moved on.
Use ongoing content capture to avoid bottlenecks and keep up with TikTok’s shrinking content half-life.
Modular creative: Record five hooks, three body segments, and four CTAs. Get 60 ad permutations from one hour of filming (see the sketch after this list). Block time on your calendar to shoot.
Creator-in-residence: Don’t rely on one-off shoots. Hire creators in-house or on retainer to capture footage and document the brand daily. Make content creation more efficient and effective.
The 80/20 fidelity rule: Keep 80% of your content lo-fi and native, as if it were shot on a phone. Use the other 20% for higher-production, polished hero assets. Blend into the feed, maximize performance, and elevate your brand where it matters.
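To make the modular math concrete, here is a minimal sketch of how five hooks, three body segments, and four CTAs multiply into 60 permutations (the segment names are placeholders):

```python
from itertools import product

hooks = [f"hook_{i}" for i in range(1, 6)]    # 5 hook variations
bodies = [f"body_{i}" for i in range(1, 4)]   # 3 body segments
ctas = [f"cta_{i}" for i in range(1, 5)]      # 4 CTA variations

# Every hook/body/CTA combination is a distinct ad permutation.
permutations = list(product(hooks, bodies, ctas))
print(len(permutations))  # 60 = 5 * 3 * 4
```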
Every high-performing TikTok ad can be broken down into three distinct modules.
The hook (0:00-0:03)
The most volatile part. It stops the scroll and fatigues fastest.
Film 5–7 variations for each concept. Use pattern interrupts—start mid-action, zoom in, throw a box. Try a negative constraint: “Stop doing [common mistake] if you want [result].”
Use green screen reactions with trending news or customer reviews as the backdrop, with your commentary over it. Strong statements and questions keep it open-ended.
The body (0:04-0:15)
This is where you retain attention, deliver value, and show the “why” or “how.” It’s more educational or narrative and lasts longer than the hook.
Test “us vs. them” in a split-screen showing your product solving a common problem.
Test first-person use in real settings—at home, in the kitchen, outside, at the gym, or at work.
The CTA (last 3-5 seconds)
This is where you close. Test psychological triggers to see what moves the needle:
Use scarcity: “Our last drop sold out in 48 hours—don’t miss this one.”
Test low-friction angles: “Take the 2-minute quiz to find your best fit.”
Offer incentives beyond “Shop Now” or “Link in bio”: “Use code (X) for (% off) your first order.”
When a winning ad fatigues, don’t kill it. Keep the body and CTA, swap in a new hook. TikTok weights the first seconds for audience matching — use that to reset fatigue and extend performance.
When to pause or reallocate
A common mistake is cutting an ad too soon and missing its potential—or letting it run too long and wasting budget.
Your intuition matters, but TikTok’s algorithm sees more. An ad may fatigue with one audience and find a second life with another, so don’t give up too quickly. Here’s when to pause and when to move it elsewhere:
Kill signal: If your thumb-stop rate (3-second views/impressions) drops below your benchmark for three straight days, your hook isn’t working—pause it. If your hook is very fast, use 2-second views/impressions (see the sketch after this list).
Iterate signal: If engagement is high but conversions are low, your creative may work, but your offer, CTA, or landing page is adding friction.
Algorithm reallocation: Before you delete any asset, test broad targeting — especially with Smart+ campaigns. Let the algorithm find a new audience that hasn’t seen your ad and compare performance to manual targeting.
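A minimal sketch of the kill-signal check, assuming you can pull daily 3-second views and impressions from your reporting export (the field names are assumptions, not TikTok API fields):

```python
def thumb_stop_rate(three_sec_views: int, impressions: int) -> float:
    """Thumb-stop rate = 3-second views / impressions."""
    return three_sec_views / impressions if impressions else 0.0

def should_kill(daily_stats: list[dict], benchmark: float) -> bool:
    """Flag an ad when its thumb-stop rate sits below benchmark
    for three consecutive days."""
    below_streak = 0
    for day in daily_stats:
        rate = thumb_stop_rate(day["three_sec_views"], day["impressions"])
        below_streak = below_streak + 1 if rate < benchmark else 0
        if below_streak >= 3:
            return True
    return False

stats = [
    {"three_sec_views": 180, "impressions": 1000},  # 18%
    {"three_sec_views": 150, "impressions": 1000},  # 15%
    {"three_sec_views": 140, "impressions": 1000},  # 14%
]
print(should_kill(stats, benchmark=0.20))  # True: three straight days below 20%
```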
With fast iteration cycles, your TikTok budget can’t be static. Dedicate 20% to 30% of your monthly budget to testing new creative concepts. This budget isn’t for hitting your target ROAS — it’s for buying data and insight.
Once you find a winner, move it into scaling campaigns. This prevents performance from dropping when a single creative hits its half-life.
Brands winning on TikTok aren’t the ones with the biggest budgets or name recognition. They create and test the most.
Capture everything—packaging, shipping, unboxings, product use, customer testimonials—as raw material in your creative supply chain. Shorten the distance between a brand event and launch.
The shrinking ad half-life won’t slow you down. It will become your advantage.
For the past several years, marketing strategy has reorganized itself around a simple premise. Third-party data is fading. Privacy expectations are rising. The solution, we are told, is first-party data.
Collect more of it. Centralize it. Build the customer view around it.
In many ways, the shift was necessary. Direct relationships with customers are more durable than rented audiences. Consent and transparency matter. Organizations that invested early in their own data ecosystems are better positioned today than those that relied entirely on external signals.
But the industry’s confidence in first-party data has grown so strong that it now obscures a more complicated reality.
Owning customer data does not automatically translate into understanding customers.
Most marketing leaders have sensed this tension already. Despite increasingly sophisticated technology stacks, many organizations still struggle with familiar questions. Which records represent active individuals? Which identities are stale or misattributed? How much of the customer view reflects current behavior versus historical assumptions?
These are not philosophical concerns. They surface in everyday operational decisions. Campaigns that reach fewer real customers than expected. Personalization efforts that plateau. Measurement models that appear precise but produce inconsistent outcomes.
The problem is not the absence of data. If anything, the opposite is true.
The problem is the assumption that the data sitting inside our systems still reflects reality.
When first-party data becomes historical data
One of the quiet characteristics of customer data is how quickly it shifts from present tense to past tense.
Most organizations gather identity information at moments of interaction. Account creation, purchases, subscriptions, service requests. These events create durable records that enter CRM systems, marketing platforms and data warehouses.
From that point forward, the records largely persist as they were captured.
What changes is the world around them.
Consumers rotate devices. Email addresses evolve from primary to secondary. People move, change jobs, create new accounts, abandon others. Behavioral patterns shift with new platforms, new habits, and new privacy controls.
The record still exists, but the certainty surrounding the identity begins to loosen.
Marketing teams encounter this reality in subtle ways. Lists that appear healthy but deliver diminishing engagement. Customer profiles that fragment across systems. Identity graphs that require constant reconciliation as signals drift out of alignment.
None of this means first-party data is wrong. It simply means it ages.
The moment of collection is precise. The months and years that follow are less so.
The distance between records and reality
The idea of a unified customer profile has become foundational to modern marketing infrastructure. Customer data platforms, identity graphs and advanced analytics environments all attempt to bring scattered signals together into a coherent picture.
When the signals align, the results can be powerful.
But the effectiveness of these systems depends heavily on the integrity of the identifiers entering them. Email addresses, login credentials, device associations and other identity anchors serve as the connective tissue between records.
When those anchors drift or degrade, the unified profile begins to lose clarity.
This is not a failure of the technology itself. Most identity platforms perform exactly as designed. They connect the signals available to them.
The challenge is that many of those signals were captured months or years earlier, during moments when the system had limited visibility into the broader identity context surrounding the individual.
As the digital environment evolves, the original record becomes one reference point among many.
Marketing leaders recognize this gap when their systems produce technically accurate profiles that still fail to explain current customer behavior. The database reflects what was known. The customer reflects what is happening now.
Closing that gap requires something more dynamic than stored attributes alone.
The value of activity signals
In recent years, some organizations have begun looking beyond the traditional boundaries of customer records and focusing more closely on signals that indicate whether an identity is still active within the broader digital ecosystem.
Activity signals provide a different kind of intelligence.
Instead of asking what information was collected about a customer in the past, they ask whether the identity attached to that information continues to exhibit real-world behavior today.
Is the email address still being used?
Does the identity appear in recent digital interactions?
Are the signals surrounding it consistent with genuine consumer activity?
These questions are becoming increasingly important for teams responsible for both growth and risk management.
For marketing, activity signals help clarify which audiences remain reachable and which identities have quietly gone dormant. For fraud teams, they help differentiate legitimate consumers from synthetic identities that appear valid on the surface but lack authentic behavioral patterns.
Both disciplines are ultimately trying to answer the same question.
Does this identity correspond to a real person who is active in the digital world right now?
Stored data alone rarely answers that question with confidence.
A more durable identity anchor
Among the many identifiers circulating through the digital ecosystem, one has proven particularly resilient over time.
Email.
For decades it served as both a communication channel and a persistent identity anchor. It appears in authentication systems, commerce transactions, subscriptions, customer service interactions and countless other digital touchpoints.
That ubiquity produces a secondary effect. Email addresses generate a continuous stream of activity signals that reflect how identities move through the online world.
When those signals are analyzed across large networks, they reveal patterns that extend far beyond a single company’s customer database.
They can indicate whether an identity is actively engaged in digital life or has fallen silent. They can highlight inconsistencies that suggest risk. They can surface connections that help reconcile fragmented customer views.
In other words, they transform a simple identifier into a dynamic indicator of identity health.
Organizations that understand this dynamic tend to treat email differently. It becomes less of a campaign endpoint and more of a reference point for understanding identity across channels.
Rethinking what it means to know the customer
Over the past decade, marketing technology has made extraordinary progress in storing and organizing customer data. Few organizations today lack the infrastructure to capture and analyze enormous volumes of information.
The next frontier is not accumulation. It is validation.
Knowing a customer increasingly depends on the ability to verify that the identities inside a database still correspond to real individuals with ongoing digital activity.
This shift changes how teams think about data quality.
Instead of focusing solely on completeness, forward-looking organizations pay closer attention to vitality. Which identities remain active. Which have quietly faded. Which exhibit patterns that suggest fraud or synthetic creation.
These distinctions influence everything from campaign reach to attribution accuracy to risk exposure.
When identity signals are strong, the rest of the marketing ecosystem performs more reliably. Personalization becomes more relevant. Measurement reflects real outcomes. Customer experiences align more closely with actual behavior.
When identity signals weaken, even the most advanced tools begin operating on uncertain ground.
Moving beyond the illusion
The industry’s embrace of first-party data was an important correction after years of dependence on opaque third-party sources.
But ownership alone does not guarantee clarity.
Customer records capture moments in time. The people behind them continue to evolve.
For organizations that want to truly understand their customers, the challenge is no longer simply collecting data. It is maintaining an accurate connection between stored identities and real-world activity.
That requires looking beyond the database itself and paying closer attention to the signals that reveal whether an identity remains alive in the digital ecosystem.
Companies that make that shift discover something important.
The most valuable customer data is not the information they collect once.
It is the intelligence that helps them keep that data connected to real people over time.
Google released its March 2026 spam update today at 3:20 p.m. It’s the second announced Google algorithm update of 2026, following the February 2026 Discover core update.
This is the first spam update of 2026.
Google’s most recent spam update was in August 2025.
Timing. This update may only “take a few days to complete,” Google said. On LinkedIn, Google added:
“This is a normal spam update, and it will roll out for all languages and locations. The rollout may take a few days to complete.”
Why we care. This is the second announced Google algorithm update of 2026. It’s unclear what spam this update targets, but if you see ranking or traffic changes in the next few days, it could be due to it.
What Google says about spam updates:
“While Google’s automated systems to detect search spam are constantly operating, we occasionally make notable improvements to how they work. When we do, we refer to this as a spam update and share when they happen on our list of Google Search ranking updates.
For example, SpamBrain is our AI-based spam-prevention system. From time-to-time, we improve that system to make it better at spotting spam and to help ensure it catches new types of spam.
Sites that see a change after a spam update should review our spam policies to ensure they are complying with those. Sites that violate our policies may rank lower in results or not appear in results at all. Making changes may help a site improve if our automated systems learn over a period of months that the site complies with our spam policies.
In the case of a link spam update (an update that specifically deals with link spam), making changes might not generate an improvement. This is because when our systems remove the effects spammy links may have, any ranking benefit the links may have previously generated for your site is lost. Any potential ranking benefits generated by those links cannot be regained.”
Reddit is rolling out new Dynamic Product Ad features, including a shoppable Collection Ads format and Shopify integration, the company announced today.
What’s new.
Collection Ads: A new Dynamic Product Ad format that pairs a lifestyle hero image with shoppable product tiles in one carousel, bridging discovery and purchase. Early adopters following best practices are seeing an 8% ROAS lift.
Community and Deal overlays: Reddit-native labels like “Redditors’ Top Pick” and automatic discount callouts surface social proof and pricing signals without extra work from you.
Shopify integration: Now in alpha, this simplifies catalog and pixel setup for new DPA advertisers, automatically matching products to the right users and context.
The numbers. Reddit DPA delivered an average 91% higher ROAS year over year in Q4 2025. Liquid I.V. reports DPA already accounts for 33% of its total platform revenue and outperforms its other conversion campaigns by 40%.
Why now. Reddit has seen a 40% year-over-year increase in shopping conversations. Also, 84% of shoppers say they feel more confident in purchases after researching products on Reddit.
Why we care. The new tools, especially the Shopify integration, lower the barrier to getting started with Dynamic Product Ads. Reddit might still be viewed by some as an undervalued paid media channel, but there’s an opportunity to get in before competition and costs rise.
Bottom line. Reddit is increasingly a serious performance channel for ecommerce, and these tools make it easier to get started. If you’re not yet running DPA on Reddit, the combination of undervalued inventory and improving ad formats makes this a good time to test.
AI search citations favor a small set of formats. Listicles, articles, and product pages drive over half of all mentions across major LLMs, according to new Wix Studio AI Search Lab research analyzing 75,000 AI answers and more than 1 million citations across ChatGPT, Google AI Mode, and Perplexity.
The findings. Listicles led at 21.9% of citations, followed by articles (16.7%) and product pages (13.7%). Together, these three formats made up 52% of all AI citations.
Articles dominated informational queries, cited 2.7x more than other formats.
Listicles captured 40% of commercial-intent citations, nearly double any other type.
Why intent wins. Query intent — not industry or model — most strongly predicts which content gets cited. This pattern held across industries, from SaaS to health.
Informational queries skewed heavily toward articles (45.5%) and listicles (21.7%).
Commercial queries were led by listicles (40.9%).
Transactional and navigational queries favored product and category pages (around 40% combined).
Why we care. This research indicates that you want to map content types to user goals rather than just creating more content. Articles educate, listicles drive comparison, and product pages convert. Aligning content format with user intent could help you capture more AI citations and increase visibility.
Not all listicles perform equally. Third-party listicles accounted for 80.9% of citations in professional services, compared to 19.1% for self-promotional lists. That seems to indicate LLMs prefer neutral, editorial comparisons over brand-led rankings.
Model differences. All models favored listicles, but diverged after that.
ChatGPT leaned heavily into articles and informational content.
Google AI Mode showed the most balanced distribution.
Perplexity stood out, with 17% of citations coming from discussions like Reddit and forums.
Industry patterns. Content preferences shifted slightly by vertical:
SaaS and professional services over-indexed on listicles.
Health favored authoritative articles.
Ecommerce spread citations across listicles, articles, and category pages.
Home repair showed the most even distribution across formats.
A quiet but important policy update is coming to Google Shopping ads next month, requiring some merchants to verify their accounts before running ads featuring political content.
What’s changing. From April 16, merchants running Shopping ads with certain political content in nine countries will need to verify their Google Ads account as an election advertiser. Google will also outright prohibit some political Shopping ads in India.
The countries affected. Argentina, Australia, Chile, Israel, Mexico, New Zealand, South Africa, the United Kingdom, and the United States.
Why we care. Shopping ads aren’t typically associated with political advertising — this update signals that Google is broadening its election integrity efforts beyond search and display into commerce formats. Merchants selling politically themed merchandise, campaign materials, or other related products in the affected countries need to act before the April 16 deadline.
What to do now.
Review the updated policy language to determine if your Shopping ads feature content that falls under the new restrictions
If affected, apply for election advertiser verification through Google Ads before April 16 to avoid disruption to your campaigns
The bottom line. This affects a narrow but specific set of merchants — but the consequences of missing the deadline could mean ads being disapproved or accounts being flagged. If you sell anything with a political angle in the listed countries, check your eligibility now.
AI citations in ChatGPT are highly concentrated: roughly 30 domains capture 67% of citations within a topic.
That’s according to Kevin Indig’s latest study, which also found that broad topical coverage, long-form pages, and cluster-based models outperform the old “one keyword, one page” approach.
The details. Citation visibility wasn’t evenly distributed. In product comparison topics, the top 10 domains accounted for 46% of citations; the top 30, 67%.
AI visibility was slightly less concentrated than classic organic search, but still highly centralized.
Indig’s conclusion: you’re effectively shut out unless you build enough authority to win one of a limited number of citation “seats.”
What changed. Ranking No. 1 in Google still matters, but it’s not enough. Of pages ranking No. 1, 43.2% were cited by ChatGPT — 3.5x more often than pages beyond the top 20.
ChatGPT retrieved far more pages than it cited. AirOps found that it retrieved ~6x as many pages as it cited, and 85% of the retrieved pages were never cited.
A third of the cited pages came from fan-out queries, and 95% of those had zero search volume.
Why we care. Publishing the “best answer” for one keyword isn’t enough. ChatGPT rewards domains that cover a topic from multiple angles, not pages optimized for isolated terms. And discovery often happens outside the keyword universe you track.
The patterns. Longer pages generally earned more citations, with variation by vertical. The biggest lift appeared between 5,000 and 10,000 characters. Pages above 20,000 characters averaged 10.18 citations vs. 2.39 for pages under 500.
This pattern broke in Finance, where shorter, denser pages often outperformed long guides. In Education, Crypto, and Product Analytics, longer pages continued to gain citation value with little drop-off.
58% of cited URLs were cited only once. Pages that recurred across prompts were usually category roundups, comparison pages, or broad guides answering multiple related questions.
On-page behavior. ChatGPT cited heavily from the upper part of a page. The section 10% to 20% of the way down the page performed best across all industries.
The bottom 10% earned just 2.4% to 4.4% of citations. Conclusions were largely ignored.
Finance had the steepest ramp, with 43.7% of citations in the first 30%.
Healthcare and HR Tech were flatter.
Education peaked later, around 30% to 40%.
About the data. Indig analyzed ~98,000 citation rows from ~1.2 million ChatGPT responses (Gauge), isolating seven verticals. The study used structural page parsing, positional mapping, and entity and sentiment analysis to identify which pages earned citations and where on the page they came from.
A new creative feature has been spotted inside Google Ads Performance Max campaigns — and it could change how advertisers without video budgets approach animated display advertising.
What was found. Nikki Kuhlman, vice president of search at JumpFly, spotted an option to generate animated video clips directly within PMax asset groups, using AI to enhance and animate a single source image.
How it works.
Upload a source image — a logo, a product shot, a property photo
AI generates several “enhanced” versions of that image
Each enhanced image produces two animated clips
Select up to five animated clips per asset group
Note: faces cannot be used in source images, though AI may generate people in enhanced versions
Early results from testing. A logo generated a spinning animation of the image element. A house with a sold sign produced a slow cinematic pan. Simple inputs, but the output quality appears usable for display advertising without any video production required.
Where the ads appear. Google hasn’t provided in-product documentation on placement, but early testing shows animated clips surfacing in Display ad previews when added to an asset group.
Why we care. Video assets continue to be a strong creative option in paid media — but producing video has always required time, budget, and resources many advertisers don’t have. This feature effectively removes that barrier — turning a single product photo or logo into animated display creative in seconds, at no additional production cost.
For advertisers who’ve been running PMax on static images alone, this could be a meaningful and easy win.
The bottom line. This feature is still unconfirmed by Google, but advertisers running PMax should check their asset groups now. If it’s available in your account, it’s worth testing — especially for campaigns that have been running on static images alone.
First seen. Kuhlman shared spotting this new feature on LinkedIn.
AI tools and visibility have dominated the SEO conversation in the past two years. But while discussions focus on these new technologies, most of the biggest SEO risks in 2026 will come from somewhere else: within your own organization.
Fragmented data, unclear ownership, outdated KPIs, and weak collaboration can quietly destroy even the best strategies. As SEO expands beyond the website and into AI-driven discovery, the role of the SEO team is becoming broader, more influential, and, paradoxically, harder to define.
Here are some of the risks your team should start thinking about now.
Relying too much on AI for everything
Many SEO teams now rely on AI for everything, from generating briefs to analyzing data. That’s often necessary. You can’t spend hours creating a brief when AI can produce something usable in minutes. But that’s also where the risk starts.
AI can generate content quickly, but “acceptable” won’t differentiate you. You still need a clear point of view — what story you’re telling and what unique angle you bring. Without that, your content becomes generic, predictable, and indistinguishable from competitors using the same tools.
The issue is simple: if you ask similar tools similar questions, you’ll get similar answers. And your competitors have access to the same tools.
Some companies try to stand out by training models on proprietary data. In reality, few teams do this at scale. Most prioritize speed over quality.
There’s also risk in using AI for analysis without understanding the data behind it. AI is fast, but it can misinterpret or hallucinate results.
I’ve seen this firsthand. An AI tool hallucinated part of a calculation during an urgent analysis, making every insight that followed incorrect. It only acknowledged the mistake after it was explicitly pointed out.
More broadly, AI excels at identifying patterns. But in SEO, competitive advantage rarely comes from following patterns. The most effective strategies don’t just mirror what everyone else is doing. Sometimes the best opportunity isn’t the obvious one.
AI is reshaping how SEO work gets done, how impact is measured, and whether it can be measured at all.
For years, SEO professionals have worked with incomplete datasets. We’ve never had a full view of the user journey. That’s one reason organic impact has often been underestimated. In the past, though, we could still piece together a reasonably clear picture — from ranking to click to conversion.
Today, that picture is far more fragmented. AI tools have changed how people research and discover products. Users now start in AI assistants – asking questions, comparing options, and building shortlists before ever visiting a website. By the time they land on your page, part of the decision-making process is already done.
The problem is we have zero visibility into that journey. If a user discovers your brand through an AI-generated answer, adds you to a shortlist, then later searches for you directly, the signals that influenced that decision are invisible. We only see the final step.
Microsoft Bing has introduced basic reporting for AI searches, but it’s limited. We still can’t see the prompts behind specific page visibility.
At the same time, SEO teams are still expected to prove impact. Some companies are adding questions to lead forms to understand how users discovered them. In theory, this adds signal. In practice, it depends on accurate self-reporting. I know how I fill out forms, so I question how reliable that data really is. Still, it’s a start.
Setting the wrong KPIs
Fragmented data creates another risk: focusing on the wrong KPIs. Stakeholders still ask about traffic. No matter how often SEO teams explain that its role has changed, traffic remains a default measure of success. For years, organic growth meant more sessions, users, and visits. That mindset hasn’t fully shifted.
At the same time, stakeholders are drawn to newer metrics — AI visibility, citations, and mentions. These aren’t inherently wrong, but they need to be used carefully.
Most tools measure AI visibility using a predefined set of queries. That’s where risk creeps in. Teams can become too focused on improving visibility scores, even if it means optimizing for prompts that look good in reports rather than those that matter to the business.
For example, appearing for “What is XYZ software?” isn’t the same as showing up for “Which XYZ software is best?” The first may drive visibility, but the second is much closer to a purchase decision.
To avoid this, visibility metrics need to be tied to business outcomes — a real challenge given the fragmented data problem.
Tracking AI visibility also opens another rabbit hole: debates over which prompts to track, how many to include, and why. This can quickly overcomplicate measurement, especially if teams lose sight of the goal. The objective isn’t to track every phrasing, but to understand the intent behind it. Trying to capture every variation is impossible.
SEO teams are expected to own AI visibility strategy much like they owned SEO strategy. But strategy is often treated as execution.
Even in the past, SEO was never fully independent. It relied on other teams — engineering to implement changes and content to create pages. The difference is that most of this work used to happen on the company’s own website.
That’s no longer true. Visibility in AI answers requires presence beyond your domain — Reddit threads, YouTube videos, and media mentions all play a role.
This significantly expands the scope of work. At the same time, many of these surfaces don’t have clear owners inside organizations. Even when they do, there’s a tendency to assume that if SEO owns the strategy, it should also own execution or at least be accountable for outcomes.
The opposite happens, too. If other teams own execution, they may take ownership of the entire strategy. In reality, neither model works well.
SEO teams can’t manage every platform that influences AI visibility. They don’t have the expertise to produce YouTube content or run PR campaigns. Their strength is knowing what works and helping optimize it. For example, advising on how a video should be structured to perform on YouTube.
Owning strategy also doesn’t mean deciding who owns execution. That’s a leadership responsibility. It requires visibility across teams and the authority to assign ownership. Otherwise, one team is left deciding how its peers should operate.
Even when companies recognize the importance of AI visibility, cross-team collaboration remains a challenge.
Roles and processes are often unclear. SEO teams may expect others to execute, while those teams assume it’s SEO’s responsibility. In other cases, teams don’t prioritize AI visibility because their KPIs focus elsewhere.
This is where leadership alignment becomes critical. If AI visibility is truly a strategic priority, it needs to be reflected in goals and KPIs across all relevant teams. When AI-related KPIs sit only with SEO, it creates an imbalance: one team is accountable for outcomes, while execution depends on many others.
Many teams are also unsure how to work with SEO. Some don’t involve SEO early enough. Others choose not to follow recommendations because they don’t agree with them.
SEO teams share responsibility here, too. They need to actively onboard other teams and clearly connect SEO efforts to broader business goals. It’s our job to show that lack of visibility means lost revenue.
I’ve seen cases where teams critical to AI visibility hadn’t even read the strategy document. In these situations, the issue isn’t one-sided. Teams need to understand what’s expected of them, and SEO needs to push for alignment and involve stakeholders early. Simply moving forward without that alignment doesn’t work.
SEO teams also don’t always explain the “why.” AI visibility can end up treated as a standalone SEO metric rather than a business driver. Even when there’s agreement on its importance, a lack of clear processes, shared goals, and training keeps collaboration inconsistent.
With rapid changes in search, SEO teams often spend more time on theory — reading, analyzing, building frameworks, and refining strategies — than on making changes to the website.
That doesn’t mean teams should stop learning. Quite the opposite. But strategy without execution quickly loses value. In many organizations, SEO teams are expected to produce in-depth strategy documents meant to align teams and define priorities. In reality, many go unread outside the SEO team. They require significant effort but deliver little impact.
Part of the problem is that strategies are often too theoretical. They explain the why but miss the what. The value of a strategy isn’t the document, but the actions that follow. Other teams need to understand what to do and how to contribute.
AI is also accelerating how quickly search evolves. Waiting months to test ideas no longer works. A more practical approach is to understand the direction, implement changes, observe results, and iterate. Smaller experiments often lead to faster learning.
When SEO succeeds, SEO disappears
SEO has always been a consulting function. Success depends on collaboration with teams like engineering, content, and product. Today, that dynamic is more visible than ever. In many cases, SEO teams don’t execute directly. Their role is to enable others.
In mature organizations, this works well. Collaboration is strong, and credit is shared. SEO’s consulting role is recognized without forcing the team to own areas outside its expertise. In less mature environments, it can lead to SEO being undervalued or seen as unnecessary.
AI adds another layer. It can generate keyword ideas, outlines, and optimization suggestions, making SEO look deceptively simple, much like writing content. AI lowers the barrier to entry, but it doesn’t replace expertise. Without that expertise, teams produce work that’s technically correct but average.
It’s a familiar pattern: copy-pasting a Screaming Frog SEO Spider error list into a task doesn’t demonstrate real understanding. This creates a paradox. The more SEO becomes a company-wide capability, the more the SEO team risks becoming invisible.
SEO teams won’t fail in 2026 because of a lack of knowledge. They’ll fail if they can’t turn that knowledge into action, influence, and business impact.
The challenge is no longer just optimizing pages. It’s building processes, partnerships, and measurement models that reflect how visibility works today.
Success also depends on leadership support. Many of the biggest risks are structural — fragmented data, unclear ownership, weak collaboration, outdated KPIs, and the gap between strategy and execution.
AI visibility expands beyond the website and into the broader organization. That doesn’t make SEO less important, but it does make it harder to define, measure, and defend.
The companies that succeed will stop treating SEO as a traffic function and start treating it as a business capability that drives visibility, discovery, and growth.
Apple is preparing to introduce sponsored listings in Apple Maps, marking a significant expansion of its advertising business beyond the App Store.
How it will work. According to Bloomberg’s Mark Gurman, the system will function similarly to Google Maps — allowing retailers and brands to bid for ad slots against search queries. Sponsored businesses will appear in Maps search results, much like sponsored apps already appear in App Store searches.
The timeline. An announcement could come as early as this month, with ads beginning to appear inside Maps as early as this summer across iPhone, other Apple devices, and the web version.
Why Apple is doing this. Advertising is a growing and high-margin revenue stream for Apple’s services business. Maps — with its massive built-in user base across Apple devices — is a natural next step, particularly as location-based advertising continues to grow.
Why we care. Users searching within Apple Maps are expressing clear, high-intent signals — they’re actively looking for somewhere to go or something to buy — and Maps comes with a massive built-in user base across iPhone and other Apple devices. This opens up a location-based advertising channel that previously didn’t exist on Apple’s platform, giving local businesses and retailers a way to reach those users at exactly the right moment.
Advertisers already running Google Maps or local search campaigns should pay close attention, as this could quickly become a significant complementary channel.
The privacy angle. True to form for Apple, a user’s location and the ads they see and interact with in Maps are not associated with their Apple Account. Personal data stays on the user’s device, is not collected or stored by Apple, and is not shared with third parties.
How to access it. Businesses will be able to access a fully automated experience for creating ads through Apple Business in a few simple steps. Current Apple Ads advertisers and agencies will also have the option to book ads through their existing Apple Ads experience, which will offer additional customization options.
What you need to do now. When Apple Business becomes available in April, businesses will first need to claim their location on Apple Maps before ads become available this summer — so the time to get set up is now, not when the auction opens.
The bottom line. Apple Maps ads should open up a high-intent, location-based channel that hasn’t existed before on Apple’s platform. Advertisers running local or retail campaigns should claim their Maps listing now and start planning budgets for a summer launch. Early entrants in a new ad auction typically benefit from lower competition before the market matures.
Microsoft added query-to-page mapping to its AI Performance report in Bing Webmaster Tools, letting you connect AI grounding queries directly to cited URLs.
Why we care. The original dashboard showed queries and pages separately, limiting optimization. Now you can tie specific AI-triggering queries to the exact cited pages, so you can prioritize updates based on real AI-driven demand — not guesses.
The details. The new Grounding Query–Page Mapping feature links two existing views in the AI Performance dashboard:
Click a grounding query to see which pages are cited
Click a page to see which grounding queries drive its citations
Mapping is many-to-many: one query can map to multiple pages, and vice versa
The entity home is the single page that anchors how algorithms, bots, and people understand your brand. It’s usually your About page, and it does far more than most teams realize.
It’s where algorithms resolve your identity, where bots map your footprint, and where humans verify trust before they convert. In one test, improving that page alone lifted conversions by 6% for visitors who reached it. The reason is simple: the human and the algorithm are doing the same job — checking claims, validating evidence, and deciding whether to trust you.
For years, this was overlooked. Most SEOs focused on rankings and traffic while underinvesting in the page that defines what their brand actually is. That’s no longer sustainable. The entity home is the foundation of how your brand is interpreted across search, AI, and what comes next.
What the entity home isn’t
Before going further, here are four misreadings worth pre-empting.
Not a ranking trick
Getting the entity home right doesn’t produce a traffic spike next Tuesday. It builds the confidence prior that compounds through every gate of the pipeline over time.
Not just schema
Schema markup helps the algorithm read what is already there. It isn’t a substitute for the claims, the evidence links, and the consistent positioning that schema describes. Schema without substance is a well-formatted, empty declaration.
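To make that concrete, here’s a minimal sketch (organization name, URLs, and identifiers all hypothetical) of schema with substance behind it: every property restates a claim the visible page copy actually makes.

```html
<!-- Hypothetical entity home identity block. Every property below should
     mirror a claim the visible page copy makes and links evidence for. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "@id": "https://www.example.com/about#organization",
  "name": "Example Consulting",
  "description": "Independent technical SEO consultancy, founded in 2012.",
  "foundingDate": "2012",
  "url": "https://www.example.com/",
  "sameAs": [
    "https://www.linkedin.com/company/example-consulting"
  ]
}
</script>
```

The identical block pasted onto a page whose copy makes none of those claims is exactly the well-formatted, empty declaration described above.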
Not always the About page
For most companies, it is the About page; for most individuals, it is a page on someone else’s website. The right URL carries the clearest identity statement, the strongest internal link prominence from the rest of the site, and the most stable long-term address (something people often don’t think about).
Not enough without corroboration
The entity home is where you declare your claims. Independent third-party sources confirm and corroborate your claims. The algorithm will only cross the confidence threshold when what you say matches what the weight of evidence supports.
Three audiences, one anchor — and most brands are ignoring two of them
The entity home serves three audiences simultaneously, through three completely different mechanisms. Most brands haven’t yet given them enough thought.
Bots use the entity home when mapping the digital footprint. They use it to establish what entity they are dealing with and how to interpret every corroborative source they find.
Algorithms anchor their identity resolution against it, checking confidence at every relevant gate against whatever baseline this page set.
Humans reach for it when they want to see a resource that feels authoritative precisely because it is structured to inform rather than to sell.
So, the entity home webpage is vital to all three audiences — bots, algorithms, and humans: it sets the tone for the bots in DSCRI, for the algorithms in ARGDW, and for the person who converts.
The entity home is just one page, and that isn’t enough
The entity home anchors everything: the canonical URL where the algorithm initializes its model of the brand, where bots orient themselves, and where humans arrive to verify their instinct. One page, doing one critical job. But one page declares. It doesn’t educate.
The entity home website educates. It structures every facet of the brand across pages that give the algorithm a complete picture of:
Who this entity is.
What it does.
Who it works alongside.
What it has produced.
Where independent sources confirm what the brand claims about itself.
The difference between the two is the difference between introducing yourself and making your case.
Search built the web around a single assumption — the human acts. The engine organized, the website presented, and the human chose. That model shaped 30 years of architecture decisions because the website’s job was to win the human’s attention and trust once the engine had delivered them to you.
But assistive engines broke that assumption. They took on the evaluation work the human used to do: reading, comparing, synthesizing, and recommending. The human still makes the final call, but the website needs to have made its case to the algorithm before the human ever arrives.
The audience that matters first has shifted, and a website that speaks only to humans is already losing the conversation that determines whether those humans show up at all.
Agents go one step further. The agent researches, decides, and acts. The human receives the outcome. The website that wins in an agentic environment isn’t the one with the most compelling hero section — it’s the one the agent can read, trust, and act on without inferring anything.
All three modes co-exist, and all three always will.
Search serves the window shopper.
Assistive engines serve the human who wants a recommendation without doing the research.
Agents serve the task that can be delegated entirely.
What shifts over the next three years isn’t which mode exists — it’s which mode does the most work, and what your website needs to do to win each one.
This is where I’ll plant a flag, and you can disagree. All three jobs need attention right now — the percentages below describe where the main focus of your effort sits, not permission to ignore the others.
The work on assistive and agential is already overdue. The speed of change will probably make these figures look dated in a few months.
2026: Search 60%, Assistive 35%, Agential 5%
Search still drives most conversions. But the 35% on assistive isn’t optional, it’s late. The brands that started two years ago are already compounding.
2027: Search 35%, Assistive 50%, Agential 15%
Assistive engines will be handling enough upstream evaluation that discovery and correct interpretation become the primary battle. Search remains significant. Agential execution is arriving.
2028: Search 20%, Assistive 45%, Agential 35%
Agents execute. The algorithm’s confidence in your brand determines whether you’re in the consideration set before any human is involved. Search and assistive don’t disappear — they become the infrastructure the agential layer runs on.
The entity home website anchors all three eras. What changes is who it speaks to first, and what that conversation needs to contain.
Each cluster of satellite pages declares something: these pages, grouped this way, belong to this entity and describe one specific dimension of what it is.
/social names the platforms the brand controls.
/peers places the entity in its professional network.
/companies closes the relationship loop between person and organization.
The grouping carries meaning — an algorithm that reads the structure learns something the individual pages couldn’t tell it separately.
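As an illustration, those edges can be declared explicitly in markup rather than left for the algorithm to infer. This is a hypothetical sketch (all names, paths, and URLs invented) of how a /peers-style page might encode them:

```html
<!-- Hypothetical pillar page markup: each property is an explicit graph
     edge from the entity to one dimension of its identity. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Person",
  "@id": "https://www.example.com/about#person",
  "name": "Jane Doe",
  "sameAs": [
    "https://www.linkedin.com/in/janedoe"
  ],
  "knows": [
    {
      "@type": "Person",
      "name": "John Smith",
      "sameAs": "https://www.johnsmith.example/"
    }
  ],
  "worksFor": {
    "@type": "Organization",
    "@id": "https://www.example.com/about#organization",
    "name": "Example Consulting"
  }
}
</script>
```

Here, sameAs covers the /social dimension, knows covers /peers, and worksFor closes the /companies loop — each as a typed edge rather than a connection the algorithm has to guess.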
The entity home website has three jobs
Search, assistive, and agential engines co-exist, which means the entity home website runs three distinct jobs simultaneously.
The search job is the one 30 years of practice has refined, and it doesn’t change: get the bots through the DSCRI infrastructure gates cleanly, so the ranking engine delivers the right humans to you, and your content draws them through the funnel with clarity, credibility, and a path to conversion.
The assistive job is the one most brands are ignoring, and where the competitive gap is opening fastest: educate the algorithms. Your entity home website structures your brand’s story so algorithms understand it without guessing, and your content wins the competitive phase (ARGDW) with the highest possible confidence intact. Every explicit link from your entity home website to a satellite property declares a graph edge, carrying higher confidence through the pipeline than any connection the algorithm has to infer for itself.
The agential job is the hardest to prepare for, and it’s already arriving: brief the agents. Agentic engines don’t read your website the way a human reads a marketing page — they read it the way an instructed system reads a briefing document, scanning for structured, unambiguous, machine-interpretable facts. Don’t make the machine use imagination it doesn’t have.
Entity pillar pages solve the identity problem keyword cornerstones were never built for
SEO has always known what to do with a topic: build an authoritative page around it, link it well, and earn rankings. That architecture works because the ranking engine evaluates content.
What it can’t do is tell the algorithm who the entity behind that content is, what relationships it has built, what it has demonstrated over time, or why it should be trusted to recommend rather than merely rank.
An entity has facets, and facets aren’t the same thing as topics. A person isn’t “SEO consultant” plus “technical SEO” plus “keynote speaker”: those are keyword clusters, useful for ranking, useless for identity.
What the algorithm actually resolves identity against is the network of dimensions that define what this entity is — the companies it belongs to, the peers it works alongside, the publications it has appeared in, the expertise it has demonstrated over years, the events it speaks at, and the work it has produced.
An entity pillar page is the authoritative page on your own property for one of those dimensions.
The /expertise page establishes demonstrated knowledge in a specific domain, not as a content topic, but as an identity declaration.
The /peers page places the entity in a professional network the algorithm already trusts.
The /companies page closes the loop between person and organization.
The /press page links to independent coverage that corroborates the entity’s claims, giving the algorithm something to cross-reference rather than take on faith.
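Here’s one hedged sketch (coverage, names, and URLs all invented) of how a /press page can hand the algorithm that cross-reference, using subjectOf to point at independent articles about the entity:

```html
<!-- Hypothetical /press page markup: the entity points at independent
     coverage the algorithm can check against the entity's own claims. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Person",
  "@id": "https://www.example.com/about#person",
  "name": "Jane Doe",
  "subjectOf": [
    {
      "@type": "Article",
      "headline": "Interview: Jane Doe on entity-based search",
      "url": "https://trade-publication.example/jane-doe-interview",
      "publisher": {
        "@type": "Organization",
        "name": "Trade Publication"
      }
    }
  ]
}
</script>
```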
These pages aren’t traffic pages in the traditional sense, and that framing matters: SEOs who measure them against keyword rankings will consistently underinvest in them because the return doesn’t show up in rank tracking. The return shows up in what AI assistive engines say about your brand when your prospects ask.
Keyword cornerstone pages and entity pillar pages serve different audiences, and your website needs both
The keyword cornerstone page and the entity pillar page aren’t competing strategies: they’re parallel architectures serving different audiences, which means your website needs both, and the question is how to build them so they compound each other’s value rather than compete for the same resource.
The overlap between them is real and worth engineering deliberately. The expertise page that ranks for “technical SEO audit” can also function as the entity pillar page that declares this entity’s demonstrated knowledge in that domain, if it’s built with that second function in mind:
Explicit entity statements.
Schema that names the relationships rather than just the topic.
Links to corroborating third-party sources stable enough to persist across years.
A URL structure that commits to the identity dimension rather than the keyword cluster.
When those two functions align, one page does both jobs, which is a good thing.
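One way to picture that alignment, as a sketch with hypothetical URLs and names: the page is typed for the keyword job through about, while mainEntity ties it back to the identity the pillar function declares.

```html
<!-- Hypothetical /expertise page doing both jobs: 'about' serves the
     keyword cornerstone; 'mainEntity' and 'knowsAbout' serve identity. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "WebPage",
  "@id": "https://www.example.com/expertise/technical-seo#webpage",
  "about": {
    "@type": "Thing",
    "name": "Technical SEO audit"
  },
  "mainEntity": {
    "@type": "Person",
    "@id": "https://www.example.com/about#person",
    "name": "Jane Doe",
    "knowsAbout": "Technical SEO"
  }
}
</script>
```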
When they diverge, because the page that captures search traffic can’t easily carry the identity declaration without sacrificing one function for the other, you face an architectural choice. Making that choice consciously, rather than defaulting to the keyword model, is the skill this transition requires.
The percentages already told you the weighting: both layers are required starting today
Earlier in this article, the 2026/2027/2028 split put search at 60%, then 35%, then 20% of focus. What those numbers don’t say, but what the logic demands, is that the remainder — the assistive and agential share — needs your website to feed it right now. Don’t wait until the balance shifts.
Keyword cornerstone pages feed the search share. Entity pillar pages feed the assistive and agential share.
If you build the entity pillar pages in 2027, when assistive engines truly dominate, you’ll be building into a window that has already closed for the brands that started in 2025, because the algorithm’s model of your entity solidifies around whatever you gave it during the period it was actively learning.
The percentages describe where the demonstrable value sits at each stage. Your investment needs to precede the moment your boss sees the results, not follow it.
Both architectures are required today; the balance shifts, but the requirement for both never goes away.
Building for machines and humans simultaneously is cheaper than building for each separately
The risk brands hear when they encounter the machine-optimization argument is a false trade-off: build for machines at the expense of humans, strip the warmth from the copy, replace narrative with structured data fields, and turn the About page into a schema exercise. In practice, you can avoid that trade-off because the two sets of best practices are more complementary than they appear.
Clear entity statements that help the algorithm resolve your identity also help the human visitor understand immediately who they’re dealing with. Explicit links to corroborating third-party sources that build algorithmic confidence also give the human prospect the independent validation they’re quietly looking for. Schema markup that declares relationships for machine consumption gives structured clarity that human scanners doing final due diligence actually appreciate.
For me, this is the reframe that makes the whole project manageable: my approach to the entity home website is your current marketing, restructured to serve three audiences simultaneously, not a technical infrastructure project running alongside it. One investment with three returns, and (when done right) the requirements pull in the same direction more often than they pull apart.
The funnel is moving inside the assistant.
When an assistive engine names your brand, summarizes it, and links to it in response to a user query, a conversion event has happened that you don’t see in your Analytics dashboard. The human who arrives at your website has already been half-sold by the algorithm before they clicked. Traffic will decline as more of that evaluation work moves upstream, and the brands that measure only what arrives at the site will systematically underestimate both the value they’re generating and the gaps in their strategy.
Start measuring where your brand appears in assistive engine responses, how consistently it appears, and what the algorithm says about you when it does.
Getting the entity home right requires definition, proof, and a sustained corroboration campaign
Start with the entity home page itself: choose the single URL that functions as the canonical anchor for your brand’s identity and commit to it. Don’t discover it by asking an AI engine what it thinks your entity home is, because the engine will tell you what it has already learned, and that might be your website homepage, Wikipedia, a press profile, or a LinkedIn page you half-filled in five years ago. You choose it, then you verify the algorithm has learned the lesson you are giving it. You are the adult in the room.
Five criteria determine that choice, in order of weight:
The most explicit identity statement on the property.
The strongest internal link prominence from the rest of the site.
The best-structured schema markup with a stable @id.
The clearest outbound links to corroborating third-party sources.
The most stable long-term URL.
If your About page doesn’t hit all five, it isn’t doing the job the algorithm requires.
Invest in your About page. Strengthen it with a clear entity statement, schema with a proper @id, verified links to Wikipedia and Wikidata where they exist, every accurate sameAs declaration you can support, and the claims that define your brand’s positioning.
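As a minimal, hypothetical version of that markup (every name, URL, and identifier invented, and the Wikipedia and Wikidata links only belong there if the entries genuinely exist):

```html
<!-- Hypothetical About page block: a stable @id the rest of the site can
     reference, plus sameAs links the algorithm can verify independently. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "@id": "https://www.example.com/about#organization",
  "name": "Example Consulting",
  "url": "https://www.example.com/",
  "description": "The positioning claim the page copy itself makes, restated.",
  "sameAs": [
    "https://en.wikipedia.org/wiki/Example_Consulting",
    "https://www.wikidata.org/wiki/Q00000000",
    "https://www.linkedin.com/company/example-consulting"
  ]
}
</script>
```

The @id is what makes the anchor stable: every other page and every pillar can point back to the same identifier, which is why the URL needs to be one you’ll never have to change.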
That single page is the anchor.
The entity home website is the education hub built around it: every entity pillar page you build — /expertise, /peers, /companies, /press — extends the identity declaration outward, giving the algorithm more dimensions to resolve against and more facets to cross-reference with independent sources. Each of those pages does for one identity dimension what the About page does for the whole: declares something specific, verifiable, and machine-readable about who this entity is.
The practical work on the entity home website side is the same audit applied at scale: for each entity pillar page, ask whether it declares a clear facet, links to corroborating evidence, and carries schema that names the relationship rather than just the topic. The pages that answer yes to all three are doing both jobs simultaneously — identity infrastructure and keyword architecture. The ones that don’t need a decision: extend them, or build the pillar function its own dedicated page.
If you’re unsure how much influence you actually have over what AI communicates about you, the answer is more than most people assume — and the channels that give you the most leverage are exactly the ones entity pillar pages are built to activate.
Then force the corroboration loop across the whole footprint: drive independent third-party sources to reference, link to, and echo the claims the entity home makes and the facets the pillar pages declare across enough independent contexts that the algorithm’s confidence crosses from hedged claim to corroborated fact.
That crossing doesn’t happen on a deadline and can’t be engineered in a sprint. The corroboration loop is the curriculum, slow by design, compounding with every cycle, never truly finished. It is the work, and it rewards the brands that start it today over the ones that plan to start it when the percentages shift.
This is the sixth piece in my AI authority series.