On episode 341 of PPC Live The Podcast, I speak to Andrea Cruz, Head of B2B at Tinuiti, to unpack a mistake many senior marketers quietly struggle with: freezing when clients demand answers you don’t immediately have.
The conversation explored how communication missteps can escalate client tension — and how the right mindset, preparation, and culture can turn those moments into career-defining growth.
From hands-on marketer to team leader
As Cruz advanced in her career, she shifted from managing campaigns directly to leading teams running large, complex accounts. That transition introduced a new challenge: representing work she didn’t personally execute day to day.
When clients pushed back — questioning performance or expectations — Cruz sometimes froze. Saying “I don’t know” or delaying a response could quickly erode trust and escalate frustration.
Her key realization: senior leaders are expected to provide perspective in the moment. Even without every detail, they must guide the conversation confidently.
How to buy time without losing trust
Through mentorship and experience, Cruz developed a practical technique: asking clarifying questions to gain thinking time while deepening understanding.
Examples include:
Asking clients to clarify expectations or timelines
Requesting additional context around their concerns
Confirming what the client already knows about the situation
These questions serve two purposes: they slow down emotionally charged moments and ensure responses address the real issue, not just the surface complaint.
For Cruz, this approach was especially important as a non-native English speaker, giving her space to process complex conversations and respond clearly.
A solutions-first culture beats blame
Cruz emphasized that mistakes are inevitable — but how teams respond defines long-term success.
At Tinuiti, the focus is not on assigning blame but on answering two questions:
Where are we now?
How do we get to where we want to be?
This solutions-oriented mindset creates psychological safety. Teams can openly acknowledge errors, run post-mortems, and identify patterns without fear. Cruz argues that leaders must model this behavior by sharing their own mistakes, not just scrutinizing others’.
That transparency builds trust internally and with clients.
Proactive communication builds stronger client relationships
Rather than waiting for clients to surface problems, Cruz encourages teams to raise issues first. Acknowledging underperformance — even when clients haven’t noticed — demonstrates accountability and strengthens partnerships.
She also recommends tailoring communication styles to each client. Some prefer concise updates; others want detailed explanations. Documenting these preferences helps teams deliver information in ways that resonate.
Regular check-ins about business roadblocks — not just campaign metrics — position agencies as strategic partners, not just media operators.
Common agency mistakes in B2B advertising
Cruz didn’t hold back on recurring issues she sees in audits:
Budgets spread too thin: Running too many channels with insufficient spend leads to meaningless data and weak performance.
Underfunded campaigns: B2B CPCs are inherently high. Campaigns generating only a few clicks per day rarely produce actionable results.
Her advice is blunt: if the budget can’t support a channel properly, it’s better not to run it.
AI is more than a summarization tool
On AI, Cruz cautioned against shallow usage. Treating AI as a simple spreadsheet summarizer misses its broader potential.
Her team is experimenting with advanced applications — automated audits, workflow integrations, and operational efficiencies. She compares AI’s role to medical diagnostics: a powerful assistant that augments expert judgment, not a replacement for it.
For marketers, that means staying curious and continuously exploring new use cases.
The takeaway: preparation and passion drive resilience
Cruz’s central message is simple: mistakes will happen. What matters is preparation, adaptability, and maintaining a solutions-first mindset.
By anticipating client needs, personalizing communication, and embracing experimentation, marketers can transform stressful moments into opportunities to build credibility.
Looking to take the next step in your search marketing career?
Below, you will find the latest SEO, PPC, and digital marketing jobs at brands and agencies. We also include positions from previous weeks that are still open.
Digital Marketing Manager The Digital Marketing Manager will be expected to lead a team that effectively crafts and implements digital marketing initiatives including search marketing, social media, email marketing and lead management for clients in a variety of industries. Candidates should expect to be engaged in managing multiple team members, clients and simultaneous projects, assisting […]
About Us HighLevel is a cloud-based, all-in-one white-label marketing and sales platform that empowers marketing agencies, entrepreneurs, and businesses to elevate their digital presence and drive growth. We are proud to support a global and growing community of over 2 million businesses, from marketing agencies to entrepreneurs to small businesses and beyond. Our platform empowers […]
Omniscient Digital is an organic growth agency that partners with ambitious B2B SaaS companies like SAP, Adobe, Loom, and Hotjar to turn SEO and content into growth engines. About this role We’re hiring an SEO Outreach Specialist to partner with high-authority brands and build high-quality backlinks to support our clients’ growth and authority. You will […]
SUMMARY The Digital Marketing Manager is a growth-oriented role that will evolve into a strategic marketing leadership position. You will work closely with the CCO and leadership team to shape our go-to-market strategy while executing high-impact marketing programs today. JOB RESPONSIBILITIES ESSENTIAL FUNCTIONS: Strategic Marketing & Positioning Collaborate with CCO and commercial leadership to evolve […]
Job Description We are a highly motivated bunch who seek to create a space where you know you are going to have a good time. We are known for our art-inspired spaces that are great for social gatherings. Our restaurants are wall to wall with lights, murals, and vignettes. We are the marinara-muddled minds behind […]
Benefits: Bonus based on performance, Competitive salary, Training & development. Fischetti Law Group, a fast-growing Personal Injury and Estate Planning law firm, is seeking a creative, results-driven Digital Marketing Manager to lead our digital presence and community outreach efforts. This is a full-time, in-office position working directly with our Management team to expand our brand […]
Who We Are Oncourse Home Solutions (OHS) is a people-centric, $500M organization owned by private equity firm Apax Partners, operating under the brands American Water Resources, Pivotal Home Solutions and American Home Solutions. We do what is right for our people so they can do their best when serving our 1.9+ million customers […]
AppFolio is more than a company. We’re a community of dreamers, big thinkers, problem solvers, active listeners, and multipliers. At every opportunity, we set the pace while delivering innovation built to carry real estate into the future. One in which every experience feels effortless, yet meaningful. Where customers are empowered to take on any opportunity. […]
The Company: VeSync is a portfolio company with brands that cover different categories of health & wellness products. We wouldn’t be surprised if you have one of our Levoit air purifiers in your living room or a COSORI air fryer whipping up healthy and delicious meals for you every night. We’re a young and energetic […]
We’re looking for a Senior SEO Strategist to lead enterprise-level organic growth strategies across traditional search and modern discovery channels, including AI-powered SERPs, Google AI Overviews, and large language models (LLMs). In this role, you’ll own both strategy and execution for a portfolio of enterprise and high-growth clients. You’ll act as a trusted, client-facing advisor—translating complex technical […]
Benefits 401(k) 401(k) matching Company parties Competitive salary introduce open tags Dental insurance Imageno duplicate details Free food & snacks Health insurance Opportunity for advancement Paid time off Parental leave Vision insurance Wellness resources Buzz Franchise Brands i s a fast-growing, multi-brand franchise company headquartered in Virginia Beach, VA. We’re seeking a Digital Paid Media […]
WHY DEPT®? We are pioneers at heart. What this means is that we are always leaning forward, thinking of what we can create tomorrow that does not exist today. We were born digital and we are a new model of agency, with a deep skillset in tech and marketing. That’s why we hire curious, self-driven, […]
Overview Nutricost is a leading supplement brand known for high-quality nutrition at affordable prices. We proudly partner with Shaquille O’Neal, the Utah Jazz, BYU Athletics, and US Speedskating to promote health and performance nationwide. Job Overview We are seeking a highly experienced and results-driven Paid Search Manager to lead Nutricost’s paid search strategy and execution […]
Paid Search Manager Location: Round Rock, TX – Onsite (5 days a week) Responsibilities Implement and optimize paid search campaigns to meet KPIs and business objectives. Supervise a team of paid search specialists to ensure operational rigor, budget pacing, and campaign quality. Collaborate with creative teams, vendors, and internal stakeholders to develop and execute effective […]
Overview Sam’s Club is hiring a Senior Manager, Marketing Planning & Strategy (Paid Search + Measurement Enablement) to strengthen one of our largest growth engines: paid search and paid channels performance. This role sits in Marketing, but operates at the intersection of paid media, analytics, and technical enablement—ensuring our campaigns are executed flawlessly, measured accurately, […]
You’ll own high-visibility SEO and AI initiatives, architect strategies that drive explosive organic and social visibility, and push the boundaries of what’s possible with search-powered performance.
Every day, you’ll experiment, analyze, and optimize, elevating rankings, boosting conversions across the customer journey, and delivering insights that influence decisions at the highest level.
Cloudflare yesterday announced its new Markdown for Agents feature, which serves machine-friendly versions of web content alongside traditional human-facing pages.
Cloudflare described the update as a response to the rise of AI crawlers and agentic browsing.
When a client requests text/markdown, Cloudflare fetches the HTML from the origin server, converts it at the edge, and returns a Markdown version.
The response also includes a token estimate header intended to help developers manage context windows.
Early reactions focused on the efficiency gains, as well as the broader implications of serving alternate representations of web content.
What’s happening. Cloudflare, which powers roughly 20% of the web, said Markdown for Agents uses standard HTTP content negotiation. If a client sends an Accept: text/markdown header, Cloudflare converts the HTML response on the fly and returns Markdown. The response includes Vary: accept, so caches store separate variants.
Cloudflare positioned the opt-in feature as part of a shift in how content is discovered and consumed, with AI crawlers and agents benefiting from structured, lower-overhead text.
Markdown can cut token usage by up to 80% compared to HTML, Cloudflare said.
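For developers who want to see the negotiation itself, here is a minimal sketch using Python’s requests library; the URL is a placeholder for a page on a Cloudflare zone with the feature enabled:

```python
import requests

URL = "https://example.com/article"  # placeholder: a page proxied by Cloudflare with the feature on

# A standard request returns the origin's HTML as usual.
html = requests.get(URL, headers={"Accept": "text/html"})

# Sending Accept: text/markdown asks Cloudflare to fetch the HTML from
# the origin, convert it at the edge, and return Markdown instead.
md = requests.get(URL, headers={"Accept": "text/markdown"})

print(md.headers.get("Content-Type"))  # expect a markdown content type
print(md.headers.get("Vary"))          # "accept", so caches store each variant separately
```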
Security concern. SEO consultant David McSweeney said Cloudflare’s Markdown for Agents feature could make AI cloaking trivial because the Accept: text/markdown header is forwarded to origin servers, effectively signaling that the request is from an AI agent.
A standard request returns normal content, while a Markdown request can trigger a different HTML response that Cloudflare then converts and delivers to the AI, McSweeney showed on LinkedIn.
The concern: sites could inject hidden instructions, altered product data, or other machine-only content, creating a “shadow web” for bots unless the header is stripped before reaching the origin.
Google and Bing’s markdown smackdown. Recent comments from Google and Microsoft representatives discourage publishers from creating separate markdown pages for large language models. Google’s John Mueller said:
“In my POV, LLMs have trained on – read & parsed – normal web pages since the beginning, it seems a given that they have no problems dealing with HTML. Why would they want to see a page that no user sees? And, if they check for equivalence, why not use HTML?”
And Microsoft’s Fabrice Canel said:
“Really want to double crawl load? We’ll crawl anyway to check similarity. Non-user versions (crawlable AJAX and like) are often neglected, broken. Humans eyes help fixing people and bot-viewed content. We like Schema in pages. AI makes us great at understanding web pages. Less is more in SEO !”
Cloudflare’s feature doesn’t create a second URL. However, it generates different representations based on request headers.
The case against markdown. Technical SEO consultant Jono Alderson said that once a machine-specific representation exists, platforms must decide whether to trust it, verify it against the human-facing version, or ignore it:
“When you flatten a page into markdown, you don’t just remove clutter. You remove judgment, and you remove context.”
“The moment you publish a machine-only representation of a page, you’ve created a second candidate version of reality. It doesn’t matter if you promise it’s generated from the same source or swear that it’s ‘the same content’. From the outside, a system now sees two representations and has to decide which one actually reflects the page.”
Why we care. Cloudflare’s move could make AI ingestion cheaper and cleaner. But could it be considered cloaking if you’re serving different content to humans and crawlers? To be continued…
Google Ads is rolling out a feature that lets advertisers calculate conversion value for new customers based on a target return on ad spend (ROAS), automatically generating a suggested value instead of relying on manual estimates.
The update is designed for campaigns using new customer acquisition goals, where advertisers want to bid more aggressively to attract first-time buyers.
How it works. Advertisers enter their desired ROAS target for new customers, and Google Ads proposes a conversion value aligned with that goal. The system removes some of the guesswork involved in estimating how much a new customer should be worth in bidding models.
The feature doesn’t yet adjust dynamically at the auction, campaign, or product level. Advertisers still apply the value at a broader setting rather than letting the system vary bids based on context.
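As a rough illustration of the arithmetic involved (this is the generic target-ROAS relationship, not Google’s published formula):

```python
# Hedged illustration, not Google's internal calculation: under target-ROAS
# bidding, the system aims for conversion value / ad spend = target ROAS,
# so the value assigned to a new customer implies how much the system
# can spend to acquire one. All numbers are hypothetical.
target_roas = 4.0            # a 400% return-on-ad-spend target
new_customer_value = 200.0   # value assigned to a first-time purchase

implied_spend_per_customer = new_customer_value / target_roas
print(f"Implied spend per new customer: ${implied_spend_per_customer:.2f}")  # $50.00
```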
Why we care. Assigning the right value to a new customer is a weak spot in performance bidding. Many advertisers manually set a flat value that doesn’t always reflect profitability or long-term goals.
By tying suggested conversion values to a target ROAS, advertisers can now optimize toward more strategy-driven bidding, potentially improving how acquisition campaigns balance growth and efficiency.
What advertisers are saying. Early reactions suggest the feature is a meaningful improvement over static manual inputs. Andrew Lolk, founder of Savvy Revenue, argues the next step would be auction-level intelligence that adjusts values depending on campaign or product performance.
What to watch. If Google expands the feature to support more granular adjustments, it could further reshape how advertisers structure acquisition strategies and value lifetime customer growth.
For now, the tool offers a more structured way to calculate new customer value.
First seen. This update was first spotted by founder and digital marketer Andrew Lolk, who showed the new setting on LinkedIn.
SEO is moving out of the marketing silo into organizational design. Visibility now depends on how information is structured, validated, and aligned across the business.
When information is fragmented or contradictory, visibility becomes unstable. The risk isn’t just ranking volatility – it’s losing control of how your brand is interpreted and cited.
For SEO leaders, the choice is unavoidable: remain a channel optimizer or shape the systems that govern how your organization is understood and cited. That shift isn’t happening in a vacuum. AI systems now interpret, reconcile, and assemble information at scale.
The visibility shift beyond rankings
The future of organic search will be shaped by LLMs alongside traditional algorithms. Optimizing for rankings alone is no longer enough. Brands must optimize for how they are interpreted, cited, and synthesized across AI systems.
Clicks may fluctuate and traffic patterns may shift, but the larger change is this: visibility is becoming an interpretation problem, not just a positioning problem. AI systems assemble answers from structured data, brand narratives, third-party mentions, and product signals. When those inputs conflict, inconsistency becomes the output.
In the AI era, collaboration can’t be informal or personality-driven. LLMs reflect the clarity, consistency, and structure of the information they ingest. When messaging, entity signals, or product data are fragmented, visibility fragments with them.
This is a leadership challenge. Visibility can’t be achieved in a silo. It requires redesigning the systems that govern how information is created, validated, and distributed across the organization. That’s how visibility becomes structural, not situational.
If visibility is structural, it needs a system.
Building the visibility supply chain
Collaboration shouldn’t depend on whether the SEO manager and PR manager get along. It must be built into the content supply chain.
To move from a marketing silo to an operational design, we must treat content like an industrial product that requires specific refinement before it’s released into the ecosystem.
This is where visibility gates come in: a series of nonnegotiable checkpoints that filter brand data for machine consumption.
Implementing visibility gates
Think of your content moving through a high-pressure pipe. At each joint, a gate filters out noise and ensures the output is pure:
The technical gate (parsing)
The filter: Does the new product page template use valid schema.org markup (product, FAQ, review)?
The goal: Ensuring the raw material is structured so LLMs can ingest the data without friction. (A minimal automated check for this gate is sketched after the last gate below.)
The brand signal gate (clustering)
The filter: Does the PR copy align with our core entities? Are we using terminology that helps LLMs cluster our brand correctly?
The goal: Removing linguistic drift that confuses an LLM’s understanding of who we are.
The accessibility/readability gate (chunking)
The filter: Is the content structured for RAG (retrieval-augmented generation) systems?
The goal: Moving away from fluff and towards high-information-density prose that can be easily chunked and retrieved by an AI.
The authority and de-duplication gate (governance)
The filter: Does this asset create “knowledge cannibalization” or internal noise?
The goal: Acting as a final sieve to remove conflicting information, ensuring the LLM sees only one single source of truth.
The localization gate (verification)
The filter: Is the entity information consistent across global regions?
The goal: Ensuring cross-referenced data points align perfectly to build model trust.
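To make the technical gate concrete, here is a minimal sketch of an automated check that fetches a page and verifies that the schema.org types the gate requires appear in its JSON-LD. The URL and required types are illustrative assumptions; a production gate would use a proper HTML parser and a full validator:

```python
import json
import re
import requests

URL = "https://example.com/products/widget"        # hypothetical page to gate-check
REQUIRED_TYPES = {"Product", "FAQPage", "Review"}  # the types this gate demands

html = requests.get(URL, timeout=10).text

# Pull every JSON-LD block out of the page and collect the declared @type values.
found_types = set()
for block in re.findall(
    r'<script[^>]*type="application/ld\+json"[^>]*>(.*?)</script>', html, re.S
):
    try:
        data = json.loads(block)
    except json.JSONDecodeError:
        continue  # malformed JSON-LD; in practice, log this as a gate failure
    items = data if isinstance(data, list) else [data]
    for item in items:
        if isinstance(item, dict):
            t = item.get("@type")
            found_types.update(t if isinstance(t, list) else [t])

missing = REQUIRED_TYPES - found_types
print("Gate passed" if not missing else f"Gate failed; missing: {missing}")
```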
If gates protect what enters the ecosystem, accountability ensures that behavior changes.
Embedding visibility into cross-functional OKRs
But alignment without visibility into results won’t sustain change.
The most sophisticated infrastructure will fail if it relies on the SEO team’s influence alone.
To move beyond polite collaboration, visibility must be codified into the organization’s performance DNA.
We need to shift from SEO-specific goals to shared visibility OKRs.
When a product owner is measured on the machine-readability of a new feature, or a PR lead is incentivized by entity citation growth, SEO requirements suddenly migrate from the bottom of the backlog to the top of the sprint.
What shared OKRs look like in an operational design:
For product teams: “Achieve 100% schema validation and <100ms time-to-first-byte for all top-tier entity pages.”
For PR and communications: “Increase ‘brand-as-a-source’ citations in LLM responses by 15% through high-authority, entity-aligned placements.”
For content teams: “Ensure 90% of new assets meet the ‘high information density’ threshold for RAG retrieval.”
When stakeholders’ KPIs are tied to the brand’s digital footprint, visibility is no longer “the SEO team’s job.” Instead, it becomes a collective business imperative.
This is where the magic happens: the organizational structure finally aligns with the way modern search engines actually work.
Measuring visibility across the organization
The gates ensure the quality of what we put into the digital ecosystem; the unified visibility dashboard measures what we get out. Breaking down silos starts with transparent data.
If the PR team can see which mentions drive AI citations and source links in AI Overviews, they’re more likely to shift toward high-authority, contextually relevant publications instead of chasing any media outlet.
We need to shift from reporting rankings to reporting entity health and Share of Model (SoM). This dashboard is the organization’s single source of truth, showing that when we pass the visibility gates correctly, our brand authority grows with humans and machines.
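One simple way to approximate SoM is to sample AI answers for a set of relevant prompts on a schedule and log which brands each answer cites. The sketch below assumes that logging already exists; every prompt and brand name in it is hypothetical:

```python
# Each record is one sampled AI answer and the brands it cited.
answers = [
    {"prompt": "best crm for smb", "cited_brands": ["BrandA", "BrandB"]},
    {"prompt": "crm with free tier", "cited_brands": ["BrandA"]},
    {"prompt": "salesforce alternatives", "cited_brands": ["BrandC"]},
]

brand = "BrandA"
# Share of Model here = fraction of sampled answers citing the brand.
som = sum(brand in a["cited_brands"] for a in answers) / len(answers)
print(f"Share of Model for {brand}: {som:.0%}")  # 67%
```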
Systems and incentives matter, but they don’t operate on their own.
Having the right infrastructure isn’t enough. We need a specific set of qualities in the workforce to drive this model. To navigate the visibility transformation, we need to move away from hiring generalists and start hiring for the two distinct pillars of an operational search strategy.
In my experience, this requires a strategic duo: the hacker and the convincer.
| Feature | The hacker (technical architect) | The convincer (visibility advocate) |
| --- | --- | --- |
| Core mission | Ensuring the brand is discoverable by machines. | Ensuring the brand is supported by humans. |
| Primary domain | RAG architecture, schema, vector databases, and LLM testing. | Cross-departmental OKRs, C-suite buy-in, and PR/brand alignment. |
| Success metric | Share of model (SoM) and information density. | Resource allocation and budget growth. |
| The gate focus | Technical, accessibility, and authority gates. | Brand signal and localization gates. |
The hacker: The engine room
Deeply technical, driven, and a relentless early adopter. They don’t just “do SEO.” They reverse-engineer how Perplexity attributes trust and how Google’s knowledge vault weighs brand entities.
They find the “how.” They aren’t just optimizing for a search bar, but are optimizing for agentic discovery, ensuring your brand is the path of least resistance for an LLM’s reasoning engine.
The convincer: The social butterfly of data
This is the visionary who brings people together and talks the language of business results. They act as the social glue, ensuring the hacker’s technical insights are actually implemented by the brand, tech, and PR teams. They translate schema validation into executive visibility, ensuring that the budget flows where it’s needed most.
How AI visibility reshapes in-house and agency roles
As roles evolve, the brand-agency relationship shifts with them. If you’re an in-house SEO manager today, you’re likely evolving into a chief visibility officer, focusing on the “convincer” role of internal politics and resource allocation.
Historically, agencies were the training ground for talent, and brands hired them for execution. That dynamic may flip. In this new era, brands could become training grounds for junior specialists who need to understand a single entity deeply and manage its internal gates.
Meanwhile, agencies may evolve into elite strategic partners staffed by seasoned visibility hackers who help brands navigate high-level visibility transformation that in-house teams are often too siloed or time-constrained to see.
To prepare your team for the shift to SEO as an operational approach, take these steps:
Set the vision: Do you want to be part of the change? Define what visibility-first looks like for your business.
Take stock of talent: Do you have hackers and convincers? Audit your team not just for skills, but for mindset.
Audit the gaps: Where does communication break down? Find friction points between SEO and PR, or SEO and product, and fix them quickly.
Shift the KPIs: Move away from rankings and toward channel authority, impressions, sentiment share, and, most importantly, revenue and leads.
Be radically transparent: Clarity is key. You’ll need new templates, job descriptions, and responsibilities. Data should be shared in real time. There’s no room for siloed thinking.
What the first 90 days should look like:
Days 1-30 (Audit): Map your brand’s entity footprint. Where does your brand data live, and where is it conflicting?
Days 31-60 (Infrastructure): Embed visibility gates into your CMS or project management tool, such as Jira or Asana.
Days 61-90 (Incentives): Tie 10% of the PR and product teams’ bonuses to information integrity or AI citation growth.
The SEO leader as a systems architect
As we move further into the age of AI, the successful SEO leader will no longer be the person who simply moves a page from position four to position one. They’ll be the systems architect who builds the infrastructure that allows a brand to be seen, understood, and recommended by machines and humans alike.
This transition is messy. It requires challenging old thought patterns and communicating transparently and directly to secure buy-in. But by redesigning the structures that create silos, we don’t just “do SEO.” We build a resilient organization that is visible by default, regardless of what the next algorithm or LLM brings.
The future of search isn’t just about keywords. It’s about how your organization’s information flows through the digital ecosystem. It’s time to stop optimizing pages and start optimizing organizations.
For a long time, PPC performance conversations inside agencies have centered on bidding – manual versus automated, Target CPA versus Maximize Conversions, incrementality debates, budget pacing and efficiency thresholds.
But in 2026, that focus is increasingly misplaced. Across Google Ads, Meta Ads, and other major platforms, bidding has largely been solved by automation.
What’s now holding performance back in most accounts isn’t how bids are set, but the quality, volume, and diversity of creative being fed into those systems. Recent platform updates, particularly Meta’s Andromeda system, make this shift impossible to ignore.
Bidding has been commoditized by automation
Most advertisers today are using broadly similar bidding frameworks.
Google Smart Bidding uses real-time signals across device, location, behavior, and intent that humans can’t practically manage at scale. Meta’s delivery system works in much the same way, optimizing toward predicted outcomes rather than static audience definitions.
In practice, this means most advertisers are now competing with broadly the same optimization engines.
Google has been clear that Smart Bidding evaluates millions of contextual signals per auction to optimize toward conversion outcomes. Meta has likewise stated that its ad system prioritizes predicted action rates and ad quality over manual bid manipulation.
The implication is simple. If most advertisers are using the same optimization engines, bidding is no longer a sustainable competitive advantage. It’s table stakes.
What differentiates performance now is what you give those algorithms to work with – and the most influential input is creative.
Andromeda makes creative a delivery gate
Meta’s Andromeda update is the clearest evidence yet that creative is no longer just a performance lever. It’s now a delivery prerequisite. This matters because it changes what gets shown, not just what performs best once shown.
Meta published a technical deep dive explaining Andromeda, its next-generation ads retrieval and ranking system, which fundamentally changes how ads are selected.
Instead of evaluating every eligible ad equally, Meta now filters and ranks ads earlier in the process using AI models trained heavily on creative signals, improving ad quality by more than 8% while increasing retrieval efficiency.
What this means in practice is critical for marketers. Ads that don’t generate strong engagement signals may never meaningfully enter the auction, regardless of targeting, budget, or bid strategy.
If your creative doesn’t perform, the platform doesn’t just charge you more. It limits your reach altogether.
Creative is now the primary optimization input on Meta
Meta has repeatedly stated that creative quality is one of the strongest drivers of auction outcomes.
In its own advertiser guidance, Meta highlights creative as a core factor in delivery efficiency and cost control. Independent analysis has reached the same conclusion.
A widely cited Meta-partnered study showed that campaigns using a higher volume of creative variants saw a 34% reduction in cost per acquisition, despite lower impression volume.
The reason is straightforward. More creative gives the system more signals. More signals improve matching. Better matching improves outcomes.
Andromeda accelerates this effect by learning faster and filtering harder. This is why many advertisers are experiencing plateaus even with stable bidding and budgets. Their creative inputs are not keeping pace with the system’s learning requirements.
While Google has not branded its changes as dramatically as Meta, the direction is the same. Performance Max, Demand Gen, Responsive Search Ads, and YouTube Shorts all rely heavily on creative assets to unlock inventory.
Google has explicitly stated that asset quality and diversity influence campaign performance. Accounts with limited creative assets consistently underperform those with strong asset coverage, even when bidding strategies and budgets are otherwise identical.
Google has reinforced this by introducing creative-focused tools such as Asset Studio and Performance Max experiments that allow advertisers to test creative variants directly. As with Meta, the algorithm can only optimize what it is given.
Strong creative expands reach and efficiency. Weak creative constrains both.
Many agencies are seeing the same pattern across accounts. Performance improves after structural fixes or bidding changes. Then it flattens.
Scaling spend leads to diminishing returns. The instinct is often to revisit bids or efficiency targets. But in most cases, the real constraint is creative fatigue.
Audiences have seen the same hooks, visuals, and messages too many times. Engagement drops. Estimated action rates fall. Delivery becomes more expensive.
This isn’t a platform issue. It’s a creative cadence issue. Creative testing is the missing optimization lever in mature accounts.
Most agencies are structurally set up to optimize bids, budgets, and structure faster than they can produce new creative.
Creative takes time. It requires strategy, copy, design, video, approvals, and iteration. Many retainers still treat creative as a one-off or an add-on rather than a core performance input. The result is predictable. Accounts are technically sound but creatively starved.
If your account has had the same core ads running for three months or more, performance is almost certainly being limited by creative volume, not optimization skill.
High-performing accounts today look messy on the surface with dozens of ads, multiple hooks, frequent refreshes, and constant testing. That isn’t inefficiency. That’s how modern PPC works.
Creative testing is a process, not a campaign
One of the biggest mistakes agencies make is treating creative testing as episodic. Launch new ads. Wait four weeks. Review results. Declare winners and losers. That approach is too slow for how fast platforms learn and audiences fatigue.
High-performing teams treat creative like a product roadmap. There’s always something new in development. Always something learning. Always something being retired.
Effective creative testing focuses on one variable at a time: hook, opening line, visual style, offer framing, social proof, or call to action.
It’s not about finding “the best ad.” It’s about building a library of messages the algorithm can deploy to the right people at the right time.
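As a sketch of what one-variable-at-a-time testing looks like operationally (the axes and creative variants below are hypothetical):

```python
# Hypothetical creative library: a control ("base") plus variant options
# on two axes. Illustrative only; this is not platform API code.
base = {"hook": "Save 10 hours a week", "visual": "UGC-style video", "cta": "Start free trial"}
variants = {
    "hook": ["Save 10 hours a week", "Stop losing leads", "Your team will thank you"],
    "cta": ["Start free trial", "Book a demo"],
}

# One variable at a time: change a single axis per test cell and hold the
# rest at the control, so any performance delta is attributable to that axis.
test_queue = []
for axis, options in variants.items():
    for option in options:
        if option == base[axis]:
            continue  # skip the control itself
        test_queue.append({**base, axis: option})

for cell in test_queue:
    print(cell)
```

Each winning cell replaces the control on its axis, and the retired variant feeds back into the library as a learning.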
Once you accept that creative is the constraint, the operational implications are unavoidable. If creative is the main constraint, agency processes need to change.
Creative should be planned alongside media, not after it. Retainers should include ongoing creative production, not just optimization time. Testing frameworks should be explicit and documented.
At a minimum, agencies should be asking:
How often are we refreshing creative by platform?
Are we testing new hooks or just new designs?
Do we have enough volume for the algorithm to learn?
Are we feeding performance insights back into creative strategy?
The best agencies now operate closer to content studios than optimization factories. That’s where the value is.
Creative is the performance lever
Bidding, tracking, and structure still matter. But in 2026, those are table stakes.
If your PPC performance is stuck, the answer is rarely another bidding tweak. It’s almost always better creative. More of it. Faster iteration. Smarter testing.
The platforms have told us this. The data supports it. The accounts prove it.
Creative is no longer a nice-to-have. It’s the performance lever. The agencies that recognize that will be the ones that continue to grow.
We’re in a new era where web content visibility is fragmenting across a wide range of search and social platforms.
While still a dominant force, Google is no longer the default search experience. Video-based social media platforms like TikTok and community-based sites like Reddit are becoming popular search engines with dedicated audiences.
This trend is impacting how news content is consumed. Google’s current news SERP evolution is directly influenced by the personalization of query responses offered by LLMs and the rise in influencer authority enabled by social media platforms.
Google has responded by creating its own AI-powered SERP features, such as AI Overviews and AI Mode, and surfacing more content from social media platforms that provide the “helpful, reliable, people-first content” that Google’s ranking systems prioritize.
Now that search and social are more intertwined than ever, a new paradigm is needed – one in which newsroom audience teams made up of social media, SEO, and AI specialists work holistically on a daily basis toward a cohesive content visibility goal.
When optimizing news content for social platforms, publishers should also consider how those posts may perform in the Google SERP. I’ll cover optimizing for specific SERP features below, but first, you’ll want to think about making your news content social-friendly.
Optimize news content for social media platforms
First, a dose of sanity. Publishers should resist the temptation to optimize content for every social media platform.
It’s better to pick one or two social platforms – where an audience is already established and that offer the best opportunity for growth – than to create accounts on every social platform and let them languish.
Review analytics and conduct audience surveys to gain insights into which platforms your audience already consumes news content.
Here’s a breakdown by platform of which content types work best and how content from each platform can appear on Google.
YouTube
If you’re producing YouTube video content, make sure to follow video SEO best practices. This comprehensive YouTube SEO guide will help you develop a successful video strategy and ensure video titles align with your content.
Per Google, YouTube’s search ranking system prioritizes three elements:
Relevance: Metadata needs to accurately represent video content to be surfaced as relevant for a search query.
Engagement: Includes factors such as a video’s watch time for a specific user query.
Quality: Video content should show topic expertise, authoritativeness, and trustworthiness.
One trend I’ve noticed in YouTube videos on the Google SERP is that older event content can continue to drive visibility long after the event has ended and well after the related article has faded in search rankings.
Explainer videos also demonstrate longevity on the Google SERP. In this government shutdown explainer video, Yahoo Finance includes the expert’s credentials in the description box, further emphasizing the topic expertise element that YouTube’s ranking system prioritizes.
YouTube can also help your visibility in AI Overviews. Nearly 30% of Google AI Overviews cite YouTube, according to BrightEdge. YouTube was cited most often for tutorials, reviews, and shopping-related queries.
Facebook
While Facebook may not be the cool kid on the block anymore, the social platform has served a diverse set of users over its long history, from its initial audience of college kids to now attracting an older, majority female audience, per Pew Research Center data.
Community-based content and entertainment news that sparks conversation is key to engagement success on Facebook.
While Meta removed the dedicated news tab on Facebook in 2023-2024, leading to cratering Facebook referrals for news publishers, it’s worth noting that Facebook posts have been rising in Google SERP visibility over the last year, so it may be time to reconsider the platform from a search perspective.
In my review of Google search visibility, Facebook posts about holidays and the full moon appear consistently, and the short-form video format is popular.
X
Since Elon Musk took over the platform in 2022, the audience has shifted to the political right. While the left’s exodus made headlines, usage of X for news is stable or increasing, especially in the U.S., according to the 2025 Digital News Report from the Reuters Institute.
Breaking news, live updates, and political news dominate X feeds and Google visibility, but don’t overlook sports content, where X posts perform well on both the Google SERPs and Discover.
Instagram
This platform emphasizes stylish, visually driven stories and topics, such as red-carpet fashion at award shows. Health topics, especially nutrition and self-care, are also popular.
Sports posts from Instagram, especially game highlights, often surface on the Google SERP as part of a dedicated publisher carousel or in “What people are saying.”
Reddit
A unique aspect of Reddit is that its user base is often not on other social platforms. For news publishers, this can mean a golden opportunity for niche community engagement, but also requires a dedicated strategy that may not translate well to other platforms.
A wide range of news content can perform well on Reddit, from trending topics to health explainers to live sports coverage, but having a deep understanding of the platform’s audience is critical, as is following the Reddit rules of conduct.
Publishers should spend time studying the types of news articles and conversations that drive strong engagement on subreddits before posting anything. Per Reddit, the platform’s largest audiences gravitate toward the following topics:
Technology.
Health.
Direct to consumer (DTC).
Gaming.
Parenting.
The community discussion forum content from Reddit makes it a natural to appear in the Google SERP as part of the “What people are saying” carousel. The Reddit posts I see most often surfaced by Google are related to sports, entertainment, and business.
TikTok
The TikTok user base leans female and has a greater share of people of color. Approximately half of 18- to 29-year-olds in the U.S. self-report going on TikTok at least once daily, per Pew Research data.
Visual, conversational, and opinion-based content for younger audiences performs best on TikTok. Niche community content also works well; think fashion, #BookTok, etc.
Remember that short-form video requires a dedicated strategy to maximize engagement and reach, and it’s important to keep in mind that TikTok audiences value authenticity over the polish of a professional newsroom production.
Entertainment and shopping content (sales, product reviews) are the categories in which TikTok demonstrates the most Google visibility.
Pinterest
While Pinterest may feel like an old-school social platform, Gen Z is its fastest-growing audience. That being said, Pinterest attracts users from across a wide range of age groups. According to Pinterest’s global data, its audience is 70% women and 30% men.
Don’t overlook the power of Pinterest for lifestyle content niches. Trends around fashion, home decor, DIY, crafts, recipes, and celebrity content are top performers on this visual social platform.
News publishers interested in this platform should have robust lifestyle content that is actionable and delivered with a motivational tone.
How-to and before/after formats are popular. Excellent quality visuals in a vertical format with a 2:3 aspect ratio and text overlays are recommended. Pinterest supports a more relaxed posting schedule compared to other social platforms. Weekly posting is ideal, since much of the content on Pinterest is evergreen.
Similar to Google Trends, Pinterest Trends can help news publishers stay on top of trending topics on the platform.
Social content opportunities by Google SERP feature
If you’re looking to appear in a particular SERP feature, it’s helpful to know how social platform content appears in each type.
Top Stories (or News Box)
The crown jewel of the Google SERP for news publishers, this feature is dedicated to breaking news and developing news stories as well as capturing updates for the big news stories and trends of the moment.
Thumbnail selection is critical for Top Stories. Publishers should pay close attention to the News Box descriptive labels to ensure content is optimized to match the specific intent or angle Google is seeking.
While historically a SERP feature that showcased traditional news publishers, Google is now including relevant social media content in the mix. The Instagram post in Top Stories below is an Instagram Reel from the Detroit Free Press.
Live update articles are often featured in the News Box and are a great format for embedding social media posts.
Embedded posts help break up walls of text and serve as a showcase for a news publisher’s live, original reporting from the scene, eyewitness accounts, and related social content that demonstrates a publisher’s subject expertise.
What people are saying
This Google SERP feature is ideal for capturing audience reaction and user-generated content from a variety of social platforms. Short-form video is often featured in this space.
It’s a showcase for any story or topic that drives emotional engagement, including reactions to everything from a celebrity death to a sporting event outcome to a viral trend. Severe weather is also a recurring topic.
Knowledge Panel
There’s a growing interest in this Google SERP feature among news publishers, especially those publishers who produce entertainment content.
Depending on the configuration, publishers have the opportunity to earn a ranking for an image, social post, or article, such as a celebrity biography.
While content opportunities are limited in the Knowledge Panel, they offer more exclusivity, which can increase CTR. YouTube and Instagram are commonly cited here, but X and TikTok have also been growing in visibility.
Google Discover
This social-search hybrid product, which features trending, emotionally engaging content based on a user’s web and app activity, requires a separate optimization strategy.
The keys to Discover visibility are identifying topics that spark curiosity and ensuring articles are formatted for frictionless consumption.
Discover has been considered a “black box” when it comes to content optimization, but there are several basic elements to implement that can increase visibility.
Viral hits may spike a news publisher’s Discover performance temporarily, but as Harry Clarkson-Bennett outlines, publishers need to analyze their Discover performance over time at the entity level to build a smart optimization strategy.
Google’s official Discover optimization tips discourage clickbait practices that actually work quite well on the platform, such as salacious quotes in headlines and content about controversial topics and strong opinion perspectives.
I would never recommend a publisher produce clickbait, but for tabloid publishers, content with a strong, contentious perspective overperforms on Discover, regardless of the official Google guidance.
Headlines and images require serious consideration. While Google is running an experiment in which their AI tool rewrites headlines for Discover, direct, action-oriented, and emotion-driven headlines traditionally perform best. There’s no specific character count recommendation, but at a certain point (typically 100+ characters), the headline will get truncated and an ellipsis will be used.
Images must be formatted to Discover specifications (at least 1,200 pixels wide) and should be eye-catching to make people stop and click. Keep articles short or include a summary box at the top of longer articles. Format articles for scanability.
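Both of those specs are easy to enforce before publishing. A minimal pre-publish check, assuming your CMS exposes the headline text and image width (the values here are placeholders):

```python
# Thresholds come from the guidance above: roughly 100 characters before
# truncation, and a 1,200px minimum image width for Discover.
headline = "Why tonight's full moon will look different, according to astronomers"
image_width_px = 1600

if len(headline) > 100:
    print("Warning: headline will likely be truncated on Discover")
if image_width_px < 1200:
    print("Warning: image is below Discover's 1,200px minimum width")
```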
This Forbes X post featured on my Discover feed nails the elements essential for inclusion.
Politics, sports, and entertainment topics that favor an opinion-driven perspective can drive strong engagement on Discover. For YMYL (Your Money Your Life) content, which can also perform well on Discover, focus on accuracy and expert sources, and lean into the curiosity gap.
YouTube and X are the dominant social platforms featured on Discover, according to a Marfeel study.
This was further confirmed by Clara Soteras, who shared insights from Andy Almeida of Google’s Trust and Safety team as presented at Google Search Central Live in Zurich in December 2025.
Almeida noted that Discover’s algorithm has been updated to “include content from YouTube, Instagram, TikTok, or X published by content creators.”
Instead of feeling dismayed by the increased competition from social media platform content appearing on Google’s SERPs and Discover, news publishers should welcome the additional opportunities for their content to be seen.
In a social and AI-powered search landscape, brand visibility is the key metric. Whether that visibility comes from a news publisher article, video, or social post, it still counts toward brand engagement.
While search strategies have long focused on algorithms, optimizing content for a social-forward SERP requires a different focus. The merging of social and search will spark a holistic audience team revolution in newsrooms, reduce redundant practices, and inspire a content strategy powered by people over algorithms.
As the SaaS market reels from a sell-off sparked by autonomous AI agents like Claude Cowork, new data shows a 53% drop in AI-driven discovery sessions. Wall Street dubbed it the “SaaSpocalypse.”
Whether AI agents will replace SaaS products is a bigger question than this dataset can answer. But the panic is already distorting interpretation, and this data cuts through the noise to show what SEO teams should actually watch.
Copilot went from 0.3% to 9.6% of SaaS AI traffic in 14 months
From November 2024 to December 2025, SaaS sites logged 774,331 LLM sessions. ChatGPT drove 82.3% of that traffic, but Copilot’s growth tells a different story:
SaaS AI Traffic by Source (Nov 2024 – Dec 2025)

| Source | Sessions | Share |
| --- | --- | --- |
| ChatGPT | 637,551 | 82.3% |
| Copilot | 74,625 | 9.6% |
| Claude | 40,363 | 5.2% |
| Gemini | 15,759 | 2.0% |
| Perplexity | 6,033 | 0.8% |
Starting with just 148 sessions in late 2024, Copilot grew more than 20x by May 2025. From May through December, it averaged 3,822 sessions per month, making it the second-largest AI referrer to SaaS sites by year-end 2025.
Investors erased $300 billion from SaaS market caps over fears that AI agents will replace enterprise software. But this data points to a less dramatic force: proximity.
Copilot thrives because it captures intent inside the workflow. Standalone tools saw a 53% traffic drop while workplace-embedded AI grew 20x.
Software evaluation is work, and Copilot sits where that work happens.
When someone asks, “What CRM should we use for a 20-person sales team?” while building a business case in Excel, that moment is captured—one ChatGPT never sees. The May surge reflects that activation: Microsoft 365 users realizing they could research software without opening a new tab.
41.4% of SaaS AI traffic lands on internal search pages
SaaS AI discovery sends users to internal search results first, not product pages.
Top SaaS Landing Pages by LLM Volume

| Page Type | LLM Sessions | % of AI Traffic | Penetration vs Site Avg |
| --- | --- | --- | --- |
| Search | 320,615 | 41.4% | 8.7x |
| Blog | 127,291 | 16.4% | 8.1x |
| Pricing | 40,503 | 5.2% | 3.2x |
| Product | 39,864 | 5.1% | 2.0x |
| Support | 34,599 | 4.5% | 2.1x |
Search pages captured 320,615 sessions, more than blog, pricing, and product pages combined. But that dominance likely reflects LLM limitations, not superior content. LLMs route users to search when they lack a specific answer.
For SaaS companies watching their stock crater, that’s useful news: there’s a concrete technical fix. The 41.4% isn’t an existential threat. It’s a crawlability problem.
When an LLM can’t find a direct answer, it defaults to the site’s internal search. The AI treats your search bar as a trusted backup, assuming the search schema will generate a relevant page even if a specific product page isn’t indexed.
At 1.22%, search page penetration is 8.7x the site average. The cause is a “safety net” effect, not optimization.
When more specific pages — like Product or Pricing — lack the data an LLM needs, it falls back to broader search results. LLMs recognize the search URL structure and trust it will return something relevant, even if they can’t predict what.
Blog pages follow with 127,291 sessions and 1.13% penetration. These are structured comparison posts — “best CRM for small teams” or “Salesforce alternatives” — that LLMs cite when they have specific recommendations.
Pricing pages show 0.45% penetration; product pages, 0.28%. When users ask about software selection, LLMs route to comparison surfaces — search and blog — first. Direct product or pricing pages get cited only when the query is already vendor-specific.
The July peak and Q4 decline reflect corporate work cycles
SaaS AI traffic peaked in July at 146,512 sessions, then declined steadily through Q4:
| Month | Sessions | Change |
| --- | --- | --- |
| July 2025 | 146,512 | Peak |
| August 2025 | 120,802 | -17.5% |
| September 2025 | 134,162 | +11.1% |
| October 2025 | 135,397 | +0.9% |
| November 2025 | 107,257 | -20.8% |
| December 2025 | 68,896 | -35.8% |
Every platform declined. ChatGPT’s volume was cut in half, dropping from 127,510 sessions in July to 56,786 by year-end. Copilot fell from 4,737 to 2,351. Perplexity dropped from 7,475 to 3,752.
Two factors drove the slide:
People weren’t working. August is vacation season, November includes Thanksgiving, and December is the holidays. Software research happens during work hours; when offices close, discovery drops.
Q4 ends the fiscal “buying window.” Most teams have spent their annual budgets or are deferring contracts until Q1 funding opens. Even teams still working aren’t evaluating tools because there’s no budget left until the new fiscal year.
The July peak reflects midyear momentum: people are working, and Q3 budgets are still available. The Q4 decline reflects both fewer researchers and fewer active buying cycles.
This is where the sell-off narrative breaks down.
Investors treat a 53% traffic drop as proof that AI discovery is stalling. But the data aligns with standard B2B fiscal cycles.
AI isn’t failing as a discovery channel. It’s settling into the same seasonal rhythms as every other B2B buying behavior.
What this data means for SEO teams
Raw traffic numbers don’t show where to invest. Penetration rates and landing page distribution reveal what matters.
Track penetration by page type, not site-wide averages
SaaS shows 0.41% sitewide AI penetration, but that average hides concentration. Search pages reach 1.22%—8.7x higher. Blog pages hit 1.13%. Pricing pages are at 0.45%. Product pages lag at 0.28%.
If you’re only tracking total AI sessions, you’re measuring the wrong metric. AI traffic could grow 50% while penetration on high-value pages declines. Volume hides what matters: where AI users concentrate when they arrive with intent.
Action:
Segment AI traffic by page type in GA4 or your analytics platform.
Track penetration (AI sessions ÷ total sessions) by page category monthly; a minimal sketch follows this list.
Identify pages with elevated concentration, then optimize those surfaces first.
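A minimal sketch of that monthly calculation, assuming an export with one row per session, a landing-page-type column, and an AI-referral flag (the file and column names are hypothetical):

```python
import pandas as pd

# Hypothetical export: one row per session, with the landing page's type
# and a 0/1 flag for whether the session was referred by an LLM
# (ChatGPT, Copilot, Claude, Gemini, Perplexity, ...).
sessions = pd.read_csv("sessions_export.csv")  # columns: page_type, is_ai_referral

penetration = sessions.groupby("page_type")["is_ai_referral"].agg(
    ai_sessions="sum", total_sessions="count"
)
# Penetration = AI sessions / total sessions, per page category.
penetration["penetration_pct"] = (
    100 * penetration["ai_sessions"] / penetration["total_sessions"]
)
print(penetration.sort_values("penetration_pct", ascending=False))
```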
Search results pages are now a primary discovery surface
Internal search captures 41.4% of SaaS AI traffic. If those results aren’t crawlable, indexable, or structured for comparison, you’re invisible to the largest segment of AI-driven buyers.
Most SaaS sites treat internal search as navigation, not content. Results return paginated lists with minimal product detail, no filter signals in URLs, and JavaScript-rendered content LLMs can’t parse.
Action:
With 41.4% of traffic hitting internal search, treat your search bar as an API for AI agents.
Make search pages crawlable (check robots.txt and indexability).
Add structured data using SoftwareApplication or Product schema (a sketch follows this list).
Surface comparison data — pricing, key features, user count — directly in results, not just product names.
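A minimal sketch of what that structured data could look like for a single search result, built as a Python dict and serialized to JSON-LD; every field value is a placeholder, not a prescribed template:

```python
import json

result_schema = {
    "@context": "https://schema.org",
    "@type": "SoftwareApplication",
    "name": "ExampleCRM",                          # placeholder product
    "applicationCategory": "BusinessApplication",
    "operatingSystem": "Web",
    "offers": {
        "@type": "Offer",
        "price": "49.00",                          # surface pricing directly
        "priceCurrency": "USD",
        "description": "Per seat, per month; 5-seat minimum",
    },
    "aggregateRating": {
        "@type": "AggregateRating",
        "ratingValue": "4.6",
        "ratingCount": "1280",
    },
}

# Embed the output inside a <script type="application/ld+json"> tag in the
# search results template so crawlers and LLMs can parse it directly.
print(json.dumps(result_schema, indent=2))
```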
Make your data legible to LLMs — pricing and content both
The sell-off is pricing in obsolescence, but for most SaaS companies the real risk is invisibility. Pricing pages show 0.45% AI penetration—below the 0.46% cross-industry average. Blog pages captured 127,291 sessions at 1.13% penetration, but only when content directly answered selection queries. The pattern is clear: LLMs cite what they can read and parse. They skip what they can’t.
Many SaaS sites still gate pricing behind contact forms. If pricing requires a sales conversation, AI won’t recommend you for “tools under $100/month” queries. The same applies to blog content. When someone asks, “What CRM should I use?” the LLM looks for posts that compare options, define criteria, and explain tradeoffs. Generic thought leadership on CRM trends doesn’t get cited.
Action:
Publish pricing on a dedicated, crawlable page. Include representative examples, seat minimums, contract terms, and exclusions.
Keep pricing transparent. Transparent pages get cited; gated pages don’t.
Replace generic blog posts with structured comparison pages. Use tables and clear data points.
Remove fluff. Provide grounding data that lets AI verify compliance and integration capabilities in seconds, not minutes.
Workplace-embedded AI is growing 10x faster than standalone LLMs
Copilot grew 15.89x year over year. Claude grew 7.79x. ChatGPT grew 1.42x. The fastest growth is in tools embedded in existing workflows.
Workplace AI shifts discovery context. In ChatGPT, users are explicitly researching. In Copilot, they’re asking questions mid-task—drafting a proposal, building a comparison spreadsheet, or reviewing vendor options with their team.
Action:
Track Copilot and Claude referrals separately from ChatGPT. Monitor which pages these sources favor.
Recognize intent: these users aren’t browsing — they’re mid-task, deeper in evaluation, and closer to a purchase decision.
Show up in workplace AI discovery to support real-time purchase justification.
Survival favors the findable
The 53% drop from July to December reflects AI usage settling into the software buying process. Buyers are learning which decisions benefit from AI synthesis and which don’t. The remaining traffic is more deliberate, concentrated on complex evaluations where comparison matters.
For SaaS companies, the window for early positioning is closing. The $300 billion sell-off is hitting the sector broadly, but the companies that survive the repricing will be those buyers can find when they ask an AI agent, “Should we renew this contract?”
Teams investing now in transparent pricing, crawlable data, and comparison-focused content are building that findability while competitors debate whether AI discovery matters.
In Google AI Overviews and LLM-driven retrieval, credibility isn’t enough. Content must be structured, reinforced, and clear enough for machines to evaluate and reuse confidently.
Many SEO strategies still optimize for recognition. But AI systems prioritize utility. If your authority can’t be located, verified, and extracted within a semantic system, it won’t shape retrieval.
This article explains how authority works in AI search, why familiar SEO practices fall short, and what it takes to build entity strength that drives visibility.
Why traditional authority signals worked – until they didn’t
For years, SEOs liked to believe that “doing E-E-A-T” would make sites authoritative.
Author bios were optimized, credentials showcased, outbound links added, and About pages polished, all in hopes that those signals would translate into authority.
In practice, we all knew what actually moved the needle: links.
E-E-A-T never really replaced external validation. Authority was still conferred primarily through links and third-party references.
E-E-A-T helped sites appear coherent as entities, while links supplied the real gravitas behind the scenes. That arrangement worked as long as authority could be vague and still rewarded.
It stops working when systems need to use authority, not just acknowledge it. In AI-driven retrieval, being recognized as authoritative isn’t enough. Authority still has to be specific, independently reinforced, and machine-verifiable, or it doesn’t get used.
Being authoritative but not used is like being “paid” with experience. It doesn’t pay the bills.
Search no longer operates on a flat plane of keywords and pages. AI-driven systems rely on a multi-dimensional semantic space that models entities, relationships, and topical proximity.
In that semantic space, entities function much like celestial bodies in physical space: discrete objects whose influence is defined by mass, distance, and interaction with others.
E-E-A-T still matters, but the framework version is no longer a differentiator. Authority is now evaluated in a broader context that can’t be optimized with a handful of on-page tasks.
In AI Overviews, ChatGPT, Claude, and similar systems, visibility doesn’t hinge on prestige or brand recognition. Those are symptoms of entity strength, not its source.
What matters is whether a model can locate your entity within its semantic environment and whether that entity has accumulated enough mass to exert influence.
That mass isn’t decorative. It’s built through third-party citations, mentions, and corroboration, then made machine-legible through consistent authorship, structure, and explicit entity relationships.
Models don’t trust authority. They calculate it by measuring how densely and consistently an entity is reinforced across the broader corpus.
Smaller brands don’t need to shine like legacy publishers. In a semantic system, apparent size and visibility don’t determine influence. Density does.
In astrophysics, some planets appear enormous yet exert surprisingly weak gravity because their mass is spread thinly. Others are much smaller, but dense enough to exert stronger pull.
AI visibility works the same way. What matters isn’t how large your brand appears to humans, but how concentrated and reinforced your authority is in machine-readable form.
The problem with E-E-A-T was never the concept itself. It was the assumption that trustworthiness could be meaningfully demonstrated in isolation, primarily through signals a site applied to itself.
Over time, E-E-A-T became operationalized as visible, on-page indicators: author bios, credentials, About pages, and lightweight citations.
These signals were easy to implement and easy to audit, which made them attractive. They created the appearance of rigor, even when they did little to change how authority was actually conferred.
That compromise held when search systems were willing to infer authority from proxies. It breaks down in AI-driven retrieval, where authority must be explicitly reinforced, independently corroborated, and machine-verifiable to carry weight.
Surface-level trust markers don’t fail because models ignore them. They fail because they don’t supply the external reinforcement required to give an entity real mass.
In a semantic system, entities gain influence through repeated confirmation across the broader corpus. On-site signals can help make an entity legible, but they don’t generate density on their own. Compliance isn’t comprehension, and E-E-A-T as a checklist doesn’t create gravitational pull.
In human-centered search, these visible trust cues acted as reasonable stand-ins. In LLM retrieval, they don’t translate. Models aren’t evaluating presentation or intent. They’re evaluating semantic consistency, entity alignment, and whether claims can be cross-verified elsewhere.
Applying E-E-A-T principles only within your own site won’t create the mass that machines need to recognize, align with, and prioritize your entity in a retrieval system.
AI doesn’t trust, it calculates
Human trust is emotional. Machine trust is statistical. Where humans weigh reputation, models weigh signals:
They reward clean extraction. Lists, tables, and focused paragraphs are easiest to reuse.
They cross-verify facts. Redundant, consistent statements across multiple sources appear more reliable than a single sprawling narrative.
Retrieval models evaluate confidence, not charisma. Structural decisions such as headings, paragraph boundaries, markup, and lists directly affect how accurately a model can map content to a query.
This is why ChatGPT and AI Overview citations often come from unfamiliar brands.
It’s also why brand-specific queries behave differently. When a query explicitly names a brand or entity, the model isn’t navigating the galaxy broadly. It’s plotting a short, precise trajectory to a known body.
With intent tightly constrained and only one plausible source of truth, there’s far less risk of drifting toward adjacent entities.
In those cases, the system can rely directly on the entity’s own content because the destination is already fixed. The models aren’t “discovering” hidden experts. They’re rewarding content whose structure reduces uncertainty.
The semantic galaxy: How entities behave like bodies
LLMs don’t experience topics, entities, or websites. They model relationships between representations in a high-dimensional semantic space.
That’s why AI retrieval is better understood as plotting a course through a system of interacting gravitational bodies rather than “finding” an answer. Influence comes from mass, not intention.
Over time, citations, mentions, and third-party reinforcement increase an entity’s semantic mass. Each independent reference adds weight, making that entity increasingly difficult for the system to ignore.
Queries move through this space as vectors shaped by intent. As they pass near sufficiently massive entities, they bend. The strongest entities exert the greatest gravitational pull, not because they are trusted in a human sense, but because they are repeatedly reinforced across the broader corpus.
Extractability doesn’t create that gravity. It determines what happens after attraction occurs. An entity can be massive enough to warp trajectories and still be unusable if its signals aren’t machine-legible, like a planet with enough gravity to draw a spacecraft in but no viable way to land.
Authority, in this context, isn’t belief. It’s gravity, the cumulative pull created by repeated, independent reinforcement across the wider semantic system.
Entity strength vs. extractability
Classic SEO emphasized backlinks and brand reputation. AI search still rewards entity strength for discovery, but demands clarity and semantic extractability for inclusion.
Entity strength – your connections across the Knowledge Graph, Wikidata, and trusted domains – still matters and arguably matters more now. Unfortunately, no amount of entity strength helps if your content isn’t machine-parsable.
Consider two sites featuring recognized experts:
One uses clean headings, explicit definitions, and consistent links to verified profiles.
The other buries its expertise inside dense, unstructured paragraphs.
Only one will earn citations.
LLMs need:
One entity per paragraph or section.
Explicit, unambiguous mentions.
Repetition that reinforces relationships (“Dr. Jane Smith, cardiologist at XYZ Clinic”).
Precision makes authority extractable. Extractability determines whether existing gravitational pull can be acted on once attraction has occurred, not whether that pull exists in the first place.
Structure like you mean it: Abstract first, then detail
LLM retrieval is constrained by context windows and truncation limits, as outlined by Lewis et al. in their 2020 NeurIPS paper on retrieval-augmented generation. Models rarely process or reuse long-form content in its entirety.
If you want to be cited, you can’t bury the lede.
LLMs read the beginning, but then they skim. After a certain number of tokens, they truncate. Basically, if your core insight is buried in paragraph 12, it’s invisible.
To optimize for retrieval:
Open with a paragraph that functions as its own TL;DR.
State your stance, the core insight, and what follows.
Expand below the fold with depth and nuance.
Don’t save your best material for the finale. Neither users nor models will reach it.
Stop ‘linking out,’ start citing like a researcher
The difference between a citation and a link isn’t subtle, but it’s routinely misunderstood. Part of that confusion comes from how E-E-A-T was operationalized in practice.
In many traditional E-E-A-T playbooks, adding outbound links became a checkbox: a visible, easy-to-execute task that stood in for the harder work of substantiating claims. Over time, “cite sources” quietly degraded into “link out a few times.”
A bad citation looks like this:
A generic outbound link to a blog post or company homepage offered as vague “support,” often with language like “according to industry experts” or “SEO best practices say.”
The source may be tangentially related, self-promotional, or simply restating opinion, but it does nothing to reinforce your entity’s factual position in the broader semantic system.
A good citation behaves more like academic referencing. It points to:
Primary research.
Original reporting.
Standards bodies.
Widely recognized authorities in that domain.
It’s also tied directly to a specific claim in your content. The model can independently verify the statement, cross-reference it elsewhere, and reinforce the association.
The point was never to just “link out.” The point was to cite sources.
Engineering retrieval authority without falling back into a checklist
The patterns below aren’t tasks to complete or boxes to tick. They describe the recurring structural signals that, over time, allow an entity to accumulate mass and express gravity across systems.
This is where many SEOs slip back into old habits. Once you say “E-E-A-T isn’t a checklist,” the instinct is to immediately ask, “Okay, so what’s the checklist?”
But engineering retrieval authority isn’t a list of tasks. It’s a way of structuring your entire semantic footprint so your entity gains mass in the galaxy the models navigate.
Authority isn’t something you sprinkle into content. It’s something you construct systematically across everything tied to your entity.
Make authorship machine-legible: Use consistent naming. Link to canonical profiles. Add author and sameAs schema (a minimal sketch follows this list). Inconsistent bylines fragment your entity mass.
Strengthen your internal entity web: Use descriptive anchor text. Connect related topics the way a knowledge graph would. Strong internal linking increases gravitational coherence.
Write with semantic clarity: One idea per paragraph. Minimize rhetorical detours. LLMs reward explicitness, not flourish.
Use schema and llms.txt as amplifiers: They don’t create authority. They expose it.
Audit your “invisible” content: If critical information is hidden in pop-ups, accordions, or rendered outside the DOM, the model can’t see it. Invisible authority is no authority.
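To make the authorship and llms.txt items concrete, here are two minimal sketches. Every name, URL, and profile is a placeholder (reusing the “Dr. Jane Smith” example from earlier), and the second block follows the community llms.txt proposal (llmstxt.org) rather than any formal standard.

```html
<!-- Hypothetical author markup; all names, URLs, and profiles are placeholders. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Understanding arrhythmia treatment options",
  "author": {
    "@type": "Person",
    "name": "Jane Smith",
    "jobTitle": "Cardiologist",
    "affiliation": { "@type": "Organization", "name": "XYZ Clinic" },
    "sameAs": [
      "https://www.linkedin.com/in/janesmith",
      "https://scholar.google.com/citations?user=janesmith"
    ]
  }
}
</script>
```

```
# XYZ Clinic
> Hypothetical llms.txt: a plain-markdown map that points LLM crawlers at canonical pages.

## Key pages
- [Our cardiologists](https://example.com/team): credentials and canonical author profiles
- [Research](https://example.com/research): primary sources behind our claims
```

The point in both cases is the same: consistent, machine-readable linkage between the entity and its corroborating profiles, not these particular properties.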
E-E-A-T taught us to signal trust to humans. AI search demands more: understanding the forces that determine how information is pulled into view.
Rocket science gets something into orbit. Astrophysics is what lets you understand and navigate the system once you’re there.
Traditional SEO focused on launching pages—optimizing, publishing, promoting. AI SEO is about mass, gravity, and interaction: how often your entity is cited, corroborated, and reinforced across the broader semantic system, and how strongly that accumulated mass influences retrieval.
The brands that win won’t shine brightest or claim authority loudest, nor will they be no-name sites simulating credibility with artificial corroboration and junk links.
They’ll be entities that are dense, coherent, and repeatedly confirmed by independent sources—entities with enough gravity to bend queries toward them.
In an AI-driven search landscape, authority isn’t declared. It’s built, reinforced, and made impossible for machines to ignore.
AI search visibility in beauty is increasingly shaped before a prompt is ever entered.
Brands that appear in generative answers are often those already discussed, validated, and reinforced across social platforms. By the time a user turns to AI search, much of the groundwork has been laid.
Using the beauty category as a lens, this article examines how social discovery influences brand visibility – and why AI search ultimately reflects those signals.
Discovery didn’t move to AI – it fragmented
Brand discovery has fragmented across platforms. AI tools influence mid-funnel consideration, but much discovery happens before a user enters a prompt.
The signals that determine AI visibility are formed upstream. By the time a user reaches generative search, preferences and perceptions may already be set. If brands wait until AI search to influence demand, the window to shape consideration has narrowed.
That upstream influence is increasingly social. Roughly two-thirds of U.S. consumers now use social platforms as search engines, per eMarketer research.
This shift extends beyond Gen Z and reflects how people validate information and discover brands. These same platforms consistently appear among the top citation sources in AI results. The dynamic is especially visible in the beauty category.
In a study our agency conducted with a beauty brand partner, we found that Reddit, YouTube, and Facebook ranked among the top cited domains in both AI Overviews and ChatGPT.
While Reddit is often viewed as an anti-brand environment, YouTube appears nearly as frequently in citation data, making it a logical and underutilized target for citation optimization.
The volume reality: Social behavior still outpaces AI
It’s easy to focus on headline figures around AI usage, including the billions of prompts processed daily. But when measured against business outcomes such as traffic and transactions, the scale looks different.
Social platforms are already embedded in mainstream search behavior. For many users, search-like activity on platforms such as TikTok and YouTube is habitual. Nearly 40% of TikTok users search the platform multiple times per day, and 73% search at least once daily.
Referral data reinforces the contrast. ChatGPT referral traffic accounted for roughly 0.2% of total sessions in a 12-month analysis of 973 ecommerce sites, a University of Hamburg and Frankfurt School working paper found. In the same dataset, Google’s organic search traffic was approximately 200 times larger than organic LLM referrals.
AI search is growing and strategically important. But in terms of repeat behavior, measurable sessions, and downstream transactions, social platforms and traditional search continue to operate at a substantially larger scale.
The validation loop: Why AI needs social
The most critical contrarian point for 2026 is that optimizing for social is also optimizing for AI. Large language models are not primary sources of truth. They function as mirrors, reflecting the consensus formed through human conversations in the data they are trained on.
AI systems also demonstrate skepticism toward brand-owned properties. One study found that only 25% of sources cited in AI-generated answers were brand-managed websites.
At the same time, AI engines prioritize third-party validation. Up to 6.4% of citation links in AI responses originated from Reddit, an analysis by OtterlyAI found. This outpaces many traditional publishers.
There’s also a measurable relationship between sentiment and visibility. Research shows a moderate positive correlation between positive brand sentiment on social media and visibility in AI search results.
Video is a search surface, not just a brand channel
Treating video as a “brand channel” or a social-first effort rather than a search surface is a strategic failure.
On platforms such as TikTok and YouTube, ranking signals are shaped by spoken language, on-screen text, and captions – signals AI crawlers increasingly use to “triangulate trust.”
In the beauty category, for example, ChatGPT accounts for about 4.3% of searches, while Google processes roughly 14 billion searches per day. However, for “how-to” and technique-based queries, consumers favor the detailed, personalized guidance of social-first video content.
Science-backed brands such as Paula’s Choice and CeraVe dominate AI-generated results because they publish deep, structured educational content. Meanwhile, more traditional marketing-led brands are significantly less visible.
The phrase “dermatologist recommended” correlates with high visibility in AI results because large language models treat expert social proof as a primary ranking signal, according to the same report.
Breaking the high-production barrier: Creating content at scale
One of the biggest hurdles brands cite is budget. Many believe they need a Hollywood production crew to compete in video environments. That is a legacy mindset.
In today’s environment, high-gloss production can be a deterrent. The current landscape rewards authenticity over polish. Consumers are looking for real people with real skin concerns, not highly filtered commercials.
Optimizing for video discovery doesn’t require filmmaking expertise. Brands can leverage internal talent without adding headcount.
Partner with creator platforms: Platforms such as Billow or Social Native allow brands to work with creators for as little as $500 per video. When mapped to a high-intent query, that investment can drive measurable search visibility outcomes.
Leverage social natives on staff: Often, the strongest asset is internal. Identify team members who are active on platforms such as TikTok and understand platform dynamics. Creating internal incentives or challenges to produce content can generate a steady stream of authentic assets while contributing to culture.
Make strategy the differentiator: A large following is not a prerequisite for visibility. In one case, a TikTok profile built from scratch with one part-time creator at $2,500 per month generated hundreds of thousands of views within 90 days. The focus was not on viral trends, but on meaningful transactional terms that drive revenue.
If a new profile can reach more than 100,000 views per video within three months on a limited budget, the barrier isn’t equipment. It’s clarity on the business case and disciplined execution.
The data is clear. Brands can’t win the generative engine if they’re losing the social conversation.
AI models function as mirrors, reflecting web consensus. If real users on Reddit, YouTube, and TikTok aren’t discussing a brand, AI systems have little to surface.
If marketers wait until a user reaches a ChatGPT prompt to shape perception, the opportunity has already narrowed.
Discovery happens upstream. Validation occurs in the loop between social proof and algorithmic citation.
Translating this into action requires rethinking team structure and priorities:
Stop the silos: Your SEO and social teams shouldn’t speak different languages. Both must focus on search surfaces.
Prioritize the “why” before the “what”: Don’t just fix a technical tag. Build the business case for how social sentiment and expert validation drive market share.
Embrace scrappy execution: Whether through $500 creator partnerships or internal social-native talent, start building authentic assets now.
We’re witnessing a shift from algorithm-driven discovery to community-driven discovery.
It’s agile and multidisciplinary, and when executed well, it can meaningfully impact the bottom line.