Advertisers can now generate short videos directly inside Google Ads using Veo, Google’s most advanced generative video model — no video production required.
How it works. Upload up to three static images into Asset Studio and Veo generates videos up to 10 seconds long with natural motion, designed specifically for YouTube formats and audiences. These can then be turned into ready-to-serve ads using customisable templates.
What else it can do. Combined with Nano Banana, advertisers can adapt creatives further — swapping backgrounds, adjusting messaging, and tailoring content to specific audience interests.
The bigger picture. This follows Google’s earlier rollout of video templates and automatic video creation in Demand Gen campaigns, and represents the next step in Google’s push to make video creative accessible to advertisers of all sizes without dedicated production resources.
Why we care. Video consistently outperforms static creative on YouTube — but producing it has always required time, budget, and expertise. Veo removes most of that barrier, letting advertisers turn existing product images into polished video ads in minutes. For teams running image-heavy campaigns who have been unable to compete in video placements, this changes the equation significantly.
Early testing. Hop Skip Media founder Ameet Khabra shared early results of her testing on LinkedIn, including a video she created. Her review:
“Consumer product brands with clean imagery and inherent motion logic will get the most out of this.”
The bottom line. As Google continues building AI creative tools directly into the ads platform, the gap between advertisers with production budgets and those without narrows. For anyone who struggles to get video production budget approved and has assets with inherent motion logic, now could be the best time to test AI-generated video in Google Ads.
Google is testing AI-generated summaries in YouTube feeds, replacing video titles with auto-written synopses.
Some YouTube users are seeing video titles replaced by AI-generated summaries in the Android app. Reports on Reddit showed title-less video cards with collapsible summary boxes instead.
The details. Video thumbnails remain, but titles are missing in some cases.
AI summaries appear in expandable text boxes beneath each video.
Users must tap to expand summaries to understand the content.
The test appears limited to YouTube on Android.
What it looks like. Here’s a screenshot Reddit user GrimmOConnor shared:
Why we care. This further abstracts creator metadata and reduces control over how your YouTube content appears. Titles remain a critical ranking and click-through signal. Replacing them with AI summaries can impact keyword targeting, brand voice, and intent matching — and increase the risk of inaccuracies that hurt performance.
Not a first. Google previously confirmed a “small” and “narrow” experiment replacing original page titles with AI-generated versions in Search results.
According to Google, the goal is to better match queries and improve engagement.
But examples showed Google shortening or rewording headlines, changing tone and meaning.
Reaction. Early feedback suggests a worse browsing experience. Expanding summaries slows discovery and adds friction to content selection, which runs counter to YouTube’s engagement goals.
What’s next. There’s no official confirmation from YouTube on a broader rollout. The missing titles may be a bug, but the AI summary feature aligns with Google’s broader push into generative AI.
Looking to take the next step in your search marketing career?
Below, you will find the latest SEO, PPC, and digital marketing jobs at brands and agencies. We also include positions from previous weeks that are still open.
Job Description Salary: $21.27-$25.85 hourly The Marketing Specialist Acquisition will play a key role in overseeing and managing Syntrio's digital marketing efforts to drive awareness, engagement, and conversions. This role is responsible for overseeing and executing integrated campaigns across several online channels. Key Responsibilities: Implement, monitor, and improve PPC campaigns (Google, Bing, etc.) Plan and […]
Why USA Clinics Group? Founded by Harvard-trained physicians with a vision of offering patient-first care beyond the hospital settings, we’ve grown into the nation’s largest network of outpatient vein, fibroid, vascular, and prostate centers, with 170+ clinics across the country. Our mission is simple: deliver life-changing, minimally invasive care, close to home. We’re building a […]
Description Lightburn is hiring a Search Optimization Manager to drive the research, strategy, and execution of SEO and emerging search optimization needs. This role combines deep analytical research with hands-on optimization to improve visibility, discoverability, and performance for a wide range of clients. You’ll own SEO and GEO strategy for clients by conducting competitive analysis, […]
Job Description The intern supports strategic projects within the Organic Growth and Digital Strategy teams. These projects include tracking brand visibility across Large Language Models (LLMs), assisting with AI-driven search analysis, documenting organic discovery trends, and supporting brand authority research across search engines and social platforms. This opportunity provides a unique, “behind-the-scenes” view of how […]
Benefits: Competitive salary Health insurance Opportunity for advancement Paid time off Training & development Digital Marketing Specialist (SEO Focus) Company: Direct Clicks Inc. Job Type: Full-Time or Hourly Based on Experience Location: Remote Candidates must be located within driving distance of Roseville, Minnesota for occasional in-person team meetups. About Direct Clicks Inc. Direct Clicks Inc. […]
Company Description At Sectigo, we align around our mission and pride ourselves in helping thousands of customers sleep better at night. Sectigo is the most innovative provider of certificate lifecycle management (CLM), delivering comprehensive solutions that secure human and machine identities for the world’s largest brands. Sectigo’s automated, cloud-native CLM platform issues and manages digital […]
Job Description Kuhn Raslavich is seeking a Digital Marketing Manager to lead and execute the firm’s digital strategy across web, SEO, social media, content, and analytics. This is a hands-on role ideal for someone who can operate as a department of one, build processes from the ground up, and elevate a growing law firm’s digital […]
Omniscient Digital is an organic growth agency that partners with ambitious B2B SaaS companies like SAP, Adobe, Loom, and Hotjar to turn SEO and content into growth engines. We pride ourselves on being lean, agile, and experimental. Our team thrives on R&D and innovation, always exploring the smartest ways to deliver exceptional results. We believe […]
SEO Operations Associate (AI Search) We’re hiring a hungry, detail-obsessed operator to help execute and manage SEO + AI search campaigns for fast-growing companies. This is not a traditional SEO role. You’ll be working directly on cutting-edge AI search (ChatGPT, Perplexity, Gemini) while helping drive execution across multiple client projects. If you’re organized, sharp, and […]
We are seeking an experienced SEO Strategist to assist our team on a full-time basis. This is an exciting opportunity for a data-driven individual with a creative mindset who thrives on delivering exceptional results. If you are a problem-solver with a passion for SEO and a desire to make a meaningful impact, we want to […]
Responsibilities Executing lead generating paid media strategies and campaigns across digital ad platforms such as LinkedIn Ads, Google Ads, Meta Ads, Reddit Ads, and other relevant advertising channels Trafficking advertising campaigns into digital platforms, including ad creative, keywords, URL UTMs, audience building, and more. Ensuring that advertising messages align with client or company objectives Measuring […]
JustFab is currently looking for a Paid Media Specialist. This position will report to the Sr. Manager, Media. Key Responsibilities Media Management & Optimization Build, launch, and manage media campaigns across major platforms including, but not limited to, Google, Meta, TikTok, Snapchat, Pinterest, and other emerging channels, with the ability to quickly learn and adopt […]
Benefits 401(k) Bonus based on performance Competitive salary Dental insurance Free food & snacks Health insurance Opportunity for advancement Paid time off Training & development Vision insurance Are you passionate about supporting business owners and helping them grow? Do you enjoy being the go-to partner who helps clients show up at the right moment and […]
About WPP Media WPP is the creative transformation company. We use the power of creativity to build better futures for our people, planet, clients and communities. For more information, visit wpp.com. Role At WPP Media, we believe in the power of our culture and our people. It’s what elevates us to deliver exceptional experiences for […]
We are seeking an experienced and results-driven Paid Search Manager to oversee the strategy, execution, and optimization of paid search marketing campaigns. The ideal candidate will have a strong background in digital advertising, data analysis, and campaign optimization, with the ability to manage large-scale paid search initiatives that drive measurable growth and return on investment. […]
Organic Growth: Build and execute the SEO roadmap across technical, content, and off-page. Own the numbers: traffic, rankings, conversions. No handoffs, no excuses.
AI-Optimized Search (AIO): Define and drive CARE.com’s strategy for visibility in AI-generated results — Google AI Overviews, ChatGPT, Perplexity, and whatever comes next. Optimize entity coverage, content structure, and schema to ensure we’re the answer, not just a result.
We’ve all seen the charts going viral on LinkedIn. They’re everywhere at this point. Multiple industry studies, even this research from Semrush, confirm that Wikipedia and Reddit are the top-cited domains across major LLM platforms — and CMOs are running with this data.
The response is predictable: Just search for any bottom-of-funnel (BOFU) software query, and you’ll find Reddit threads in the top-ranking positions. This is exactly why the market is currently flooded with “Reddit SEO” agencies:
Just stop.
Taking this macro context — or a few isolated, high-ranking SERPs — and pivoting your entire GEO strategy toward Reddit or Wikipedia is a massive strategic error for the majority of B2B brands.
Why CMOs are misguided by the Reddit hype
The algorithmic tide is running toward massive community forums and open-source encyclopedias. That shift is real — but how it’s being interpreted isn’t.
The charts driving this executive FOMO are mathematically accurate, but they’re strategically misguided. Applying them as a universal GEO playbook ignores why that aggregate data exists and why certain pages rank for high-intent queries.
Reddit is the primary target because it’s perceived as easier to influence. While the industry respects Wikipedia’s ironclad editorial guardrails, Reddit is often viewed as an open loophole.
This is a classic case of marketing whiplash, where teams abandon foundational principles to chase the shiny new object.
To understand why Reddit and Wikipedia are high-effort, low-upside channels for the vast majority of brands, you have to look at the context executives ignore.
“Wikipedia, Reddit, and YouTube are heavily cited by LLMs because they are massive websites with a topical footprint that spans into a million different areas.”
By default, they’ll always get the most aggregate LLM citations.
High-ranking Reddit threads on BOFU queries can’t be reproduced
When you see a Reddit thread driving CTR for a specific BOFU software query, it’s tempting to view it as an SEO loophole that can be easily reverse-engineered. This is incorrect.
In reality, this is a scenario where the “voice of the customer” largely dictates who gets recommended.
This isn’t an SEO hack or a growth trick. It’s the culmination of years of actual human peer reviews and real discussion on a topic that has reached a definitive consensus. Your marketing team can’t microwave this historical, multi-year, authentic brand sentiment.
Claiming you need a Reddit or Wikipedia strategy because they are the most-cited domains overall is like claiming spaghetti carbonara is the most-eaten dish in Italy. Yes, it’s ubiquitous and popular, but just because it’s everywhere doesn’t mean you should put it on the menu at a high-end steakhouse.
The illusion of ‘hacking’ Reddit and Wikipedia for AI visibility
Even if you ignore the macro context and decide to aggressively pursue a Reddit or Wikipedia SEO strategy, you’ll quickly realize how LLMs actually process data.
Hacking them for AI citations is an illusion built on a fundamental misunderstanding of what LLMs are looking for. When you look at the mechanics of AI citations, two massive roadblocks emerge.
Historical consensus can’t be microwaved
Thirsty SEO agencies will frequently pitch Reddit marketing services, promising to generate hundreds of upvotes and comments to trigger LLM visibility. But the data shows LLMs don’t care about manufactured virality.
Up to 80% of Reddit threads cited by AI have fewer than 20 upvotes, according to Semrush. More importantly, the average age of a cited post is roughly 900 days. LLMs are surfacing historical, established consensus, not yesterday’s growth hack.
Wikipedia editors will just delete you
The exact same brutal reality applies to Wikipedia. A Princeton University study analyzing AI-generated Wikipedia content revealed exactly what happens when marketers try to “hack” the encyclopedia with generative tools.
Researchers found that when users utilized AI to create self-promotional pages for businesses, the articles were measurably lower in quality, lacking proper footnotes and internal links.
The result?
Human moderators quickly identified the low-effort content, deleted the pages for “unambiguous advertising,” and actively banned users.
Paraphrasing destroys narrative control
Even if you successfully infiltrate a subreddit or a Wikipedia page without getting banned, you lose control over your product positioning. Benji Hyam notes that Reddit mentions are typically too short and lack the depth necessary for an LLM to associate your product with a specific problem and solution.
The Semrush data also proves this: AI tools don’t quote Reddit word-for-word. They blend and paraphrase discussions (showing a semantic similarity score of just 0.53).
Your carefully crafted value proposition will be mashed up with random, anonymous user comments, or stripped down to dry, encyclopedic neutrality, diluting your brand narrative entirely.
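To make that similarity figure concrete, here is a minimal sketch of how a semantic similarity score between a source comment and an AI paraphrase can be computed: cosine similarity over simple bag-of-words vectors. The two example sentences are hypothetical, and production systems (including the Semrush analysis) use dense embeddings rather than raw word counts.

```python
import math
from collections import Counter

def cosine_similarity(a: str, b: str) -> float:
    """Cosine similarity between two texts using bag-of-words counts."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

# Hypothetical example: an original comment vs. an AI paraphrase of it.
reddit_comment = "this tool saved our team hours every week highly recommend it"
ai_paraphrase = "users report the software saves teams significant time each week"

score = cosine_similarity(reddit_comment, ai_paraphrase)
print(f"similarity: {score:.2f}")  # low: the paraphrase keeps almost no exact wording
```

A score near 1.0 would mean near-verbatim quoting; a score like 0.53 means the model is blending and rewording, which is exactly why crafted messaging does not survive intact.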
Posting on Reddit isn’t an SEO strategy — it’s shouting through a bus window, hoping to join the conversation. At best, it’s a short-term tactic. At worst, it actively damages your brand.
The lack of ROI is only half the problem when it comes to building a Reddit or Wikipedia presence. The much larger issue is the active harm it can inflict on your brand’s image.
Brands that treat these platforms as loopholes for AI citations fundamentally misunderstand their architecture.
As Eli Schwartz points out, trying to replicate decades of genuine human conversation with templated brand messaging isn’t just ineffective — it’s a massive reputational hazard.
Reddit communities are aggressively moderated
Subreddits and wiki pages are policed by passionate human moderators and veteran Wikipedia editors. They’ve seen every variation of corporate infiltration.
A new account dropping a link, manufacturing enthusiasm, or violating Wikipedia’s strict conflict of interest (COI) guidelines is flagged, reverted, and banned almost immediately. Sometimes, this is accompanied by a public callout (featured on subreddits like r/hailcorporate), causing more brand damage than the campaign was ever worth.
LLMs ingest deleted spam and banned accounts
This is the most critical and misunderstood risk. Reddit sells its data directly to companies like Google and OpenAI. Wikipedia’s entire edit history is completely open source.
LLMs aren’t just scraping the public-facing websites. They’re receiving the entire firehose of data (including deleted posts, reverted wiki edits, and banned accounts). When your agency’s fake comments or promotional product descriptions get removed by moderators, those AI models still see the manipulation.
Astroturfing creates a permanent negative trust signal
Because the AI models have full visibility into the moderation pipeline, links or mentions flagged as inauthentic carry negative weight. By attempting to game the system, you’re essentially training the AI to associate the brand with spam and coordinated manipulation.
Once you accept that hacking Reddit or Wikipedia is both ineffective and dangerous, you have to look at where LLMs are actually pulling their answers from when a buyer is ready to make a purchase. When you filter for high-intent, BOFU prompts, the “Reddit/Wikipedia is everywhere” narrative falls apart.
Using AI visibility platforms like Scrunch AI exposes Reddit’s and Wikipedia’s true influence on specific target categories. For one B2B client, tracking 300+ custom prompts generated thousands of LLM responses, but just two specific Reddit threads were responsible for the vast majority of citations.
The Wikipedia data was even more revealing.
For high-intent software queries, the encyclopedia barely registered. When AI tools cited Wikipedia, they were almost exclusively scraping broad, top-of-funnel category definitions, or pulling background facts from a specific company’s history page.
Data from Grow and Convert shows the same thing. For trucking software queries, LLMs consistently cited domains like PCS Software and TruckingOffice.
For project management queries, the AI cited specialized software review sites and niche blogs.
If you’re chasing platforms simply because they cover massive topical geography, you’re making a painful error. You don’t need to be visible everywhere. You only need to be visible in the specific digital neighborhood that influences your flagship category.
How to actually earn AI recommendations: Owned content and niche citations
Winning in AI search requires optimizing for targeted influence rather than aggregate metrics. The most effective GEO strategy abandons massive topical geography and focuses entirely on the pillars you can actually control.
Publish deep, human-written owned content
Your website remains your most powerful asset. To be recommended, you must provide the specific, granular depth the AI needs to understand your value. Your key product and solution pages need to explicitly cover:
Who the product is for.
How it’s used.
The specific pain points it solves.
Its core benefits.
This depth is exactly what gives you a chance at showing up for the highly specific, long-tail queries a customer types into an AI when evaluating products.
Execute targeted citation outreach
Use AI visibility tools to identify the specific, niche domains that currently influence your flagship categories. Once you know which industry blogs, review sites, and peer publications the LLMs are actually citing for your BOFU queries, execute targeted outreach to earn your place on those exact lists.
If you want a Reddit or Wikipedia strategy, respect their ecosystems
Reddit and Wikipedia carry real authority, and earning trust there is valuable independent of AI visibility. If you choose to invest in them, it must be a long-term play, not a marketing hack.
Engage authentically on Reddit: Answer questions, provide unique insights, and participate in discussions where your buyers actually hang out. Build street cred before recommending your own tools.
Build a branded subreddit for transparency: Create an official space for your team to share expertise, host AMAs, and answer product questions openly.
Monitor conversations for product insights: Use the platform to spot emerging pain points and shifts in sentiment before they hit traditional search engines.
Leave Wikipedia to the experts: If your brand genuinely deserves a Wikipedia page, it will be created by independent editors using reliable secondary sources. Don’t try to write your own product entry.
The path to AI visibility runs through your own domain and the highly specific digital neighborhoods your buyers trust. AI engines reflect the authority you already have. If you want the algorithm to recommend your brand, then you have to do the work to actually be recommendable.
Just six weeks after launching its ad pilot, OpenAI has hit a significant milestone — and the platform is still in its early stages of rollout.
The numbers.
Over $100 million in annualized ad revenue, generated from less than 20% of eligible US Free and Go tier users seeing ads daily
Around 85% of Free and Go users are eligible to see ads — meaning the current revenue represents a fraction of the platform’s eventual ad capacity
More than 600 advertisers are now on the platform
What’s coming next.
Self-serve advertiser access is on track to launch in April
Geographic expansion into Canada, Australia, and New Zealand is being explored
OpenAI has hired former Meta ad executive Dave Dugan to lead ad sales
Why we care. ChatGPT’s ad business has scaled to $100 million in annualized revenue in just six weeks — and that’s from less than 20% of eligible users seeing ads today, meaning the inventory is about to get significantly larger.
Self-serve access launching in April is the moment this becomes accessible to the broader advertiser market, not just the 600+ brands currently in the managed pilot. Getting in early, before competition drives up costs, is the same playbook that rewarded early movers in search and social advertising.
The quality picture. OpenAI says fewer than 7% of ads are rated by users as “low relevance” — a metric the company says it is actively focused on improving alongside user trust.
The bigger context. Ads are a key part of OpenAI’s path to profitability ahead of an anticipated IPO. Executives have told investors the company expects to generate more than $17 billion from ChatGPT consumers in 2026 — with advertising representing a meaningful slice of revenue from its free user base.
The bottom line. $100 million in annualized revenue from less than 20% of eligible users in six weeks is a strong early signal. When self-serve access opens in April and the eligible audience expands, the numbers could scale quickly — and advertisers who have been waiting on the sidelines may soon find the platform harder to ignore.
OpenAI has been serving ads to free-tier ChatGPT users in the US for over a month now, and early testing suggests they’re more frequent and more targeted than many users might expect.
How often they appear. In a test of 500 questions across the mobile app, roughly one in five questions in a new conversation thread triggered an ad at the bottom of ChatGPT’s response — always as a website link button, always tailored to the topic of the question.
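Those frequencies come from a relatively small sample, so they carry some uncertainty. A quick sketch of the margin of error on a roughly 20% hit rate across 500 questions (the exact ad count below is an assumption derived from the "one in five" figure, not a reported number):

```python
import math

questions = 500   # questions asked in the test
ads_shown = 100   # assumed: "roughly one in five" triggered an ad

p = ads_shown / questions                # observed ad frequency: 0.20
se = math.sqrt(p * (1 - p) / questions)  # standard error of the proportion
margin = 1.96 * se                       # 95% confidence margin (normal approximation)

print(f"ad frequency: {p:.0%} ± {margin:.1%}")  # → ad frequency: 20% ± 3.5%
```

In other words, the true ad rate on the free tier plausibly sits anywhere between roughly 16% and 24% based on this one test.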
What kind of ads appeared. The range was broad — dog food, hotel bookings, productivity software, cruise vacations, streaming services, corporate credit cards, AI coding tools, and basketball tickets, among others. Travel questions triggered ads most frequently; asking for help planning a trip to Palm Springs surfaced a Booking.com ad that automatically searched for hotels in that location.
The “poaching” dynamic. When a question mentioned a brand by name — DoorDash or Netflix, for example — the ad that appeared was sometimes for a direct competitor. Marketing professors describe this as a longtime staple of digital advertising now migrating to AI.
Why we care. ChatGPT ads are appearing roughly once every five questions on the free tier, with targeting based on conversation topic and memory — making it an emerging channel advertisers should monitor, particularly given the “poaching” dynamic that allows brands to appear against competitor mentions, a tactic already proven in search advertising.
What OpenAI says.
Ads do not influence ChatGPT’s answers
Full conversation content is not shared with advertisers
Ad targeting is based on question topic, past chats, and whatever ChatGPT has stored in memory about the user
Early signals show low ad dismissal rates and no impact on consumer trust metrics
The irony. OpenAI CEO Sam Altman called ads “a last resort” in 2024, saying the mix of “ads plus AI is sort of uniquely unsettling.” The company is now expanding the rollout to Canada, Australia, and New Zealand after its US pilot.
The big picture. Neither Google’s Gemini nor Anthropic’s Claude currently features sponsored ad buttons in outputs — though Google has said it’s not ruling it out. OpenAI is essentially pioneering a new ad format, and how it handles the balance between monetisation and user trust will shape whether AI advertising becomes a lasting industry or a cautionary tale.
Spotted. Digital marketer Glenn Gabe shared on X how the ads are showing on mobile and confirmed they aren’t showing on Plus accounts.
The bottom line. For advertisers, ChatGPT’s ad inventory is becoming real, even though there is still a long way to go to prove ROI. However, the platform’s credibility depends entirely on whether users feel the ads are eroding the experience. That’s a tension worth watching closely as the rollout scales.
You know SEO improves traffic, authority, and trust. What we don’t talk about enough is how a strong SEO foundation can help other channels, including PPC.
This practical case study will show you how performance marketing scales in a high-consideration B2B medical device market and how getting SEO fundamentals firmly in place enables paid media to deliver at scale.
B2B medical device marketing breaks most performance playbooks
Marketing a premium pelvic floor chair has little in common with selling SaaS tools or consumer products. This is a high-ticket medical device with a long sales cycle and a strong reliance on medical expertise.
Buyers include doctors, fitness centers, physiotherapists, urologists, and gynecologists. They ask detailed questions and have high expectations around clinical evidence, credibility, and long-term value.
In markets like this, many common performance tactics fail quickly. Increasing bids, expanding keyword coverage, or testing endless landing page variants doesn’t compensate for a lack of credibility and topical authority.
If potential buyers don’t trust the provider behind the product, no amount of optimization will create sustainable results. That was exactly the situation we faced at the outset of this project.
Starting point: Paid media with limited topical authority
At the end of 2023, we launched our first Google Ads lead generation campaigns. At that stage:
The website wasn’t optimized to modern SEO standards and lacked topical authority.
Conversion tracking was limited, which meant conversions were often attributed to the direct channel.
There were no clearly defined Google Tag Manager (GTM) events in place.
Conversions imported from GA4 meant delayed signals and limited usefulness for bidding algorithms.
Sales were happening, but primarily driven by word of mouth rather than by clearly attributable digital touchpoints.
Still, those early campaigns revealed something valuable. We began seeing our first sales coming through paid search. That wasn’t enough to scale, but it confirmed that search demand existed and could convert once the surrounding system improved.
The turning point: Treating SEO as revenue infrastructure
In mid-2024, we made a deliberate shift, treating SEO as revenue infrastructure instead of a secondary task or a nice-to-have initiative. Rather than focusing on quick ranking wins, the goal became building topical authority in pelvic health and creating the trust layer that paid media depends on, especially in medical markets.
The emphasis was intentionally top-of-funnel. The keyword and content strategy focused on education. We:
Mapped the full informational landscape around pelvic health, pelvic floor therapy, and non-invasive treatment options using Semrush.
Focused on the questions patients actually ask, explanations of treatment mechanisms, and comparisons between different therapeutic approaches. The content avoided aggressive selling and instead aimed to educate clearly and responsibly.
Invested in long-form articles with structured sections, FAQ elements, and embedded videos featuring our physiotherapists. The objective was to build a credible resource that users and search engines would trust over time.
Semrush Position Tracking development
Authority building through medical partnerships
Our most impactful SEO lever was authority and link building. The brand already worked closely with clinics and medical professionals who used the pelvic floor chair in their practices. Instead of relying on traditional outreach or guest posting, we developed a partner-driven backlink strategy.
We provided clinics using our chair with free, ready-to-use content ranging from performance campaign visuals (B2C lead generation) to educational materials and clinical studies on the effectiveness of the technology. In return, the clinics linked to our website from their dedicated product pages, using the content we supplied and creating natural, relevant references.
These weren’t generic backlinks. They came from trusted medical domains, embedded in highly relevant content, and aligned closely with how Google evaluates expertise and trust in healthcare contexts.
Over time, this resulted in steady growth in referring domains and a significant increase in topical authority. As the network graph from our Semrush backlink analysis shows, our strongest referring pages have high authority scores.
Semrush Network Graph for backlink analysis
They originate primarily from fitness studios, physiotherapy practices, and medical clinics. These relationships position the brand at the center of a tightly connected backlink network, reinforcing topical relevance and trust within the healthcare and wellness ecosystem.
By late 2024, the impact was clearly visible. The website ranked number one for the most important generic keywords, such as “Beckenbodenstuhl” (German for pelvic floor chair), and gained visibility across a broad range of related queries.
More importantly, organic visibility began shaping brand perception. Prospects encountered the brand repeatedly during their research phase, often through AI Overviews, long before ever clicking on an ad. SEO has effectively become a trust engine.
How organic dominance reshaped Google Ads performance
This is where paid media behavior started to change. A strong organic presence does more than drive unpaid traffic. It changes how users respond to ads.
When people already recognize a brand from their organic research, paid listings feel familiar and credible rather than intrusive. This is especially true with AI Overviews, where we held top rankings for the most important generic keywords related to our product.
This effect became especially clear in competitor campaigns. Users searching for alternative pelvic floor solutions clicked on our ads because we were a brand they had already encountered organically, and click-through rates reflected that trust.
With authority finally established, we restructured our Google Ads campaigns by:
Focusing on exact match keywords for core intent.
Clearly separating brand, generic, and competitor campaigns.
Bidding aggressively on competitor terms.
This was only possible because landing pages aligned perfectly with user intent and the brand already carried organic credibility.
In several competitor campaigns (competitor names are blurred out), click-through rates reached an astonishing average of 48.29%. In isolation, those numbers might seem unrealistic. In context, they were the natural result of strong organic preconditioning, with AI Overviews playing a major role, and the brand recognition we built through consistent visibility.
Fixing the signal: GTM events and CRM feedback loops
Another major improvement came from conversion tracking. We moved away from GA4-imported conversions and implemented GTM-native events designed specifically around meaningful lead actions.
This provided Google Ads with faster, cleaner signals and significantly improved Smart Bidding performance. In high-ticket B2B and medical markets, signal quality matters far more than volume, and optimizing toward the wrong conversion can do more harm than not optimizing at all.
The final step was closing the loop between marketing and sales. By integrating HubSpot CRM, we tracked lead quality beyond form submissions and identified which leads actually converted into revenue.
That information was fed back into Google Ads, allowing the algorithm to optimize toward outcomes that truly mattered, not just surface-level conversions. In long sales cycles, this feedback loop is essential.
With approximately $12,000 in total ad spend in 2025, the combined SEO and PPC system delivered strong year-over-year growth.
Unit sales driven by SEO and PPC increased by 140% from 2023 to 2024, followed by a further 79% increase in 2025.
Across the full two-year period, this represented more than a fourfold increase in sales volume, with digital marketing playing a central role in influencing revenue throughout a long and complex buying cycle.
This translated into a return on ad spend of approximately 133x, underscoring the compounding effect of aligning organic visibility, paid media, and clean conversion data.
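The growth and ROAS figures above can be sanity-checked directly. A minimal sketch; the revenue figure below is inferred from the stated spend and ROAS, not reported separately in the case study:

```python
# Sanity-check the compounded growth and implied revenue from the stated figures.

spend = 12_000   # reported 2025 ad spend (USD, approximate)
roas = 133       # reported return on ad spend (approximate)

# Unit sales grew 140% (2023 -> 2024), then a further 79% (2024 -> 2025).
growth_factor = (1 + 1.40) * (1 + 0.79)
print(f"Two-year growth factor: {growth_factor:.2f}x")  # ~4.30x, i.e. "more than fourfold"

# A ~133x ROAS on ~$12,000 of spend implies attributed revenue of roughly:
implied_revenue = spend * roas
print(f"Implied attributed revenue: ~${implied_revenue:,.0f}")  # ~$1,596,000
```

This confirms the "more than fourfold" claim: a 140% increase followed by a 79% increase compounds to roughly 4.3x.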
What actually made PPC scale: Trust, signals, and SEO
The biggest takeaway for us was that SEO shouldn’t be viewed solely as a traffic channel. It has a direct impact on the quality of signals fed into paid media platforms, and those signals ultimately determine how well algorithms can optimize.
Paid search only begins to scale once a foundation of trust is in place. In categories where consideration cycles are long and credibility matters, ads perform much better when users already recognize the brand and perceive it as an authority in the field.
Sustainable, high ROAS is rarely the result of a single optimization. It’s built through interconnected systems that align SEO, paid media, tracking, and CRM feedback regarding lead quality.
Performance marketing itself doesn’t fail in complex markets. What fails is the assumption that complexity can be solved with simple tactics.
If your entire Google Ads strategy consists of targeting brand and non-brand keywords, you’re limiting growth. If performance is declining, it’s not the platform — it’s the strategy.
People don’t discover you through non-brand search. They research on Reddit, ChatGPT, Facebook, LinkedIn, and YouTube. They watch demos, read testimonials, and learn about your brand long before they ever search for it.
If you have a complex sales process and a long customer journey, this shift is critical and requires a different approach. Here’s what you need to know to make this work in B2B.
AI-forward campaigns: A cost-effective growth gold mine
Google has been developing multi-channel, multi-asset campaigns for years — first with Performance Max, then with Demand Gen. These campaigns reach your audience across the web as they research and evaluate.
Your brand is front and center while your audience builds their shortlist. By the time they’re ready to pick vendors to take the next step, you’ve already built trust. Then they’ll find you by searching for your brand.
A Performance Max campaign with a variety of ad types, like image and video ads, can showcase demos or customer testimonials on YouTube. They can appear across the web via the Display Network. They can follow (retarget) your target audience as they research. That’s what drives the branded search that converts later.
These campaigns let you do all of this cost-effectively. In a Performance Max campaign, you can use keywords alongside your own customer data as signals. You’re not abandoning keywords. You’re using them smarter.
The search experience is changing. Your strategy should, too.
Google’s search results pages have evolved with AI Overviews and AI Mode. If this experience is changing so dramatically, isn’t it time to rethink your ad strategy as well?
I’m a fan of the 4S framework: search, scroll, stream, and shop.
I’d add “ask” to reflect how people now engage across AI tools. They ask ChatGPT or Gemini, search Google, scroll LinkedIn, stream YouTube videos, and shop across platforms. If your strategy only covers one or two of those behaviors, you’re missing how growth actually happens.
Focusing only on keyword targeting means you’re missing the bigger picture. Yes, brand keywords will convert better than non-brand keywords. But how do people even know to search for your brand in the first place? (The answer is that you’ve been showing up in their feed the whole time.)
This approach takes time, especially for B2B companies with long sales cycles.
It took us nearly a year to realize the value Performance Max was driving for a life science client. Most of their deals take months to close. Our account manager was about to pause the campaign at one point because the ad platform data didn’t look good.
But as we began piping in sales data, things started clicking. Once we got over the sales cycle hump and started seeing revenue data, Performance Max proved its value.
If you can sync data beyond MQLs, such as a Proposal Sent stage, you give Google more data and signals to optimize on, and yourself more peace of mind until full sales data is available.
Be patient, feed the system better data, and don’t give up too early. B2B sales cycles are complex.
You might have 100 people at an event that you promoted through a LinkedIn ad strategy. Some of those people caught an email promoting a webinar. Months later, they searched for you on Google and asked for a proposal. Still months later, they became a customer.
Even with the best-recorded data, you won’t see this happening right away in a long sales cycle.
If you don’t have a test-and-learn budget, reallocate 5% to 10% to introduce AI-forward campaign types. Test strategically. Don’t go all in, and don’t launch major tests during a busier time of year. Give yourself breathing room while the system learns.
This approach takes time. But it drives sustainable growth if you commit to the process. The advertisers who figure this out are building sustainable growth, while others are still stuck optimizing for a shrinking slice of demand.
A new version of the Google Ads API is out, bringing a handful of targeted updates across video, app campaigns, and audience planning tools.
Key changes in this release.
A new VideoEnhancement resource that surfaces whether a video ad is Google-generated or advertiser-provided — giving developers clearer visibility into auto-enhanced creative
A new AppTopCombinationView resource providing read-only insights into top-performing asset combinations in App campaigns
The ability to disable the hotel feed in Demand Gen campaigns via HotelSettingInfo.disable_hotel_setting
A new conversion metric for indirect first in-app installs across Campaign, Customer, and AdGroup resources
Several enhancements to ContentCreatorInsightsService and ReachPlanService
What to do. Upgrading to v23.2 requires updating both client libraries and client code — all updated libraries and code examples are already published.
Catch the walkthrough. Google is hosting a live release walkthrough on March 26 at 11am ET on Discord and YouTube Live, with a recorded version to follow for those who can’t attend.
Why we care. The VideoEnhancement resource gives developers the ability to programmatically identify whether a video ad is Google-generated or advertiser-provided, which has been a notable blind spot in Performance Max reporting. For agencies and teams building custom reporting tools, this is a meaningful step toward greater creative visibility.
The bottom line. A routine but useful release — the VideoEnhancement resource in particular is worth attention for any developer building tools around Performance Max creative reporting.
Refreshing creatives for every seasonal moment just got significantly faster — Google has quietly launched Asset Group Theming inside Performance Max, letting advertisers apply seasonal themes to existing asset groups without rebuilding from scratch.
How it works. Advertisers can clone a high-performing asset group and apply a theme — Google then generates themed image variations and suggests aligned headlines and descriptions, while leaving the original asset group completely untouched for safe testing.
Available themes.
Promotional: Sale, Studio/Editorial
Seasons: Winter, Spring, Summer, Fall
Cultural moments: Christmas, Black Friday/Cyber Monday, Halloween, Valentine’s Day, Easter, Mother’s Day, Father’s Day, Hanukkah, New Year, Lunar New Year, and Back to School
Where to find it. Look for the prompt inside Asset Groups ahead of major holidays, or via “Apply theme to existing asset group” when creating a new one.
Important caveat. This is a starting point, not a finished product. The tool uses existing images as a base and adds themed backgrounds — it does not replace videos, and typically only updates a handful of headlines to match the theme. Everything still needs to be reviewed and sense-checked before going live.
Why we care. Seasonal creative refresh has always been one of the more time-consuming parts of campaign management — requiring design resources, rebuilding asset groups, and risking performance drops on proven setups. This feature removes most of that friction, letting teams adapt their best performers to key moments in minutes rather than days.
The bottom line. Think of it as a creative assistant, not a replacement for a designer — but for advertisers managing multiple seasonal peaks across the year, the time savings alone make it worth exploring.
First spotted. Google Ads specialist Bia Camargo flagged this update in a screenshot shared on LinkedIn.
The local SEO community remains locked in a permanent debate over the “hide address” toggle for service area businesses (SABs). Most owners view this switch as a simple privacy setting. In reality, it’s a high-stakes decision that dictates how Google’s algorithm interprets your physical relevance.
Does your defined service area influence where you rank?
Does hiding your street address suppress your visibility in the local pack?
Most importantly, does Google purge that data from its system, or does your map pin simply become an invisible anchor?
These are fundamental questions about how proximity functions when you choose to go off the grid.
How Google actually places your map pin
To be clear, the address and the map pin aren’t the same thing. When you enter an address into your Google Business Profile, Google doesn’t simply drop a pin. It runs the address through its geocoding engine to resolve the text string against its internal database.
To understand why a map pin ends up in a highway median or a city center, you must examine how Google's internal data models resolve an address.
Google is looking for a match it can trust. When it finds a high-confidence match, it places the pin specifically at the rooftop of your building.
Once you understand how these internal models work together, you can see why Google appears to rank SABs differently in the local map pack.
Is your map pin placement a bug or the default?
Make no mistake: this isn’t a bug. It’s a fundamental breakdown in how Google translates a text string into a physical coordinate.
When this translation fails, your business ends up with a misplaced map pin, which directly misplaces your local proximity authority.
When Google can’t find a high-confidence match at the building level, it doesn’t just leave your pin floating. Instead, it falls back to the most reliable geographic feature it can confidently resolve. In most cases, that fallback is the city centroid (the geographic center of the municipality tied to your address).
Google’s own Geocoding API documentation outlines this fallback logic, explaining why pins for businesses with perfectly visible, verified addresses sometimes end up dumped in the middle of a city.
Simply put, if your address isn’t recognized by Google’s internal systems, the geocoding process lacks the confidence to place the pin precisely.
If Google can’t reconcile your GeostoreAddressProto with a high level of certainty, it may not anchor your GeostorePointProto to your building’s rooftop.
Geocoding loses confidence when a business shares a generic building footprint, lacks a distinct suite number, or is placed in a newly developed zone that Google’s Street View API hasn’t yet mapped.
A building that’s newly constructed or recently added to a commercial complex may not yet exist in Google’s geographic database with enough detail for a rooftop-level match. The street and city exist, but the specific parcel hasn’t accumulated enough mapping data for Google to confidently place a pin.
To understand why, it helps to know how Google’s geocoding data actually gets populated. Google’s own developer documentation states that data collection is a periodic process, and new construction data can take time to be reflected in Google Maps.
The address hierarchy Google geocodes against is built from a combination of sources, including satellite imagery updates, municipal records, and USPS address data, none of which updates in real time.
When the API resolves an address, it returns one of four location types: ROOFTOP, RANGE_INTERPOLATED, GEOMETRIC_CENTER, or APPROXIMATE.
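The location_type field is the signal worth checking programmatically. A minimal sketch of how you might inspect a Geocoding API response for rooftop-level confidence; the response structure follows Google's documented JSON format, but the sample response here is illustrative, not real API output:

```python
# Classify geocoding confidence from a Geocoding API JSON response.
# The four documented location_type values, from most to least precise:
PRECISION_ORDER = ["ROOFTOP", "RANGE_INTERPOLATED", "GEOMETRIC_CENTER", "APPROXIMATE"]

def geocode_confidence(response: dict) -> str:
    """Return the location_type of the top geocoding result, or 'NO_MATCH'."""
    results = response.get("results", [])
    if not results:
        return "NO_MATCH"
    return results[0]["geometry"]["location_type"]

def is_rooftop(response: dict) -> bool:
    """True only when Google resolved the address to a building-level match."""
    return geocode_confidence(response) == "ROOFTOP"

# Illustrative response shaped like the documented API output (not real data):
sample = {
    "results": [{
        "formatted_address": "1234 Main St, Anytown, MI",
        "geometry": {
            "location": {"lat": 42.5, "lng": -83.3},
            "location_type": "APPROXIMATE",  # the centroid-fallback symptom
        },
    }],
    "status": "OK",
}
print(geocode_confidence(sample))  # APPROXIMATE: pin likely fell back to a centroid
```

Anything other than ROOFTOP for a business address is a warning sign that the pin, and therefore proximity authority, may not be anchored to the building.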
The suite number problem
I’ve said this to clients more times than I can count. It seems like a minor formatting detail. It isn’t.
When a business enters something like 1234 Main Street, Suite 200, in Address line 1, Google’s geocoding engine attempts to resolve that entire string as a street address.
Suite numbers are unit identifiers. They exist within buildings. They aren’t street-level geographic data, and Google’s geocoding process doesn’t use them to identify rooftop locations.
Embedding a suite number in Address line 1 introduces a conflict into the geocoding query that the system can’t cleanly resolve against a physical coordinate.
Instead of anchoring the pin to your building, the geocoding process encounters a string it can't fully parse at the street level, loses confidence, and falls back, often all the way to the city centroid. In practice, that can send customers driving to the wrong location, or to the middle of a highway.
Proximity at the pin vs. proximity at the address
A profile verified at a physical address doesn't rank based on the visible address; it ranks based on the coordinate Google resolved from it.
I recently managed a new listing where a geocoding conflict forced the map pin to the city center of Houston, miles from the actual office. While the text on the profile showed the correct street address, the ranking was anchored entirely to a misplaced coordinate in the downtown centroid.
In this instance, a suite number was embedded directly into the primary address field. When Google’s system can’t cleanly parse a street number and name, it often defaults to the city centroid as the best available data point. This isn’t an edge case.
Whether it’s a suite number on the wrong line or a new construction site, these formatting errors trigger geocoding failures that are notoriously difficult to unwind.
The client’s ranking data confirmed the technical reality. For high-competition terms like “water damage restoration,” the business didn’t rank based on its physical office. It ranked based on where the pin was dropped.
If your pin is in a highway median or a city center due to a formatting error, that is where your proximity authority lives.
If you have a service-area business, the stakes are higher, and the scenarios are more complex.
When Google reprocesses that address and the geocoding fails to anchor cleanly, the business owner has no easy way to know. A storefront owner can open Google Maps, pull up driving directions to their location, and immediately see where the pin landed. An SAB with a hidden address can't do the same quick check.
The address isn’t visible on the profile, and the pin placement isn’t clearly surfaced in the dashboard or on Maps. The business is left with poor ranking reports and no obvious explanation. They may never realize the pin drifted at all.
Their verified address may be a home office or a shared workspace, and if it’s a shared workspace, the geocoding problem gets worse. Regus locations and similar co-working buildings are among the most geocoding-hostile addresses an SAB can use. These are large commercial buildings with dozens or hundreds of unit numbers, multiple tenants, and high address turnover.
My hypothesis is that Google’s geocoding engine assigns lower confidence to these addresses precisely because the unit-level data is so dense and inconsistently mapped. The result is a pin that may never anchor properly to begin with, and an SAB operator who has no easy way to verify where Google actually thinks they’re located.
My business’s GBP functioned as a verified storefront in Farmington Hills for years. Three years ago, I moved the operation to a new office in Pontiac and updated the address accordingly. The listing appeared as a storefront until I triggered a reverification while testing a separate case study.
Because I work primarily from home, and hadn’t invested in signage at the new Pontiac location, Google forced the profile into service area business status.
Even though the dashboard displayed a Pontiac address for several months, the map pin reverted to Farmington Hills as soon as I toggled the setting to hide the address. This fallback exists behind the scenes, effectively anchoring the business to a location it hasn't occupied in over a thousand days.
This is a ranking disaster for any business owner. I struggle to rank in my city for the “marketing agency” category because Google is calculating my proximity from an old office.
If a business transitions from a storefront to an SAB after changing addresses, editing the existing listing is a risk. I was set up as a storefront at the new address for several months.
The most effective path forward is to create a new listing for the business and request a review transfer. This can’t be fixed by Google support.
Supporting evidence: What Google’s own patents say
Google has filed and been granted multiple patents that describe the underlying systems at work. These patents are directly relevant to how geocoding, pin placement, and local ranking interact.
Outlines the core pipeline connecting an address to a map pin, establishing that the inputted address and the resolved geocode are two separate entities.
Scoring Local Search Results Based on Location Prominence
Describes a dual scoring system: documents within a geographic area are scored by location prominence factors (authoritative document score, citation volume, review count, and mention count), while documents outside the area are scored by distance from a defined center point such as a postal code centroid or the midpoint of the active map window.
Explains how ambiguous or improperly parsed address components produce lower-confidence geocode outputs, resulting in broader map pin placements rather than rooftop-level matches.
Describes the geocoding/geomap server that converts a street address into a single latitude/longitude coordinate and overlays it as a location marker on a map image. Establishes the mechanical basis for map pin placement and documents that pin position is derived from the resolved coordinate, not the inputted address.
Best practices for properly anchoring your map pin
A well-geocoded address with a narrow service radius gives Google the most confident, stable picture of where your business operates.
Check your Address line 1: Suite numbers, unit numbers, floor numbers, and building names belong in Address line 2. Line 1 should contain only the street number and street name.
Check whether your building geocodes cleanly: You can test this in Google Maps directly, or run your address through the Geocoding API demo in Google's developer documentation and see where the pin lands. More importantly, see how Google parses the address, and enter it in your profile the same exact way.
Be prepared for verification: Correcting a geocoding conflict in an existing profile almost always triggers a new verification request. This is expected. Work through it. Don’t make additional edits until verification is complete, as multiple pending changes can restart the cycle.
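The Address line 1 check above is easy to automate when auditing listings in bulk. A minimal sketch; the list of unit designators is an assumption and should be extended for your own data:

```python
import re

# Unit designators that belong in Address line 2, not line 1 (illustrative list).
UNIT_TOKENS = re.compile(r"(\b(suite|ste|unit|apt|floor|fl)\b|#)", re.IGNORECASE)

def lint_address_line1(line1: str) -> list[str]:
    """Flag formatting problems likely to reduce geocoding confidence."""
    problems = []
    if UNIT_TOKENS.search(line1):
        problems.append("unit identifier found; move it to Address line 2")
    if not re.match(r"^\d+\s+\S+", line1.strip()):
        problems.append("line should start with a street number and street name")
    return problems

print(lint_address_line1("1234 Main Street, Suite 200"))
# ['unit identifier found; move it to Address line 2']
print(lint_address_line1("1234 Main Street"))
# []
```

A lint like this won't catch every geocoding hazard, but it reliably surfaces the suite-number mistake described above before it reaches a live profile.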
Why geocoding confidence is your local ranking foundation
The friction between an address string and Google’s geocoding confidence isn’t a minor technical glitch. It’s a fundamental ranking blocker.
Google values data stability and confidence over your recent dashboard edits. If you're struggling with a pin that refuses to anchor, or an SAB that won't rank, you're likely fighting a geocoding pin placement issue that can't be solved with standard optimizations, or by Google support for that matter.
Stop trying to out-content a broken map pin. It’s the ultimate proximity indicator that Google needs to confidently rank your business. The underlying issue isn’t complicated. Google needs a clean, parseable address string to anchor your pin at the building level.
“This is a regular update designed to better surface relevant, satisfying content for searchers from all types of sites. The rollout may take up to 2 weeks to complete.”
About core updates. Core updates roll out several times each year. They introduce broad, significant changes to Google’s search algorithms and systems, which is why Google announces them.
It has been a long time since the last core update. While many expected Google to roll out core updates more frequently, that didn’t happen.
What to do if you are hit. Google didn’t share new guidance specific to the March 2026 core update. However, Google has previously offered advice on what to consider if a core update negatively impacts your site:
There aren’t specific actions you can take to recover. A negative rankings impact may not mean anything is wrong with your pages.
You may see some recovery between core updates, but the biggest changes tend to follow another core update.
In short: write helpful content for people, not for search engines.
“There’s nothing new or special that creators need to do for this update as long as they’ve been making satisfying content meant for people. For those that might not be ranking as well, we strongly encourage reading our creating helpful, reliable, people-first content help page,” Google said previously.
Why we care. With any core update, you often see significant volatility in Google search results and rankings. These updates may improve visibility for your site or your clients’ sites, but you may also see fluctuations or declines in rankings and organic traffic. We hope this update rewards your efforts and drives strong traffic and conversions.
Today, Google released Google Search Live globally in the languages and regions where AI Mode is available, bringing Search Live to more than 200 countries and territories.
Google credits its new audio and voice model, Gemini 3.1 Flash Live, which it says “delivers even more natural and intuitive conversations.” The “new model is also inherently multilingual, which means that people around the world can now speak with Search in their preferred language,” Google added.
How it works. To use Search Live, open the Google app on Android or iOS and tap the Live icon under the Search bar. From there, you can ask your question out loud to get a helpful audio response, then continue the conversation with follow-up questions or dive deeper with helpful web links. If you want to ask about something in front of you, like how to install a new shelving unit, you can enable your camera to add visual context. This way, Search can see what your camera sees and offer helpful suggestions, plus links to more information on the web.
You can also access Search Live if you’re already pointing your camera with Google Lens — just tap the Live option at the bottom of the screen to have a real-time, back-and-forth conversation about what you see in the real world.
Why we care. This is another way users can have conversations with Google’s AI instead of typing queries. Answers could increasingly bypass traditional clicks, and further erode traffic to websites. The inclusion of links (citations at the bottom) means publishers and brands could still see some benefits, but most searchers likely will have little need or desire to click on those links or dig deeper after getting their answer.
Google is launching new Performance Max controls and reporting: audience exclusions, expanded reporting, and budget forecasting tools.
What’s new. Google announced a mix of “steering updates” and “actionable insights” for PMax:
First-party audience exclusions: You can exclude customer lists to shift spend toward net-new customer acquisition instead of repeat conversions.
Budget reporting: A new in-platform report projects end-of-month spend and shows how daily budget changes impact performance.
Full audience reporting: You get detailed breakdowns by demographics, including age and gender.
Network segmentation: You can segment placement reports by network, now under When and where ads showed.
Why we care. These updates help address concerns about PMax’s lack of control and transparency. Exclusions help you avoid wasting spend on existing customers, while improved reporting gives you clearer signals for optimization, budgeting, and brand safety decisions.
Automated traffic grew 23.5% year over year in 2025 — about eight times faster than human traffic, which rose 3.1%, according to HUMAN Security’s State of AI Traffic report.
AI-driven traffic appears to be a major contributor to that growth, with average monthly volume increasing 187% year over year, while traffic from AI agents and agentic browsers (e.g., OpenAI’s Atlas, Perplexity’s Comet) grew nearly 8,000% year over year.
Automated traffic is defined in the report as: “All internet traffic generated by software systems rather than human users, including traditional automation such as search engine crawlers, monitoring bots, and conventional scraping tools, as well as AI-driven traffic.”
Why we care. Search is increasingly shaped by more than human queries, crawling, and indexing. AI agents now participate in discovery, comparison, and transactions — within Google’s evolving results and across AI-driven interfaces.
The details. HUMAN groups AI-driven traffic into three broad categories:
Training crawlers collecting data for models. They still dominate at 67.5% of AI traffic, but their share is declining as scrapers and agents scale.
Real-time scrapers that feed AI search and answers. Scraper traffic grew nearly 600% in 2025, driven by AI-powered search and real-time answer engines.
Agentic AI systems that execute tasks autonomously. Smaller in share, but growing fastest and most disruptive.
AI agents behave more like users. These systems aren’t limited to reading content. They increasingly navigate funnels, log in, and transact. In 2025:
77% of observed agent activity (requests) occurred on product and search pages.
Nearly 9% touched account-level interactions.
More than 2% reached checkout flows.
About the data. HUMAN analyzed more than one quadrillion interactions (requests/events) across its customer base in 2025, with aggregated, anonymized data from 2022 to 2025. It classified AI-driven traffic into training crawlers, AI scrapers, and agentic AI using user-agent strings, infrastructure signals, and observed behavior, noting limits in self-declared bot identity, which may undercount or misclassify some AI-driven activity.
Bottom line. Traffic is becoming less purely human, and discovery is no longer confined to search engines. Optimization now means deciding which machines can access, interpret, and act on your content.
Google introduced a new user agent, called Google-Agent, that signals when AI agents act on users’ behalf, marking an early shift toward agent-driven web interactions.
What happened. Google added Google-Agent to its list of user-triggered fetchers on March 20 and has begun a gradual rollout.
The Google-Agent user agent identifies requests made by AI agents running on Google infrastructure, including experimental tools like Project Mariner.
How it works. Google-Agent appears in HTTP requests when an AI agent visits a site to complete a user-initiated task.
Example use cases include browsing pages, evaluating content, or taking actions such as submitting forms.
This differs from Googlebot and other crawlers, which run continuously in the background without direct user prompts.
User agent strings. Google shared the user agent string for its desktop agent:
Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; Google-Agent; +https://developers.google.com/crawling/docs/crawlers-fetchers/google-agent) Chrome/W.X.Y.Z Safari/537.36
And the user agent string for its mobile agent:
Mozilla/5.0 (Linux; Android 6.0.1; Nexus 5X Build/MMB29P) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/W.X.Y.Z Mobile Safari/537.36 (compatible; Google-Agent; +https://developers.google.com/crawling/docs/crawlers-fetchers/google-agent)
Why we care. This lets you identify agent-driven traffic in server logs. You can now distinguish traditional crawl activity from visits triggered by real users through AI agents. That should help you track agent-assisted conversions, understand emerging user behavior, and prepare for agentic search.
What Google says. “The Google-Agent user agent is rolling out over the next few weeks, and will be used by Google agents hosted on Google infrastructure to navigate the web and perform actions upon user request.”
What to watch. Early volumes will be low as the rollout continues, but now is the time to establish a baseline. What to do:
Monitor logs for Google-Agent activity.
Make sure CDNs and WAFs aren’t blocking the published IP ranges.
Validate that key site actions, including forms and flows, work for automated agents.
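A simple scan of access logs for the Google-Agent token is enough to start building that baseline. A minimal sketch, assuming combined-format logs with the user agent in the final quoted field; the sample log lines are illustrative, not real traffic:

```python
import re

# The user agent token Google documented for agent-driven requests.
AGENT_TOKEN = "Google-Agent"

def count_agent_hits(log_lines):
    """Count requests whose user-agent field contains the Google-Agent token."""
    hits = 0
    for line in log_lines:
        # In combined log format, the user agent is the last quoted string.
        quoted = re.findall(r'"([^"]*)"', line)
        if quoted and AGENT_TOKEN in quoted[-1]:
            hits += 1
    return hits

# Illustrative combined-format log lines (the Chrome version is a placeholder):
sample_log = [
    '1.2.3.4 - - [20/Mar/2026:10:00:00 +0000] "GET / HTTP/1.1" 200 512 "-" '
    '"Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; Google-Agent; '
    '+https://developers.google.com/crawling/docs/crawlers-fetchers/google-agent) '
    'Chrome/120.0 Safari/537.36"',
    '5.6.7.8 - - [20/Mar/2026:10:00:01 +0000] "GET /pricing HTTP/1.1" 200 1024 "-" '
    '"Mozilla/5.0 (Windows NT 10.0) Chrome/120.0 Safari/537.36"',
]
print(count_agent_hits(sample_log))  # 1
```

Matching on the token rather than the full string keeps the check robust as the Chrome version placeholder (W.X.Y.Z) changes; for stricter verification, cross-check request IPs against Google's published ranges.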
Visibility is no longer just about ranking. It depends on whether your content is discovered, evaluated, and selected in AI-driven search experiences.
We’re kicking off our new monthly SMX Now webinar series on April 1 at 1 p.m. ET with iPullRank’s Zach Chahalis, Patrick Schofield, and Garrett Sussman on how you must adapt.
The session introduces iPullRank’s Relevance Engineering (r19g) framework for executing Generative Engine Optimization (GEO) through an omnichannel content strategy. You’ll learn how AI search uses query fan-outs to discover and select sources, and how to structure content so it’s retrieved, surfaced, and cited.
It also emphasizes that GEO success isn’t universal. It requires testing, tailored strategies, and a three-tier measurement model spanning discovery, selection, and citation impact.
While initially criticized as a black box, Performance Max has evolved into a core campaign type. With each passing quarter, Google has introduced more functionality and visibility.
Additional reporting is helpful, but what matters is what you can actually act on. While you can’t control everything in Performance Max, there are specific levers that can have a meaningful impact on performance. Here are the parts of PMax you can control and how to use them effectively.
Control what you can: Search terms and placements
One of the most exciting Performance Max updates in the last year has been the ability to add campaign-level negative keywords.
In the past, you could contact Google to add these in. It was somewhat cumbersome and involved filling out an Excel doc, forwarding it to Google, and giving them permission to implement.
With the inclusion of the search terms report, we’re now able to select a keyword and quickly add it to the campaign-level negative keyword list, just as we can with a search or shopping campaign.
Another way to optimize PMax is to review and monitor the placements report. Most recently, Google has moved the Performance Max placements report out of the reporting section of the Google Ads account and into the Where ads have shown section at the campaign level. While this makes analysis easier by removing additional steps, we still only have impression-level reporting on placements.
We can use this information to decide whether to add these placements as negative placements at the account level. This is found in Tools > Content suitability > Advanced settings > Excluded placements.
While this isn’t ideal, there’s still useful insight we can glean from this report, such as ads appearing in kids’ programming or driving a high number of impressions from mobile apps.
Also located in the When and where ads showed section is the ad schedule. Even if you didn't select an ad schedule when creating the campaign, Google automatically breaks performance out by hour.
Google typically recommends an open ad schedule, but if you have a limited budget, restricting your ad schedule during off-peak or non-converting hours is an excellent way to increase efficiency.
You can do this by creating a campaign-level ad schedule within Campaigns > Audiences, keywords, and content > Ad schedule. Make sure your Performance Max campaign is selected in the top left dropdown menu.
Demographic exclusions are a relatively new feature at the campaign settings level for Performance Max. Unfortunately, demographic reporting for these campaigns is hard to obtain, which limits how informed your exclusion decisions can be.
This functionality is helpful if you’re aware of specific demographics that aren’t actively in the market for specific products or services. To make adjustments, go to Campaign-level settings > Other settings > Demographic exclusions. From here, you can turn on age or gender exclusions:
While PMax initially didn’t even provide device-level reporting, a new feature lets you opt out of serving on certain devices.
If you opt into all device targeting when launching a PMax campaign, you should periodically review device performance and adjust accordingly. This is best done by segmenting at the campaign or asset group level by device. Device-level data is extremely helpful for determining which device is better suited to reach your goal.
Likewise, if you almost always opt out of certain devices when launching a campaign, this data makes it easier to either launch with all device targeting enabled and monitor performance, or add a device you hadn’t initially added to see how it impacts performance. Device-level targeting is also available at the campaign level, under Other settings.
Improve inputs: Creative and AI assets
Ad assets play a large role in the display, YouTube, and Discover network performance of a PMax campaign. For many, there’s still a gap in producing high volumes of quality image and video creative.
While still evolving, AI assets are getting closer to filling these gaps — enabling us to target these additional networks more effectively. As newer generations of generative AI models emerge, this will become a primary way to produce video content and professional-looking images.
Google already offers generative AI image assets from shopping feed products that look relatively impressive. But we’re still a ways out from seeing high-quality AI-generated videos without the well-known glitches we typically see in this type of content.
Understand the limits of control in Performance Max
The channel controls report gave more insight into where ads were serving. I have an unpopular opinion on this report: while it's helpful, there's little we can do within the campaign to act on what it shows, and that makes it frustrating.
We'll likely see channel controls available within Performance Max in the near future, similar to what we already have in Demand Gen campaigns. For now, adjust creative and bids to sway volume within certain networks. If you want to opt out of certain networks completely and focus on shopping, a feed-only Performance Max campaign will do just that.
Performance Max is evolving from a black box to a critical asset in a marketer’s toolkit. The steady stream of new functionality, from campaign-level negative keywords to detailed placement and ad schedule reports, shows Google’s commitment to providing greater control.
Use these levers — strategic exclusions, device adjustments, and budget-aware scheduling — to move beyond set-it-and-forget-it and run Performance Max campaigns with precision and efficiency.
A company called Clickout Media is being called out for buying trusted news and niche sites, replacing them with AI-generated gambling content, and abandoning them after Google penalties. Some call this “parasite SEO,” but to me it sounds more like large-scale search spam.
What’s happening. The company acquired sports, gaming, and tech sites, then rapidly shifted them from editorial coverage to casino and crypto content, PressGazette reported.
Sites were stripped of original reporting, filled with AI-written articles, and used to push offshore gambling links, according to former employees.
How it works. The strategy relies on buying domains with existing authority, then exploiting their ability to rank in Google. Content typically followed a pattern:
Legitimate coverage continues briefly to preserve credibility
Gambling content is introduced and scaled
AI-generated articles and fake author profiles replace human writers
Revenue comes from affiliate deals with casino operators, sometimes tied to player losses
The impact. Several previously active publications now appear deindexed, with layoffs and closures following. In some cases, even charity websites were repurposed to host gambling content.
What they’re saying. Google prohibits publishing content at scale for the primary purpose of manipulating rankings. It refers to extreme cases like this as “site reputation abuse,” a violation that can trigger manual actions and removal from Google’s index and search results.
“While we aren’t able to comment on a specific site’s ranking on Search, our policies prohibit publishing content at scale for the primary purpose of manipulating search rankings,” Google said about this case.
Why we care. This isn’t SEO in any meaningful sense. It’s reputation abuse designed to game rankings at scale.
Like it or not, everyone is fishing in the same pond. As content marketers and SEO practitioners, we all have the same subscriptions to Semrush and other SEO tools, giving us access to the same data as our competitors.
If we all have the same tools, aren’t we just writing the same content?
There’s a better way.
You may be sitting on a wealth of data about your target audience and your existing customers, and you don’t even know it. These insights are invisible to your competitors, yet they’re unread, unanalyzed, and underutilized by the marketing team.
The problem: Third-party tools can create an over-commoditized content echo chamber
While SEO toolsets are invaluable (and I’ll always be using one, pretty much daily, for the rest of my career), they aren’t a failsafe way to ensure you’re creating the best content for your audience. These tools measure existing search demand through their own data, giving the best estimate of keyword traffic and search results.
However, when these aren’t viewed through the lens of your own customers, the result can be content that’s oversaturated in your market, overwhelming anyone looking for help or answers online.
When your content isn't unique to your current or target audience, your organization and its offerings can get lost among the SEOs and content strategists at competitor organizations who are all following the same best practices and strategy.
It's time to put your own data to better use and run content campaigns that resonate with the audience that has already shown real interest.
For the purposes of this article and marketing content creation, first-party data is any data from current, potential, or past customers that’s only accessible internally. The top “5 goldmines” where I’ve consistently found nuggets of content foundations and insight are:
Internal site search queries: What visitors couldn’t find on your site, but keep searching for.
Sales call transcripts: The exact language and questions prospects say before they buy.
CRM data: Spotting patterns in deal stages, objections, and lost deals.
Support tickets: The issues and questions your product or service keeps failing to answer, leading to frustrated customers.
Email replies and metrics: What the audience actually responds to versus what they ignore.
These five areas are a great place to start collecting and utilizing first-party data to its full potential.
This data is key to better, more-targeted content marketing for three reasons.
It’s proprietary
This data is confidential and only available to your internal team. Often, it’s not even accessible to everyone and may require favors from data analysts or web developers to pull. That’s what makes it so unique. Competitors can’t find or replicate it, no matter what SEO tools they have.
It reflects real buyer language
This relates to the “curse of knowledge” cognitive bias, where you know so much about a topic that you assume others do as well. One of my favorite examples is the “facial tissue” market. You may know facial tissue as “Kleenex,” even though that’s technically a brand name for a type of facial tissue.
With many consumers using a competitor’s brand name colloquially, how do competitors refer to their own product? Because most people likely aren’t searching “facial tissue” with the intent to buy, it’s up to manufacturers to determine the language their audience uses to find alternatives.
Even though employees at XYZ Tissue Co. know the product is technically “facial tissue,” that doesn’t mean their customers do.
It maps to your full marketing funnel
While third-party keyword data usually skews to the top of the funnel, first-party data captures mid- and bottom-funnel content gaps that drive conversions and brand loyalty, not just traffic.
How to get content ideas from first-party data: The specifics
We know these data sources are valuable. So, how do we use them? Let’s break it down.
Internal site search
Site search is one of my favorite sources of insight and inspiration. It’s active, ongoing, real-time data showing how your target audience is trying to interact and engage with you through internal site search. No matter what the data looks like, it can hold a wealth of information about what content your users expect to find on your website.
If you don't have site search on your website, you can add it with Google's Programmable Search Engine. While it will provide internal site search data, it may also display ads or external results on users' results pages.
To use site search effectively, export the queries monthly, clean the data to remove spam, then cluster by theme (such as product collections or service offerings). Finally, run it through keyword research tools to flag anything with high keyword volume and low competition that’s missing from your site.
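The monthly workflow above (export, clean, cluster by theme) can be sketched in a few lines of Python. This is a minimal illustration; the theme keywords are hypothetical placeholders for your own product or service terms:

```python
import re
from collections import Counter, defaultdict

# Hypothetical theme keywords -- replace with your own collections/services.
THEMES = {
    "shipping": ["shipping", "delivery", "tracking"],
    "returns": ["return", "refund", "exchange"],
}

def clean(query: str) -> str:
    """Lowercase, strip punctuation, and collapse whitespace."""
    q = re.sub(r"[^a-z0-9\s]", "", query.lower())
    return re.sub(r"\s+", " ", q).strip()

def cluster(queries):
    """Group cleaned queries under the first theme whose keyword they contain."""
    clusters = defaultdict(Counter)
    for raw in queries:
        q = clean(raw)
        if not q:
            continue  # drop empty/spam rows
        for theme, keywords in THEMES.items():
            if any(k in q for k in keywords):
                clusters[theme][q] += 1
                break
        else:
            clusters["unclustered"][q] += 1
    return clusters

# A tiny sample export; real data would come from your monthly CSV dump.
clusters = cluster(["Return policy?", "free SHIPPING", "order tracking", "gift cards"])
```

The "unclustered" bucket is often the most interesting output: it surfaces demand your themes (and possibly your site) don't cover yet.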
Bonus: For products or services your customers are searching for that don’t exist, it might be useful to send that data to the R&D department for potential new offerings to consider.
Sales call transcripts and CRM data
Use a service like Gong, Chorus, or manual transcriptions from sales calls and CRM data to look for recurring needs, questions, and objections across customers from all stages of the purchasing funnel.
If, for instance, you see continued resistance to your enterprise SaaS analytics platform due to the long onboarding process, consider creating a time-bound, step-by-step guide that makes it painless for anyone to switch analytics platforms. This can be great collateral for the sales team to address popular objections.
In the CRM, you can also filter lost deals by reason. For instance, finding “went with competitor” + common objection could lead to a comparison or differentiation article that highlights your features vs. the competitors you keep losing deals to.
Besides reviewing the data, ask the sales team directly on a call or email about their most common objections. Because they’re constantly in communication with potential customers, they’ll likely know immediately the top objections they receive regularly.
Support tickets
The support team can also be an invaluable resource. In addition to asking the support team directly what problems they solve for customers on a daily basis, look in your customer support ticket queue and dashboard to find old and new tickets with recurring issues (your top 10 most common complaints are probably content gaps you need to address ASAP).
An explainer blog post, knowledge base article, or PDF guide that tackles the issue from an actionable angle can not only give you more content to promote, but also help the support team with materials to share with your customers.
Email replies and metrics
Depending on the industry, your email lists’ reply inboxes may be exploding with valuable customer data. At a supplements company I worked at, we regularly received customer responses to our email marketing campaigns. They asked questions about products, gave suggestions, and even offered enthusiastic reviews we could feature on our website.
You can also look at the metrics.
If your monthly newsletter is the highest-performing email, should you increase it to a biweekly newsletter?
If your product features never get high conversions, is that because of the content, or are they more interested in value-focused blog posts and videos?
Don’t take your first-party data for granted. Build automated pipelines for report generation, conversation follow-ups, and content creation from these sources to build momentum around the topics your audience most wants to hear.
While competitors can copy your articles, they can never copy your customer conversations. Try it out this week: audit a first-party data source and see what content ideas you can find.
Google expanded its structured data support for forum and Q&A pages, adding properties that help you signal reply threads, quoted content, and whether content is human- or machine-generated. The update aims to reduce how Google misreads discussion and Q&A content.
What changed. Google’s QAPage docs now support commentCount and digitalSourceType. DiscussionForumPosting docs now support sharedContent plus the same commentCount and digitalSourceType.
The details. In Q&A markup, you can use commentCount on questions, answers, and comments to show total comments even if not fully marked up. answerCount + commentCount should equal total replies of any type.
How it works. digitalSourceType lets you flag whether content comes from a trained model or simpler automation. Use TrainedAlgorithmicMediaDigitalSource for LLM-style output and AlgorithmicMediaDigitalSource for simpler bots. If omitted, Google assumes human-generated content.
What’s new for forums. sharedContent lets you mark the primary item shared in a post. Google accepts WebPage, ImageObject, VideoObject, and referenced DiscussionForumPosting or Comment, including quotes or reposts.
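Pulling those pieces together, a forum post using the new properties might be marked up roughly like this. This is an illustrative sketch, not Google's official example; check Google's structured data documentation for the exact expected values of digitalSourceType:

```json
{
  "@context": "https://schema.org",
  "@type": "DiscussionForumPosting",
  "headline": "Example thread title",
  "author": { "@type": "Person", "name": "example_user" },
  "datePublished": "2025-01-15T08:00:00+00:00",
  "commentCount": 12,
  "digitalSourceType": "TrainedAlgorithmicMediaDigitalSource",
  "sharedContent": {
    "@type": "VideoObject",
    "name": "Example shared video",
    "url": "https://example.com/video"
  },
  "comment": [
    {
      "@type": "Comment",
      "text": "First reply",
      "author": { "@type": "Person", "name": "another_user" }
    }
  ]
}
```

Note that commentCount can report the full thread total (12 here) even when only some comments are marked up, which is what lets Google count partial threads across pagination.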
Why we care. This gives you more precise control over how Google reads modern community content — especially forum-heavy sites, support communities, UGC platforms, and Q&A sections. Google can better distinguish answers from comments, count partial threads across pagination, and identify when a post mainly shares a link, image, video, or quoted reply.
For years, we’ve relied on regular expression (regex) filters, custom dashboards like Looker Studio, or third-party tools — approaches that were often inconsistent and difficult to maintain. Now, GSC’s branded query filter brings that capability natively into one of the most widely used organic reporting platforms.
With this shift, a key gap in SEO reporting becomes easier to address — along with some of the assumptions behind it. Brand demand and discovery can now be evaluated independently, improving performance interpretation and enabling clearer, more defensible reporting grounded in first-party data.
How GSC’s branded query filter works
At its core, the feature does exactly what it promises: it automatically segments your queries into two buckets, branded and non-branded.
Why branded vs. non-branded reporting has been inconsistent
Separating branded from non-branded search performance isn’t new. What’s changed is how practical it is to do consistently.
Historically, we’ve built this segmentation manually using:
Regex rules in GSC performance reports.
Keyword tagging in third-party rank-tracking tools.
Custom dashboards pulling from GA4 or BigQuery.
Query classification via exports.
These approaches worked, but they were fragile and difficult to maintain at scale. Common challenges included:
Character limits on regexes.
International sites with language variants.
Misspellings that would slip through.
No shared standard for what counts as a branded term.
Without a consistent framework, segmentation varied by team, tool, and implementation — making it difficult to rely on as a repeatable reporting practice. When data is difficult to access, it doesn’t shape everyday decisions.
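For context, the fragile manual approach typically looks something like this sketch ("acme" and its variants are hypothetical brand terms). Its weakness is visible at a glance: every misspelling and variant must be enumerated by hand, and anything missing from the pattern slips through as non-branded:

```python
import re

# Hypothetical brand terms, including common misspellings and variants.
# Keeping this list complete is the hardest part of manual segmentation,
# and GSC's regex field also imposes a character limit on the pattern.
BRAND_PATTERN = re.compile(r"\b(acme|acmee|acme\s*corp|akme)\b", re.IGNORECASE)

def classify(query: str) -> str:
    """Label a search query as branded or non-branded."""
    return "branded" if BRAND_PATTERN.search(query) else "non-branded"

queries = ["acme pricing", "AkMe login", "best crm software", "acmecorp reviews"]
labels = {q: classify(q) for q in queries}
```

A new misspelling no one thought to add ("acmi", say) would silently land in the non-branded bucket, which is exactly the inconsistency the native filter is meant to eliminate.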
GSC’s branded query filter doesn’t make third-party tools obsolete. They remain valuable for competitor brand analysis. GSC becomes the authoritative source for first-party branded performance, while cross-tool comparison shifts from a workaround to a validation step.
The center of gravity shifts back to GSC — right where we want it.
Why SEO performance looks different when you split the data
Branded traffic is both a signal of brand awareness and a high-converting traffic source. It also skews performance when blended with non-branded data.
Without segmentation, reporting often leads to misleading narratives:
“Our organic CTR is improving” (driven mostly by branded growth).
“Rankings are stable” (while non-branded discovery is declining, or vice versa).
“Traffic was flat year-over-year” (masking rising/declining brand demand).
These patterns make it difficult to understand what’s actually driving performance.
Separating branded and non-branded data allows you to distinguish between brand demand and discovery and evaluate each on its own terms. It also makes it easier to answer key questions:
Are we growing brand demand or non-branded reach?
Is our content strategy increasing non-branded visibility?
If nothing else, is the current strategy working as it should be?
How branded vs. non-branded data reveals what’s really happening
Measuring brand health
Branded search trends are among the clearest signals of brand awareness and trust. Monitoring organic performance for branded terms can surface gaps and opportunities across other channels.
For example, using a regex filter to isolate branded performance, this ecommerce property shows clear year-over-year declines over the last three months. That raises important questions:
Has search demand for the primary branded term increased or decreased?
Was paid search spend for branded terms adjusted?
Are there social, video, or PR opportunities that aren’t being fully leveraged?
In this case, further analysis using tools like Keyword Planner (via Google Ads), Google Trends, and third-party keyword platforms showed a 12% year-over-year decline in branded search demand. That contributed to a 32% decrease in branded clicks.
There are additional factors worth exploring — including paid spend and brand sentiment — but isolating branded performance helps pinpoint where to investigate next.
Non-branded queries typically drive the majority of organic traffic, while branded queries make up a smaller share but convert at significantly higher rates. These differences reflect user intent.
Searches that include a brand name are usually navigational or transactional, while non-branded queries signal discovery.
As a result, impressions, clicks, CTR, and conversions behave differently across branded and non-branded segments.
Searches that include a brand name often indicate intent to visit that brand’s website (see the ecommerce property CTR comparison chart below). Because of this, branded queries are considered bottom-of-funnel and more likely to convert.
Efficiency, strategy, and measuring discovery
Non-branded performance remains the clearest proxy for:
Topical authority.
Content effectiveness.
Organic discovery and reach.
Tracking non-branded visibility separately allows teams to answer:
Are we reaching new users?
Is our content strategy expanding keyword footprints?
Did recent core algorithm updates, which typically create keyword volatility, impact non-branded traffic?
In the ecommerce example above, non-branded impressions dropped sharply around Sept. 12, 2025 — a period when performance should have been trending upward heading into back-to-school, Halloween, and the holiday season.
In this case, the decline was not tied to SEO strategy. Instead, non-branded impressions dipped following Google’s retirement of the &num=100 parameter in Search Console reporting in mid-September 2025.
Because branded queries typically rank higher, they were less affected by this change, making the issue harder to detect in blended data.
Most SEO teams already separate branded and non-branded performance, but consistency has been the challenge.
With native segmentation now built into GSC, achieving that consistency becomes far easier. What once required workarounds can now be done directly within the primary reporting interface.
It’s easy to view the branded query filter as just another GSC feature. In reality, it represents something larger:
Standardized brand classification.
Native segmentation inside first-party data.
More consistent and reliable SEO reporting.
Stronger ties between SEO and broader marketing performance.
This shift changes how SEO work gets done. Teams gain clearer visibility into brand demand trends and discovery performance, and can spend less time reconciling discrepancies across tools and more time interpreting results.
As adoption grows, branded versus non-branded reporting will likely become the default rather than an advanced, custom setup. Reporting becomes more consistent, and performance narratives are easier to support with shared data.
If you’re focused on driving impact, the opportunity is to move beyond reconciling data and toward more confident, consistent interpretation and communication.
LinkedIn Ads consistently delivers some of the highest-quality B2B leads in paid media. But it also has a reputation for being very expensive — for both cost-per-click (CPC) and cost-per-lead (CPL) metrics.
Because of that reputation, I wanted to test a theory: that I could get low CPCs and low-cost qualified leads from LinkedIn Ads by creating a highly valuable, audience-specific piece of content.
As an agency, we usually run LinkedIn Ads campaigns for our clients. We don’t really run many paid ads for ourselves. However, to have the most control over this test, I decided that Saltbox Solutions would be the guinea pig. (Disclosure: I’m the director of strategy at Saltbox Solutions, a B2B-focused PPC and SEO agency.)
The results were impressive.
We spent less than $1,000 and generated a significant volume of leads at a sub-$10 CPL. For advertisers on a shoestring budget, LinkedIn Ads may not be out of reach as previously thought. It just requires a solid strategy.
Here’s what I did, why it worked, and how you can apply the same framework to your own campaigns — regardless of your advertising budget.
The campaign setup
The goal of this campaign was to get our target audience to download our 2026 B2B Demand Gen Playbook — a hefty, 23-page guide created specifically for B2B marketing decision-makers. The timing was key because many marketing leaders were already planning for 2026 in Q4 2025.
For this LinkedIn Ads campaign, I used a document ad format + a lead generation objective. The document ad lets the audience flip through and preview the content before downloading, with four pages available to preview before requiring a download to access more.
I also used a lead gen form for contact capture, since it’s fairly frictionless — the form lives within the LinkedIn platform and autofills most of the contact information from a user’s profile. There was just one campaign for this test, with three ad copy variations for the document ad.
In terms of budget and bid strategy, the campaign used a $600 lifetime budget and a $15 manual bid.
The research behind the content is what allowed for such low CPLs. Before writing a single word, I did deep audience research to figure out what they really cared about and what would be useful to them.
I knew exactly who I wanted to talk to (and who would be a good fit for the agency): B2B marketing decision-makers at larger companies with a dedicated marketing team. They worked mostly in a demand generation capacity and needed help prioritizing the channels that would make sense for their 2026 goals.
From there, the research focused on understanding what they would actually need in that planning process. It involved:
Mining client meeting notes and calls for recurring questions, common pain points, and frequent requests that kept coming up during planning season.
Using SparkToro to plug in my ideal customer profile (ICP) details and explore the questions, topics, and channels the audience was already engaging with.
Scanning LinkedIn, where I’m active and where a majority of my network is in B2B marketing, for real-time insight into what people were worried about.
Reviewing Reddit threads and B2B marketing communities I’m part of, which were super helpful for getting at the questions marketing leaders had.
The main question throughout this process was, “If I were in my audience’s shoes, what resource would actually be helpful right now?”
One big advantage I had: My audience is me. I’m a B2B marketer talking to other B2B marketers. Being plugged into the same communities and conversations made it much easier to put a personal spin on the content and write like a human.
Once I had a clear picture of what my audience needed, the focus shifted to going deep. The goal was to create a genuinely useful resource, not a thinly veiled sales pitch disguised as a playbook.
That took time to get right. But that depth is likely what drove the 76% lead form completion rate. When people could preview the document in their feed and see that it was substantive, they trusted it was worth downloading.
A few other notes on creating the playbook:
Timeliness: It was created to address a very timely and important marketing activity – annual planning. Because of that, 2026 became the focal point of the cover, and the content was framed around the moment the audience was already in.
Contextual CTAs: Calls to action to get a free audit were sprinkled into sections that dealt with PPC and SEO/GEO, which are the services we actually provide. The CTAs felt earned rather than forced because they were relevant to the surrounding content.
Cover design: A lot of effort went into how the guide looked. Knowing it would be promoted as an ad, the goal was to make it pop in the LinkedIn feed and grab the audience’s attention.
The targeting strategy
For audience targeting, I used a few different layers:
I also excluded a few attributes deliberately after viewing the audience insights:
The resulting audience was about 54,000 people. It could’ve been smaller and still delivered great results.
Job title targeting would also be worth testing. The leads were qualified as-is, but it would be interesting to see what the results would look like with more specific role targeting.
Three ad variations were used to test different copy angles. All three used the same document ad format and lead gen form. The only variable was the copy.
Here are the variations.
Version 1:
Version 2:
Version 3:
A few principles guided the ad copy process:
Each variation led with a strong hook. The first sentence had to grab attention and make people want to keep reading.
The copy ran longer than you typically see in ads to give a clearer sense of the guide’s tone and value before the click.
Common fears and questions the audience already had were addressed, such as translating high-level strategy into execution and staying visible in AI search results.
The tone leaned into a “we’ve got you” approach rather than being overhyped or promotional. B2B buyers are skeptical and respond to guidance and valuable information, not pressure.
The copy also had some personality, with a slightly cheeky edge while staying professional. For example, it called out common situations, such as having a beautiful strategy deck but never executing the plan.
Campaign and ad results
Recapping the campaign’s overall performance from Jan. 5 to Jan. 31:
One interesting note is that while the CPC bid was set at $15, the average CPC actually came in way under that at $5.41.
The average CTR was also above LinkedIn’s typical benchmark of 0.50%, and the lead form completion rate was over 75%.
LinkedIn lead gen campaigns have delivered strong results across many client engagements. But even by those standards, this performance was pretty good.
And for the specific ads, V2 was the winner by far:
The LinkedIn Ads algorithm zeroed in on that one and gave it pretty much all the airtime. It makes sense — it had the most eye-catching hook, “Steal our best demand gen ideas.”
The campaign was intentionally stopped at 60 leads. We’re a small, boutique agency, and the goal was to be thoughtful about nurturing the leads generated rather than flooding the funnel with volume that couldn’t be followed up on well.
Of the 60 leads, roughly 56 were qualified — a remarkable outcome for a prospecting campaign.
Our approach to working these leads has been organic LinkedIn engagement rather than a hard sell. No cold pitch sequences. Just showing up in their world as a familiar, credible presence.
As the person who wrote the playbook, I’m also personally reaching out to downloaders to ask for feedback on what they found useful and what they were hoping to see that wasn’t there. That insight will directly shape the next version of the guide and any future content assets created.
The campaign is still in the nurture phase. The primary goal of this test was to validate the model, not generate an immediate pipeline. On that measure, it exceeded expectations.
What made this work and what could be done differently
Looking back at the campaign as a whole, a few things stand out as the real drivers of performance:
Audience research came first. The target audience was clearly defined before anything was created. The content, the targeting, and the copy all flowed from that. As a result, it was very specific.
The content was timely. Releasing a 2026 planning guide early in the year, when everyone was back from the holidays, really worked in this campaign’s favor.
Depth built trust before the form appeared. The preview paired with substantive ad copy had a positive impact on lead form completion rate.
The copy sounded like a person, not a brand.
What could be done differently next time:
Despite the high conversion rates, adding a bit more friction to the form completion process may help. The fact that it was so easy to fill out the form means that the audience may not remember actually downloading it.
Following up with the leads faster after downloading would be a priority. The same approach of asking for feedback would still apply, rather than a sales pitch.
Running it longer and getting more leads would provide a larger data set to learn from.
Testing more ad copy variations against the winner.
How to do this yourself
Whether you’re running lead gen for a client or testing it on your own business, here are some tips to make it work:
Do your audience research before you create the asset: Reddit, SparkToro, community forums, and your own client conversations are all underutilized sources of real audience pain points, and you get pointers on the language they use.
Build something genuinely useful: If it’s a thinly veiled promotion, you’re wasting your audience’s time.
Match your content topic to a timely moment your audience is already in: What season, event, or planning cycle are they navigating right now?
Give your ad copy some personality: Test a hook that stands out, or at least is something that sounds like it was written by a real person.
Start small intentionally: Validate CPL and lead quality before scaling. A $500 test can tell you a lot.
Let the winner run: Early creative testing gives you the signal you need to spend efficiently at scale.
Align your content and your targeting precisely: If you wrote the guide for marketing decision-makers, make sure the campaign isn’t picking up sales roles.
We plan to relaunch this campaign once we’ve gathered enough feedback from the first wave of downloaders. The playbook itself is a living document. It will be updated as the industry shifts, particularly with the wave of ads in AI Overviews and responses.
This was one content asset and one campaign. More are in the works, and this test gave a lot of confidence in the approach.
The platform isn’t the problem. More often, the strategy and the offer are what drive up the cost.
If you’re willing to put the work into research, producing a quality asset, and getting the messaging right, LinkedIn Ads can be one of the most efficient B2B lead generation channels available.
WordStream by LocaliQ’s 2025 benchmarks show nearly 87% of industries saw year-over-year CPC increases. The cross-industry Google Ads average reached $5.26 per click. High-intent verticals are higher: legal services average $8.58, and the most competitive B2B categories approach or exceed $8 to $9 per click.
These increases reflect structural shifts in how search results pages are designed, how auctions are optimized, and how inefficiencies compound across paid search accounts. Many remain invisible until a structured PPC audit uncovers them. Protecting the budget you already have — starting with your branded terms — is where recovery begins.
Here are the five trends every advertiser needs to understand right now.
What’s driving your CPC
More advertisers are chasing the same finite inventory
Search advertising is, at its core, an auction. When more advertisers compete for the same keywords, prices rise. Global PPC spend continues to surge (Quantumrun Research), while available click slots on results pages haven’t grown at the same rate. More money chasing the same inventory yields higher prices.
The pandemic permanently accelerated this shift—brands that hadn’t invested seriously in paid search entered Google’s auction and didn’t leave.
Google’s AI Overviews are squeezing in
One of the most consequential structural changes in paid search over the past decade is the SERP itself. Google’s AI Overviews now occupy prominent space for informational and exploratory queries. As they expand through 2024 and 2025, they reduce the number of organic and paid listings visible above the fold.
A late-2025 Seer Interactive analysis of 3,119 search terms across 42 organizations found paid CTR on queries with AI Overviews dropped 68%—from 19.7% to 6.34%.
The mechanism is straightforward: as AI Overviews take more real estate (Skai), fewer paid placements appear above the fold. Impression share tightens. Automated bidding competes more aggressively for what remains, and prices rise.
The nuance: users who click past an AI Overview tend to be further along in the buying journey. WordStream’s data shows roughly 65% of industries saw higher conversion rates despite rising CPCs. The implication is clear: shift budget toward high-intent transactional queries where AI Overviews are less likely, and away from informational queries where they dominate.
Smart bidding is making the whole auction more expensive
Modern Google Ads campaigns increasingly rely on automated bidding strategies, such as maximizing conversions or target CPA. Per Google’s Smart Bidding documentation, the system sets a precise bid for each auction based on predicted conversion likelihood — prioritizing performance over cost control.
When nearly every competitor uses the same logic, it creates a self-reinforcing loop of rising bid pressure. This is a market-wide dynamic you can’t reverse — only adapt to.
Unauthorized brand bidding is inflating your costs from the inside
While you can’t control platform algorithms or the macroeconomy, one major driver of CPC inflation is within your control.
When affiliates, partners, or competitors bid on your trademarked keywords, they enter an auction that should be nearly uncontested. Each additional bidder drives your branded CPC up, and you pay twice: once to create the demand, and again when a third party captures that same searcher at the bottom of the funnel.
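The compounding effect of each extra bidder can be sketched with a toy second-price auction. This is a simplification for illustration only: Google's real auction also weighs ad quality and uses its own pricing rules.

```python
# Toy second-price auction: the winner pays just above the next-highest
# bid, so every additional bidder can raise the clearing price.
# (Assumption: this ignores quality scores and ad rank thresholds.)
def clearing_price(bids: list[float], increment: float = 0.01) -> float:
    """Price paid by the top bidder in a simplified second-price auction."""
    ranked = sorted(bids, reverse=True)
    return round(ranked[1] + increment, 2) if len(ranked) > 1 else increment

brand_only = [5.00]              # near-uncontested branded auction
with_affiliate = [5.00, 3.50]    # an unauthorized bidder joins

print(clearing_price(brand_only))      # 0.01
print(clearing_price(with_affiliate))  # 3.51
```

In the uncontested branded auction, the brand pays only the floor; once an affiliate joins at $3.50, the brand's cost for the same click jumps to just above that bid.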
The effects compound. AI Overviews have already compressed available click inventory; unauthorized brand bidding then inflates the cost of the inventory you win.
Detecting violations requires more than manual SERP checks. Unauthorized bidders often use cloaking—geotargeting away from your headquarters or dayparting outside business hours—to evade detection. With a self-service platform like Bluepear, you can run automated 24/7 monitoring across search engines, geographies, and devices—capturing ad copy and landing page evidence to dispute invalid affiliate commissions and enforce trademark guidelines at scale. Fewer bidders on your branded terms mean less auction pressure and lower CPCs on traffic you already own. It’s one of the few paid search levers that doesn’t require a broader strategy overhaul to move.
What to do about it: three priorities for advertisers
The data points to three clear priorities as you navigate this environment:
Protect your branded baseline. Branded keywords reflect demand you already created. Systematically monitor who else is in that auction and remove unauthorized bidders with automated brand protection tools — one of the highest-leverage actions available right now.
Anchor optimization to cost per acquisition. WordStream’s 2025 benchmarks show a higher CPC can deliver a higher-quality, further-down-funnel user and a lower CPA. The headline CPC number is increasingly a poor proxy for campaign health.
Build first-party data infrastructure. You’re best insulated from continued CPC inflation when your bidding algorithms use high-quality, proprietary conversion signals — reducing reliance on the platform’s broad audience approximations.
Average CPCs are at their highest levels in years, and that trend is unlikely to reverse. Advertisers who manage costs most effectively have adapted their strategies accordingly.
Not sure how many unauthorized bidders are in your branded auction right now? Register with promo code BRANDAUDIT and the Bluepear team will deliver a customized audit of your branded search landscape within 48 hours.
For the latest insights on branded search and paid search protection, follow Bluepear on LinkedIn.
Once upon a time, in the delightfully chaotic 1990s, web copywriting was all about exact-match keywords and relentless meta tag stuffing. As algorithms matured, so did SEO copywriting.
Now, with proposition-based retrieval systems, writing like you’re in the business of tricking a crawler into seeing relevance through keyword repetition is no longer a viable strategy.
Below is a playbook for generative AI-friendly copywriting, broken down into self-contained, high-density concepts.
The ‘grounding budget’: Quality over quantity
Large language models (LLMs) don’t seek less information. They seek higher information density. Google’s Gemini operates on a limited budget of retrieved information, according to research by DEJAN AI, which analyzed over 7,000 queries.
The grounding budget is roughly 1,900 words per query, split across multiple sources. For an individual webpage, the typical allocation is around 380 words (about a fifth of the budget, so roughly five sources share it). You’re competing for a tiny slice of a fixed pie, so precision helps the AI’s matching process.
If Schema.org is the external scaffolding of a building, structured language is the load-bearing internal frame. Language itself is the structure we provide machines, such as “semantic triplets” (subject → predicate → object). When a copywriter moves structure inside the language, the sentences become inherently machine-readable.
Google’s passage ranking, AI Overviews, and third-party LLMs like ChatGPT all evaluate content at the passage level using similar retrieval infrastructure. A sentence that works for one works for all of them.
A properly structured sentence fulfills four strict data criteria:
Names the entities: Explicitly identifies subjects and objects (e.g., “Notion Team Plan”).
States the relationships: Defines how entities interact using clear verbs (e.g., “costs”).
Preserves the conditions: Includes context that makes the statement true (e.g., “$10 per user per month”).
Includes specifics: Provides verifiable details rather than marketing fluff (e.g., “includes 30-day version history”).
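The four criteria above can be expressed as one atomic, machine-extractable claim. A minimal sketch, using the Notion example from the criteria (the field names are illustrative, not a real extraction schema):

```python
# One "structured" sentence decomposed into an atomic claim.
# Field names are illustrative, not a real extraction schema.
claim = {
    "entity": "Notion Team Plan",           # names the entities
    "relationship": "costs",                # states the relationship
    "condition": "$10 per user per month",  # preserves the conditions
    "specifics": "includes 30-day version history",  # verifiable detail
}

# A claim is decomposable only if all four criteria are present.
required = {"entity", "relationship", "condition", "specifics"}
assert required <= claim.keys()
```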
Marketing fluff example: “Our revolutionary platform makes managing your team easier than ever. It is affordable and comes with great support.”
Structured language (GEO-friendly) example: “The Asana Enterprise Plan [Entity] streamlines [Relationship] cross-functional project tracking [Specifics] for teams over 100 people [Condition], starting at $24.99 per user [Data].”
Machine utility of the fluff: Low (vague, hard to extract).
Machine utility of the structured version: High (decomposable into atomic claims).
Best practices for AI-friendly copywriting
Traditional copywriting flows like a row of dominoes. When an AI “chunks” your page, it snaps those dominoes apart. If your sentences aren’t load-bearing on their own, the logic collapses.
Rule 1: Every sentence must survive in isolation
Ensure every single sentence explicitly names its subject. Vague pronouns like “this,” “it,” or “the above” become dead references once the sentence is extracted on its own.
Broken: “It also includes unlimited cloud storage.”
Anchorable: “The Dropbox Business Standard Plan includes 5TB of encrypted cloud storage.”
Rule 2: State relationships, don’t just list entities
Keyword stuffing introduces inference errors. Effective structured language explicitly states the relationship between nodes.
The keyword dump: “We offer SEO, PPC, and content marketing services.”
The structured relationship: “Our agency integrates PPC data into SEO strategies to lower the cost per acquisition (CPA) by an average of 15% within the first 90 days.”
“Ramon Eijkemans is a freelance SEO specialist at Eikhart.com, specializing in enterprise SEO for platforms with 100,000 or more pages. He developed the LLM Utility Analysis framework, a five-lens content scoring system that measures the likelihood of content being selected and cited by AI systems, covering structural fitness, selection criteria, extractability, entity and propositional completeness, and natural language quality, based on research into passage retrieval architectures, Google patent evidence, and proposition-based extraction systems. The framework is the subject of this Search Engine Land article.”
The AI inverted pyramid: Engineering ‘citation bait’
Research shows LLMs reliably extract claims near the beginning or end of a text, and extraction rates fall sharply as pages grow longer:
“Pages under 5,000 characters get about 66% of their content used. Pages over 20,000 characters? 12%. Adding more content dilutes your coverage.”
Here’s the four-step formula for citation bait.
The direct answer: Open with a dense, 40-60 word declarative statement answering the “who, what, why, or how.”
Context and detail: Follow up with nuance, maintaining high semantic density.
Structured evidence: Use bulleted lists, tables, or numbered steps (extractable data).
Follow-up alignment: Anticipate the next logical prompt in clearly labeled H2 or H3 subheadings.
Clear headings above a paragraph can improve its mathematical relevance (cosine similarity) to AI systems by up to 17.54%.
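The effect of a descriptive heading can be illustrated with a toy bag-of-words cosine similarity. This is an assumption-laden simplification: production retrieval systems use dense embeddings, and the 17.54% figure comes from those, not from this toy.

```python
from collections import Counter
from math import sqrt

# Toy bag-of-words cosine similarity between a query and a passage.
# (Assumption: real AI retrieval uses dense embeddings, not word counts.)
def cosine(a: str, b: str) -> float:
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    norm = sqrt(sum(v * v for v in va.values())) * sqrt(sum(v * v for v in vb.values()))
    return dot / norm if norm else 0.0

query = "dropbox business plan storage limit"
passage = "the plan includes 5tb of encrypted cloud storage"
# Same passage, but prefixed with a clear, entity-rich heading.
with_heading = "dropbox business standard plan storage " + passage

print(cosine(query, passage) < cosine(query, with_heading))  # True
```

Even in this crude model, prefixing the passage with a heading that names the entity raises its similarity to the query, which is the mechanism the research points at.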
To ensure your high-value pages are programmatically extractable, run these four stress tests on your mid-page copy.
The isolation test
The action: Select a single sentence completely at random from the middle of a webpage and read it in total isolation.
The goal: If the sentence relies on preceding paragraphs to make sense or uses vague pronouns (e.g., “This allows for…”), the page has a utility gap. Every sentence should be self-contained.
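A rough automated version of the isolation test might flag sentences that open with a vague pronoun. A heuristic sketch, assuming a naive sentence splitter and a short, incomplete pronoun list:

```python
import re

# Heuristic isolation test: flag sentences that open with a vague
# pronoun and therefore depend on surrounding context to make sense.
# (Assumption: the pronoun list and sentence splitter are simplifications.)
VAGUE_OPENERS = re.compile(r"^(this|it|these|those|they|the above)\b", re.IGNORECASE)

def isolation_test(text: str) -> list[str]:
    """Return sentences that likely fail the isolation test."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    return [s for s in sentences if VAGUE_OPENERS.match(s)]

copy = ("The Dropbox Business Standard Plan includes 5TB of encrypted cloud storage. "
        "It also includes unlimited device syncing.")
print(isolation_test(copy))  # flags only the second, pronoun-led sentence
```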
The context test (‘Scroll twice and read’)
The action: Scroll down twice on a homepage so the hero banner and primary H1 disappear, then start reading from wherever your eyes land.
The goal: If a reader (or a machine “chunking” that section) can’t immediately identify the product or service without the top visual layout, the mid-page text fails the context test.
The disambiguation test
The action: Read a mid-page sentence out loud and ask: Could this apply to the deforestation of the Amazon or a steamy romance novel?
The goal: If a sentence is wildly generic (e.g., “We empower our clients to achieve more”), an LLM will struggle to map it to your specific entity. Specifics prevent misinterpretation.
The URL accessibility test
The action: Run the live URL through an LLM agent or NotebookLM.
The goal: If convoluted JavaScript, heavy code bloat, or aggressive bot protection prevents an agent from “seeing” the raw text, generative search engines may skip the content entirely.
AI search content optimization FAQs
Here are answers to common questions about optimizing content for AI search.
Is generative engine optimization (GEO) a legitimate discipline?
Yes. Traditional SEO relies on bolt-on machine-readable code (such as schema markup) to make human narratives legible to search engines; AI search optimization requires embedding explicit entity relationships and structure directly inside the copy itself.
What is the ideal section length for chunking?
Open each section with a dense, 40-60-word declarative statement, and keep sections short: information buried deep in long paragraphs is rarely retrieved.
Does copywriting for AI search help traditional SEO?
Yes. Because Google uses vector embeddings to evaluate content at the passage level, structuring language for an LLM improves traditional visibility.
Is longer content better?
No. Density beats length. Pages under 5,000 characters see a 66% extraction rate, while pages over 20,000 characters plummet to 12%.
What is the inverted pyramid for AI copywriting?
The AI inverted pyramid means abandoning the slow, conversational introduction and placing your core entities, exact claims, and specific conditions in the very first sentence to guarantee flawless machine extraction.
The content creator is now a machine-readability engineer. Our job is to build narratives that are persuasive to humans while being programmatically extractable for neural networks.
If your content lacks explicit entity relationships, perfectly self-contained sentences, and highly “anchorable” citable claims, the machines will simply look right through you.
Google released the March 2026 spam update less than 24 hours ago, and it is already done rolling out. The update finished today at 10:40 a.m. ET.
The update was released yesterday (March 24) at 3:20 p.m. ET, meaning it took 19 hours and 20 minutes to fully roll out, which is super fast.
Why we care. This is the second Google algorithm update announced in 2026. It’s unclear what spam it targeted, but if you see ranking or traffic changes in the next few days, the Google March 2026 spam update could be the cause.
“While Google’s automated systems to detect search spam are constantly operating, we occasionally make notable improvements to how they work. When we do, we refer to this as a spam update and share when they happen on our list of Google Search ranking updates.
For example, SpamBrain is our AI-based spam-prevention system. From time-to-time, we improve that system to make it better at spotting spam and to help ensure it catches new types of spam.
Sites that see a change after a spam update should review our spam policies to ensure they are complying with those. Sites that violate our policies may rank lower in results or not appear in results at all. Making changes may help a site improve if our automated systems learn over a period of months that the site complies with our spam policies.
In the case of a link spam update (an update that specifically deals with link spam), making changes might not generate an improvement. This is because when our systems remove the effects spammy links may have, any ranking benefit the links may have previously generated for your site is lost. Any potential ranking benefits generated by those links cannot be regained.”
Impact. This update should only impact sites spamming Google Search, so hopefully you didn’t see any major negative impact.
Influencer content isn’t just a brand awareness play. It’s showing up in Google SERPs, Google AI Overviews, and AI answers, making keyword strategy an essential part of every influencer brief.
When we brief an influencer, we assign them a keyword. Not as a nice-to-have, but as a required part of the strategy, usually woven into the script, the caption, the on-screen text, and the hashtags.
That might sound like an SEO team overreaching into an influencer team’s lane. But in 2026, the lane lines don’t exist.
Social content is search inventory. If your influencer marketing program isn’t built around that reality, you’re leaving a significant and measurable share of voice on the table.
Search journeys now span platforms, formats, and sources
For most of search’s history, optimization meant ranking on Google. That’s still important, but it’s no longer the full story.
Over a third of consumers now prefer to start their search journey with AI tools like ChatGPT over Google. Platforms like YouTube, Instagram, and Pinterest have also become primary discovery engines for product research, how-to queries, and purchase decisions.
A user searches “best lightweight running shoes” on TikTok and watches three creator videos.
Then they ask ChatGPT for a comparison.
Next, they Google the brand’s reviews to read Reddit commentary and “What people are saying” content.
Then they navigate to a brand’s site.
Each of these touchpoints is a search moment, and there’s a strong chance they involve influencer content. The brands showing up at every step are the ones treating influencer marketing content as search content from the beginning.
Ross Simmonds, CEO of Foundation Marketing, shared with me:
“Influencers exist on practically every platform, whether we’re talking about LinkedIn, Reddit, Instagram, or TikTok. They’re creating content every day. When people search, whether through Google or directly on these platforms through things like Ask Reddit or TikTok search, they’re coming across content that influencers have created.”
“If those influencers understand best practices around search and discoverability, they’re more likely to create content that ranks not only on native platforms, but also directly in the SERP. That’s a marketer’s dream.”
What people are saying SERP feature for “best skin care for moms”
Google’s What people are saying SERP feature is a carousel that appears directly in search results and surfaces user-generated and creator content from platforms like YouTube, TikTok, LinkedIn, Instagram, and Reddit for relevant queries.
It’s now a default feature in U.S. search results and consistently shows up for mid- to bottom-of-funnel keywords, exactly where purchase decisions are made. A brand can appear in this SERP feature (either directly or indirectly via an influencer) without ranking in the traditional Top 10 results.
“Short videos” SERP feature for “skin routine for moms”
Additionally, the Short videos SERP feature is another prime spot for your influencer content to take up shelf space on Google. This means an influencer video optimized with the right SEO keyword can surface in multiple spots on Google for a commercial query your brand’s own site might never rank for.
It’s not theoretical. It’s happening now.
Google AI Mode referencing TikTok and Instagram content for a hair curling prompt
Meanwhile, AI answers are pulling from social content at scale. An analysis of 40 million AI search results found Reddit to be the single most-cited domain across ChatGPT, Copilot, and Perplexity. Ahrefs research confirms that YouTube mentions and branded web mentions are among the top factors correlating with AI brand visibility in ChatGPT, AI Mode, and AI Overviews.
“YouTube is the No. 1 cited domain for Gemini. And 35% of the channels getting cited have under 10K subscribers. We checked the correlation between views and citations. It’s basically zero.
“What actually correlates? How well the creator describes the topic in their video description. So if an influencer makes a video about your product and writes a lazy two-line description, you’re leaving AI visibility on the table.”
The more creators talk about your product with consistent language, the more confident AI becomes in recommending you. So if your influencer content doesn’t contain the SEO keywords your audience is actually searching for, it won’t be surfaced in all the places that matter.
Sample influencer brief with keyword included as a standard
Keyword research should be a standard step in every influencer campaign. Start by identifying your target keyword from data across three sources:
Existing keyword targets shared by the organic strategists.
In-platform searches for trending terms and/or suggested autocompletes.
AnswerThePublic searches for both brand and non-brand terms related to the campaign theme.
Once the keyword is identified, embed it into every element of the creator’s content:
Script: Spoken naturally, ideally in the first half of the video, where TikTok’s algorithm is most attentive to audio signals.
Caption: Written to open with or include the keyword, supporting both platform and Google indexing.
On-screen text: Reinforcing the keyword visually for accessibility and algorithm legibility.
Hashtags: Used to connect the content to the broader topic the keyword lives in.
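Those four placements can also be verified mechanically at review time. A sketch of such a compliance check (the post field names are assumptions, not a real platform API):

```python
# Hypothetical keyword-compliance check for reviewing influencer content
# against the four placements above. Field names are assumptions.
def keyword_compliance(post: dict, keyword: str) -> dict:
    kw = keyword.lower()
    checks = {field: kw in str(post.get(field, "")).lower()
              for field in ("script", "caption", "on_screen_text")}
    # Hashtags drop spaces, so compare against a collapsed form.
    checks["hashtags"] = kw.replace(" ", "") in str(post.get("hashtags", "")).lower()
    return checks

post = {
    "script": "If you're searching for the best running shoes right now...",
    "caption": "My honest best running shoes review",
    "on_screen_text": "BEST RUNNING SHOES 2026",
    "hashtags": "#bestrunningshoes #runtok",
}
print(keyword_compliance(post, "best running shoes"))  # all four pass here
```

Any placement that comes back false is a revision request before the content goes live.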
Don’t confuse this with keyword stuffing. It’s modern content architecture.
There’s a big difference between a creator naturally saying, “If you’re searching for the best running shoes right now…” versus a brand clunkily forcing a phrase into otherwise natural content. The influencer brief sets the requirement, yes, but the creator’s job is to incorporate their unique voice.
Ashley Liddell, co-founder and Search Everywhere director at Deviation, shared:
“We assign keywords to influencers based on real search behaviour across platforms, not just brand messaging, and map demand from TikTok, YouTube, Reddit, and Google, then align specific queries to creators whose content style and audience best fit that intent.
“Each brief gives a clear search-led direction, including topic, angles, and format, while leaving room for the creator’s own creativity. The goal is to make influencer content discoverable in-platform search while ensuring it remains engaging in-feed.”
Once the content is live, track whether the creator’s post is surfacing for the target keyword across:
The native platforms (e.g., TikTok, Instagram, etc.)
Google SERP features
Videos and Short videos carousel
What people are saying
Standard organic results
Screenshot and log positions immediately (because rankings can quickly shift). This data tells a story clients aren’t used to seeing from an influencer program.
Influencers extend your search everywhere footprint
Our search everywhere optimization framework
There’s a reason this matters beyond any individual campaign. Google organic CTRs have declined dramatically, by as much as 61% on queries where AI Overviews appear.
With Google SERP features increasingly highlighting video and social content, traditional web content is losing surface area on the SERPs. Social content, conversely, is gaining traction, and we cannot ignore this.
For brands, influencer content has taken on a much stronger value: scalable, authentic, human-first search inventory distributed across platforms where their audiences spend time. It doesn’t replace a traditional SEO program, but it extends reach into channels where creator voices tend to outperform brand-owned content.
Younger audiences search socially first. In some categories, a meaningful share of consideration-stage audiences see creator content before they ever search for your brand. If your influencers don’t use the language your audience searches, you’re invisible in the moments that matter most.
Search everywhere optimization comes down to one thing: showing up where your audience actually searches with content worth stopping for.
The operational reality: Putting things into practice
The biggest barrier to building keyword optimization into influencer programs is structural. SEO and influencer teams often sit within different parts of an organization, owned by different teams with different KPIs, and little reason to collaborate.
Even when those teams are close, a common hesitation remains: adding a keyword requirement to a creator brief may make the content feel scripted or inauthentic. That concern is valid, but somewhat misplaced. A keyword isn’t a constraint on creativity — it’s a topic signal.
Creators integrate talking points, product messaging, and brand language into their content all the time. A search term is no different, as long as the brief gives them room to use it in their own voice.
Closing that gap requires a few concrete changes.
SEO and influencer strategy should share a brief template. The target keyword, along with guidance on how to integrate it naturally, should be a standard field, not an afterthought. If the influencer lead and the SEO lead aren’t in the same briefing conversation, that’s the first thing to fix.
Keyword selection should be platform-specific. What users search on TikTok differs from what they search on Google. TikTok search is more conversational and trend-based. Pull keywords from TikTok’s own autocomplete, not just a traditional keyword tool, then validate on AnswerThePublic, and cross-reference with existing organic targets to find terms that work across surfaces.
Approval workflows should include keyword checks. When reviewing a script, a caption, or a live post, include a keyword compliance check. If the keyword is missing, ask the influencer for a revision before the content goes live. This sounds small, but it’s the difference between content that ranks and content that doesn’t.
Reporting should include search metrics. Did the post surface on TikTok for the target keyword? Did it appear in one of Google’s video sections or “What People Are Saying”? These are trackable, reportable metrics, and they belong in campaign reports alongside reach, engagement, and conversions.
Influencer content has always shaped brand perception. Today, it also shapes search visibility across social platforms, Google’s evolving SERP features, and AI-generated answers.
Brands that recognize this apply a search strategy to a channel that, until recently, operated without one: they treat every influencer video as search content, briefing keywords and reporting on search performance as they would for any other organic channel.
Influencer content is search inventory. The only question is whether you’re optimizing it.
Does schema markup really benefit AI search optimization? Some suggest it can 3x your citations or dramatically boost AI visibility. But when you dig into the evidence, the picture is far more nuanced.
Let’s separate what’s known from what’s assumed, and look at how schema actually fits into an AI search strategy.
How schema fits into AI search now
Search is shifting from surfacing a SERP with blue links to AI Overviews, generative answers, and chat‑style summaries that collate content in addition to links.
To get your content to appear in this model, your site has to be understood as entities — singular, unique things or concepts, such as a person, place, or event — and the relationships between them, not just strings of text.
Schema markup is one of the few tools SEOs have to make those entities and relationships explicit and understandable for an AI: This is a person, they work for this organization, this product is offered at this price, this article is authored by that person, etc.
For AI, three elements matter the most:
Entity definition: Which brands, authors, services, or SKUs exist on the page.
Attribute clarity: Which properties belong to which entity (e.g., prices, availability, ratings, job titles).
Entity relationships: How entities connect (e.g., the offeredBy, worksFor, author, and sameAs properties).
When schema is implemented with stable identifiers (@id) and a connected structure (@graph), it starts to behave like a small internal knowledge graph.
AI systems won’t have to guess who you are and how your content fits together, and will be able to follow explicit connections between your brand, your authors, and your topics.
Two major platforms, Microsoft Bing and Google, have confirmed that schema markup helps their AI systems understand content. For them, it is confirmed infrastructure, not speculation.
What about ChatGPT, Perplexity, and other AI search platforms?
We don’t know how these platforms use schema yet. They haven’t publicly confirmed whether they preserve schema during web crawling or use it for extraction. The technical capability exists for LLMs to process structured data, but that doesn’t mean their search systems do.
This doesn’t mean schema is useless; it means schema alone doesn’t drive citations. LLM systems appear to prioritize relevance, topical authority, and semantic clarity over whether content carries structured markup.
Put differently, LLMs perform best when you give them a structured form to fill out, not a blank canvas. When models are asked to extract into predefined fields, they make fewer errors than when told to simply “pull out what matters.”
Schema markup on a page is the web equivalent of that form: a set of explicit entity, brand, product, price, author, and topic fields that a system can map to, rather than inferring everything from unstructured prose.
What the research tells us
Research on structured extraction tells us that LLMs have the technical capability to process structured data more accurately than unstructured text.
However, this doesn’t tell us whether AI search systems preserve schema markup during web crawling, whether they use it to guide extraction from web pages, or whether this results in better visibility.
The leap from “LLMs can process structured data” to “web schema markup improves AI search visibility” requires assumptions we can’t verify for most platforms.
For Microsoft Bing and Google AI Overviews, schema likely improves extraction accuracy, since they’ve confirmed they use it. For other platforms, we don’t have confirmation of actual implementation.
AI search is so new — for example, ChatGPT search only launched in October 2024 — that companies haven’t disclosed their indexing methods. Measurement is difficult with non-deterministic AI responses. There are significant gaps in what we can verify.
To date, there are no peer-reviewed studies on schema’s impact on AI search visibility, or controlled experiments on LLM citation behavior and schema markup.
OpenAI, Anthropic, Perplexity, and other platforms besides Microsoft or Google haven’t published their indexing methods.
How schema builds an entity graph
In traditional SEO, many implementations stop at adding Article or Organization markup in isolation. For AI search, the more useful pattern is to connect nodes into a coherent graph using @id. For example:
An Organization node with a stable @id that represents your brand.
A Person node for the author who works for your organization.
An Article node whose author is that person and whose publisher is that organization, with about properties that declare the main topics.
That connected pattern turns your schema from a set of disconnected hints into a reusable entity graph. For any AI system that preserves the JSON‑LD, it becomes much clearer which brand owns the content, which human is responsible for it, and what high‑level topics it is about, regardless of how the page layout or copy changes over time.
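As an illustration, here is a minimal JSON-LD sketch of that connected pattern. All URLs, names, and @id values are placeholders; on a real site, each @id should be a stable URL you control and reuse across pages:

```json
{
  "@context": "https://schema.org",
  "@graph": [
    {
      "@type": "Organization",
      "@id": "https://example.com/#organization",
      "name": "Example Co",
      "url": "https://example.com/"
    },
    {
      "@type": "Person",
      "@id": "https://example.com/about/#jane-doe",
      "name": "Jane Doe",
      "worksFor": { "@id": "https://example.com/#organization" }
    },
    {
      "@type": "Article",
      "@id": "https://example.com/blog/schema-for-ai-search/#article",
      "headline": "Schema markup for AI search",
      "author": { "@id": "https://example.com/about/#jane-doe" },
      "publisher": { "@id": "https://example.com/#organization" },
      "about": [
        { "@type": "Thing", "name": "structured data" },
        { "@type": "Thing", "name": "AI search" }
      ]
    }
  ]
}
```

Because the Person and Organization nodes are referenced by @id rather than re-declared inline, any page that emits this graph points back to the same brand and author entities.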
| Aspect | Traditional SEO schema | Entity graph schema |
| --- | --- | --- |
| Structure | Single @type object per page | @graph array of interconnected nodes |
| Entity ID | None (anonymous) | Stable @id URLs reused across the site |
| Relationships | Nested, one-way (author: "name") | Bidirectional via @id references (worksFor, author) |
| Primary benefit | Rich snippets, SERP CTR | Entity disambiguation, extraction accuracy for AI |
| AI impact | Minimal (tokenization often strips it) | Makes the site a unified knowledge graph source, if preserved |
Recommendations for implementing schema for AI search
For AI search, the best way to position schema right now is to:
Make entities and relationships machine-readable for platforms that preserve and use structured data (confirmed for Bing Copilot and Google AI Overviews).
Reduce ambiguity around brand, author, and product identity so that extraction, when it happens, is cleaner and more consistent.
Complement topical depth, authority, and clear brand signals, not replace them.
Use schema markup for:
Improving visibility in Bing Copilot.
Supporting inclusion in Google AI Overviews.
Enhancing traditional SEO.
Making content easier to parse (good practice regardless of AI).
Maintaining a low-cost implementation with potential upside as platforms evolve.
However, don’t expect:
Guaranteed citations in ChatGPT or Perplexity.
A dramatic visibility lift from schema alone.
Schema to compensate for weak content or low authority.
Priority schema types (based on platform guidance) include:
Organization (brand entity identity).
Article or BlogPosting (content attribution and authorship).
Schema markup is infrastructure, not a magic bullet. It won’t necessarily get you cited more, but it’s one of the few things you can control that platforms such as Bing and Google AI Overviews explicitly use.
The real opportunity isn’t schema in isolation. It’s the combination of structured data with proper entity relationships, high-quality, topically authoritative content, clear entity identity and brand signals, and the strategic use of @graph and @id to build entity connections.
You launch a new TikTok ad. Early metrics look great — low CPCs, high engagement, and a ROAS that makes you look like a pro. Then, a few days later, performance slips.
Ad frequency creeps up, the hook rate drops, and you’re suddenly back at the drawing board.
Some call it creative fatigue. On TikTok, it’s closer to creative exhaustion.
A TikTok ad’s “half-life” is shorter than on any other platform. If you’re still treating it like a Meta ad campaign, you’ll lose.
To win, treat creative like a supply chain, not a campaign asset.
Why TikTok creative decays so quickly
On intent-based platforms like Google, Amazon, or Pinterest, people search for things. On social platforms, people connect with friends, family, and other people. On TikTok, above all, people come for entertainment (though they still discover products and people along the way).
TikTok’s algorithm favors variety, and you consume content at lightning speed. The moment something feels repetitive or stale, you swipe.
Your creative decays faster because the platform runs on high-velocity novelty. You’re competing with thousands of creators and brands.
If your process relies on long feedback loops — from storyboarding to shooting to editing — you’ll fall behind. By the time your ad goes live, the trend has shifted, the audio is dated, the hooks are stale, and your audience has moved on.
Use ongoing content capture to avoid bottlenecks and keep up with TikTok’s shrinking content half-life.
Modular creative: Record five hooks, three body segments, and four CTAs. Get 60 ad permutations from one hour of filming. Block time on your calendar to shoot.
Creator-in-residence: Don’t rely on one-off shoots. Hire creators in-house or on retainer to capture footage and document the brand daily. Make content creation more efficient and effective.
The 80/20 fidelity rule: Keep 80% of your content lo-fi and native, as if it were shot on a phone. Use the other 20% for higher-production, polished hero assets. Blend into the feed, maximize performance, and elevate your brand where it matters.
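The modular math behind the first tip is simple combinatorics: every hook can pair with every body and every CTA. A quick illustrative sketch (the asset names are placeholders, not real files):

```python
from itertools import product

# Placeholder asset labels standing in for real clips
hooks = [f"hook_{i}" for i in range(1, 6)]    # 5 hook variations
bodies = [f"body_{i}" for i in range(1, 4)]   # 3 body segments
ctas = [f"cta_{i}" for i in range(1, 5)]      # 4 CTAs

# Every hook x body x CTA combination is a distinct ad permutation
permutations = list(product(hooks, bodies, ctas))
print(len(permutations))  # 5 * 3 * 4 = 60
```

Sixty distinct ads from one filming session is what makes the modular approach viable against TikTok's short creative half-life.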
Every high-performing TikTok ad can be broken down into three distinct modules.
The hook (0:00-0:03)
The most volatile part. It stops the scroll and fatigues fastest.
Film 5–7 variations for each concept. Use pattern interrupts—start mid-action, zoom in, throw a box. Try a negative constraint: “Stop doing [common mistake] if you want [result].”
Use green screen reactions with trending news or customer reviews as the backdrop, with your commentary over it. Strong statements and questions keep it open-ended.
The body (0:04-0:15)
This is where you retain attention, deliver value, and show the “why” or “how.” It’s more educational or narrative and lasts longer than the hook.
Test “us vs. them” in a split-screen showing your product solving a common problem.
Test first-person use in real settings—at home, in the kitchen, outside, at the gym, or at work.
The CTA (last 3-5 seconds)
This is where you close. Test psychological triggers to see what moves the needle:
Use scarcity: “Our last drop sold out in 48 hours—don’t miss this one.”
Test low-friction angles: “Take the 2-minute quiz to find your best fit.”
Offer incentives beyond “Shop Now” or “Link in bio”: “Use code (X) for (% off) your first order.”
When a winning ad fatigues, don’t kill it. Keep the body and CTA, swap in a new hook. TikTok weights the first seconds for audience matching — use that to reset fatigue and extend performance.
When to pause or reallocate
A common mistake is cutting an ad too soon and missing its potential—or letting it run too long and wasting budget.
Your intuition matters, but TikTok’s algorithm sees more. An ad may fatigue with one audience and find a second life with another, so don’t give up too quickly. Here’s when to pause and when to move it elsewhere:
Kill signal: If your thumb-stop rate (3-second views/impressions) drops below your benchmark for three straight days, your hook isn’t working—pause it. If your hook is very fast, use 2-second views/impressions.
Iterate signal: If engagement is high but conversions are low, your creative may work, but your offer, CTA, or landing page is adding friction.
Algorithm reallocation: Before you delete any asset, test broad targeting — especially with Smart+ campaigns. Let the algorithm find a new audience that hasn’t seen your ad and compare performance to manual targeting.
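The kill signal above is mechanical enough to sketch in code. This is an illustrative Python snippet, not a TikTok API; the function names, the 20% benchmark, and the sample numbers are all hypothetical:

```python
def thumb_stop_rate(three_sec_views: int, impressions: int) -> float:
    """Thumb-stop rate: 3-second views divided by impressions (0.0 if none)."""
    return three_sec_views / impressions if impressions else 0.0

def should_pause(daily_stats, benchmark: float, streak: int = 3) -> bool:
    """Kill signal: True when the most recent `streak` days all fall below benchmark.

    daily_stats: list of (three_sec_views, impressions) tuples, oldest first.
    """
    if len(daily_stats) < streak:
        return False  # not enough data yet to call it
    recent = daily_stats[-streak:]
    return all(thumb_stop_rate(v, i) < benchmark for v, i in recent)

# Hypothetical benchmark of 20%; last three days run 15%, 17%, 16%
stats = [(2_400, 10_000), (1_500, 10_000), (1_700, 10_000), (1_600, 10_000)]
print(should_pause(stats, benchmark=0.20))  # prints True: pause the hook
```

Swap in 2-second views for fast hooks, as noted above; the decision rule stays the same.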
With fast iteration cycles, your TikTok budget can’t be static. Dedicate 20% to 30% of your monthly budget to testing new creative concepts. This budget isn’t for hitting your target ROAS — it’s for buying data and insight.
Once you find a winner, move it into scaling campaigns. This prevents performance from dropping when a single creative hits its half-life.
Brands winning on TikTok aren’t the ones with the biggest budgets or name recognition. They create and test the most.
Capture everything—packaging, shipping, unboxings, product use, customer testimonials—as raw material in your creative supply chain. Shorten the distance between a brand event and launch.
The shrinking ad half-life won’t slow you down. It will become your advantage.
For the past several years, marketing strategy has reorganized itself around a simple premise. Third-party data is fading. Privacy expectations are rising. The solution, we are told, is first-party data.
Collect more of it. Centralize it. Build the customer view around it.
In many ways, the shift was necessary. Direct relationships with customers are more durable than rented audiences. Consent and transparency matter. Organizations that invested early in their own data ecosystems are better positioned today than those that relied entirely on external signals.
But the industry’s confidence in first-party data has grown so strong that it now obscures a more complicated reality.
Owning customer data does not automatically translate into understanding customers.
Most marketing leaders have sensed this tension already. Despite increasingly sophisticated technology stacks, many organizations still struggle with familiar questions. Which records represent active individuals? Which identities are stale or misattributed? How much of the customer view reflects current behavior versus historical assumptions?
These are not philosophical concerns. They surface in everyday operational decisions. Campaigns that reach fewer real customers than expected. Personalization efforts that plateau. Measurement models that appear precise but produce inconsistent outcomes.
The problem is not the absence of data. If anything, the opposite is true.
The problem is the assumption that the data sitting inside our systems still reflects reality.
When first-party data becomes historical data
One of the quiet characteristics of customer data is how quickly it shifts from present tense to past tense.
Most organizations gather identity information at moments of interaction. Account creation, purchases, subscriptions, service requests. These events create durable records that enter CRM systems, marketing platforms and data warehouses.
From that point forward, the records largely persist as they were captured.
What changes is the world around them.
Consumers rotate devices. Email addresses evolve from primary to secondary. People move, change jobs, create new accounts, abandon others. Behavioral patterns shift with new platforms, new habits, and new privacy controls.
The record still exists, but the certainty surrounding the identity begins to loosen.
Marketing teams encounter this reality in subtle ways. Lists that appear healthy but deliver diminishing engagement. Customer profiles that fragment across systems. Identity graphs that require constant reconciliation as signals drift out of alignment.
None of this means first-party data is wrong. It simply means it ages.
The moment of collection is precise. The months and years that follow are less so.
The distance between records and reality
The idea of a unified customer profile has become foundational to modern marketing infrastructure. Customer data platforms, identity graphs and advanced analytics environments all attempt to bring scattered signals together into a coherent picture.
When the signals align, the results can be powerful.
But the effectiveness of these systems depends heavily on the integrity of the identifiers entering them. Email addresses, login credentials, device associations and other identity anchors serve as the connective tissue between records.
When those anchors drift or degrade, the unified profile begins to lose clarity.
This is not a failure of the technology itself. Most identity platforms perform exactly as designed. They connect the signals available to them.
The challenge is that many of those signals were captured months or years earlier, during moments when the system had limited visibility into the broader identity context surrounding the individual.
As the digital environment evolves, the original record becomes one reference point among many.
Marketing leaders recognize this gap when their systems produce technically accurate profiles that still fail to explain current customer behavior. The database reflects what was known. The customer reflects what is happening now.
Closing that gap requires something more dynamic than stored attributes alone.
The value of activity signals
In recent years, some organizations have begun looking beyond the traditional boundaries of customer records and focusing more closely on signals that indicate whether an identity is still active within the broader digital ecosystem.
Activity signals provide a different kind of intelligence.
Instead of asking what information was collected about a customer in the past, they ask whether the identity attached to that information continues to exhibit real-world behavior today.
Is the email address still being used?
Does the identity appear in recent digital interactions?
Are the signals surrounding it consistent with genuine consumer activity?
These questions are becoming increasingly important for teams responsible for both growth and risk management.
For marketing, activity signals help clarify which audiences remain reachable and which identities have quietly gone dormant. For fraud teams, they help differentiate legitimate consumers from synthetic identities that appear valid on the surface but lack authentic behavioral patterns.
Both disciplines are ultimately trying to answer the same question.
Does this identity correspond to a real person who is active in the digital world right now?
Stored data alone rarely answers that question with confidence.
A more durable identity anchor
Among the many identifiers circulating through the digital ecosystem, one has proven particularly resilient over time.
Email.
For decades it served as both a communication channel and a persistent identity anchor. It appears in authentication systems, commerce transactions, subscriptions, customer service interactions and countless other digital touchpoints.
That ubiquity produces a secondary effect. Email addresses generate a continuous stream of activity signals that reflect how identities move through the online world.
When those signals are analyzed across large networks, they reveal patterns that extend far beyond a single company’s customer database.
They can indicate whether an identity is actively engaged in digital life or has fallen silent. They can highlight inconsistencies that suggest risk. They can surface connections that help reconcile fragmented customer views.
In other words, they transform a simple identifier into a dynamic indicator of identity health.
Organizations that understand this dynamic tend to treat email differently. It becomes less of a campaign endpoint and more of a reference point for understanding identity across channels.
Rethinking what it means to know the customer
Over the past decade, marketing technology has made extraordinary progress in storing and organizing customer data. Few organizations today lack the infrastructure to capture and analyze enormous volumes of information.
The next frontier is not accumulation. It is validation.
Knowing a customer increasingly depends on the ability to verify that the identities inside a database still correspond to real individuals with ongoing digital activity.
This shift changes how teams think about data quality.
Instead of focusing solely on completeness, forward-looking organizations pay closer attention to vitality. Which identities remain active. Which have quietly faded. Which exhibit patterns that suggest fraud or synthetic creation.
These distinctions influence everything from campaign reach to attribution accuracy to risk exposure.
When identity signals are strong, the rest of the marketing ecosystem performs more reliably. Personalization becomes more relevant. Measurement reflects real outcomes. Customer experiences align more closely with actual behavior.
When identity signals weaken, even the most advanced tools begin operating on uncertain ground.
Moving beyond the illusion
The industry’s embrace of first-party data was an important correction after years of dependence on opaque third-party sources.
But ownership alone does not guarantee clarity.
Customer records capture moments in time. The people behind them continue to evolve.
For organizations that want to truly understand their customers, the challenge is no longer simply collecting data. It is maintaining an accurate connection between stored identities and real-world activity.
That requires looking beyond the database itself and paying closer attention to the signals that reveal whether an identity remains alive in the digital ecosystem.
Companies that make that shift discover something important.
The most valuable customer data is not the information they collect once.
It is the intelligence that helps them keep that data connected to real people over time.
Google released its March 2026 spam update today at 3:20 p.m. It’s the second announced Google algorithm update of 2026, following the February 2026 Discover core update.
This is the first spam update of 2026.
Google’s most recent spam update was in August 2025.
Timing. This update may only “take a few days to complete,” Google said. On LinkedIn, Google added:
“This is a normal spam update, and it will roll out for all languages and locations. The rollout may take a few days to complete.”
Why we care. It’s unclear what spam this update targets, but if you see ranking or traffic changes in the next few days, this update could be the cause.
About spam updates. Google explains:

“While Google’s automated systems to detect search spam are constantly operating, we occasionally make notable improvements to how they work. When we do, we refer to this as a spam update and share when they happen on our list of Google Search ranking updates.
For example, SpamBrain is our AI-based spam-prevention system. From time-to-time, we improve that system to make it better at spotting spam and to help ensure it catches new types of spam.
Sites that see a change after a spam update should review our spam policies to ensure they are complying with those. Sites that violate our policies may rank lower in results or not appear in results at all. Making changes may help a site improve if our automated systems learn over a period of months that the site complies with our spam policies.
In the case of a link spam update (an update that specifically deals with link spam), making changes might not generate an improvement. This is because when our systems remove the effects spammy links may have, any ranking benefit the links may have previously generated for your site is lost. Any potential ranking benefits generated by those links cannot be regained.”
Reddit is rolling out new Dynamic Product Ad features, including a shoppable Collection Ads format and Shopify integration, the company announced today.
What’s new.
Collection Ads: A new Dynamic Product Ad format that pairs a lifestyle hero image with shoppable product tiles in one carousel, bridging discovery and purchase. Early adopters following best practices are seeing an 8% ROAS lift.
Community and Deal overlays: Reddit-native labels like “Redditors’ Top Pick” and automatic discount callouts surface social proof and pricing signals without extra work from you.
Shopify integration: Now in alpha, this simplifies catalog and pixel setup for new DPA advertisers, automatically matching products to the right users and context.
The numbers. Reddit DPA delivered an average 91% higher ROAS year over year in Q4 2025. Liquid I.V. reports DPA already accounts for 33% of its total platform revenue and outperforms its other conversion campaigns by 40%.
Why now. Reddit has seen a 40% year-over-year increase in shopping conversations. Also, 84% of shoppers say they feel more confident in purchases after researching products on Reddit.
Why we care. The new tools, especially the Shopify integration, lower the barrier to getting started with Dynamic Product Ads. Reddit might still be viewed by some as an undervalued paid media channel, but there’s an opportunity to get in before competition and costs rise.
Bottom line. Reddit is increasingly a serious performance channel for ecommerce, and these tools make it easier to get started. If you’re not yet running DPA on Reddit, the combination of undervalued inventory and improving ad formats makes this a good time to test.
AI search citations favor a small set of formats. Listicles, articles, and product pages drive over half of all mentions across major LLMs, according to new Wix Studio AI Search Lab research analyzing 75,000 AI answers and more than 1 million citations across ChatGPT, Google AI Mode, and Perplexity.
The findings. Listicles led at 21.9% of citations, followed by articles (16.7%) and product pages (13.7%). Together, these three formats made up 52% of all AI citations.
Articles dominated informational queries, cited 2.7x more than other formats.
Listicles captured 40% of commercial-intent citations, nearly double any other type.
Why intent wins. Query intent — not industry or model — most strongly predicts which content gets cited. This pattern held across industries, from SaaS to health.
Informational queries skewed heavily toward articles (45.5%) and listicles (21.7%).
Commercial queries were led by listicles (40.9%).
Transactional and navigational queries favored product and category pages (around 40% combined).
Why we care. This research indicates that you want to map content types to user goals rather than just creating more content. Articles educate, listicles drive comparison, and product pages convert. Aligning content format with user intent could help you capture more AI citations and increase visibility.
Not all listicles perform equally. Third-party listicles accounted for 80.9% of citations in professional services, compared to 19.1% for self-promotional lists. That seems to indicate LLMs prefer neutral, editorial comparisons over brand-led rankings.
Model differences. All models favored listicles, but diverged after that.
ChatGPT leaned heavily into articles and informational content.
Google AI Mode showed the most balanced distribution.
Perplexity stood out, with 17% of citations coming from discussions like Reddit and forums.
Industry patterns. Content preferences shifted slightly by vertical:
SaaS and professional services over-indexed on listicles.
Health favored authoritative articles.
Ecommerce spread citations across listicles, articles, and category pages.
Home repair showed the most even distribution across formats.
A quiet but important policy update is coming to Google Shopping ads next month, requiring some merchants to verify their accounts before running ads featuring political content.
What’s changing. From April 16, merchants running Shopping ads with certain political content in nine countries will need to verify their Google Ads account as an election advertiser. Google will also outright prohibit some political Shopping ads in India.
The countries affected. Argentina, Australia, Chile, Israel, Mexico, New Zealand, South Africa, the United Kingdom, and the United States.
Why we care. Shopping ads aren’t typically associated with political advertising — this update signals that Google is broadening its election integrity efforts beyond search and display into commerce formats. Merchants selling politically themed merchandise, campaign materials, or other related products in the affected countries need to act before the April 16 deadline.
What to do now.
Review the updated policy language to determine if your Shopping ads feature content that falls under the new restrictions
If affected, apply for election advertiser verification through Google Ads before April 16 to avoid disruption to your campaigns
The bottom line. This affects a narrow but specific set of merchants — but the consequences of missing the deadline could mean ads being disapproved or accounts being flagged. If you sell anything with a political angle in the listed countries, check your eligibility now.