
Today — 23 April 2026 · Search Engine Land

OpenAI adds CPC ads to ChatGPT

22 April 2026 at 23:12

OpenAI is shifting its ad model inside ChatGPT from pure impressions to performance, a move that puts it in more direct competition with Google’s core business.

What’s happening. OpenAI has begun testing cost-per-click (CPC) ads within ChatGPT, allowing advertisers to pay only when users click rather than when ads are shown. Early reports suggest clicks are being priced in the $3 to $5 range, and the feature is rolling out through a limited ads manager alongside the earlier CPM-based model.

Why now. Pricing pressure appears to be a key driver. ChatGPT’s CPMs have fallen significantly since launch, dropping from around $60 to closer to $25 in some cases. Moving to CPC helps offset that decline by tying revenue to measurable outcomes instead of impressions.
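
To put those numbers in context, here is a quick break-even calculation. The ~$25 CPM and $3-$5 CPC figures come from the reports above; nothing here is official OpenAI pricing, and the math is purely illustrative:

```python
# Illustrative arithmetic only: the CPM/CPC figures are the ranges reported
# above, not official OpenAI pricing.
def breakeven_ctr(cpm: float, cpc: float) -> float:
    """CTR at which CPC revenue per 1,000 impressions matches a given CPM."""
    clicks_needed = cpm / cpc  # clicks per 1,000 impressions to match the CPM
    return clicks_needed / 1000  # expressed as a fraction of impressions

# At a ~$25 CPM, a $4 CPC needs only a 0.625% CTR to earn the same revenue
# per 1,000 impressions; anything above that out-earns the CPM model.
print(f"{breakeven_ctr(25, 4):.4%}")  # 0.6250%
```

In other words, at a click-through rate above roughly 0.6%, a $4 CPC already out-earns the current ~$25 CPM, which is one way CPC pricing can offset falling impression prices.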

Why we care. ChatGPT is evolving into a performance channel, not just a branding environment. With CPC pricing, budgets can now be tied directly to measurable actions, making it easier to test ROI and compare against channels like Google Search.

It also opens early access to a potentially high-intent audience in a new format, giving advertisers a chance to gain a first-mover advantage before competition — and costs — increase.

The bigger picture. This is more than a pricing adjustment — it’s a strategic shift. CPC advertising has long been dominated by Google, built on strong user intent signals. By adopting the same model, OpenAI is positioning ChatGPT to compete for performance marketing budgets, not just brand spend, effectively turning the product into a full-fledged ad platform.

Between the lines. The real challenge will be proving intent. Search advertising works because users actively look for something; ChatGPT must demonstrate that its conversational context can generate similarly valuable clicks. Advertisers are likely to benchmark performance directly against Google, raising the bar for quality and conversion.

Zoom out. Advertising is becoming central to OpenAI’s long-term revenue strategy, with investments in ad infrastructure, measurement tools and a broader self-serve platform.

Bottom line. By introducing CPC ads, OpenAI is now competing for the performance-driven ad dollars that power the veteran search platforms.

Yesterday — 22 April 2026 · Search Engine Land

Google Ads adds app consent diagnostics to improve privacy performance

22 April 2026 at 20:45

Google is rolling out App Consent Insights in Google Ads, giving advertisers a clearer view into how consent signals impact app campaign performance.

What’s new. The new diagnostics view breaks down consent data across apps, platforms, regions, and traffic sources, helping marketers pinpoint gaps in their setup.

Zoom in. Advertisers can see an overall consent rating — like “Excellent,” “Good,” or “Poor” — alongside a live count of apps actively sending consented data. A detailed table also shows consent rates for conversions, including splits between EEA and non-EEA users.

Why we care. As privacy regulations tighten, consent isn’t just a compliance box — it directly affects measurement and optimization. Advertisers now get more visibility into where consent setups may be limiting performance.

Between the lines. Google is making consent more measurable — and more actionable — as signal loss continues to impact campaign performance.

What to watch. Expect advertisers to start optimizing not just for conversions, but for consent rates themselves as a performance lever.

Bottom line. Better consent visibility means better data — and ultimately, better campaign outcomes.

First seen. This update was first spotted by Google Ads specialist Thomas Eccel on LinkedIn.

Advertisers test ChatGPT Ads Manager

22 April 2026 at 20:37

Advertisers are sharing their experience of a new Ads Manager interface for ChatGPT, signaling a shift toward a more mature advertising platform with real-time campaign control.

What’s new. The Ads Manager is described as a dashboard where marketers can run, monitor, and optimize campaigns in real time — a major step up from current reporting and controls. Digital marketers Juozas Kaziukėnas and Glenn Gabe shared images of what they saw.

Why we care. Until now, ChatGPT ads have been early-stage and limited, with advertisers reportedly relying on basic reporting like weekly CSV files. The move to a full Ads Manager suggests OpenAI is building infrastructure closer to platforms like Google Ads or Meta.

Zoom in. Advertisers are already seeing more ads appear inside ChatGPT, with brands like Best Buy and Expedia spotted in early tests. That increase in inventory, paired with a proper management interface, points to a rapid expansion of monetization efforts.

What to watch. As the Ads Manager evolves, expect improvements in targeting, reporting, and automation — areas where early feedback suggests ChatGPT ads are still limited.

First seen. Glenn Gabe shared the images of the ChatGPT ads manager on X.

Google changes budget pacing rules for scheduled campaigns

22 April 2026 at 19:05

Google is updating how Google Ads paces budgets for campaigns using ad schedules, shifting toward full monthly spend targets regardless of how many days ads actually run.

What’s changing. Starting June 1, campaigns will pace toward the full monthly budget limit (30.4x the daily budget), even if ads are only eligible to run on certain days. Previously, pacing was typically based on the number of active days in the schedule.

What’s not changing. Daily and monthly caps remain the same. Campaigns still won’t exceed 2x the daily budget in a single day or 30.4x over a month, and ads won’t serve on disabled days.

Why we care. Advertisers using limited schedules — like weekdays only or specific hours — may see spend accelerate, as Google now aims to hit the full monthly cap instead of scaling down on active days.

Zoom in. This means campaigns with fewer serving days can spend more aggressively on those days. For example, if ads run only half the month, Google can hit the daily max each day without needing to pull back elsewhere — and still stay under the monthly cap.
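
A quick sketch with hypothetical numbers makes the change concrete. The 30.4x monthly and 2x daily multipliers are the caps described above; the $100 daily budget and 15-day schedule are invented for illustration:

```python
# Hypothetical numbers to illustrate the pacing change described above.
daily_budget = 100.0
monthly_cap = 30.4 * daily_budget  # ~$3,040 cap, unchanged by this update
daily_max = 2 * daily_budget       # $200 single-day cap, also unchanged

# A weekday-only schedule serving roughly 15 days in the month:
serving_days = 15

# Old behavior (roughly): pace toward the daily budget on active days only.
old_expected_spend = serving_days * daily_budget  # ~$1,500

# New behavior: pace toward the full monthly cap, so Google can spend
# up to the daily max on every serving day.
new_max_spend = min(serving_days * daily_max, monthly_cap)  # $3,000

print(old_expected_spend, new_max_spend)
```

With half the month active, spend can roughly double on serving days while still respecting both the daily and monthly caps, which is exactly the concentration effect advertisers with tight schedules should watch for.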

Between the lines. Google is prioritizing full budget utilization over evenly distributed spend, giving its systems more flexibility to capture demand when campaigns are eligible to run.

What to watch. Advertisers with tight schedules may need to revisit budgets and performance expectations, as spend could concentrate more heavily on active days.

Bottom line. Budget pacing is becoming less about when ads run — and more about ensuring the full budget gets spent.

First seen. Several advertisers reported receiving the notice from Google, and Google Ads Coach Jyll Saskin Gales clarified on LinkedIn what the update means and what isn’t changing.

Want to increase visibility? Start by building trust

22 April 2026 at 19:00

Attention is fragmenting further every day as the platforms providing information continue to multiply.

There are new players on the scene, like AI search, while companies build proprietary spaces through social networks and communities. Smaller spaces pop up daily through vibe-coded apps. Many of these platforms are noisier than ever, with everyone demanding our attention at once.

We’re drowning in information, and trust is eroding in sources like search engines and social media. We still use these platforms for research, but go elsewhere to validate what we find and make decisions.

We’re shifting back to a source we’ve trusted since the beginning: other people. That means showing up across multiplying platforms and in as many people-led sources as possible.

Search is a trust experience

Rachel Botsman is a leading expert and author on trust in the modern world. Botsman defines trust as:

  • “A confident relationship with the unknown.”

I’ve read tons of different definitions of trust, but this is by far my favorite. It’s the simplest and touches on the core component of dealing with the unknown or uncertainty.

We don’t need trust when outcomes feel certain. We need trust when we’re dealing with the unknown.

Searching for information is what humans do when they’re uncertain. There are three trust layers that occur every time we search for information:

  • Self-trust (I’m uncertain): I don’t trust that I have the information I need to make a decision at this moment in time.
  • Platform trust (where I trust to search for answers): Which platform, community, or real-world space do I trust to find answers to my questions?
  • Source trust (whose or what information I act on): Do I trust this enough to believe it, click on it, buy it, let it guide me, or change my mind? People can absolutely skip platform trust and jump directly here.

Searching for information is a trust experience from start to finish. It’s a human behavior, and, as we’ll discover, the best way to support human behavior is through other humans.

An example of my own search journey to find a trusted answer

Here’s what a recent search journey of mine looked like when I was interested in buying a new pair of shoes.

I started with AI tools and did some low-trust research, getting a list of options that met my requirements from ChatGPT and cross-referencing that list with Claude’s output.

Then I wanted a sense of pricing and delivery timelines (high trust), so I quickly read through reviews while I was still working with the AI outputs (low trust). I searched Amazon for the options surfaced by ChatGPT and Claude, read reviews, got pricing, and noted who ships the quickest.

From there, I moved on to Google and found my medium-trust people sources. I checked Reddit for brand and model commentary, read third-party articles on running sites and from running influencers, and watched YouTube video breakdowns.

Then I got bombarded with low-trust advertising on social media, seeing retargeting ads everywhere.

Finally, I turned to my high-trust people sources. I asked a trusted running community, a neighbor I often see running, and my dad, a former marathon runner. I also went to a running shop and spoke with the sales team.

Search journeys now span dozens of platforms and sources

Yext’s 2025 research of 2,237 global consumers found that more platforms are being used in a single search journey:

  • Approximately 75% of consumers use new search tools more today than they did one year ago.
  • Just 10% trust the first result, while 48% of consumers cross-check answers across platforms.

These results very much mirrored my personal search experience. I hit roughly 65 sources in my search journey: 

  • Two AI tools, hitting ~10 links in each. 
  • Amazon, hitting ~15 products with reviews.
  • Google, scanning ~10 Reddit threads, approximately five third-party sites, and five YouTube videos.
  • Social media, seeing ~10 retargeting ads.
  • Community, receiving seven direct replies.
  • Conversations, three directly with other people. 

In a similar vein, Expedia’s The Path to Purchase research found that huge amounts of source content are now consumed by travelers planning a trip. In the 45 days prior to booking travel, users spend an average of 303 minutes viewing ~141 pages of travel content.

Of my 65 sources, 45 were people-led. The same trend shows up in professional decisions: the Censuswide global professionals sentiment study commissioned by LinkedIn found that 43% of people rate their professional network as their most trusted source, ahead of search engines and AI tools.

And the 2026 Edelman Trust Barometer shows a general trend of uncertainty rising and people placing their trust in the people closest to them.

Time and time again, we see that when people feel uncertain and need trusted advice, they often turn to others.

So how do you turn trust into visibility?

During someone’s search journey, you ideally want to show up in:

  • All the platforms they use to find information.
  • As many people-led sources as possible.

That sounds pretty overwhelming. To make this workable, you need a playbook that reverses the order:

  • Get mentioned in people-led sources often (by building genuine trust with these people).
  • As a result of these mentions, show up in the major search platforms as they continue rewarding people-led sources.

If we optimize at the people layer, the platform layer follows. Build trust, earn mentions, and get visibility.

Back to my shoe-purchasing journey. Many folks have taken to social media and review sites to talk about Adidas Terrex (the shoes I finally purchased after my trust-seeking journey), so they were highly visible in all my touchpoints.

This means that Adidas is actively engaging in trust-building activities. Adidas has its own running club, events, and communities. They’re engaging with people.

Here’s an example of a recent event where they collaborated with the Underground Fan Club to support more women getting into trail running.

People are mentioning their brand and products.

This single event had hundreds of posts on Instagram from the participants and attendees. Multiply that by their other events and community initiatives, and you can see how their visibility quickly adds up.

Plus, they’re appearing via hashtags, account tags, and mentions on social media platforms like TikTok more generally.

Adidas Terrex is also getting mentioned in forums — there are full Reddit threads devoted to advice on these shoes.

Their people-led source mentions are reflected in AI search platform results.

You’ve seen the research: when you genuinely earn the trust of people willing to mention you positively of their own accord, you also capture visibility within search platforms, because visibility is a byproduct of trust.

Where to go to earn people’s trust

Relationships are the bedrock of trust, and there are plenty of places you can go to start building them. These are a few people-led places you can start with:

  • Communities: Online and in person.
  • Events: Conferences and meetups.
  • Social media: LinkedIn, Instagram, TikTok, and similar platforms.
  • Forums: Reddit and Quora.

Look for people-led places with the components listed below. The stronger they are in these characteristics, the higher the trust: 

  • Where smooth, two-way conversations happen in real time.
  • Where you have the ability to show up consistently.
  • Where your audience gathers for specific, niche reasons and support.
  • Where people are not anonymous and can show up as themselves (not personas).

Here’s a general guide for how these environments, when highly engaged, are typically trusted:

Trust-building components | Communities | Events | Social | Forums
Two-way conversations | High | High | Low | Medium
The ability to show up consistently | High | Medium | High | Medium
People gather for specific, niche reasons | High | High | Low | Medium
Where you can be yourself (not anonymous) | High | High | Medium | Medium

Communities and events require lengthier time commitments and higher financial investment, but the trust-building components are very strong. Entering these spaces gives you more of the tools you need to build both relationships and trust.

Social media and forums have lower barriers to entry, but the trust-building components are weaker.

You can find the places you want to start with by:

  • Directly surveying your customers and audience on where they spend time.
  • Seeing who’s frequently mentioned in your industry’s newsletters, podcasts, and other publications.
  • Performing a search in your search platform of choice.

How to engage in trust-building spaces

People are seeking information to help them gain confidence in what they’re unsure about. They’re seeking help, and help builds trust. 

This means helping is your primary objective – not building brand awareness, pushing folks through your consideration funnel, or selling. Helping people.

Start by listening, not talking

Once you’ve identified your places, don’t rush in and start talking about yourself, your brand, or your challenges. Listen first. This is a two-part process:

What does ‘helpful’ look like in this space?

This is about understanding why people gather in this space — what they get out of it. What high-level needs or wants are being met that keep people coming back? These typically don’t change much over time.

Maybe they’re looking for connection, education, amplification, or inspiration. Figure that out, and then cross-reference it with what you have to offer. 

Find the intersections that make sense for you and identify the ways in which you can offer support.

What topics are people focused on?

This is about understanding what’s “trending” right now for folks in the space. What immediate needs or wants are getting met at the moment? These typically fluctuate.

Listen. Find your intersections. Figure out what you can help with.

Engage to build trust

This will start with 1:1 conversations in community Slack groups, at events, or in the comments of social media and forums. Trust takes time to build. There are no shortcuts.

Show up as yourself. You’re not your brand; you’re a person behind your brand. People want advice from real people, and if you begin by labeling yourself as a brand representative advocating for your product, it’s game over.

Show up consistently, have these conversations, provide help on a 1:1 basis, and keep track of what’s actually helping. While trust takes time to build, your learnings can help you scale how you help based on real audience insights. 

Once you have a good sense of that, you can take the most frequently helpful themes and build out systems or assets that scale your ability to help. 

Turn conversations into scalable trust

These assets may not build as strong a level of trust as your 1:1 conversations. Those 1:1 conversations with the right audience will have the most trust and the most depth. But if you focus your scaled assets on helping people become who they want to be, they will build far more trust in your 1:many initiatives than typical “how to do x” content.

So take a deeper look at the pain points mentioned in your conversations and ask, “Who is this person trying to become?” Then build an asset from the ways you’ve helped those folks in 1:1 conversations.

Create a mention power-up that helps people showcase their desired identity and who you helped them become. Something that proves their credibility and that they’re excited to share!

Here are a few examples of what this playbook could look like for different audiences:

Audience | High-level need | Timely need | Scaled help asset | Mention power-up
Professionals | Amplification | Desire to grow personal brand | Guest-posting program | The content is the power-up! They’ll share and tag you.
Professionals | Opportunities | New job role | Skill training and job board | Shareable certification for skill-training completion
Musicians | Education | Wanting to learn to play drums | Video library of drum lessons | Personalized “I’m a drummer” social image
Crafters | Advice | Can’t find sustainable materials | Curated resource of eco-friendly materials | Citable asset built with “[your brand’s] eco-friendly resources”
Readers | Inspiration | Desire to break into a new genre | Quiz that helps them decide | Sharable quiz output boldly defining their new genre
Budgeters | Education | What to cut back spend on | Budget template and tracker | Sharable “I saved $x with [your brand]” asset

What does this actually look like in action?

Over the past few years, I have transitioned my career from marketing to community building. I’ve learned the power of shifting my mindset from selling to helping. And I’ve seen brands use the above playbook to earn visibility and real business impact.

In our community, we partner with an SEO SaaS platform that uses this playbook powerfully. We’ve seen them listen to what it means to be helpful in their community — people want opportunities to be amplified.

We’ve seen them show up consistently — their marketing manager, Jojo, has 400+ messages in our Slack community.

We’ve seen Jojo have tons of 1:1 conversations offering help.

We’ve seen Jojo continuously show up as herself in these helpful answers and in general as a valued member of the community.

And we’ve seen those 1:1 connections pay off in visibility on the content itself, which serves as their shareable mention power-up.

They then did the work to build their scaled asset of help. First, they listened by surveying community members.

From those surveys, they identified the core challenges people faced within the topic.

They further boosted trust by collaborating with the community and featuring community members within the scaled asset.

Again, they reaped the rewards of visibility with their shareable mention power-up.

While earlier I told you to go in without a sales mindset, the beauty is that the trust you build can grow into just that: real business impact.

Our SEO SaaS partner has earned £50,000+ in new annual revenue through the partnership so far. This stuff works when you find the right space, listen, learn, and consistently show up to help.

Building trust is a long-term visibility bet

Trust will always be a throughline in how people search for information.

When you make building trust an ongoing part of your strategy, you prepare your business beyond any single platform or system. You’ll show up in AI search today and whatever comes next tomorrow.

Make trust the priority, and visibility follows. That’s how you move from chasing algorithms to building something that lasts.

How to run an AI-assisted SEO competitor analysis that actually works

22 April 2026 at 18:00

You can now do in 20 minutes what used to take a full afternoon. Feed two Semrush exports into Claude or ChatGPT, and you’ll get a polished competitor analysis – complete with topic clusters, gap tables, and prioritized briefs.

The output looks convincing. The tables are clean. The recommendations sound confident.

That’s the problem. AI can organize and summarize data quickly, but it can’t make strategic decisions. Without the right workflow, prompts, and validation, you risk acting on insights that sound right but lack depth.

Used correctly, though, AI can surface meaningful patterns – revealing differences in topical depth, content coverage, and authority signals that influence search visibility.

Here’s a walkthrough of a real two-competitor analysis using Claude and Semrush data, showing how to turn fast AI outputs into a reliable strategy. You’ll get a repeatable workflow, tested prompts, and a validation checklist to catch common mistakes, along with a clear sense of where to trust AI — and where to rely on your judgment.

AI won’t run a competitor analysis for you. But it can compress the manual work — clustering, pattern matching, and synthesis — so you can focus on interpreting intent, validating opportunities, and deciding what’s worth pursuing.

Note: The sites in this analysis are real but anonymized. Site Y is our client, while Competitors A and B are direct competitors in the same niche. The data is from real Semrush exports pulled in early 2026.

Start with data, not a prompt

Whenever possible, start by exporting data from your SEO tool. Don’t ask an AI assistant to guess what an SEO tool can tell you.

Otherwise, you assume your AI assistant is a measurement tool. Although it isn’t, it’ll try its best to respond to your request. This often looks like plausible-sounding traffic estimates, keyword lists, and competitive assessments that are partially or entirely fabricated.

Here’s what we exported and why each piece matters.

Export 1: Organic Research > Pages (top 100 by estimated traffic)

This report tells you which pages are winning. Key columns include the URL, estimated traffic per page, number of ranking keywords per page, the intent breakdown (commercial, informational, navigational, transactional), and the traffic change column that shows momentum.

For example, a page pulling 14,500 visits from 1,632 keywords is a different asset from a page pulling 400 visits from 12 keywords. The intent split tells you why that traffic matters.

Export 2: Organic Research > Positions (top 100 keywords by traffic)

This export tells you which keywords are winning. Key columns here are keyword and position, search volume, keyword difficulty, search engine results page (SERP) features (image packs, video carousels, and People Also Ask), and keyword intent tags.

Instead of telling you which URLs perform best, this report reveals which search queries drive the most traffic. You need both reports for a complete picture.

The export checklist 

For each competitor and for your own site, pull:

  • Semrush Organic Research > Pages, top 50-100, sorted by traffic.
  • Semrush Organic Research > Positions, top 100-500, sorted by traffic.
  • Semrush Keyword Gap report (optional).
  • Screaming Frog crawl with URLs, titles, H1s, word count, crawl depth, and internal links. This optional report adds structural context (like how deep pages are buried in the site architecture) that the Semrush exports don’t include.
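
Before pasting anything into an AI assistant, it can help to sanity-check the exports yourself. A minimal sketch using the standard library, assuming Semrush-style column headers (“URL,” “Traffic,” “Number of Keywords” — verify against your actual CSV headers, which vary by report):

```python
import csv
import io

# Stand-in for opening your real Pages export, e.g. open("pages.csv").
# The column names here are assumptions; adjust to your file's headers.
sample_pages = io.StringIO(
    "URL,Traffic,Number of Keywords\n"
    "https://example.com/guide,14500,1632\n"
    "https://example.com/product,400,12\n"
)

pages = list(csv.DictReader(sample_pages))
total_traffic = sum(int(row["Traffic"]) for row in pages)

# Traffic-per-keyword hints at how concentrated each page's rankings are:
# a page with 14,500 visits from 1,632 keywords is a different asset from
# one with 400 visits from 12 keywords, as noted above.
for row in pages:
    ratio = int(row["Traffic"]) / int(row["Number of Keywords"])
    print(row["URL"], f"{ratio:.1f} visits/keyword")

print("total:", total_traffic)
```

Knowing these totals up front gives you ground truth to check the assistant’s summary tables against later.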

Conduct a 20-minute competitive review

Next, feed your exports into your AI assistant. Ask it to do three things: classify, cluster, and compare.

Topic taxonomy (per site)

Here’s the prompt I used:

I'm going to give you a Semrush Organic Pages export for a website. Each row is a URL with its estimated organic traffic, number of ranking keywords, and intent breakdown.

Please:
1. Assign each URL to a topic category (e.g., "Product - Roof Racks," "Editorial - Buying Guides," "Support - Technical," "Category - Inventory")
2. Assign a page type: Homepage, Product Page, Category Page, Editorial/Guide, Blog Post, Support/Info, Landing Page, or Other
3. Create a summary table showing: topic category, number of pages, total traffic, and dominant intent

Rules:
- Base classifications on the URL path and any context available. Do NOT guess traffic numbers or keyword data. Use only what's in the export.
- If a URL is ambiguous, flag it as "needs manual review" rather than guessing.
- Group similar topics (e.g., don't create separate categories for "off-road accessories" and "off-road bumper kits." Cluster them).
- After classifying, list any URLs where you're less than 80% confident in the classification. I'll verify those manually.

Here's the data:
[PASTE PAGES EXPORT]

For Site Y, Claude identified seven topic clusters across 100 pages. Here’s the summary:

Topic cluster | Pages | Traffic | Dominant intent
Homepage/Brand | 3 | 14,651 | Mixed (commercial and informational)
Buying guides and comparisons | 25 | ~10,600 | Informational and commercial
Roof racks and cargo (product) | 2 | ~5,100 | Commercial and transactional
Bumpers and armor (product) | 38 | ~2,300 | Commercial
Installation and how-to content | 4 | ~1,300 | Informational
Inventory/Category | 4 | ~540 | Transactional
Other (brand, manufacturer, thin) | 24 | ~1,300 | Mixed

Even before comparing competitors, this taxonomy tells a story. Our client’s organic traffic is driven more by editorial content (buying guides and comparisons) than by all product pages combined.

In fact, a single buying guide pulled 7,336 visits on its own, and the top product page drove 5,021. That editorial strength is both a strategic asset and a vulnerability, since editorial rankings can be more volatile than product page rankings.

Competitor comparison

Once you’ve created a taxonomy for each site, use this prompt to compare them:

I now have topic taxonomies for three competing sites in the same niche. I'm going to give you the summary tables for all three.

Please:
1. Build a comparison table showing how each site's traffic distributes across topic categories
2. Identify each site's "content strategy signature": what type of content drives the majority of their organic traffic
3. Flag any categories where one site dominates and the others are weak or absent
4. Note the traffic concentration: what percentage of each site's total traffic comes from their top 3 pages

Rules:
- Use only the data provided. Do not estimate or infer traffic for categories not present in a site's export.
- If a category doesn't exist for a site, mark it "Not present" rather than zero. We don't know if they have content there, only that it doesn't appear in their top 100.

Site Y taxonomy:
[PASTE]

Competitor A taxonomy:
[PASTE]

Competitor B taxonomy:
[PASTE]

When we used this prompt, Claude revealed three completely different strategies from the same niche.

Metric | Site Y | Competitor A | Competitor B
Content strategy | Editorial-led | Utility/support-led | Product page-led
Top content type | Buying guides and comparisons | Info/support pages (60 of top 100) | Product pages and category pages
Non-homepage hero page | Tow capacity and fitment calculator (7,336 visits) | Bolt pattern lookup guide (1,245 visits) | Off-road bumper category (3,200 visits)
Traffic concentration (top three) | 75.3% | 81.2% | 71.8%
Estimated traffic (top 100) | 35,681 | 7,017 | 11,093
Momentum | Growing (+1,743 net) | Flat (-264 net) | Declining (-1,525 net)
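
The traffic concentration metric in that table is simple to reproduce yourself rather than trusting the assistant’s arithmetic. A minimal sketch with toy numbers (not the real exports):

```python
# Share of a site's top-100 traffic held by its n biggest pages,
# i.e. the "traffic concentration" metric in the comparison above.
def top_n_concentration(page_traffic: list, n: int = 3) -> float:
    total = sum(page_traffic)
    top = sum(sorted(page_traffic, reverse=True)[:n])
    return top / total

# Toy per-page traffic numbers, invented for illustration:
site = [14651, 7336, 5021, 900, 450, 300]
print(f"{top_n_concentration(site):.1%}")  # → 94.2%
```

High concentration (all three sites above sit over 70%) means a handful of pages carry the site, so volatility on any one of them moves the whole number.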

Manually developing this comparison could require hours of spreadsheet work: categorizing 300 URLs, building pivot tables, and trying to spot patterns across three tabs. But Claude did it in minutes.

The pattern recognition alone (three completely different strategies from three sites selling in the same market) is genuinely valuable output.

The numbers show that Site Y pulls five times the organic traffic of Competitor A and three times that of Competitor B, despite all three competing in the same space.

Competitor A’s second-highest traffic page is a bolt pattern guide on a support subdomain. Competitor B is losing ground fast, with its top category page dropping by 1,184 visits.

If you’re running a competitive analysis and you don’t spot patterns like these, you’re missing the strategic story behind the data.

Apply human judgment

If you were to stop after generating the clusters and comparison chart, you’d have a plausible-looking competitive analysis. But the AI-generated output needs human intervention before you make any strategic decisions.

Check the classifications

Spot-check 10-15% of classifications by visiting the URLs. Correct the taxonomy, and then re-run the comparison. This turns an 85% accurate first draft into one with 95% or higher accuracy.
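
A reproducible way to pull that 10-15% sample, sketched with hypothetical URLs and a fixed seed so the same rows come up on every run:

```python
import random

# Hypothetical classified rows as (url, ai_assigned_category) pairs;
# in practice you'd load these from the assistant's taxonomy output.
classified = [
    (f"https://example.com/page-{i}", "Editorial/Guide") for i in range(100)
]

# Fixed seed makes the spot-check reproducible across re-runs.
random.seed(42)
sample_size = max(1, round(len(classified) * 0.12))  # ~12% of rows
to_review = random.sample(classified, sample_size)

print(len(to_review), "of", len(classified), "URLs queued for manual review")
```

Combine this random sample with the low-confidence URLs the prompt asks the model to flag, and you cover both the rows the model doubts and the ones it is confidently wrong about.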

The “confidence flag” line in the prompt (“list any URLs where you’re less than 80% confident”) saves you from having to guess which ones to check. If you skip this step, the misclassifications can distort your entire competitive profile.

For example, when I checked Claude’s page classifications against the actual live pages, roughly 15% needed correction. It tagged a product comparison page as a blog post. It classified a regional landing page as a category page. And it lumped an FAQ page into the “Other” category even though it served as the site’s primary buyer’s guide for a specific product line.

These misclassifications were the kind of mistaken calls that come from categorizing URLs by path structure alone, without seeing the page content. For example, if a URL path says /blog/best-off-road-accessories/, AI assistants will call it a blog post even if the page functions as a commercial comparison guide.

Consider the intent

AI assistants can surface data points in seconds, but they can’t make strategic calls for you. Interpreting the data requires understanding your client’s business model, their authority level, and their content capacity.

I’ve seen teams burn an entire content sprint on high-volume informational keywords that drove plenty of traffic and zero leads. If the intent doesn’t match your business goals, the volume is irrelevant.

For example, Competitor A’s second-highest-traffic page is a bolt pattern lookup guide, pulling 1,245 visits per month. Claude flagged this as a content strategy gap for Site Y, since our client had no equivalent utility content.

While this is technically correct, it’s strategically misleading. The bolt pattern guide targets purely informational intent. So, the page builds authority and earns links, but it’s not a commercial driver.

While it can be helpful to create utility content like this, it should be a steady background effort, not a priority sprint. The commercially relevant gaps (product categories, buying guides) come first.

Use this prompt fix:

For each opportunity you flag, check the intent breakdown from the Semrush data. 
If more than 60% of the traffic is informational or navigational intent, flag it separately as "authority builder, not direct conversion driver" so I can prioritize accordingly.

Compare the SERP reality vs. the ranking position

AI assistants work from the position numbers and volume data in your SEO reports. They don’t know what the SERP looks like.

For example, Claude saw that Site Y ranks Position 3 for “off-road roof rack” (22,200 monthly searches, driving 1,443 visits) and treated it as a straightforward optimization opportunity. Push the page to position one, and capture more traffic. Simple.

But in reality, the SERP is packed with rich features: popular products, an image pack, and People Also Ask. The traditional organic blue links appear barely above the fold on desktop and well below the fold on mobile.

Ranking in position one likely wouldn’t deliver the traffic increase you’d normally expect from a 22,200-volume keyword because the SERP features absorb most of the clicks.

For your top five or 10 priority keywords, do a manual SERP check. If the page is dominated by shopping carousels and video results, then a traditional organic push may not be the right play. Instead, a product feed optimization or video content strategy might be more effective.


Do a gap analysis

Your SEO tool already has a keyword gap report. But a raw list of missing keywords isn’t a strategy.

Use it as a starting point. Then, let AI cluster those gaps into themes, tiering them by intent and business relevance, and turning raw gap data into prioritized actions.

Start with the tool data

We pulled two Semrush Keyword Gap reports comparing Site Y against both competitors. They revealed:

  • Missing keywords: 217 keywords where both competitors rank and Site Y doesn’t appear at all. Combined search volume ~49,700/month.
  • Weak keywords: 106 keywords where Site Y ranks but gets outperformed by both competitors. Combined search volume: ~33,650/month.
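The missing/weak split can be reproduced from the raw exports before you ever involve an AI assistant. A minimal sketch, with an assumed column layout (not Semrush's actual export schema) and hypothetical example rows:

```python
def classify_gaps(rows):
    """Split keywords into the two gap buckets described above.

    Each row: (keyword, volume, our_pos, comp_a_pos, comp_b_pos),
    with None meaning that site does not rank.
    """
    missing, weak = [], []
    for kw, vol, ours, a, b in rows:
        if ours is None and a is not None and b is not None:
            missing.append((kw, vol))   # both competitors rank, we don't
        elif (ours is not None and a is not None and b is not None
              and a < ours and b < ours):
            weak.append((kw, vol))      # we rank, but both outrank us
    return missing, weak

data = [
    ("overlanding gear", 720, 3, 1, 2),     # weak: both competitors ahead
    ("steel bumper kit", 900, None, 4, 7),  # missing: we don't rank at all
    ("roof rack straps", 300, 2, 5, None),  # one competitor absent: neither bucket
]
missing, weak = classify_gaps(data)
print(missing, weak)
```

Running the split deterministically gives you a ground truth to check the AI's clustering against.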

Feed the gap data to AI

Use this prompt with your AI assistant:

I'm going to give you two Semrush Keyword Gap reports:
1. MISSING: keywords where both competitors rank and Site Y doesn't
2. WEAK: keywords where Site Y ranks but competitors outrank us

Each row includes: keyword, intent tags, search volume, keyword difficulty, CPC, and the ranking position for each site.

Please:
1. Cluster the keywords into thematic groups (e.g., "bumpers," "roof racks," "overlanding gear," "light bar kits," "torque specs/fitment"). A keyword can only belong to one cluster.
2. For each cluster, provide: number of keywords, total search volume, dominant intent, and average keyword difficulty.
3. Separate the clusters into tiers based on intent:
  - Tier 1 (Commercially relevant): Clusters with predominantly commercial or transactional intent that align with the site's core product/service offering
  - Tier 2 (Adjacent commercial): Clusters that are commercially relevant to the broader market but may not be the site's primary product focus
  - Tier 3 (Authority builders): Clusters with primarily informational or navigational intent that build topical authority but are unlikely to drive direct conversions
  Note: I will review the tier assignments and adjust based on business model fit. AI should make its best guess and flag any clusters where the tier assignment is uncertain.
4. Within each tier, sort by combined search volume
5. Flag any keywords that are branded competitor terms (e.g., a competitor's product or brand name). These are generally not pursuable gaps
6. For the WEAK keywords, separate into "close wins" (Site Y in positions 1-10) vs. "long shots" (Site Y in positions 50+)

Rules:
- Use ONLY the keywords in these exports. Do not suggest keywords not present in the data.
- If intent data is missing or ambiguous, mark it "verify manually" rather than guessing.
- Do not invent search volume or ranking data. If a field is empty, say "not available."

MISSING keywords:
[PASTE]

WEAK keywords:
[PASTE]

When we used this prompt with Claude, clear thematic clusters emerged from the 217 missing keywords:

| Cluster | Keywords | Combined volume | Dominant intent | Claude’s tier |
| --- | --- | --- | --- | --- |
| Bumpers / skid plates | 30+ | ~12,000/mo | Commercial | 1 |
| Roof racks / cargo systems | 10+ | ~8,000/mo | Commercial | 1 |
| Winches (for sale) | 15+ | ~5,500/mo | Transactional | 1 |
| LED light bar kits | 12+ | ~3,200/mo | Commercial | 1 |
| Overlanding gear / overlanding accessories | 10+ | ~2,800/mo | Commercial | 1 |
| Torque specs / installation guides | 8+ | ~1,500/mo | Informational | 3 |
| Branded competitor terms | 6+ | ~1,200/mo | Navigational | Skip |

Correct AI’s priorities

This step determines where you spend the next quarter’s content budget, so human judgment is essential.

If you let an AI assistant set your content priorities based purely on search volume and intent labels, you’ll end up chasing someone else’s market instead of dominating your own. Volume is seductive, but business alignment is what drives revenue.

For example, Claude clustered 323 keywords and tiered them by intent in minutes. But it assigned bumpers/skid plates (~12,000/month volume) the same priority as overlanding gear (~2,800/month) because it doesn’t know what Site Y sells.

Without our human override, we may have built our content calendar around the wrong cluster.

| Cluster | Claude’s tier | Corrected tier | Reasoning |
| --- | --- | --- | --- |
| Overlanding gear / overlanding accessories | 1 | 1: Core business | Directly aligned with Site Y’s primary product line. These are the keywords that drive qualified buyers. |
| Bumpers / skid plates | 1 | 2: Adjacent | High volume, commercially relevant to the broader market, and Site Y stocks some of these products. Worth targeting through editorial/guide content over time, but not the priority sprint. |
| Roof racks / cargo systems | 1 | 2: Adjacent | Related to what Site Y does, but not the core offering. |
| Winches (for sale) | 1 | 2: Adjacent | Transactional intent is appealing, but these are a different product category. |
| LED light bar kits | 1 | 2: Adjacent | Related market, but not core inventory. |
| Torque specs / installation guides | 3 | 3: Authority | Informational content that builds topical relevance. Steady background effort. |
| Branded competitor terms | Skip | Skip | Can’t realistically win these anytime soon. |

Identify small pushes that make big differences

Next, find the low-effort opportunities with the biggest payoffs.

For example, from 106 weak keywords, we separated 17 close wins where Site Y already ranks in positions one through 10. These have real potential:

| Keyword | Volume | Site Y position | Best competitor position | Gap |
| --- | --- | --- | --- | --- |
| overlanding accessories | 1,600 | 3 | 1 | 2 positions |
| overlanding gear | 720 | 3 | 1 | 2 positions |
| overlanding roof rack | 720 | 4 | 1 | 3 positions |
| overlanding accessory kit | 590 | 3 | 1 | 2 positions |
| overlanding storage system | 390 | 3 | 1 | 2 positions |
| overland vehicle accessories | 320 | 3 | 1 | 2 positions |
| overland accessories | 260 | 3 | 1 | 2 positions |
| overlanding cargo rack | 210 | 3 | 1 | 2 positions |
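The close-wins filter is simple enough to run yourself before handing data to an assistant. A minimal sketch, using the position-1-to-10 and 50+ thresholds from the prompt (the row layout and the long-shot example are hypothetical):

```python
def split_weak_keywords(rows):
    """Split weak keywords into close wins (our position 1-10) and
    long shots (position 50+). Each row: (keyword, volume, our_position)."""
    close_wins = [r for r in rows if 1 <= r[2] <= 10]
    long_shots = [r for r in rows if r[2] >= 50]
    return close_wins, long_shots

weak = [
    ("overlanding accessories", 1600, 3),
    ("overlanding gear", 720, 3),
    ("winch mounting plate", 480, 62),  # hypothetical long-shot row
]
close, far = split_weak_keywords(weak)
print([kw for kw, _, _ in close])  # the rows worth an optimization push
```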

Site Y sits at position three across virtually every “overlanding” variant, while Competitor A holds position one. These are optimization opportunities. A focused push toward better on-page targeting, internal linking adjustments, and content updates incorporating “overlanding” language more explicitly could flip several of these to position one or two.

That’s a different action than writing a new page. Claude would have defaulted to recommending new pages if we hadn’t split the data into close wins and long shots.

Factor in authority context

As a final validation step, pull the backlink profiles for your competitors.

When we did this, we found that both had relatively thin link profiles. Competitor B had 199 backlinks with an average page authority score of just 1.1 (on Semrush’s 0-100 scale), while Competitor A had 128 backlinks, averaging a 3.1 authority score. The highest quality links for both came from the same handful of overlanding and off-road vehicle publications.

The most-linked pages and the top organic pages barely overlapped for either competitor. Only the homepages appeared in both lists.

Competitor B’s top backlinks pointed to product pages, while its top organic traffic came from category pages. Competitor A’s best links came from editorial features, while their organic traffic was dominated by the homepage and a support page.

This tells us their organic rankings are driven more by topical relevance and on-page SEO than by direct link equity to individual pages. It means the keyword gaps we identified are likely winnable through content and optimization rather than requiring a major link building campaign.

Turn the gap analysis into a brief

Use your competitor analysis to draft a content brief with AI. Input this prompt:

Based on the gap analysis we ran, [DESCRIBE PRIORITY CLUSTER] emerged as a priority. Draft a content brief for optimizing the existing presence and/or creating a new page to capture this cluster.

Include:
1. Primary and secondary target keywords (from our data only)
2. Recommended page type and format (based on what's currently ranking for these terms)
3. Content structure with suggested H2s
4. Content elements the ranking competitors include that our page should match or exceed
5. Estimated word count range based on competing content

Then, in a separate section called "Differentiation: For Human Review," suggest 3 possible angles that would make this page genuinely different from what already ranks. These are suggestions for me to evaluate, not final decisions.

Before finalizing the brief, cross-reference the target keywords against Site Y's existing pages export. Flag any existing pages that already rank for or target similar keywords. These are potential cannibalization risks that need to be resolved before creating new content.

Rules:
- Do not fabricate competitor content details. Base element recommendations on what we know from our data (URLs, page types, keyword footprints)
- If you need information you don't have (e.g., actual competitor page content), say "manual review needed: [specific thing to check]" rather than guessing

From this prompt, Claude drafted a clean brief with target keywords from our data, recommended format (long-form guide with product integration), and an H2 structure.

It also performed a cannibalization check. Because we added a cross-reference line to the prompt, Claude flagged that Site Y already had a related page pulling 838 visits. If we’d created a new page without checking, it would have competed with the existing page. That one line in the prompt saved us from unnecessary internal competition.
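That cross-reference is also easy to verify deterministically outside the AI. A minimal sketch of a cannibalization check, with an illustrative data shape (not a specific tool's export format) and hypothetical URLs:

```python
def cannibalization_risks(target_keywords, existing_pages):
    """Flag existing pages whose tracked keywords overlap the new brief's targets.

    existing_pages maps url -> set of keywords the page already ranks for.
    """
    targets = {kw.lower() for kw in target_keywords}
    risks = {}
    for url, kws in existing_pages.items():
        overlap = targets & {kw.lower() for kw in kws}
        if overlap:
            risks[url] = sorted(overlap)
    return risks

risks = cannibalization_risks(
    ["overlanding gear", "overlanding accessories"],
    {
        "https://example.com/overlanding-gear": {"overlanding gear", "roof racks"},
        "https://example.com/bumpers": {"steel bumpers"},
    },
)
print(risks)  # only the overlapping page is flagged
```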

But the differentiation section needed human input. Only someone who knows Site Y’s brand voice and customer objections could pick the right angle from these suggested options:

  • First-hand testing and review angle: Site Y installs and tests these products, so they can show real usage via trail tests, installation photos, and customer experiences.
  • Comparison angle: What’s the difference between overlanding versus off-road? This directly addresses the keyword overlap we noticed in the gap data.
  • Buyer qualification angle: Who needs overlanding gear versus who would be fine with standard off-road accessories?

The experience signals (actual trail tests, customer stories, installation details) also need substantial human oversight. This is where Google’s emphasis on experience, expertise, authoritativeness, and trustworthiness meets practical execution. If you don’t have genuine first-hand experience to draw on, no amount of keyword optimization will close that gap.

Run through a validation checklist

Before you act on any AI-assisted competitor analysis, go through this checklist to prevent the most common errors.

Data validation

  • Base all analysis on tool exports (Semrush, Ahrefs, Screaming Frog), not AI-generated estimates.
  • Check for export dates (if data is older than 90 days, recent algorithm updates or market shifts may have changed the picture).
  • Use a meaningful sample size (top 50+ pages per competitor, not just top 10).
  • Include both Pages and Positions exports.

Classification validation

  • Spot-check 10-15% of the AI assistant’s page type and topic classifications against live pages.
  • Correct any misclassifications and re-run the comparison.
  • Check whether AI created overly granular or overly broad categories.
  • Verify that pages on subdomains or unusual URL structures were classified correctly.

Intent validation

  • Check intent tags (not just search volume) on all flagged opportunities.
  • Separate commercially relevant gaps from informational and authority-building gaps.
  • Verify intent interpretation with a manual SERP check on your top three to five priority keywords.
  • Make a conscious decision to pursue, defer, or skip high-volume informational keywords.

Prioritization validation

  • Confirm your AI assistant’s priority ranking aligns with your business goals, not just search volume.
  • Check whether the product or service matches what you sell if a cluster looks like tier one based on volume alone.
  • Determine if opportunities are achievable given site authority and content resources.
  • Confirm no opportunities are branded competitor terms you can’t realistically win.
  • Check whether a gap is better addressed by optimizing existing content versus creating new content.

Brief validation

  • Choose a differentiation angle for AI-generated briefs (not just keywords and structure).
  • Verify the recommended content format matches what ranks in SERPs.
  • Confirm the brief doesn’t target keywords that your own site already ranks for.
  • Identify E-E-A-T signals and determine what original content the page needs that AI can’t generate.

The shift to AI-assisted SEO competitor analysis

AI tools have changed where you spend your time when conducting a competitor analysis. The data gathering, clustering, cross-referencing, and initial synthesis that used to consume most of your time? AI handles that efficiently.

Instead, AI assistants free up thinking time. Now, you can spend that time on the parts that determine whether your analysis leads to results: interpreting intent, validating classifications, and making strategic calls about what’s worth pursuing and what’s a distraction.

AI safety risk: How Best-of-N jailbreaking bypasses safeguards

22 April 2026 at 17:00

As artificial intelligence integrates deeper into our workflows, understanding its vulnerabilities is critical. A recently exposed vulnerability known as Best-of-N (BoN) jailbreaking has redefined how we view AI safety. 

Here’s a breakdown of BoN jailbreaking, how the attack works, and why it creates real risk for your data, brand, and the AI tools you rely on.

First, a quick vocabulary check

Before getting into BoN, there are two terms you need to actually understand, not just nod at.

  • Brute force attack: Imagine trying to crack a four-digit PIN by starting at 0000, then 0001, then 0002, all the way to 9999. No cleverness, no strategy, just trying every single combination until one works. That’s brute force. It’s dumb, slow, and works disturbingly often if nobody stops it.
  • Stochastic: This just means random, or more precisely, probabilistic. AI models are stochastic because they don’t produce the exact same output every time you ask the same question. There’s built-in variability in how they generate responses. That’s by design. It’s what makes AI feel less robotic. It’s also a liability.
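The PIN example above fits in a few lines of code, which is exactly the point: brute force needs persistence, not cleverness. A minimal sketch (the secret PIN is hypothetical):

```python
def brute_force_pin(check):
    """Try every four-digit PIN from 0000 to 9999 until one passes."""
    for n in range(10_000):
        guess = f"{n:04d}"
        if check(guess):
            return guess
    return None  # exhausted every combination

secret = "4821"  # hypothetical PIN
print(brute_force_pin(lambda g: g == secret))  # "4821" after 4,822 tries
```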

What is Best-of-N jailbreaking?

BoN is brute force, but smarter. Instead of trying every possible combination from scratch, it exploits the built-in randomness of AI models. 

The logic is simple: if an AI gives slightly different answers every time, and some of those answers slip past its own safety rules, then the attacker just needs to ask enough times, in enough slightly different ways, until one version of the question gets the forbidden answer through.

That’s not just a technical edge case. It means safeguards can be bypassed at scale, with direct implications for how your team uses AI tools every day.

Diagram showing a single prompt splitting into five noisy variations — including random capitalization, character substitution, extra spaces, typos, and filler tokens — with one variant breaking through an AI safety filter

The research behind this technique describes it as a “simple black-box algorithm.” Black-box means the attacker doesn’t need to see inside the model. No access to the code, no insider knowledge required. They’re working from the outside, just like any regular user would.

Think of it like a kid asking for candy when you’ve already said no. The first “no” doesn’t stop them. They rephrase, change their tone, ask at a slightly different moment, and try from a different angle. 

They ask another adult or wear you down, not by finding a magic phrase, but by generating enough variations that eventually one lands at the exact moment your patience runs out. BoN is that kid, automated, running thousands of variations per minute.

How the attack works — and how easy it is to set up

This is the part that should make you uncomfortable, because it shows how little effort it takes to turn this into a real-world risk. The setup isn’t sophisticated.

Three-column diagram showing how Best-of-N jailbreaking adapts by modality: text attacks use random capitalization, character scrambling, and typos; image attacks change background color, font, or text position; audio attacks adjust pitch, speed, or background noise

Step 1: Augmentation 

The attacker takes a forbidden prompt, something the AI is trained to refuse, and generates hundreds or thousands of variations. 

Not clever rewrites, just noise: random capitalization (HoW Do I…), scrambled characters, inserted typos, and meaningless filler tokens. 

Ugly, broken-looking text that a human would immediately recognize as weird, but that an AI processes token by token.

Step 2: Bombardment 

All those variations get sent to the model simultaneously, or in rapid succession, using a simple script. This isn’t a complex operation. 

Anyone with basic Python knowledge and access to an API can automate this. The compute cost is low. The barrier to entry is lower than most people assume.

Step 3: Selection 

An automated grader, often just another LLM, scans all the outputs and flags the one response that bypassed the safety filter and delivered the restricted content. The attacker doesn’t read thousands of responses. The second AI does the screening for them.

That’s the full attack. No special hardware, no insider access, and no advanced degree in machine learning.
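The augmentation step is plain string noise, which is why the barrier to entry is so low. A minimal sketch of Step 1 only, applied to a harmless question rather than a forbidden prompt (the 50% capitalization rate and single character swap are illustrative):

```python
import random

def augment(prompt, seed=None):
    """Add Step 1-style noise: random capitalization plus one
    adjacent-character swap (typo-like scrambling)."""
    rng = random.Random(seed)
    chars = [c.upper() if rng.random() < 0.5 else c.lower() for c in prompt]
    i = rng.randrange(len(chars) - 1)
    chars[i], chars[i + 1] = chars[i + 1], chars[i]  # one scrambled pair
    return "".join(chars)

for s in range(3):
    print(augment("what is the capital of france", seed=s))
```

Each variant carries the same underlying question; only the surface form changes, which is what lets an attacker generate thousands of tries per minute.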


The numbers behind BoN

The original research clocked an 89% attack success rate on GPT-4o and 78% on Claude 3.5 Sonnet when running 10,000 augmented prompt variations. 

With just 100 variations, Claude 3.5 Sonnet still failed 41% of the time. This didn’t quietly fade into the research archives when the models got updated. It was presented as a poster at NeurIPS in December 2025. 

NeurIPS is the most prestigious machine learning conference in the world. And the attack has only gotten faster. Newer BoN-based techniques can now achieve comparable success rates while cutting the time to attack from hours to seconds.

Meanwhile, OWASP, the gold standard for cybersecurity risk rankings, listed prompt injection, the category BoN falls under, as the No. 1 vulnerability in its 2025 LLM Top 10.

The success rate also follows a predictable power-law curve, meaning attackers can mathematically forecast how many attempts they need before they break through. 

Forget luck; this is a calibrated, scalable operation. BoN also works across all modalities: text, images (change the font, background, and color), and audio (adjust pitch, speed, and background noise). It succeeded on every format and frontier model tested.
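As a back-of-envelope forecast: if each augmented variant independently slips through with probability p, the chance that at least one of N attempts succeeds is 1 - (1 - p)^N. This independence assumption is a simplification of the power-law fit in the research, but it shows why attempt counts are forecastable:

```python
import math

def attempts_needed(p_per_try, target=0.90):
    """Smallest N with 1 - (1 - p)^N >= target, assuming independent tries."""
    return math.ceil(math.log(1 - target) / math.log(1 - p_per_try))

# If a single augmented prompt slips through 0.5% of the time,
# roughly 460 variants give a 90% chance of at least one breakthrough.
print(attempts_needed(0.005))  # 460
```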

Why it’s a marketing and branding problem

Cybersecurity and marketing used to be separate conversations. AI collapsed that boundary and put brand risk directly inside your AI workflows.

Safety filters are porous, not protective

The research is unambiguous: given enough augmented attempts, some will get through. This applies to every AI tool in your stack, whether it’s internal, customer-facing, or embedded in your content workflows.

Your prompt inputs carry legal risk

When your team pastes a client brief, a competitor’s ad copy, or licensed third-party content into a prompt to “get AI help,” you’re introducing material that could later be extracted. 

BoN jailbreaking demonstrates that copyrighted content can be pulled back out of model weights under the right conditions. If an AI can reproduce verbatim text when sufficiently probed, that content is encoded in there. The safety filter was the only thing standing between it and the output.

Brand exposure through your own AI tools

If someone uses BoN to jailbreak an AI tool your brand has deployed (a customer chatbot, say, or a content generation tool) and extracts harmful, offensive, or legally compromising output, the story doesn’t start with “AI was jailbroken.” It starts with your brand name. You know this, journalists know this, and social media content creators know this.

Attack composition makes this worse 

BoN doesn’t operate alone. Combining it with a “prefix attack,” a carefully crafted phrase attached to the start of each prompt, boosted success rates by an additional 35% while requiring fewer attempts. The technique actively evolves toward greater efficiency.

What you should do now

Audit what goes into your prompts

Treat prompt inputs with the same sensitivity you’d apply to data under GDPR. Licensed content, client briefs, proprietary information — none of it belongs in a third-party AI tool without a clear data policy from the vendor.

Stop treating safety filters as compliance

If your AI vendor says the model is safe and that settles it for you, you’ve outsourced your risk assessment to the party that profits from minimizing it. Output monitoring, anomaly detection on request volume spikes, and continuous red-teaming are due diligence.

Understand that the attack surface spans every modality

Text, image, and audio. BoN applies across all of them. If your brand uses any AI-powered tool that handles user inputs in multiple formats, the vulnerability applies.

Flowchart of a Best-of-N attack in three steps: Step 1 Augmentation turns one prompt into N noisy variations; Step 2 Bombardment sends all variations to the AI simultaneously; Step 3 Selection uses an automated grader to find the response that bypassed the safety filter

Log everything

Prompts in, outputs out. If an incident happens, legal will ask what the model was given and what it produced. Without logs, you have no defense and no evidence.
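A logging layer can start as small as an append-only JSONL file. A minimal sketch (field names are illustrative; a real deployment would add user IDs, model version, and tamper-evident storage):

```python
import json
import time

def log_interaction(prompt, output, path="ai_audit_log.jsonl"):
    """Append one timestamped prompt/response pair to an append-only JSONL log."""
    record = {"ts": time.time(), "prompt": prompt, "output": output}
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record

rec = log_interaction("Summarize this competitor brief.", "[model output]")
print(rec["prompt"])
```

One line per interaction, greppable after the fact, and cheap enough that there is no excuse to skip it.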

What BoN jailbreaking reveals about AI safety limits

The same built-in randomness that makes AI useful for creative and marketing work makes it exploitable at scale. BoN jailbreaking is an active, validated, and accelerating threat that the cybersecurity community is racing to defend against. 

Most marketing teams haven’t yet priced in the brand, legal, and reputational stakes. The ones that do first will build defensible practices before they need them. The rest will learn it through an incident they didn’t see coming, and won’t be able to explain after the fact.

Why ugly ads outperform polished creative and how to test them

22 April 2026 at 16:00

You’ve been told to follow a familiar set of rules for years: always use high-quality creative, keep your brand polished, stay scripted, and follow platform-recommended formats.

If you’ve been in ad accounts lately or browsing feeds, you may have noticed something. Attention-grabbing ads don’t always follow those rules. They’re scrappier, less polished, and sometimes even called “ugly ads.” The beauty is that they’re coming out on top.

More brands are breaking best practices on purpose to stand out. After all, best practices are an average of what worked best for everyone else in the last six months, give or take. By the time a tactic becomes a platform-recommended rule, the edge has already been sanded off.

That’s why breaking best practices works — but only if you understand what’s behind them.

Why breaking best practices leads to better-performing ads

Before getting into what to change, it helps to understand why the rules exist in the first place. Platforms like Meta and TikTok have a dual incentive: 

  • They want you to spend money on advertising.
  • They want users to stay engaged on their platforms. 

The best practices they promote are designed to create a frictionless experience, pushing ads to look and behave like ads. 

The problem is that what feels familiar eventually becomes invisible. When you follow the rules too closely, your ads blend into the background noise users have trained themselves to ignore.

High-production ads signal “this is an ad” almost instantly, triggering a skip reflex before your hook lands. When your ad looks like something a friend might send, the brain’s defenses stay down just a bit longer, and that can be the difference between a scroll and a conversion.

That’s why many of the top-performing ads today don’t look polished or on-brand in the traditional sense. They interrupt patterns instead. Think:

  • Grainy phone footage. 
  • Notes app screenshots.
  • Green-screened reaction or commentary videos.
  • Other lo-fi formats that outperform studio-grade creative.

To apply this, intentionally lower your production value and experiment with formats like point-of-view (POV) shots tailored to different personas.

Dig deeper: TikTok ad creative has a shorter shelf life. Here’s how to keep up

Founder-led ads: The return of the human

Many brands have guidelines designed to make the company look faceless and invincible. They may not want to show a messy, lived-in office, a founder who hasn’t been professionally coached, or anything that breaks a tight, corporate script. But others are tossing that playbook and leaning into founder-led ads that aren’t the polished executive-profile version that was more common.

There’s a catch.

Rule-breaking only works if it’s authentic. If you fake it, the web will spot it in seconds, and it won’t land the way you expect. We saw this play out in a viral series of videos where McDonald’s CEO appeared in a promotional spot to introduce a new burger. 

As highlighted in a Dineline video, the execution felt stiff and staged. The CEO carefully lifted the burger, looked into the camera, called it a “product,” and took a small bite from the edge. People online quickly pointed out that it didn’t look like he actually liked the food, so why should consumers?

Soon after, Burger King entered the conversation, and its president appeared in one of its kitchens holding a burger with a completely different tone. No hesitation, no corporate pauses — just a big, genuine bite. 

The lesson is clear: One felt like a product presentation, and the other felt like a real moment.

If your leadership, your founder, and your team don’t look genuinely excited about what they’re selling, your customers won’t be either. Rule-breaking should give you the courage to be real, not just “unpolished” for the sake of it.


The comment hook hijack

You’ve likely seen — and maybe used — a video hook best practice like “show the product in the first two seconds and state the value prop clearly.” Sound familiar? 

Now break that rule: your ad starts with a screenshot of a negative comment. Say you have a skincare ad that opens with a text bubble: “This probably smells like old socks, and does it even work?” Your founder then spends the next 15-20 seconds smiling, proving it wrong in an unscripted, unpolished way, while applying the product.

Using the platform’s native comment bubble and opening with conflict breaks your brand’s positive-association rule, but you’ll gain attention by tapping into users’ natural tendency to watch a digital argument. 

By the time viewers realize it’s an ad, they’ve already heard your main points and may be on their way to trying the product. Effective advertising still relies on psychology, but now it requires understanding user behavior and how algorithms work.

The rebel’s safety net

Don’t delete all your polished assets just yet.

Breaking the rules is strategic. When it fails, it’s often because the “80/20 rule” gets overlooked.

Shifting your entire budget to shaky phone footage overnight isn’t the move. Maintain a baseline of about 80%, and use the remaining 20% to test new, unconventional ads. Standing out doesn’t mean producing bad advertising.

Give these a try in your next test campaign:

  • The silent test: Skip trending audio and run a fully silent ad with large, bold captions. In a noisy feed, silence can interrupt patterns.
  • The UI ghost: Create a static image that looks like a platform notification or a low-battery warning, if relevant. It may annoy some viewers, but it can stop the scroll.
  • The algorithmic trust fall: Turn off auto-optimizations in one campaign and use broad targeting if you aren’t already. Let your ugly creative do the filtering. You may find the algorithm performs better when you remove manual guardrails.

Don’t follow the rules, understand them

Best practices are a starting point, not a strategy. If you’re going to move beyond them, do it systematically. 

Start with the rule, understand why it exists, ask whether it still applies, and then test the opposite in a structured way. Compare polished and lo-fi, scripted and unscripted, and brand voice and personal voice.

In a feed full of brands playing it safe, those who understand the rules — and how to break them intentionally — are the ones getting attention and conversions. Focus on learning faster than everyone else. Skip the guesswork.

The hidden ‘bland tax’ that could erase your brand from AI search

22 April 2026 at 06:17

AI isn’t just changing search — it’s deciding which brands get ignored.

At Adobe Summit today, Andrew Warden, CMO of Semrush, argued that visibility has fundamentally changed — and that brands now risk being systematically filtered out by AI systems.

  • “The idea of standing out is no longer optional. There’s a real risk of sameness,” Warden said.

Because AI systems decide what to surface and what to ignore, brands now must compete for visibility in answers.

AI is changing how discovery works

You can already see the shift in the data, as 60% of Google searches now end without a click to a website.

Users are still searching, but they’re not always visiting websites. They get answers directly from AI systems like Google AI Overviews, ChatGPT, Perplexity, and others.

AI systems are becoming what Warden described as the “new gatekeepers.”

This is part of a broader shift toward the agentic era — where AI systems act as intermediaries, guiding users through the entire journey from question to decision in a single interface.

At the same time, user behavior is changing. People are spending more time in conversational environments, asking follow-up questions, refining queries, and exploring options without leaving the interface.

The result is fewer clicks, but often higher-intent users. Consumers who use LLMs convert at a 4.4x higher rate than those using search alone, Warden said, citing Semrush research.

SEO is the foundation

Despite ongoing claims that AI will replace search, Warden pushed back.

Instead, SEO has become more foundational. It’s no longer just about ranking pages — it’s about making sure your brand exists in the data layer that AI systems rely on.

  • “SEO isn’t just for humans anymore. This is a training manual for AI right now,” Warden said.

That includes the fundamentals:

  • Crawlability
  • Indexability
  • Structured data
  • Authority signals

Without them, your brand won’t show up.

  • “If you do not have the core SEO principles in place… LLMs will actually wipe you out of the conversation.”

Research supports this: 94% of Google AI Overviews cite at least one top organic result, reinforcing that traditional search signals still underpin AI outputs.
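Of the fundamentals listed above, structured data is the one a brand controls most directly. As a minimal illustrative sketch — the brand name, URL, and profile links below are placeholders, not entities from this article — a schema.org Organization record can be assembled and serialized for embedding in a page:

```python
import json

# Minimal schema.org Organization record; every value here is a placeholder.
org_schema = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Brand",
    "url": "https://www.example.com",
    "description": "One consistent description, reused across every surface.",
    # sameAs links help engines resolve the brand to a single entity.
    "sameAs": [
        "https://www.linkedin.com/company/example-brand",
        "https://x.com/examplebrand",
    ],
}

# Serialize for embedding in a <script type="application/ld+json"> tag.
json_ld = json.dumps(org_schema, indent=2)
```

Kept consistent across every page and profile, a record like this is one of the machine-readable authority signals the fundamentals above point to.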

The rise of the ‘bland tax’

One of the highlights of the session came when Warden discussed what he called the “bland tax.”

  • “AI is conditioning itself right now to ignore blandness.”

That means content that feels generic or repetitive disappears.

  • “If you are generic, you are average. And if you are average or bland… [you are] invisible.”

AI systems don’t reward sameness. Instead of highlighting your brand, they summarize similar content into a single answer — often stripping away attribution entirely.

  • “This is an invisible penalty that you pay,” Warden said.

The consequences show up in three ways:

  • Your brand identity gets erased in AI-generated summaries.
  • Your content gets filtered out as low-value.
  • Your work becomes training data for AI without visibility.
  • “You also become a free training ground for LLMs,” he said.

What visibility depends on

Warden reframed brand visibility as the combination of:

  • Discoverability: Can LLMs find you?
  • Authority: Do they trust you enough to include you?
  • “You absolutely need both,” Warden said.

SEO ensures discoverability. Authority determines whether you show up in AI-generated answers.

Without authority, you risk becoming “a commodity that isn’t worth being mentioned.”

How to win: three key signals

Warden outlined three areas that determine whether a brand shows up or gets filtered out.

1. Entity authority

AI systems map entities and relationships.

  • “AI has to recognize your brand as an authority on a topic,” Warden said.

One key signal is brand demand.

  • “If people aren’t looking for you, then neither is AI,” Warden said.

Strong brands reinforce their authority across multiple surfaces — owned content, media coverage, and community conversations — making it clear what they stand for.

2. Information density and originality

AI systems prioritize citing content that adds something new. So don’t just publish content. Contribute something meaningful.

  • “They’re prioritizing new facts,” Warden said.

That includes:

  • Proprietary data
  • Original research
  • Unique perspectives
  • Expert insights

Original insights can boost visibility by 30 to 40%, according to Warden.

3. Signal alignment

AI evaluates not just what you say — but what others say about you.

That includes:

  • Reviews
  • Reddit and YouTube discussions
  • Media coverage
  • Customer conversations
  • “If there are conflicting signals… AI flags you with unreliable,” Warden said.

Consistency across all of these creates what he called a “consensus signal” — a unified narrative that AI systems can trust.

Why most organizations aren’t ready

One of the biggest challenges is organizational.

  • “Visibility isn’t… a channel problem… it’s an organizational problem.”

Today, responsibility is fragmented:

  • SEO teams focus on rankings.
  • PR and brand teams manage messaging.
  • Growth teams run experiments.

But no one owns visibility across AI systems.

This leads to inconsistent signals and missed opportunities.

To compete, companies need alignment across teams, with a shared strategy for how the brand shows up everywhere LLMs are pulling data from.

The measurement problem

Meanwhile, traditional performance metrics are breaking down.

Warden described a pattern many marketers are seeing:

  • Rankings remain stable.
  • Traffic declines.
  • Leads increase — but attribution is unclear.

Warden said:

  • “Demand is still there. But… traffic is no longer the proxy for that.”
  • “Your content is being used, but not in the way that sends people back to you.”

This creates a growing gap between impact and measurement.

From rankings to relevance

The nature of competition has changed.

  • “You’re no longer competing for a position. You’re actually competing to be in a synthesized answer,” Warden said.

Authority is also harder to control than it used to be. It now depends heavily on external validation — what others say, not just what you publish.

  • “Algorithms are no longer your ally… they are the ultimate arbiter of what is meaningful.”

That is one of the biggest changes in search since Google itself.

The new rules of brand visibility

AI hasn’t changed what makes a brand strong, but it has changed how strength is measured and rewarded.

The brands that win will:

  • Build real authority in a focused niche.
  • Publish original, high-value content.
  • Align messaging across every platform and channel.
  • Earn consistent validation from third parties.

In this new environment, visibility must be earned across an ecosystem.

Or as Warden put it:

  • “Make it impossible for [LLMs] to ignore you.”

Google adds AI-qualified call leads to improve measurement

21 April 2026 at 22:29

Google is upgrading Google Ads call campaign measurement with a new AI-qualified call leads feature, designed to optimize for lead quality — not just call length.

What’s new. AI-qualified call leads use machine learning to analyze calls and determine whether they represent meaningful business opportunities. The system then feeds that higher-quality data into bidding and reporting.

Zoom in. Advertisers will get AI-generated call summaries and tags, giving more transparency into what happened during each interaction. At the same time, smart bidding can prioritize higher-value leads based on these signals rather than simple time thresholds.

Why we care. Call campaigns have long relied on blunt metrics like duration to signal value. This update shifts optimization toward actual lead quality, filtering out low-value interactions like spam or robocalls. This should result in better ROI, less wasted spend, and clearer insight into which calls actually matter.

How it works. Call recording is turned on by default for most advertisers so AI can assess call quality, though industries like healthcare and financial services are excluded. Advertisers can still adjust call length thresholds or disable recording in account settings.

The fine print. The feature is currently limited to calls in the U.S. and Canada.

Bottom line. Google is turning call tracking into call qualification, helping advertisers focus on leads that are more likely to convert.

The funnel flip: Why AI forces a bottom-up acquisition strategy

21 April 2026 at 19:00

The industry has been building top-down for 30 years. Start with awareness, get in front of as many people as possible, and work them down through the acquisition funnel.

The logic made sense in the broadcast era, and it wasn’t entirely wrong in the search era.

In AI-driven environments, it’s simply wrong.

Search engines, assistive engines, and agents build their ability to recommend your brand from the bottom up. They need to understand who you are before they can evaluate whether you’re credible. They need to evaluate your credibility before they recommend you to anyone.

If you build from the top down, you’re wasting budget on awareness while the engines and agents have no foundation to attach it to.

Agential systems make the stakes absolute. An agent acting on behalf of a user evaluates your brand, your offers, and your credibility, then commits.

If the machine doesn’t understand who you are, what you offer, and whom you serve, the agent can’t act in your favor. If it understands you but doesn’t find you the most credible option, it selects your competitor.

This is the ultimate zero-sum moment in AI: the recommendation you never saw happen, made to the prospect you never knew was considering you.

The acquisition funnel runs simultaneously in opposite directions

The user experience of the acquisition funnel hasn’t changed. Someone hears about you, considers you, and decides whether to commit. That journey runs wide to narrow, top to bottom: awareness first, evaluation second, and decision at the bottom.

This is the familiar funnel. Elias St. Elmo Lewis formalized it in 1898. Every marketing model since has been built around it, and for 128 years, nothing fundamental has changed. The channels evolved, but the direction was always the same: reach first, relationship second, commitment third. 

In 2002, my friend Philippe Lanceleur described the web perfectly for search: building a website and hoping people find it is like opening a shop in the middle of a field. Nobody passes by accident. You go where your audience hangs out, engage with them, and invite them to cross the field and visit your shop. Awareness was still the prerequisite, and your marketing had no chance of working without it.

The shift to entities changed the prerequisite. When Google introduced the Knowledge Graph in 2012, the machine began forming opinions about brands independently of what users were searching. The machine was drawing its own map and building roads for you. 

Those machine-built roads are built from the shop outwards by the machines, which means brand understanding and reputation, not awareness, become the prerequisite. All my work since 2012 has been focused on brand understanding and reputation for exactly this reason.

AI makes the acquisition funnel flip more powerful still. Assistive engines and agents now actively direct users toward destinations they’ve assessed as credible. Lanceleur’s shop in the field is no longer a handicap if the machines know it’s there and believe it’s the best destination for their users: they provide the roads.

This is the first genuine structural break in how brands must think about marketing since 1898. The display funnel is unchanged: the user still travels from awareness to decision. What makes you a candidate at the top of that funnel in AI engines and agents is built by training the machine to bring users to you.

How top-down and bottom-up coexist

The big takeaway is that the build funnel runs in the opposite direction. 

  • The machine starts at the bottom. Does it know who you are? 
  • It works up through credibility. Does it trust what you do? 
  • Only then does it reach advocacy. Will it recommend you proactively? 

The user’s moment of commitment stays the same: know, like, and trust the brand. But the only way for the user to arrive at that moment in AI assistive engines is if the machine knows, likes, and trusts your brand.

The coexistence of the bi-directional funnel is real. You can build top-down in channels you control: paid media, broadcast, and direct outreach. You can still buy awareness and pull people to decision. In the engines themselves, the user still has the top-down experience. 

The difference is that within the engines for organic, you have to build from the bottom of the funnel (BOFU) up because that’s how the machines build the roads to your brand.

Every algorithm, assistive engine, and agent operates on entity and brand signals, not on how loudly you push. Reach on social media has always been influenced by brand recognition, engagement, and topic, and here too, brand understanding and trust are gaining increasing weight.

With AI, roads to your shop in the field are increasingly machine-built, and machine-built roads are built from brand understanding outwards to awareness.

The original 1898 funnel still describes what users experience. In AI assistive engines and agents, it no longer describes the strategy that gets you in front of them: for that, you need to flip the funnel.

In short, you can’t build your funnel in AI engines and agents top-down in a world where those machines are the mediators between you and your audience. The machine won’t recommend brands it doesn’t understand, and it will only advocate for brands it trusts. This is a mechanical fact.

AI infrastructure works this way, so your strategy must too.

  • Understandability creates the entity node.
  • Credibility gives it preferential consideration.
  • Deliverability gives it visibility.

Foundation. Proof. Reach. Put like that, it really does seem obvious, unavoidable, and comfortable.


How the funnel becomes a guided sequence in AI

The user journey on Google used to be a series of individually composed SERPs that users navigated themselves. Search engines composed those pages cleverly (Google and Bing have run a whole-page algorithm since universal search launched in 2007, Darwinistically pulling elements from across verticals and scoring the composition as the “product”), but the navigation across the funnel was the user’s job.

As an SEO, you optimized for a position in the composition, and the user carried themselves from awareness to consideration to decision by browsing, comparing, and choosing.

Over the last few years, the algorithmic trinity has fundamentally changed that dynamic. The LLM reasons about what the user is asking, decides whether to answer directly, ground, search, or fact-check via the knowledge graph, and runs fan-out queries to retrieve across multiple angles of the question.

Those fan-out queries (which I’ve also called cascading queries) help the assistive engine answer the question more completely and more accurately than a single query would. But the breadth of what it gathers also lets it do one more thing — and this is the mechanic that actually matters in the funnel that leads to the perfect click: it can anticipate what the user is likely to do next, and set the current answer up to flow toward it.
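That mechanic can be sketched in a few lines of Python. This is a simplified illustration, not any engine’s actual pipeline: `generate_subqueries` and `retrieve` are hypothetical stand-ins for the LLM and the retrieval layer.

```python
# Simplified sketch of fan-out retrieval. Not any engine's real implementation:
# generate_subqueries() and retrieve() are hypothetical stand-ins.

def generate_subqueries(question: str) -> list[str]:
    # A real engine would use an LLM; here we hard-code illustrative angles.
    return [
        f"{question} definition",
        f"{question} comparison of options",
        f"{question} common next steps",
    ]

def retrieve(query: str) -> list[str]:
    # Stand-in for a search/index call; returns a placeholder passage.
    return [f"passage about '{query}'"]

def answer_with_fanout(question: str) -> dict:
    subqueries = generate_subqueries(question)
    passages = [p for q in subqueries for p in retrieve(q)]
    # The breadth of retrieved angles is what lets the engine both answer
    # the current question and anticipate the user's likely next step.
    next_step = subqueries[-1]  # surfaced to the user as a follow-up question
    return {"passages": passages, "suggested_follow_up": next_step}

result = answer_with_fanout("choosing a CRM")
```

The point of the sketch: because the engine already retrieved the “next steps” angle, it can shape both the answer and the follow-up it offers.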

The explicit representation of the LLM’s prediction of “next step” is the follow-up questions you see in the results. But there’s an additional implicit side to this architecture you might have missed: the way it composes the current answer shapes what the user is likely to do next. The AI is, to a very large extent, defining the acquisition journey. It seems to me the user is less in control than they feel.

That means your job appears to be to fight for a slot in a sequence the machine has already built.

That’s fair. But I’d argue that the brand’s job is also to train the machine’s expectations about what a logical next step looks like, so that when the LLM composes, your content is the natural thing it reaches for. 

You supply the ideas, you structure the follow-ups, you publish the logical bridges (“if you’re thinking about X, the next thing to consider is Y, and here’s the evidence”) in enough places, and with enough corroboration, that the machine treats those bridges as settled, not speculative. The machine then guides users toward you because your content is what its prediction landed on, because your framing is what made that prediction logical in the first place.

Now, is the AI thinking one step ahead? Or playing chess and planning several moves in advance? It depends. How far ahead the machine can usefully look depends on the territory. 

On well-traveled ground, the paths are well-worn, and the branches are narrow, so the LLM can stage two, three, or more moves ahead. Think of this as established neurological synapses: your influence on the paths is limited here. 

In unusual territory, the branches collapse the prediction horizon back to one, perhaps two steps. That’s an opportunity for a brand to create the synapses with your brand firmly anchored. Here’s yet another good reason to niche down, solve very specific problems, and have a very clear funnel pathway.

When defining the content I work on and terms I track, I use the concept of funnel pathway for exactly that reason — a top-of-funnel (TOFU) query that naturally leads to my brand at BOFU with a series of steps that are logical and relatively predictable.

So, track a set of terms that have a natural pathway to your brand at the zero-sum moment at the bottom of the funnel. Some start at TOFU and move through MOFU to BOFU. Others begin at MOFU with a clear path to BOFU, and some start (and end) at BOFU.
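As a rough illustration of how such pathways can be modeled for tracking — the terms and brand below are invented placeholders — each pathway is just an ordered sequence of terms tagged with a funnel stage, ending at the brand’s BOFU moment:

```python
from dataclasses import dataclass

# Illustrative model of a funnel pathway: an ordered sequence of tracked
# terms from the entry stage down to the zero-sum BOFU moment.
# All terms and the brand name are placeholders.

@dataclass
class PathwayStep:
    term: str
    stage: str  # "TOFU", "MOFU", or "BOFU"

pathways = [
    # Starts at TOFU and moves through MOFU to BOFU.
    [PathwayStep("what is brand authority", "TOFU"),
     PathwayStep("best brand authority tools", "MOFU"),
     PathwayStep("example-brand pricing", "BOFU")],
    # Starts (and ends) at BOFU.
    [PathwayStep("example-brand reviews", "BOFU")],
]

def entry_stage(pathway: list[PathwayStep]) -> str:
    return pathway[0].stage

# Which entry stages does the tracked set cover?
coverage = {entry_stage(p) for p in pathways}
```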

I’ll probably get pushback here. The number of possible paths is effectively infinite because conversations with AI can go anywhere. True. But this is a better system than chasing search volume or tracking the terms the boss likes: it forces you to think, focus, and prioritize — and it works.

Get your foot in the door, and keep it there

Strategically, you have to get a foot in the door as early as possible in the conversation, and ensure that you keep your foot there as the conversation evolves and the AI guides the user down the funnel.

The stronger your foot in the door, the more you shape the conversation the machine builds, the more that conversation thins the field of competitors the machine considers for the next step, and, by virtue of elimination, the more likely you are to get the perfect click at the zero-sum moment at the bottom of the funnel.

I’m advocating for educating the algorithms (remember, Google is a child?). The better you guide, the more the machine’s best-brand prediction converges on you step after step, because the path it’s following is the path you built into its brain. 

Get in high, and the compounding works in your favor. Get in late, and your competitors’ bridges become the machine’s bridges, and every subsequent step is a fight to re-enter a sequence where your competitor is Top of Algorithmic Mind.

Display is where your acquisition funnel lives in the AI engine pipeline

The AI engine pipeline runs 10 gates from discovered to won. 

  • Everything up to annotation (Gate 5) is infrastructure: can the machine access, store, and classify your content? 
  • From recruitment (Gate 6) onward, the engine compares you to every alternative. 
  • The understandability, credibility, and deliverability (UCD) layer is where the user sees the machine evaluation at display (Gate 8). Understandability is the key to won (Gate 9).

The three dimensions of brand visibility at display

Display is the moment when the machine can make or break your brand by being the most visible in the market at every touchpoint when your ideal customer profile (ICP) is having a conversation with the engine or agent. 

It’s obvious that this is the key moment when you need the engine or agent to be absolutely convinced that you’re the best solution to the specific user’s problem at the exact moment they convert (see the 95/5 rule here).

Understandability (U) is the trusted partner/decision layer, without which nothing else will work long term. Does the machine know who you are, what you do, and who you do it for? 

U is BOFU, which is both the moment of decision and (logically) the deepest trust layer for both the AI user and the human user. When someone searches your brand name or asks an AI assistant directly about you, the machine draws on its understanding of your entity. 

If that understanding is weak, contradictory, or absent, the machine either hedges or stays silent. Typical failure modes show up in AI responses as “claims to be,” “appears to offer,” or “no idea who you are talking about.” The doubt tax — where prospects ready to buy get a hedge instead of a confirmation — is a U failure.

Credibility (C) is the recommender/consideration layer. Does the AI believe you’re genuinely better than your competitors at what you do? 

C is MOFU, the comparison and evaluation layer. When someone asks an AI who is the best in market, the machine draws on its confidence in your N-E-E-A-T-T credibility and will exclude you if you haven’t built a rock-solid argument to be cited. 

If AI confidence in you is weaker than its confidence in the credibility of your competitor, you lose the comparison. The ghost tax — absent from competitive evaluation and ignored in shortlists — is a C failure.

Deliverability (D) is the advocate/awareness layer. Does the AI surface your brand to people who aren’t searching for you, recommend you unprompted when they research the market, and treat you as the reference option in your category? 

D is TOFU, the reach layer. When someone asks an AI about a problem you solve, without knowing your brand exists, the machine draws on its confidence that you are the right answer to put in front of them.

Advocacy only happens when the machine has first understood who you are (U), and judged you better than the alternatives (C). The invisibility tax — never mentioned to prospects researching the market — is a D failure.

The business case for UCD: The three taxes

My untrained salesforce framing is super clear for a non-technical audience. Google, ChatGPT, Perplexity, Claude, Copilot, Siri, and Alexa are seven employees working 24/7, and they’re either selling for your brand or for your competitors. AAO can be defined as training AI assistive engines and agents to sell for you at the top, middle, and bottom of the funnel.

Here’s the part most of the industry still hasn’t internalized: machines aren’t an alternative audience. They’re a mirror of how people process information, with the noise filtered out. 

Optimizing for machines is optimizing for humans with less guesswork. A brand SERP is Google’s opinion of the world’s opinion of you, and Google’s opinion is built from the same signals that form human opinion, only weighted more consistently, and corroborated across millions of data points. 

When you optimize to improve what Google believes about your brand, you’re not gaming an algorithm. You’re correcting and reinforcing what the world already believes about you, expressed with the precision humans rarely articulate. The algorithm is the clearest feedback loop marketing has ever had. 

Each tax is a specific failure mode of that untrained salesforce. 

  • The doubt tax is what you pay when they can’t confirm who you are to a prospect ready to buy. 
  • The ghost tax is what you pay when they can’t argue your case against competitors in a shortlist. 
  • The invisibility tax is what you pay when they don’t mention you at all to the prospect researching the market. 

The fixes run in one order: U before C, C before D, because the taxes are mechanically ordered, and the remediation has to match.

Content was king in the keyword era, context took the throne around 2016, and confidence is king now. The AI engines don’t just store and retrieve. They stake their own credibility on the brands they recommend, and that staking runs on accumulated confidence at every layer. 

Build U to retire the doubt tax. Build C to retire the ghost tax. Build D to retire the invisibility tax. Every tax retired is a recommendation earned, and every recommendation earned is revenue the machine now generates on your behalf instead of your competitor’s. 

Strategy: Your brand SERP and AI résumé tell you where to begin

A brand SERP is what Google shows when someone searches your brand name. The AI résumé is the same object in conversational format. The agent dossier is the machine’s silent judgment during evaluation, before any recommendation reaches a person.

All three are dual-function objects. They’re the machine’s output to every audience that asks about you, and your diagnostic instrument for reading the machine’s current confidence. That dual function is why they’re both the product and the audit.

Read all three as the machine’s understanding of you, its assessment of your credibility, and its confidence in you as a solution provider. The diagnostic triage is short.

If the machine gets things wrong, hedges facts, or the results don’t reflect your brand narrative, that’s an understandability problem. The entity record is inconsistent, weak, or contradictory, and the work is on your entity home: clean structured data, consistent descriptions, clear schema, and entity resolution that points to a single authoritative source.

If the results are unconvincing, unflattering, or don’t do you full justice, that’s a credibility problem. Your N-E-E-A-T-T is weak, and the work is offsite: third-party mentions, review platforms, earned media, and co-citations from sources the machine trusts.

If the results don’t reflect your digital marketing strategy, that’s a deliverability issue. The work is in content, both on your channels and on third-party properties, the type of material the machine treats as proof rather than a claim.

In every case, the diagnosis comes before the tactics. U before C, C before D, and the sequence isn’t optional.

Acquisition is one act in a 15-stage play

The acquisition funnel feels dominant because it’s where conversion happens. The funnel sits on the display gate, where UCD determines whether the machine recommends you. 

Everything else, the work that lets display happen at all and the work that compounds afterward, runs across the nine gates before it and the five gates after it.

Those five gates after Won are where most of the money is made and most of the confidence is generated. Onboarded, performed, integrated, devoted, and codified — every client outcome feeds signals back into gate zero for the next prospect who has never heard of you. 

The flywheel is the mechanism. Get it right, and every satisfied client strengthens the machine’s confidence in your brand for the next one. Get it wrong, and every neutral outcome decays it.

That’s more than just an acquisition strategy; it’s a business strategy, with the machine as a constant participant at every stage.

The final articles in this series will show you what happens after won: how every satisfied client either trains the machine to recommend you more confidently next time, or quietly erodes the confidence you’ve already built. 

The funnel isn’t where the money is made, but it is the critical moment the flywheel feeds, and the point where the path to money begins.

This is the 10th piece in my AI authority series. 

Google rolls out new AI safety features in Ads Advisor

21 April 2026 at 18:30

Google is adding three new “agentic” safety features to Ads Advisor, its AI assistant inside Google Ads, aimed at reducing manual work while tightening security and compliance.

As campaigns grow more complex, advertisers are spending more time fixing policy issues, managing access, and handling certifications. Google’s pitch: let AI handle the heavy lifting so marketers can focus on performance.

What’s new. The update introduces proactive troubleshooting, always-on security monitoring, and instant certifications — all powered by AI and Gemini capabilities.

Zoom in:

  • Ads Advisor can now flag and help resolve policy violations automatically, even before advertisers notice them.
  • It monitors accounts 24/7, surfacing risks like suspicious domains or inactive users through a new security dashboard.
  • Certifications that once took weeks can now be granted instantly or submitted with a single click.

How it works. Instead of waiting for user prompts, Ads Advisor scans accounts and websites proactively, suggests fixes, and confirms resolution before appeals are submitted. On the security side, it continuously evaluates account health and recommends improvements, while new passkey support reduces reliance on passwords.

Why we care. Tasks that used to take hours — fixing policy issues, monitoring account security, and handling certifications — can now be done proactively by Ads Advisor, reducing delays and risk. The result is faster campaign execution, fewer disruptions, and less manual overhead.

What to watch. These features are rolling out in the coming months to English-language accounts, with more languages expected later.

Bottom line. Google is turning Ads Advisor into a hands-on operator, not just a helper — aiming to make ad accounts safer, faster, and far less manual to manage.


How to build a YouTube analytics report in Data Studio

21 April 2026 at 18:00

Creating video content takes time and budget, so understanding how it performs is critical.

YouTube’s native analytics in YouTube Studio are robust, but they’re locked behind account access. That can make reporting difficult — especially when you need to share data or don’t have direct login access.

Moving that data into Google Data Studio (now Looker Studio) makes it easier to analyze and distribute.

With Data Studio, you can:

  • Pull YouTube data into reports you already use.
  • Schedule automated updates for stakeholders.
  • Customize dashboards around the metrics that matter.
  • Track performance without relying on backend access.

Here’s how to pull your YouTube analytics into a Data Studio report.

Using a template or starting from scratch

You have two options when setting up a YouTube report in Data Studio.

  • If you want something quick and easy, you can use Google’s YouTube Analytics template from their template gallery. It’s a great place to start because it provides a clean, well-designed report with foundational metrics and puts you in a good position to understand which metrics are available. But know that this template has problems you’ll need to fix, which I’ll discuss below.
  • The other option is to create a report from scratch, which is a great choice if you already have a report you want to add a new YouTube Analytics page to, or if you just want to learn how to use Data Studio.

The information below will help you do both.


If you’re not the YouTube account owner

If you’re setting up this report for a client, or if you’re not the owner of the YouTube account, you’re going to run into an issue where the YouTube account doesn’t show up as a usable source in Data Studio. Here’s how to get around it:

  • Go to YouTube Studio settings > Permissions, and give Manager permissions to the account email that you’re using in Data Studio.
  • Get the Channel ID from the channel’s YouTube URL.
  • Add a YouTube connector to Data Studio, go to Advanced, and paste the Channel ID.

You should now have access to that YouTube account.

Using the Data Studio YouTube Analytics template

From the Data Studio home page, click Templates > Template Gallery. Under the category dropdown, click on YouTube Analytics.

Clicking this will create a brand new Data Studio report that’s mostly ready to use. It loads up with sample data from the Google Analytics YouTube channel. Click the button at the top that says “Use my own data.”

The first time you set up a report, you need to authorize access to your data. Click the Authorize button.

Choose the Google Account connected to your YouTube channel, and then you’ll see any connected channels in the dropdown at the top of the page.

You’ll notice that the data doesn’t change when you select a channel here. That’s because this dropdown is connected only to the other dropdowns next to it, not to any of the charts on the page.

To update everything else on the page, click the Edit and Share button.

If this is the first time using Data Studio, you’ll also need to do some basic account setup.

Then click the Edit button at the top of the page.

Now you’ll need to add your YouTube channel as a source. Click the Add data button and then search for the YouTube Analytics connector.

If the Google Account is the owner of the YouTube account you connected to this Data Studio report, it’ll show up in the Channel section as an option. 

Your main YouTube channel will be in the My Channel tab, and other channels are in the All Channels tab, as shown below. 

If you don’t own the channel, see the section above to connect other channels that you don’t own, but have access to.

Now you’ll be able to change the data source on any charts on the page. Simply click a chart, and you’ll see the data sources available to you in the right Properties panel.

You can change the source of every chart on the page at once: select a chart, right-click it, go to the Select menu, choose “Charts with this data source on page,” then pick your data source in the Properties sidebar.

You’re mostly done, but as mentioned earlier, there are some errors in this report that you’ll need to fix. The charts at the bottom of the report are using the wrong metrics.

It’s unclear why Google hasn’t updated this template — it’s been broken for a long time, and there’s no sign they ever will fix it. In the meantime, you’ll need to update the following.

Change:

  • Likes from “Average Watch Time” to “Video Likes Added”
  • Subscriptions from “Video Link” to “User Subscriptions Added”
  • Dislikes from “Average View Percentage” to “Video Dislikes Added”

The charts in the Comments section are correct, so you don’t need to change anything there.

Click on each of the charts highlighted above, one by one, and change the metric in the Properties sidebar.

And now the report is finished and ready to use. Click the View button at the top of the page to view the report in a view-only format.

Get the newsletter search marketers rely on.


Copying a template into an existing report

Data Studio doesn’t support the ability to add or import templates into an existing report, but you can copy a page from one report to another. Follow the steps above to create a report using the YouTube Analytics Channel template, then copy it into another report.

To do that, go into Edit mode, select all (Ctrl+A or Cmd+A), and copy all (Ctrl+C or Cmd+C). Then, in your existing report, create a new page, and paste everything you’ve copied into the page (Ctrl+V or Cmd+V), or right-click on the page and select Paste.

All of the charts will likely come in broken, but you can easily update them using the tip mentioned earlier – right-clicking a chart, choosing Charts with this data source on page, and then choosing the correct source in the Properties sidebar.

Customizing your report

The YouTube template in Data Studio has most of what you need, but you can add much more.

There are some metrics you simply can’t get in Data Studio that you’d find in the official YouTube Analytics backend — such as revenue, how viewers find your videos, watch behavior, popular viewing times, device types, demographics, and retention. Those are big limitations, but there’s still plenty to work with.

To add more charts to your report, you’ll need to create more space at the bottom. In the menu, click on Page > Current page settings.

In the Style tab of the Current Page Settings sidebar, set the canvas size to something like 3,000 pixels. This will give you plenty of space to work with, and you can always shorten or lengthen it as needed later.

Now you can add many types of charts with a wide range of dimensions and metrics.

You can add multiple metrics to graphs to get the data you need for better analysis. You can also rename headers to clean them up, and make them look less cluttered.

You can pull in quite a lot of data from the connector’s available dimensions and metrics.

Using Data Studio for ongoing YouTube reporting

Setting up a Data Studio report for YouTube is a great way to track your top-level metrics, and can be especially useful for monthly client reporting. It takes siloed, hard-to-share data from YouTube, and puts it into a clean, automated, centralized tool for easier decision-making.

You can also set up scheduling so that Data Studio sends automated PDF exports to your email.

That’s it. As you can see, it’s fairly simple to set up, but you can also add more advanced customizations to track many other KPIs.

Why IBM says every brand now needs a GEO playbook

21 April 2026 at 17:35
GEO playbook

Search has changed, and brands need to catch up fast, according to IBM’s Alexis Zamkow (global lead of Marketing Transformation solutions) and Sandhya Ranganathan Iyer (associate partner – AI), speaking yesterday at Adobe Summit.

AI tools don’t just help people search. They answer questions, compare products, and recommend brands. In many cases, users never even visit a website.

That means if your brand isn’t part of the AI-generated answer, you may not be part of the decision.

To keep up, brands need more than new tactics. They need a system — a GEO (Generative Engine Optimization) playbook. Here’s a recap of their presentation, Adapt or Disappear: How Brands Win with AI-Powered Search.

The AI shift: You’re marketing to machines

AI agents now sit between you and your customer.

They take a complex market and simplify it. They decide what information to show. And they often speak on your behalf.

  • “These machines are disintermediating the brand experience,” Zamkow said.

At the same time:

  • Consumers are using AI for research and decisions
  • Businesses are adopting it even faster
  • Many searches now end without a click

Zamkow said an estimated 75% of search visibility could shift to AI agents in the next two years.

That’s why visibility today depends on being part of the answer itself.

The GEO playbook: 12 components every brand needs

To respond, the speakers outlined a 12-part playbook. It spans content, technology, and operations.

1. Strategic content foundations

Your content must tell one clear story — everywhere.

That includes your website, PR, social, and third-party mentions. If each channel says something different, AI won’t trust your brand.

For example, if your site highlights premium quality, but reviews focus on low price, that mixed message weakens your authority.

Consistency builds trust for people and machines.

2. Retrieval-grade passage standards

AI doesn’t rank webpages. It extracts answers. So your content must be easy to extract.

Good content looks like:

  • Clear questions and answers.
  • Short, focused sections.
  • Direct language.

For example, instead of a long paragraph, write:

  • Question: What are the best running shoes for beginners?
  • Answer: A short, clear response

This makes it easier for AI to reuse your content in answers.

3. Technical foundations

Even great content won’t work if AI can’t read it.

Machines rely on:

  • Clean HTML (not just visual design)
  • Structured data (schema, metadata)
  • Pages that load content directly

One example from the session: a beautiful website appeared to AI as “a headline and a blank page.”

If your content isn’t readable, it won’t be used.
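To make the structured-data point concrete, here’s a minimal sketch of FAQPage markup built with Python’s json module. The question and answer text are hypothetical examples; the @context/@type structure follows schema.org’s FAQPage type, and the output would be embedded in a `<script type="application/ld+json">` tag on the page.

```python
import json

# Hypothetical FAQ content; the schema.org FAQPage structure is standard.
faq_jsonld = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "What are the best running shoes for beginners?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "Look for a cushioned, neutral trainer and get fitted at a running store.",
            },
        }
    ],
}

# Embed this output inside a <script type="application/ld+json"> tag.
print(json.dumps(faq_jsonld, indent=2))
```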

4. On-site search + genAI search alignment

Start with your own site.

If your internal search — especially AI-powered search — works well, you’re already ahead.

Think of it this way: if your own system can’t find answers on your site, external AI tools won’t either.

Strong internal search helps train your content for external visibility.

5. AI search citation qualification model

In GEO, the goal isn’t just to be mentioned. It’s to be cited.

  • Mentions mean you show up.
  • Citations mean AI trusts you.

AI looks for signals like:

  • Clear expertise.
  • Consistent messaging.
  • Agreement across sources.

Zamkow called citations the “holy grail” of visibility.

6. Extraction optimization

AI tools pull content from many places and combine it.

To be included, your content must be:

  • Easy to extract.
  • Clearly structured.
  • Rich in context.

If your content is hard to break apart, AI will skip it and use something else.

7. Real estate: third-party strategy

Your website is no longer your main source of visibility.

  • 85% of mentions come from external domains.
  • Third-party content drives most citations.

That includes:

  • Reddit
  • Social media
  • Reviews and forums
  • Media coverage

This means your PR and social teams are now critical to search success.

Your brand lives across the internet — not just on your site.

8. Measurement, KPIs, and reporting

Old metrics don’t tell the full story anymore.

Instead of just tracking clicks, you need to track:

  • How often AI mentions your brand.
  • Where you’re cited.
  • Which platforms show your content.

The key question changes from “Did we get traffic?” to “Did AI recommend us?”

9. SOPs (standard operating procedures)

Consistency doesn’t happen by accident. Teams need clear rules for:

  • How content is written.
  • How it is structured.
  • How it is published.

Without SOPs, different teams will create different formats. That confuses AI and weakens your visibility.

10. Prompting best practices

Search is now conversational.

While people still type keywords, they are increasingly describing their needs using more conversational language. For example:

  • Old search: “running shoes”
  • New search: “I’m training for a marathon. What shoes should I buy?”

Your content needs to match these types of questions.

That means thinking like the user — and writing like the answer.

11. Change management

This shift affects the whole organization.

Marketing, IT, PR, and product teams all play a role.

That means:

  • Training teams on new workflows.
  • Aligning goals and KPIs.
  • Breaking down silos.

This is bigger than just a marketing update. It’s a company-wide change.

12. Governance + versioning

GEO is never finished.

AI systems change constantly. Competitors update content. Rankings shift fast.

To keep up, brands need:

  • Ongoing monitoring.
  • Regular content updates.
  • Clear ownership of changes.

If your content becomes outdated, you can quickly lose your position in AI answers.

From SEO tactics to GEO systems

The GEO playbook reflects a larger change in how marketing works:

  • From keywords to prompts.
  • From links to citations.
  • From websites to ecosystems.
  • From traffic to answer eligibility.
  • From campaigns to continuous content.

The focus has shifted to building a system that consistently feeds AI the right information.

This is now a leadership issue

This shift is already reaching the top of the organization.

In one example, a product leader asked why their brand didn’t show up in an AI recommendation. The issue quickly escalated beyond marketing.

  • “This is not a problem for your SEO team,” Zamkow said. “This is at the CEO level.”

As AI becomes the front door to discovery, every leader will care about visibility.

Adapt or disappear

AI is already shaping how people discover and choose brands.

Consumers trust it. Businesses are using it. And it’s growing fast.

Brands that build and follow a clear GEO playbook — across all 12 components — will stay visible.

Everyone else risks being left out of the answer.

SEO reporting outgrew Data Studio — here’s what comes next

21 April 2026 at 17:00
SEO reporting outgrew Data Studio — here’s what comes next

Picture this: Your company relies on Data Studio for SEO reporting. 

It’s right before your next big meeting when you’re planning to present results… but Data Studio has an outage (again) and suddenly you have nothing to show. 

That’s embarrassing. And it happens more than it should.

It wasn’t even a year ago that I touted the benefits of Data Studio (since rebranded as Looker Studio) for SEO reporting. Now the platform feels archaic compared to the agentic coding tools available today.

Here’s how rigid SEO dashboards like those produced in Data Studio are holding you back and why code-driven SEO reporting is the only way to remain efficient and competitive.

The problem with Data Studio 

In the not-too-distant past, Data Studio was considered one of the best ways to customize SEO reporting. 

But things have evolved, and with new technology at our fingertips, Data Studio’s flaws are only becoming more pronounced. 

Here are some common issues that you may recognize when generating reports using Data Studio.

It’s easy to explode your dataset, and then everything breaks

You assume Data Studio can handle massive “Google-scale” data, but it’s buggy. For example, there are low limits on rows and fields, and even adding a few dimensions or joining multiple data sources can break the report at the worst times. 

You’re manually clicking through a slow interface

Every change in Data Studio requires manual updates. You’re clicking, refreshing, and waiting to see whether it worked, which makes iteration painfully slow. Data Studio has added some AI features, but they address only a small part of the report development workflow.

Debugging reports is a nightmare

Whereas agents can simply scan files with code-based reports, in Data Studio, a user has to laboriously click around the interface. 

The API is weak

Like a lot of Google services, Data Studio isn’t built as an API-first platform. This is something Google got institutionally wrong decades ago. Not being able to manage the platform using external tools creates bottlenecks.

Despite its recent rebrand, Data Studio hasn’t become any more relevant — not with the technologies that are now in play for SEO reporting.

But it’s not just Data Studio. Really, what SEO teams are up against is the rigidity of any dashboard-based reporting tool. Now all that is changing.


What’s changed: AI, APIs, and coding 

The shift away from rigid SEO dashboards is now possible because large language models are becoming more capable of generating reliable code, and APIs are accessible across many platforms.

This has led to the rise of AI-driven coding tools, including Claude Code, OpenAI Codex, and Gemini CLI.

At a high level, it works like this: You describe what you want in your SEO report, and they handle the heavy lifting. 

These tools are “agentic” because they can execute multi-step workflows like pulling data, transforming it, analyzing it, and then generating reports with minimal intervention.

You don’t need advanced coding skills to use them, but a basic understanding of data structures and APIs will make the process effective.

In practice, the entire reporting workflow can be done programmatically from start to finish.

These tools generate code that connects directly to data sources through APIs, removing the need to rely on dashboard connectors or preconfigured data pipelines.

From there, they can analyze the data and create full reports. This can happen in minutes as you become more familiar with the tools.

While each of the tools I mentioned has different strengths (for instance, some are better at reasoning, others at speed or integrations), they essentially do the same thing: transform SEO reporting from a manual, rigid process into something with endless possibilities. 

The power of this technology is hard to overstate. 

Why AI coding tools are better for SEO teams

AI coding tools are removing the roadblocks between data, development, and reporting for SEO teams. 

Faster SEO reporting and analysis

Speed is the most obvious advantage. 

Agentic coding assistants are enabling SEOs to create reports that previously required support from developers.  

In many cases, tasks that previously took days can be done in hours, and tasks that took hours can be done in minutes.

You can see this improvement even in small interactions.

For example, when data is processed directly in the browser (instead of re-querying a dashboard), it makes filtering, sorting, and slicing data significantly faster. 

Instead of waiting for a dashboard to refresh after every change, you can interact with the data in real time.

That’s just one way these technologies make you more agile.

Flexible and custom reporting workflows

Instead of having to work in predefined templates and a fixed structure, you can build the report for exactly what the situation requires. 

Plus, every major data visualization and plotting library is available on demand in any programming language. 

If you feel like one approach isn’t capturing the whole story in your SEO report, you can switch or combine multiple frameworks in the same output. 

From rankings and traffic trends to keyword clusters or content performance, you can apply nearly any chart. 

The examples below come from Observable Plot, created by data visualization expert Mike Bostock, but many other charting libraries are available.

While setup and onboarding take some initial effort, these tools are accessible to most roles on the team and immediately become more efficient than traditional reporting.

Transparent data constraints

Data limitations are clearer, too. 

For example, when you’re working with browser-based charting libraries, you have a better feel for how many rows you’re handling and what the system can realistically process. 

And when you do hit a limit, you understand exactly what’s happening and how to adjust. This helps prevent misleading or incomplete reporting. 


Real-world SEO reporting applications

What are some practical ways you can use these agentic coding assistants to run SEO reporting? 

Pre-meeting reports

Before client meetings, you can pull data from Google Search Console and GA4 via APIs, then have it cleaned and segmented programmatically and generate a notebook, dashboard or slide deck in a single workflow.
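As a sketch of what “cleaned and segmented programmatically” can mean in practice, here’s a minimal example that normalizes and merges query rows shaped like a Search Console response and adds a computed CTR. The row values are made up, and in a real workflow the rows would come from the API rather than being hard-coded.

```python
# Sample rows shaped like a Search Console query response (values are made up).
rows = [
    {"keys": ["best running shoes"], "clicks": 40, "impressions": 2000},
    {"keys": ["Best Running Shoes "], "clicks": 10, "impressions": 400},
    {"keys": ["trail shoes review"], "clicks": 25, "impressions": 500},
]

def clean(rows):
    """Normalize query text, merge duplicate queries, and add CTR."""
    merged = {}
    for row in rows:
        query = row["keys"][0].strip().lower()
        bucket = merged.setdefault(query, {"clicks": 0, "impressions": 0})
        bucket["clicks"] += row["clicks"]
        bucket["impressions"] += row["impressions"]
    for bucket in merged.values():
        bucket["ctr"] = round(bucket["clicks"] / bucket["impressions"], 4)
    return merged

print(clean(rows))
```

From there, the merged data can feed whatever output the meeting needs — a notebook, a chart, or a slide-ready table.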

Technical SEO analysis

Say you need to analyze crawl data or log files. Instead of exporting, filtering, and then visualizing the data manually, you could get the raw data, process it with code, and generate custom visualizations tailored to the exact problem you’re trying to solve.

Ad hoc stakeholder requests

Once data connections are established, last-minute reporting requests no longer have to mean staying up late to pull data and build reports. The next time someone asks for something like “non-brand CTR trends by device over the last 90 days,” you can produce this data with much less effort. 
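As an illustration, a request like that reduces to a few lines once the rows are in memory. This sketch assumes rows fetched with query and device dimensions, plus a hypothetical brand term list — both the values and the “acme” term are made up.

```python
from collections import defaultdict

# Rows shaped like a Search Console response with ["query", "device"]
# dimensions; the values and the "acme" brand term are hypothetical.
rows = [
    {"keys": ["best running shoes", "MOBILE"], "clicks": 30, "impressions": 900},
    {"keys": ["trail shoes review", "DESKTOP"], "clicks": 25, "impressions": 500},
    {"keys": ["acme store hours", "MOBILE"], "clicks": 50, "impressions": 400},
]

def non_brand_ctr_by_device(rows, brand_terms=("acme",)):
    """Sum clicks and impressions per device, excluding brand queries, then compute CTR."""
    agg = defaultdict(lambda: {"clicks": 0, "impressions": 0})
    for row in rows:
        query, device = row["keys"]
        if any(term in query.lower() for term in brand_terms):
            continue  # drop brand queries
        agg[device]["clicks"] += row["clicks"]
        agg[device]["impressions"] += row["impressions"]
    return {d: round(v["clicks"] / v["impressions"], 4) for d, v in agg.items()}

print(non_brand_ctr_by_device(rows))
```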

Really, if you can imagine it, you can do it with these agentic coding assistants. As a result, SEO teams can do more proactive analyses.

What this means for agencies and in-house teams

AI is impacting all knowledge workers, not just SEOs. 

By now, many have seen the viral article “Something Big Is Happening” by Matt Shumer, which paints a startling picture about the future of AI-powered work and adopting an “adapt or die” mentality.  

Research is beginning to show how these types of technologies are impacting productivity. 

One study by Stanford and MIT researchers found that access to AI tools in the workplace increases productivity by at least 14% on average, with a 34% increase for low-skilled workers. 

The bottom line is that anything that can be generated with code is going to be eaten by these CLI tools and agents, because they’re just so much faster. 

Businesses are catching on. Up to 64% of businesses now generate a majority of their code with AI assistance, according to a Business Insider report, and high-adoption teams are producing nearly double the output. 

SEO teams are experiencing faster reporting cycles, more iterative analysis, and the ability to handle more complex data.

AI coding assistants are also helping analysts become builders. Non-technical users can build and iterate in ways that were previously out of reach.

Ultimately, this shift is becoming table stakes. The SEO teams that integrate these tools into their workflows will move faster and produce better results. 

The competitive advantage is going to those who adopt these technologies first.

Where to begin, though? Consider piloting a small project:

  • Start with one repeatable reporting workflow.
  • Connect a data source like Google Search Console via API.
  • Test and refine a single report before expanding to other use cases.

The future of SEO reporting is agentic and code-driven

Traditional SEO reporting tools are quickly becoming a bottleneck. 

AI coding assistants are helping SEO teams handle any type of reporting request without the old friction, while delivering faster, better insights.

The companies that adapt will gain the advantage in SEO execution. Start by replacing one recurring report with a code-driven workflow and build from there.

Microsoft launches AI Max and new ad tools for the “agentic web” era

21 April 2026 at 17:00
Microsoft (Credit: Shutterstock)

Microsoft is rolling out a suite of updates across Microsoft Advertising to help brands stay visible — not just to people, but to AI agents increasingly making decisions on their behalf.

What’s new. The update spans measurement, commerce, and media, with new tools designed to help advertisers show up in AI-driven experiences and transactions.

On the ads side. Microsoft is introducing AI Max for Search campaigns, which expands query matching and personalizes ad delivery across AI surfaces like Copilot and Bing. It’s also launching “Offer Highlights,” new ad formats that surface key selling points — like free shipping — directly within AI conversations.

Zoom in:

  • Expanded AI Visibility in Microsoft Clarity shows how brands appear in AI-generated answers, including which content gets cited and where competitors outperform.
  • New Universal Commerce Protocol support in Microsoft Merchant Center structures product data so AI agents can discover and transact on it more easily.
  • Copilot Checkout enhancements enable purchases directly inside Microsoft Copilot, reducing friction from discovery to sale.

Also notable. A new AI-powered audience generation tool lets advertisers describe their ideal customer in plain language, with the system building targeting segments automatically.

Why we care. Microsoft is changing how visibility works in Microsoft Advertising — shifting from clicks and rankings to being selected by AI systems. Tools like AI Max, AI Visibility, and Offer Highlights help brands show up in AI-driven decisions, not just search results. As AI agents take a bigger role in discovery and transactions, advertisers who adapt early will have a clear advantage.

Between the lines. This is a shift from optimizing for clicks to optimizing for selection — ensuring your brand is chosen by AI systems, not just seen by users.

What to watch. Early data suggests AI-driven traffic is growing far faster than human traffic, signaling where future demand may concentrate.

Bottom line. Microsoft is preparing advertisers for a world where winning means being understood — and trusted — by AI agents, not just ranking in search results.

Dig deeper. Win Across All Three Eras of the Web

How to measure Demand Gen creative impact with asset uplift tests

21 April 2026 at 16:00
How to measure Demand Gen creative impact with asset uplift tests

Demand Gen campaigns have high visibility across YouTube, Discover, and Gmail. However, they pose a key challenge: the “attribution illusion.” You’ll often question whether reported conversions in the platform are truly incremental or if these users would’ve converted through search either way.

That’s why in November, Google launched asset uplift experiments, giving you the ability to measure the impact of Demand Gen creative through an A/B split test. This means you can replace assumptions with a clearer view of what’s actually driving incremental results.

Relying too heavily on creative instinct or default reporting can lead you down an inefficient path and divert valuable creative resources toward poor-performing assets. Using Google’s A/B testing capabilities helps you isolate the impact of individual assets and avoid that outcome.

Why attribution doesn’t equal incrementality

If a user views a Demand Gen ad on YouTube and doesn’t click, but later searches for the brand and converts, Google may attribute partial or full credit to the Demand Gen campaign and creative. That attribution reflects correlation more than causation.

Accurate measurement follows the scientific method: you need to know what happens when the creative isn’t shown. Withholding the test assets from a segment of the target audience establishes that baseline.

The difference in conversion rates or any primary KPI between the treatment group — those who were exposed to the ad — and the control group — those who weren’t exposed — shows the actual incremental lift the creative is driving.
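That treatment-vs.-control comparison is simple arithmetic. A minimal sketch with hypothetical numbers:

```python
def incremental_lift(treatment_conv, treatment_users, control_conv, control_users):
    """Relative lift of the treatment conversion rate over the control baseline."""
    cr_treatment = treatment_conv / treatment_users
    cr_control = control_conv / control_users
    return (cr_treatment - cr_control) / cr_control

# Hypothetical 50/50 split: 60 conversions vs. 50 on equal-sized audiences.
print(f"{incremental_lift(60, 10_000, 50, 10_000):.0%}")
```

Statistical significance still has to be checked separately — the platform’s confidence interval, not the raw difference, tells you whether the lift is real.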

Dig deeper: Why incrementality is the only metric that proves marketing’s real impact

What you need before testing creative uplift

One common mistake is launching experiments without enough data to reach statistical significance. To avoid inconclusive or invalid results, make sure your campaign meets these prerequisites before setting up the test.

Conversion volume 

Google recommends having at least 50 conversions across treatment and control arms during the experiment to measure lift accurately. If your primary conversion doesn’t receive this volume, consider optimizing the test around high-intent micro-conversion actions, such as “Add to Cart.”

Budget minimums

Experiments should run with continuous, uninterrupted spending. If your Demand Gen campaign is limited by budget and stops early each day, the control group data will be skewed. 

The campaign must have a sufficient budget to run for at least four weeks, or until a statistically significant result is achieved.

Creative isolation

Test only one new variable at a time. To determine if a specific video asset drives uplift, keep all other campaign elements, such as audience, bidding, and standard image assets, unchanged.

Dig deeper: Why Demand Gen is the most underrated campaign type in Google Ads


How to run an asset uplift test in Google Ads

Setting up a creative uplift test is now more streamlined within Google Ads. To build a valid experiment, follow these steps.

1. Define a clear hypothesis

Every valid scientific test begins with a clear hypothesis. Avoid running tests without a defined objective. For example:

  • Bad hypothesis: “Let’s see if our new video works.”
  • Good hypothesis: “Adding user-generated content (UGC) to our Demand Gen asset group will drive a 10% incremental lift in ‘purchase’ conversions compared to standard static image carousels.”

2. Navigate to the Experiments interface

Log in to your Google Ads account and navigate to the left menu. Select Campaigns > Experiments. Click the plus (+) button to create a new experiment, choose Asset tests provided by you, and make it a Demand Gen campaign experiment.

3. Configure a 50/50 split

Google will prompt you to define your split. To set up statistically sound results, use a 50/50 cookie-based split. 

This ensures both control and treatment groups have equal historical data and algorithmic weighting, and prevents users from ending up in both arms of the test. Assign your existing campaign as the control, and the duplicated campaign with new assets as the treatment.

4. Lock your variables

Once the experiment begins, you must practice extreme discipline. Don’t change audiences or targeting, and avoid drastic bid and budget changes. 

Any adjustment made to either campaign during the testing window will introduce noise and could invalidate the statistical significance of your results.

5. Set the duration

Run the experiment for at least four weeks. 

  • Week 1 serves as a learning period while the algorithm adjusts to the audience split, new creative, and bid model learning (especially if leveraging smart bidding). 
  • Weeks 2 to 4 provide actionable performance data. 

For longer conversion cycles, such as B2B SaaS, consider extending the test to six or eight weeks.

Dig deeper: What it takes to make demand gen work for B2B and ecommerce

What your experiment results actually mean

When the experiment concludes, review results in the Experiments dashboard, which reports the performance of each arm and its confidence interval across metrics. Interpret the outcomes as follows to validate the hypothesis you defined earlier.

Outcome 1: Positive lift (statistically significant)

If the treatment group shows a positive lift with 95% confidence, your creative asset has been proven to drive incremental conversions. 

From there, you can calculate incremental cost per acquisition (iCPA) by dividing the treatment group’s total ad spend by the incremental conversions above the control arm. 

Use this iCPA as your benchmark for scaling the campaign going forward.
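The iCPA math described above, sketched with hypothetical numbers:

```python
def icpa(treatment_spend, treatment_conversions, control_conversions):
    """Incremental CPA: treatment spend divided by conversions beyond the control baseline."""
    incremental = treatment_conversions - control_conversions
    if incremental <= 0:
        raise ValueError("No incremental conversions; lift was flat or negative.")
    return treatment_spend / incremental

# Hypothetical: $5,000 treatment spend, 150 conversions vs. 100 in the control arm.
print(icpa(5000, 150, 100))  # 100.0 — $100 per incremental conversion
```

Note that iCPA will always be higher than the last-click CPA the platform reports, because it only counts conversions the creative actually caused.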

Outcome 2: Negative lift

Occasionally, a new creative asset may suppress performance. It may be too disruptive, or the video may have a high skip rate, causing the algorithm to reduce delivery to high-intent users. Pause the treatment asset immediately, and let the data — not personal preference — guide your budget decisions.

Outcome 3: Inconclusive result

If the difference between groups is negligible and the system cannot confidently attribute conversions to the ad after four weeks and adequate conversion volume, consider extending the test for two more weeks to collect additional data. 

If results are still inconclusive, it could be that creatives are too similar. Test a significantly different creative asset, as small changes rarely produce a statistically significant lift in Demand Gen.

Prove creative impact with incrementality testing

Creative is one of the few remaining levers you can pull to differentiate and drive performance. Producing high-quality video or UGC is just the first step; creative’s impact must also be proven as a driver of results.

Demand Gen is a powerful tool for visual storytelling, but justifying its budget to stakeholders requires rigorous, scientific evidence of its impact. Asset uplift experiments enable just that. Begin your first holdout test, establish a baseline, and let data guide your creative decisions and roadmap.

Dig deeper: The Google Ads Demand Gen playbook

groas introduces a fully autonomous approach to Google Ads management by groas

21 April 2026 at 15:00
 groas distributed AI agent network managing Google Ads campaigns across multiple screens.

For 20 years, Google Ads management has followed the same basic model: you log in, review performance, make changes, and hope they work before the next check-in. 

Agencies, freelancers, and in-house teams all work this way, even as the tools have changed. Spreadsheets gave way to scripts, and scripts gave way to automated bidding, but the core loop never changed — someone still had to sit in the account.

groas aims to change that model by introducing a system designed to automate campaign execution end-to-end.

Our company announced today that it has developed a fully autonomous, end-to-end system designed to match or exceed the PPC performance benchmarks observed in internal testing. It operates without routine manual approvals or constant dashboard monitoring.

From campaign creation through bid management, ad copy generation, keyword expansion, negative keyword pruning, budget allocation, and dynamic landing page deployment — along with everything else you can do in the Google Ads console and beyond — the entire workflow now runs autonomously, 24/7. 

The system runs on a distributed network of specialized AI agents that handle different parts of campaign management and communicate in real time.

We didn’t start here. 

A year ago, groas launched as a lightweight product that surfaced optimization recommendations for you to review and implement. The same model most PPC products still follow. 

By the founder’s own admission, it was a fairly unremarkable v1. But what it lacked in sophistication, it made up for in something more valuable: real data from large volumes of real campaigns.

Hundreds of early customers across the world signed up and connected their Google Ads accounts, representing a wide range of ad spend levels, campaign structures, and conversion goals.

These weren’t a narrow slice of one vertical. They spanned dozens of industries and niches — from local service businesses spending a few thousand a month to large agencies managing seven-figure monthly budgets across full client portfolios.

That diversity became the most important asset groas built. 

The custom-trained, fine-tuned models that now power the system were shaped by this breadth — not a static dataset or simulation, but live campaigns with real money on the line across every industry and budget tier. 

Without that base of early adopters, what groas is today couldn’t exist. The training data that enables autonomous management came from actively managing real dollars across real campaigns, learning what worked and what didn’t in conditions no synthetic environment could replicate.

David Pourquery, founder and CEO of groas, said:

“We kept seeing the same pattern. We’d surface a recommendation that would clearly improve performance, and it would sit there for days or weeks because the account manager was busy, or the client needed to approve it, or someone was on vacation. The insight had a shelf life, and by the time it got implemented, the data had moved on. So we stopped recommending and started doing.”

That realization drove a complete six-month rebuild. The result is a system of interconnected AI agents, each specialized in a different part of campaign management, collectively processing over 100,000 data points per hour per campaign. 

The network handles a wide range of tasks typically performed inside the Google Ads console, without the limits of working hours, cognitive load, or the tradeoffs that come with managing multiple accounts. It automates most day-to-day campaign management tasks that would otherwise require manual input; the work you would never have time for, the agents handle.

From day one, groas built dynamic landing pages into the system, deployed and continuously A/B tested to find winning combinations of messaging, layout, and calls to action for every campaign. groas deploys them with a single line of JavaScript on your existing site — no developer resources, no new hosting, no CMS changes. The system tests and iterates 24/7, with the goal of steadily improving conversion rates.

There’s a full undo capability for each agent action, but the point is that you don’t need to regularly check in on groas or Google Ads. Weekly emailed reports summarize what was done, while a dedicated human PPC account manager oversees everything groas does around the clock.

Onboarding is fully hands-off. After sign-up, your groas account manager learns your business, audits your existing Google Ads accounts, and delivers a detailed action plan within 24 hours. From there, they implement everything across groas and Google Ads with zero work on your side.

In less than a year since shifting to full autonomy, groas now manages eight figures in monthly ad spend across its client base. Every account came through organic discovery or direct referrals — the company hasn’t spent anything on paid acquisition to date.

The client base has consolidated around two profiles:

  • Businesses moving away from agency relationships where results haven’t kept pace with cost. These are companies paying $5,000 to $15,000 per month and looking for more consistent performance and transparency. groas provides an alternative by automating day-to-day execution while reducing management overhead.
  • Agencies. This is now the larger segment. Agencies plug groas into their clients’ accounts behind the scenes, bundle the cost into their existing fees, and let the agent network handle day-to-day execution while their teams focus on strategy, creative direction, and client relationships. groas turns a labor-intensive, low-margin service into something that scales without added headcount. groas offers a 30% lifetime recurring commission for referrals, but most agencies choose to pay for it themselves and keep the margin.

Google’s automation — from Performance Max to AI Max to broad match expansion — has pushed the industry toward more black-box control for years. Many advertisers feel they are losing visibility into what’s actually happening inside their campaigns. Meanwhile, agencies and recommendation-based products still run the old loop: review, recommend, wait for approval, implement, repeat.

groas occupies a category that didn’t exist until now. Instead of helping you manage campaigns better or relying on Google’s automation, it removes you from the execution loop while keeping you in the strategic loop through a dedicated account manager.

The PPC industry has spent two decades debating how much to automate. groas is the first to answer “everything” and back it up with eight figures in managed spend. 

The growth points to something the industry has been circling for years without arriving at. The bottleneck in Google Ads performance has often been the limits of manual execution — constrained by time, attention, and the volume of data modern campaigns generate.

groas didn’t build a better recommendation engine — it reduced the need for traditional recommendation-based workflows.

groas starts at $999 per month for up to $15,000 in managed ad spend, scaling to $6,999 per month for up to $150,000. No contracts, lock-ins, or setup fees. The only requirement is at least $2,000 per month in Google Ads spend — below that, there isn’t enough data for the agents to optimize effectively.

Learn more about how groas works at groas.ai.


A distributed network of AI agents now manages eight figures in monthly ad spend, built on years of live campaign data most companies never get.

Yelp launches AI-powered Assistant to streamline local search and bookings

21 April 2026 at 15:00

Yelp is rolling out its most significant AI update yet, centered on a new conversational “Yelp Assistant” designed to move users from searching to actually booking, ordering, and scheduling — all in one flow.

What’s new. Yelp Assistant sits at the center of the update, acting as a chatbot that can answer complex queries, recommend businesses, and complete actions like reservations or appointments without leaving the app.

Zoom in. The assistant pulls from Yelp’s massive base of user reviews and photos to generate tailored recommendations, explain why a business fits, and let users refine results conversationally. It can then take the next step — booking a table, ordering food, or requesting a quote — directly within the same interaction.

What else is new. Yelp is expanding integrations with platforms like Vagaro, Zocdoc, and Calendly to streamline bookings across categories like beauty, healthcare, and home services, while deepening delivery ties with DoorDash.

Also notable. An upgraded “Menu Vision” feature uses AI and visual overlays to show dishes, reviews, and photos in real time when scanning a menu, helping users decide what to order faster.

Why we care. Yelp is shifting from a discovery platform to a transaction-driven experience powered by AI. With Yelp Assistant handling recommendations and bookings in one flow, visibility alone may not be enough — businesses will need to be optimized for conversion within the platform. The update also signals more competition for high-intent users as Yelp tightens control over the path from search to purchase.

Between the lines. Yelp is leaning into AI not just for discovery, but for conversion — turning intent into transactions without sending users elsewhere.

What’s next. The assistant is live on iOS and Android with broader expansion across categories and desktop coming later this year.

Bottom line. Yelp wants to own the full local journey — from “where should I go?” to “it’s booked.”


Yelp’s AI Assistant shifts discovery toward direct booking, increasing competition at conversion.

YouTube & Discover political ad rules updated

21 April 2026 at 00:17

Google updated its YouTube and Discover Feed ad requirements as of April 2026 to clarify how election-related ads are handled, without changing how the rules are enforced.

Why it matters. Advertisers using YouTube and Discover placements already operate under tight guidelines, and election ads have historically been a gray area. This update is meant to remove confusion rather than introduce new restrictions.

What’s new (and what’s not). The update explicitly states that election ads are exempt from YouTube and Discover Feed ad requirements, but this is purely a clarification. There are no changes to enforcement, meaning advertisers who were compliant before should not need to adjust their approach.

Why we care. Google is removing confusion around how election ads are treated on YouTube and Discover. While these ads are exempt from placement-specific requirements, they still must follow Google Ads policies. The result is clearer guidance, fewer approval issues, and more predictable campaign launches.

Zoom in. The election ads exception: Election ads do not need to follow the specific YouTube and Discover Feed ad requirements, but they are still governed by broader Google Ads policies. To qualify for this exemption, advertisers must complete the Election Ads verification process and be verified in the region where the ad will run.

Between the lines. This is not a policy shift but a documentation update. Google is drawing a clearer distinction between placement-specific requirements and its wider ads policy framework.

What advertisers should do. Advertisers running political campaigns should confirm their verification status and continue to follow Google Ads policies closely, as the exemption does not mean reduced scrutiny.

Dig deeper. YouTube and Discover Feed ad requirements (April 2026)

Google tests video ads in local search results

20 April 2026 at 20:58

Google is experimenting with video ads inside the local pack, signaling a shift toward more immersive, visual formats in location-based search.

Driving the news. The test was spotted by Anthony Higman, who shared that Google is integrating “immersive map view videos” into PPC ads tied to local results.

These video ads appear within the local pack — the map-based listings that show businesses near a user’s search.

What’s new. Instead of static listings or text-based ads, some advertisers may now have the option to surface video content directly in local search results.

  • The feature appears tied to settings within Google Ads’ Location Manager.
  • It may be enabled through a pre-opted setting in the shared library.
  • The format blends paid ads with Google Maps-style immersive experiences.

Why we care. This update could significantly increase visibility and engagement in high-intent local searches. Video ads in the local pack offer a new way to stand out and showcase locations, products, or services more effectively than static listings. It could also mean advertisers need to start investing in video creative to stay competitive in local listings.

Yes, but. The feature appears to be in early testing, and it’s unclear how widely it’s available or how performance compares to traditional local ads.

There’s also the question of creative requirements, as video production adds complexity for advertisers.

The bottom line. Google is bringing video into one of its most intent-driven surfaces — local search — as it looks to make ads more immersive and engaging.

First spotted. Adsquire founder Anthony Higman first shared the new local listing ad type on LinkedIn.

The digital PR duplication method: Rinse, reuse, repeat

20 April 2026 at 19:00

Every digital PR (DPR) team’s been there: New data drops and the team huddles while someone stares at a blank Google Doc, spiraling over angles and journalist targets. Eventually, a pitch limps out the door just in time to hit “Send” before end of day.

The pitch then lands in a top-tier publication, everyone celebrates, and the next month the whole team does the exact same thing over again, like it never happened.

But here’s the thing nobody talks about: That winning pitch is a valuable asset, and most teams will just leave it sitting in their sent folder collecting virtual dust.

Whether it was a data study, a product launch, or an expert quote, that pitch is a template. And with AI, you can clone its DNA onto every new campaign rather than reinventing the wheel every single time.

By the numbers

The stakes for getting this right have never been higher. About 46% of journalists receive six or more pitches every single workday, and of those, 49% seldom or never respond to a pitch, per Muck Rack’s State of Journalism report. 

Pitch volume keeps climbing while relevance drops, with 47% of journalists saying they seldom or never receive pitches relevant to what they cover, Cision’s 2025 State of the Media Report found.

The volume problem is real, and AI is making it worse by enabling everyone to quickly and easily generate pitches. This means journalist inboxes are quickly filling up with content that sounds more generic than ever. 

So how do you get your pitches in front of as many journalists as possible while actually getting noticed? The answer is deceptively simple: Rather than blindly scaling your pitch generation, scale what you already know lands.

Meet the DPR duplication method

I call it the “DPR duplication method,” and the idea behind it is simple: rinse, reuse, repeat.

The process is straightforward. You take a pitch that generated coverage previously, determine exactly what made it work structurally, and then use AI to replicate that structure for your next campaign rather than prompting from a blank slate.

It works across pitch types, too, which is the part I love most about it. Data studies, product launches, expert quotes, reactive commentary — it doesn’t matter. If the structure worked once, it can work again, and if it worked 10 times, it can work 20.

One of my favorite pitches to use with this method is one I sent to an editor at PR Daily, and the subject line read: “Your basset hound is the cutest [New SEO study for PR Daily].”

The pitch was built around a data study on YouTube thumbnail performance, with findings that were specific, visual, and easy for a journalist to turn into a standalone story without much heavy lifting on their end. It landed. Same-day response.

Anatomy of a winning pitch: What made it work?

So why did it work? There are four reasons, and you can replicate every single one:

  • The subject line led with a personal connection before it ever mentioned the pitch, directly referencing the editor’s dog before dropping the study hook in brackets. This made it impossible to ignore because initially it didn’t feel like a pitch. Instead, it felt like a personal message from someone who actually knew them.
  • The opening hook built rapport before it built a case, acknowledging their pet and sharing something personal before naturally transitioning into the actual reason for the email. By the time the data showed up, they were already reading and receptive.
  • The stat sequencing moved from the broadest behavioral finding down to the most specific and visual. This gave them multiple angles to work with, depending on what their audience needed most. It didn’t force them to figure out the story themselves. Plus, it was also about a topic they were already covering.
  • The CTA was framed entirely around their readers and not around my study or client. It asked whether their audience of growing businesses interested in videography would benefit from the findings. The CTA wasn’t simply, “Would you like to cover this?” Instead, it was, “Would your readers benefit?” That’s a very different ask, and journalists immediately feel the difference.

Steal the structure: Prompt by prompt

Don’t describe your best pitch to the AI. Instead, give it the pitch by pasting in the full text. Then, ask it to mirror the specific parts that made the pitch work rather than having it write something new from scratch.

Here’s how that looks using a hypothetical campaign. Say you are pitching a new survey for a financial wellness company that shows one in three Americans have skipped a doctor’s appointment in the last year because of cost. This is strong data with a clear emotional hook that a lot of journalists covering personal finance or healthcare would care about.

You need to pitch it, and you need it to land. So you open the PR Daily pitch above, and you use it as your blueprint, duplicating each component that made it work for the new campaign.

Duplicate the subject line

That PR Daily subject line worked because it opened with something personal to the journalist before it ever mentioned the study, and you want that same energy in every new pitch you send:

  • “Create seven headlines with each provided stat. For example: [paste your winning subject line format].”
  • “Make this subject line more focused on [new topic]: [paste winning subject line].”
  • “Make this subject line more newsworthy based on the articles I provided: [paste current subject line draft].”
  • “Make this statistic into a newsworthy headline: [paste stat].”
  • “Make this headline more personal to a journalist covering [beat]: [paste headline].”

Duplicate the opening hook

The opening worked because it felt human before it felt like a pitch, and injecting that same warmth and specificity into a new campaign is as simple as showing the AI exactly what you mean rather than trying to describe it:

  • “Love this opening. Make the new opening mimic more of this: [paste opening from winning pitch].”
  • “Here is some trending news. Highlight this in the opening hook: [paste URL].”
  • “Make this opening more [inflation/healthcare/financially] focused: [paste current opening].”
  • “Here is another example of what is happening right now. Let’s incorporate it: [paste URL].”
  • “Make this intro feel more like a journalist would write it and less like a press release: [paste current intro].”

Duplicate the stat sequencing

The stats in the PR Daily pitch moved from the broadest finding down to the most specific and surprising, which handed the journalist a ready-made narrative she could work with instead of a list of numbers she had to interpret herself:

  • “Here are my key statistics: [paste stats]. Make the stats mimic this verbiage: [paste stat section from winning pitch].”
  • “Make this statistic more clear and newsworthy but not misleading: [paste stat].”
  • “Rewrite these stats so they flow like a story, starting broad and getting more specific: [paste stats].”
  • “Make these stats feel more conversational and less like a press release: [paste stats].”

Duplicate the CTA

The CTA worked because it put the journalist’s readers at the center of the ask rather than the study or the client, and that shift in framing is something you want to carry into every pitch you send:

  • “Make the CTA more like this: [paste CTA from winning pitch]. New topic is [insert topic].”
  • “Make this CTA more [topic] focused: [paste current CTA].”
  • “Rewrite this CTA so it leads with what the journalist’s readers will get, and not what we want covered: [paste current CTA].”
  • “Make this feel less salesy and more like a genuine offer: [paste current CTA].”

Duplicate the follow-up

The follow-up gets the exact same treatment, because there is a version of your best follow-up already sitting in your sent folder. You should be using this winning follow-up as the model every time instead of writing a new one:

  • “Mimic this follow-up and add the link [paste URL]: [paste your winning follow-up].”
  • “Mention [insert trend] from [insert article] in this follow-up: [paste follow-up].”
  • “Rewrite this follow-up so that it leads with a new stat we did not include in the original pitch: [paste follow-up and new stat].”
  • “Make this follow-up shorter and punchier while keeping the same structure: [paste follow-up].”

Every component has a proven version already sitting in your sent folder, so use it. Re-prompting with the actual text of the original rather than describing it will consistently yield more faithful results, as the AI won’t need to guess at your voice. Instead, it has a blueprint.
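The re-prompting workflow above also lends itself to light automation. As a minimal sketch, with entirely hypothetical names and templates (the article prescribes no tooling), you can store a winning pitch's components as fill-in templates and generate ready-to-paste prompts for each new campaign:

```python
# Hypothetical sketch (not a tool from the article): keep each component of a
# winning pitch as a reusable template, then fill it with the new campaign's
# details to produce ready-to-paste AI prompts. All names are illustrative.

WINNING_PITCH_PROMPTS = {
    "subject": "Make this subject line more focused on {topic}: {winning_subject}",
    "stats": ("Here are my key statistics: {stats}. "
              "Make the stats mimic this verbiage: {winning_stats}"),
    "cta": "Make the CTA more like this: {winning_cta}. New topic is {topic}.",
}

def build_prompts(campaign: dict) -> dict:
    """Fill every duplication prompt with the new campaign's details."""
    return {part: template.format(**campaign)
            for part, template in WINNING_PITCH_PROMPTS.items()}

prompts = build_prompts({
    "topic": "healthcare costs",
    "winning_subject": "Your basset hound is the cutest [New SEO study for PR Daily]",
    "stats": "1 in 3 Americans skipped a doctor's appointment over cost",
    "winning_stats": "<stat section from the winning pitch>",
    "winning_cta": "<CTA from the winning pitch>",
})
# Each value in `prompts` is ready to paste into your AI tool of choice.
```

The point of the sketch is that the blueprint lives in data, not in someone's memory: the winning structure is written down once and reused verbatim, exactly as the method prescribes.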

You can duplicate anything

Ask yourself what is preventing your current pitches from landing. The first answer that comes to mind probably isn’t the lack of a new AI tool. Rather, it’s likely a structural ingredient from something that already worked and that you stopped using the moment it landed coverage.

The DPR duplication method can apply to every part of your outreach (e.g., headlines, pitch intros, stat formatting, CTAs, sign-offs, and follow-ups). Every single component can be duplicated and evolved from a version that has already proven its effectiveness. 

I know what you might be thinking at this point: Won’t pitches start to sound the same if they all pull from the same structure? The answer is no, because the structure is yours, built from your wins, your voice, and your relationship with a specific editor about her specific dog. Nobody else has that blueprint.

Here are some questions worth considering before your next campaign:

  • What group of stats did you love from a past pitch, and how can you use them as a formatting model for new data?
  • What pitch generated an outsized amount of press, and what was the structural reason it actually worked?
  • What headlines received responses from journalists, and what was the pattern that made them land?
  • What in your past experience can be enhanced with AI rather than replaced by it?

Using AI doesn’t require sacrificing the secret sauce of what generates press — because the strategy is still yours. AI just helps you execute it faster and more consistently without losing the specific ingredients that made your best work actually work.

Your next pitch starts with your last win

Open the pitch that generated your best coverage in the last 12 months, whether it was a data study, product launch, or expert quote pitch. Identify the things that made it work, including the subject line, opening hook, stat or story sequence, and the CTA. Notice what made each one feel specific, human, and impossible to ignore.

Then prompt AI to duplicate each component individually using that pitch as the model. Add current news context where it fits, combine everything, refine as needed, and duplicate the follow-up, too.

You’re not copying. You’re compounding.

Rinse, reuse, repeat.

Utility news content: How to win beyond clicks in AI search

20 April 2026 at 18:00

In 2026, news SEO content performance isn’t just defined by page views and clicks — brand awareness is taking center stage. With the emergence of multimodal search, digital editorial strategy is no longer just about the first page of Google. You have to meet readers anywhere and everywhere they consume content. 

Amid this industry shift, AI platforms are an increasingly important traffic source for publishers to consider. If publishers want to remain relevant, it’s critical to find ways to play ball with Google AI Overviews, chatbots, voice assistants, and other emerging technologies. 

Fortunately, utility news content is a key deliverable that can connect with audience needs across platforms throughout a variety of breaking news and evergreen windows. 

What is utility news content? 

Utility news content is service journalism that’s specifically crafted to provide simple and straightforward answers to topline questions. The recent rise of answer engine optimization (AEO) is driven by a similar methodology. 

Service journalism encourages readers to contemplate:

  • What does this topic mean?
  • Why does this angle connect with my interests and needs? 
  • How can I apply this information to my life? 

When constructing a utility content strategy, we must remember: Simple isn’t stupid. Don’t overcomplicate the process. Listen to the needs of your audience and let those signals guide you to the right places. 

In terms of execution, the “set it and forget it” days of evergreen content are fading in favor of more proactive audience engagement strategies. 

To maximize the impact of utility news content, it’s essential to:

  • Map out evergreen targets in advance with trend forecasting around seasonal events and recurring search patterns.
  • Track the breaking news cycle closely to pinpoint new areas of opportunity.
  • Refresh existing explainers when corresponding breakout queries arise.
  • Create new utility posts when content gaps exist.
  • Recirculate related resources across appropriate platforms in timely windows.
  • Track article performance to assess overall impact and share key takeaways with editorial stakeholders.
  • Consolidate related articles in a streamlined content library for easy access and regular review.
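To make the last two bullets concrete, here is a minimal sketch, with hypothetical field names and thresholds (the article prescribes no specific tooling), of a content library that flags explainers to refresh when a trigger query trends or a piece goes stale:

```python
from datetime import date

# Hypothetical sketch of a utility-content library: each evergreen explainer
# is stored with the queries that should trigger a refresh and the date it
# was last updated. URLs, triggers, and the 90-day threshold are illustrative.
LIBRARY = [
    {"url": "/nba-teams-never-won-title",
     "triggers": {"pacers", "nba finals"},
     "last_refreshed": date(2025, 6, 1)},
    {"url": "/wnba-jersey-retirements",
     "triggers": {"candace parker"},
     "last_refreshed": date(2025, 8, 15)},
]

def needs_refresh(trending_queries, max_age_days=90, today=None):
    """Return explainer URLs whose trigger queries are trending, or that are stale."""
    today = today or date.today()
    flagged = []
    for item in LIBRARY:
        trending = bool(item["triggers"] & set(trending_queries))
        stale = (today - item["last_refreshed"]).days > max_age_days
        if trending or stale:
            flagged.append(item["url"])
    return flagged

# e.g. during a breaking-news window:
# needs_refresh(["candace parker", "espys"])
```

Even a spreadsheet with the same three columns achieves the goal: the library, not individual memory, decides what gets refreshed and when.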

What are traditional utility news content examples? 

These helpful guides show that simple, straightforward content can serve reader needs by covering breaking news within a crucial window of time, zeroing in on evergreen themes of interest, and connecting with seasonal tentpole event calendars.

ESPN utility news AI Overviews case study 

During my tenure as SEO Director at ESPN from 2022-2026, I spearheaded a utility content initiative that prioritized fan-forward queries throughout a variety of game and event windows. In managing that workflow, I picked up helpful dos and don’ts for making utility content shine within a newsroom. 

These examples demonstrate why utility news content can resonate in AI modules if you have a proper editorial strategy in place.

Create content that can maintain relevance throughout long-term event cycles

Which NBA teams have never won a championship - AI Overview

When the Indiana Pacers started trending for the “NBA teams that have never won an NBA championship” theme at the end of the 2025-26 NBA season, updating this evergreen piece of content to maintain accuracy secured consistent AI Overview placement through to the championship. 

Answer breaking news questions with evergreen resources 

How many titles did Hulk Hogan win - AI Overview

Following his unexpected passing in July 2025, Hulk Hogan’s wrestling titles became a major search topic that translated well into this breakout explainer. Its evergreen potential means it can resonate with audiences well beyond the initial trending window.

Create evergreen lists in advance that can spike off of breaking news

WNBA jersey retirements - AI Overview

Candace Parker’s 2025 jersey retirement gave this previously published evergreen roundup a fresh window to reach new fans and drive traffic. 

Recirculate guides that can benefit from frequent updates

Most successful father-son duos in NBA - AI Overview

With LeBron and Bronny James frequently in the news, this fun evergreen take reflects their evolving stats and provides a related link to feature complementary content. 

Lean into your brand 

Lee Corso's college game day record - Google Search

Whenever possible, it’s great to showcase in-house talent with breakout posts that spotlight unique elements synonymous with your brand.

Why is utility news content still relevant? 

With the rise of zero-click search, some concerns have been raised about investing in service journalism when related SERP modules regularly snatch up topline shelf space in time-sensitive windows. 

Though declining click-through rates are alarming, service journalism isn’t only about traffic. Publishers have a duty to showcase legitimate sourcing and provide accurate information that serves audience needs across top platforms. 

Among many recent studies, Ahrefs presented new data in December 2025 that showed how easy it is for LLMs to get confused and present inaccurate information to users. Google AI Overviews can also sometimes produce “predictions” that share outcomes on events that haven’t happened yet. 

Inaccuracies are especially concerning in breaking news windows, which AI Overviews have been increasingly staking their claim on (as noted by Glenn Gabe). 

Additionally, innocent online interactions can quickly turn dangerous, which The Guardian emphasized in a 2025 investigation that uncovered how Google’s AI Overviews gave “very dangerous mental health advice” to billions of searchers.

Though visibility challenges are frustrating, we can’t sit idly by and let the general public be led astray when seeking timely information. In 2026 (and beyond), AI-friendly utility news content is still worth championing in your newsroom.  

How can publishers pinpoint the best topics for utility news content? 

An ideal utility content workflow should function under a healthy combination of breaking news reaction and evergreen trend forecasting. 

A variety of tools and interfaces can help publishers during the ideation process:

Google Trends

‘Trending now’ section

  • Toggle upper navigational features to explore trends within different regions, date ranges, and content categories.
  • “Past 4 hours” is ideal for breaking news brainstorming.  
  • Use the “Search volume” and “Started” filters alongside the search interest activity chart to gauge the timing and format of potential pieces.
  • Older trends can be repurposed past the breaking news window in the form of timelines and “bigger picture” explainers.
  • Sift through the “Trend breakdown” section for angles of interest that could translate into breakout explainers. 
  • Tap into the “In the news” section for brand performance validation and competitor intel. 

Standalone topic searches

  • Discover trending questions, people, places, events, and things in “Rising queries” section.
  • Determine essential phrases to target in headlines and subheadlines with the “Top queries” section. 
  • Conduct localized research with the “Interest by subregion” module in “Classic Explore” view. 
  • Use the comparison bar to narrow down potential topics of interest for breakout articles and establish the most search-friendly phrasing for headlines. 
  • Experiment with “YouTube,” “News,” and “Image” search filters to assess how searches on topics of interest may vary by platform. 
  • Regularly share related insights with external departments that can incorporate search-friendly angles into their workflows and deliverables (e.g., the video team with “YouTube” search, the photo team with “Image” search).
  • Use “Past hour” filter during breaking news windows to assess urgent audience queries and predict where search behavior may be going next. 
  • Use “Past 5 years” and “2004-present” filters to identify seasonal audience trends that can positively influence year-over-year content planning and “all-time highs” in search interest:
    • When do search interest spikes occur every single year? 
    • What content can you refresh on an annual basis to capitalize on cyclical audience behavior? 
    • How should you stagger your content rollout during a recurring event window?  
  • Experiment with the “Suggest search terms” Gemini feature for supplementary content research (Note: If you use AI tools in a prominent way during the content creation process, it’s important to be transparent with your audience and include a corresponding disclosure statement within the final deliverable).

Curated pages (ad hoc basis) 

  • Zero in on trending takeaways around tentpole events with mass interest.
  • Regularly check the Google Trends homepage for featured modules around elections, sporting events, awards shows, etc.  

Curated newsletter (typically Monday-Friday) 

  • Take the guesswork out of daily Google Trends analysis.
  • Receive top trends, breakout queries, data visualizations, and interesting stats from industry experts that can be applied to breakout articles.  
  • Sign up on the Google Trends homepage. 

Competitor research throughout all modules

  • Gauge where you’re winning and pick up on lingering content gaps where other publishers may have an edge. 

Google News

  • Explore primary topics of interest in the “Top stories” homepage section.
  • Dig into the upper navigation bar’s content categories based on newsroom beats.
  • Discover regional opportunities in the “Local” section.
  • Access the platform regularly to receive a curated selection of articles in the “For you” section based on your personal behavior and interests.
  • “Follow” searches that would benefit from regular monitoring to streamline the daily research process. Press the star button on the upper right-hand side of an individual search to save the topic to your “Following” section.
  • Utilize standalone topic searches for targeted content ideation. 
  • Explore standalone source searches for validating brand performance and conducting competitor research.

People Also Ask

Converse with “AI Mode” to uncover topic clusters that extend beyond your initial search. 

Semrush

Pinpoint high-volume Q&A angles that maintain long-term relevance in seasonal windows. 

Alternative search platforms

Identify trends that can spark content with compatibility across a variety of formats, including articles, videos, and social posts:

  • Google Autocomplete: Ideal for research on high-intent long-tail keywords.  
  • YouTube search bar: Ideal with topics that can be enhanced by strong visuals and/or a video walk-through approach.
  • TikTok search bar: Ideal for targeting younger demographics. 

How should utility news content be constructed for success on AI platforms?  

Once the brainstorming process is complete, search strategists need to adopt the right techniques to make sure that corresponding content serves utility needs. 

Utility news content can be structured to be more “AI-friendly,” so to speak. Specifically, LLMs are more likely to cite content that contains: 

  • Simple and straightforward formatting
  • FAQ styling
  • Easily extractable answers 
  • Fresh updates
  • Objective stats that lean into substance  

Include AI-friendly tactics like bullet point lists, numbered steps, tables, keyword-targeted subheadings, and snackable paragraphs to better position your content for LLMs.

Don’t bury the lead 

Answer the most search-friendly questions in the top half of the article using the journalism staples, the five Ws and how: who, what, where, when, why, and how.

Break out the buzziest angles from live blogs, rolling roundups, and extensive features into standalone articles. Quick-hit explainers and deep dive analyses can cover the same general topic and serve different audiences. 

Important themes can get lost in bigger pieces and fail to surface in related external searches, whereas separate articles with targeted headlines can increase search potential. 

Highlight E-E-A-T (experience, expertise, authoritativeness, and trustworthiness)

Promote need-to-know information that can appeal to the masses while elevating the unique value your brand can offer:

  • Quotes from brand experts.
  • In-house data.
  • Original reporting.
  • Regional angles.  
  • Historical context.

Be sure to create author pages to consolidate content from in-house experts, make articles more discoverable, and encourage ongoing followership.

Utilize timestamps to your advantage 

Implement a “Last updated” marker to produce the freshest search signal possible to readers and crawlers. Refresh articles with new and noticeable updates, such as content, headlines, photos, videos, and links.

Tweak and recirculate content across a variety of related news windows to get articles back into feeds and provide readers with related context. Small-scale updates can build up to substantial traffic and AI Overview placements throughout a calendar year. 

You should also be adding new links to supplementary stories as news cycles evolve. These updated links send a fresh signal to Google, provide essential context to readers, and reinforce your E-E-A-T on top priority topics for your brand.

Create short, concise titles and headlines that prioritize search-friendly entities

When crafting headlines, avoid conversational fluff (including quotes, which are better utilized elsewhere) and instead zero in on people, places, events, etc.

Try to keep your headlines to 60 characters or fewer to stay on the safe side of Google’s roulette of SERP formatting. Google is increasingly randomizing the appearance of search results as multimodal sources flood the scene.

For instance, “Top stories” carousels have been disappearing on more newsy searches, which can shift SERPs back to their traditional title tag structure (which can sometimes cut off titles and headlines at less than 60 characters) vs. a headline structure (which tends to have more wiggle room with character count). 

Though frontloading keywords isn’t a requirement in titles and headlines (variety and readability are helpful for UX), you should keep top-priority themes away from the 60-character cutoff point as a precaution. 
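As a rough sketch of that precaution, a simple length check can flag when a top-priority keyword lands past the cutoff. Everything below (the function name, the 60-character constant) is a hypothetical illustration, not part of any Google tool:

```javascript
// Hypothetical helper: flag headlines where a priority keyword would land
// at or beyond the ~60-character point where SERP truncation becomes likely.
const SERP_SAFE_LENGTH = 60; // the precaution discussed above, not an official limit

function keywordAtRiskOfTruncation(headline, keyword) {
  const start = headline.toLowerCase().indexOf(keyword.toLowerCase());
  if (start === -1) return true; // a missing keyword is also a problem
  return start + keyword.length > SERP_SAFE_LENGTH;
}
```

An editorial CMS could run a check like this at publish time and prompt writers to move key entities earlier in the headline.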

Remember, it’s okay to tweak titles and headlines if readers aren’t connecting with them. Fresh angles can provide a late traffic surge, especially when paired with homepage and/or app placement. 

If SERPs lag in reflecting your latest updates, request re-indexing of articles in Google Search Console.

Be strategic with keyword placement

As Google tests out AI-generated headline rewrites, it’s increasingly important to optimize original headlines with essential terms that readers are searching for. 

You’ll also want to showcase supplementary keywords in meta descriptions. Utilize a call to action when appropriate, especially with guides in urgent windows like natural disasters, shootings, etc. 

Mirror top keywords from titles and headlines into URLs, but tread carefully with years and specific numbers in URLs to maintain evergreen status as news cycles evolve. 

Don’t forget to optimize your images! Include keyword-rich alt text and captions on any images in your news content, which help AI models better understand visuals within your content and improve your odds of discoverability. 
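As a minimal sketch, descriptive alt text paired with a visible caption might look like the following (the file name, alt text, and caption are all placeholders):

```html
<!-- Hypothetical example: file name, alt text, and caption are placeholders -->
<figure>
  <img src="/images/hurricane-example-forecast-map.jpg"
       alt="Map showing the forecast path of Hurricane Example across the Gulf Coast">
  <figcaption>Forecast path as of Tuesday morning. (Example Publication)</figcaption>
</figure>
```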

Implement sitemaps and structured data

Enable a news-specific sitemap to optimize delivery of timely content. This will emphasize freshness, streamline indexing, and boost overall search visibility.    
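A minimal news sitemap entry uses the sitemap-news XML namespace; the URL, publication name, date, and headline below are placeholders:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:news="http://www.google.com/schemas/sitemap-news/0.9">
  <url>
    <loc>https://www.example.com/news/example-article-slug</loc>
    <news:news>
      <news:publication>
        <news:name>Example Publication</news:name>
        <news:language>en</news:language>
      </news:publication>
      <news:publication_date>2026-04-20T17:25:00+00:00</news:publication_date>
      <news:title>Example headline with search-friendly entities</news:title>
    </news:news>
  </url>
</urlset>
```

Google asks that news sitemaps include only recently published articles, so a file like this should be regenerated as stories publish; check the current guidelines before automating.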

Additionally, employ schema markup to help ensure the proper indexation of articles. “NewsArticle,” “LiveBlogPosting,” and “FAQPage” are especially relevant for surfacing utility news content.
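For instance, a trimmed NewsArticle JSON-LD block (all values here are placeholders) might look like:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "NewsArticle",
  "headline": "Example headline with search-friendly entities",
  "datePublished": "2026-04-20T17:25:00+00:00",
  "dateModified": "2026-04-21T09:00:00+00:00",
  "author": [{
    "@type": "Person",
    "name": "Example Author",
    "url": "https://www.example.com/authors/example-author"
  }],
  "image": ["https://www.example.com/images/example-lead.jpg"]
}
</script>
```

The dateModified property pairs naturally with the “Last updated” timestamp advice above.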

How can you recirculate utility news content effectively? 

Once publishers have established a productive utility content workflow, it’s essential to employ a strategic recirculation strategy to maximize visibility in all appropriate channels. 

“SEO is dead” messaging has been spreading over the past year. Everyone’s entitled to their own opinion, but I believe that as long as people are searching for the information they need online, traditional search best practices are still very much alive. 

However, certain old-school SEO ideologies are dying off, chief among them being that search performance lives and dies with the first page of SERPs. With ongoing AI visibility challenges, search strategies must extend beyond Google’s digital walls. 

We must recirculate always, in all ways

In 2026, search strategists need to be audience strategists to surface content across all the places people visit online. 

Collaborate and find common ground with departments across your organization to be able to quickly elevate search-friendly angles and content within crucial news windows.

A strong strategy is essential to ensure your brand stays top of mind across platforms when timely audience needs arise. 

Channels that can benefit from cross-departmental search and distribution strategies include: 

  • Your website/homepage
  • Apps
  • Alerts 
  • Newsletters
  • Podcasts
  • Instagram/Threads
  • Facebook
  • X
  • Bluesky 
  • Reddit
  • TikTok
  • LinkedIn 
  • YouTube 
  • Google Discover 
  • News aggregators (Apple News, SmartNews, etc.) 

How should newsrooms assess the performance of utility news content? 

With a growing list of platforms in the content recirculation mix, performance tracking is evolving with additional nuances for publishers to consider. 

Prior to Google’s Search Generative Experience and AI Overviews, utility news content was primed for placement in knowledge panels, featured snippets, and “Top stories” carousels. Google started to take up more of that top shelf space with its own bespoke charts and modules over time, minimizing publisher activity during top priority events. 

The public rollout of AI Overviews in May 2024 changed the playing field in a big way, but the development didn’t come out of nowhere. 

As AI Overviews have become increasingly prominent in search results, respected institutions such as the Pew Research Center have noted declining click-through rates across the news industry. This development has pushed publishers to place greater emphasis on overall brand visibility alongside their traditional prioritization of page views and clicks. 

Though standard metrics remain important, publishers should rethink what “successful content” means as audience engagement shifts.

In 2026 (and beyond), search strategists should pay extra attention to: 

  • AI Overview placements.
  • Featured snippet placements.
  • People Also Ask placements.
  • “Top stories” placements. 
  • Percentage of traffic from chatbots. 
  • Overall search impressions.
  • Organic search traffic across multiple utility pieces under one general topic. 
  • Year-over-year growth of evergreen SEO content. 

Other metrics that can indicate a positive editorial experience and encourage long-term brand loyalty include:

  • Scroll depth. 
  • Time spent on site.
  • Return visits on evergreen content. 
  • Bookmarked entry pages.
  • Newsletter, app, and other subscription signups from individual pages.  

Dedicated AI platforms from companies like Profound, Semrush, Similarweb, Ahrefs, and other industry vendors can help demystify the performance tracking process. 

Though every AI interaction may not lead to a click or page view, consistent placements in related modules can psychologically trigger trust and encourage long-term reader loyalty, as pointed out by Go Fish Digital. 

Since this performance ideology may differ from what some stakeholders are accustomed to, ongoing newsroom search training is critical to ensure that leadership understands the broader industry implications. 

For instance, positive performance snapshots should be regularly shared with editorial partners to reinforce the impact of the content investment. Postmortem reports can also provide performance insights following tentpole events, driving home key takeaways and reinforcing best practices for the future.

How does personalization play a role in surfacing utility news content? 

The recent rise in personalization features underscores the growing need for publishers to adopt brand-first editorial strategies. To maximize brand reach despite decreased visibility in traditional SERPs, it’s critical for publishers to leverage features that can strengthen brand loyalty. 

Preferred sources in Google “Top stories” carousels and new follow capabilities in Google Discover can increase the value of everyday interactions, which are likely to trigger the needs that utility content can address.

For example, Google shared that when someone picks a preferred source in “Top stories,” they click that site twice as often on average. With such benefits, publishers should demystify these offerings and encourage readers to curate their content consumption habits in their favor.  

Instructions to guide your readers to choose your brand as a preferred source in Google’s “Top stories” carousel: 

  • Log into your Google account. 
  • Search for a trending topic that would populate a “Top stories” carousel. 
  • Click on the star icon next to “Top stories.” 
  • Enter [source name] in the search bar and check the corresponding box. 
  • Reload results and watch the content shift based on your new selection.
  • Once you pick your favorite sources, they’ll appear more frequently in “Top stories” carousels or in a dedicated “From your sources” section on search results pages.

Instructions to guide your readers to enable the “Follow” feature in Google Discover:

  • Log into your Google account. 
  • Scroll through your Google Discover feed.
  • Find a story from your favorite source.
  • Click the “Follow” button in the upper right-hand corner. 
  • Once you track sources, you’ll see more of their content in your feed.

To maximize visibility around these new features and simplify the signup process, publishers can install related buttons on their article pages and create standalone documentation that illustrates implementation.

How can utility news content benefit a newsroom and the industry at large? 

We know that readers benefit from personalized strategies, but there are also advantages to opening up the utility content ideation and creation process to more colleagues in your newsroom. Service journalism can have a positive internal impact by creating a pathway for broader participation in content creation.

Opening up the utility workflow within your organization can let colleagues across the following departments showcase unique expertise and foster a culture of inclusivity that elevates search-friendly coverage:

  • Editorial sections: In-house experts to loop in during the research process and support with content gap coverage.  
  • Audience engagement: Trend trackers who pinpoint which emerging topics are worth creating content around and featuring on the website, apps, and other spaces.  
  • Social media: Cross-platform collaborators to link up with on shared trends that can maximize brand visibility in multimodal search. 
  • Data and analytics: Methodical minds who can explain how performance insights should influence future content roadmaps. 
  • Design: Visual visionaries who can create bold new environments for search stats to live on, including maps and infographics. 
  • Product: Technical talents who can build proprietary tools that address reader needs in unique ways during timely windows. 
  • Features: Outside-the-box thinkers with strong sourcing who can highlight newsy angles in a narrative and/or investigative fashion. 
  • Copy editors: Streamlined strategists who ensure maximum accuracy in explainers, guides, and other objective resources.  
  • Freelance writers: External partners who can bolster internal efforts with outside expertise and supplemental bandwidth. 
  • PR and communications: Internal partners who can spotlight brand priorities that can be elevated through search-friendly content.

Visibility, trust, and why utility content still wins

Though the SEO industry faces unique challenges in 2026, publishers can still benefit from creating utility content. Amid LLM inaccuracies and AI growing pains, we must continue to serve our readers with accurate, authoritative articles in digestible formats that align with evolving content preferences. 

With Google testing out adding more links in AI Overviews, I remain cautiously optimistic that publishers and AI platforms can work in tandem to provide optimal editorial experiences to audiences in the future. 

In the meantime, keep these fundamental best practices in mind:

  • Prioritize audience needs.
  • Elevate newsroom expertise.
  • Forge a path forward that champions evolution while honoring lasting fundamentals.

Google adds Read more links best practices

20 April 2026 at 17:25

Back in December, Google began showing “Read more” links on some search result snippets in Google Search. Today, Google published new documentation with best practices for showing Read more links in Google Search results.

The best practices. The new documentation was posted in the snippets section of Google’s Search documentation, and it lists three best practices:

  • Make sure content is immediately visible on the page to a human (and not hidden behind an expandable section or tabbed interface, for example).
  • Avoid using JavaScript to control the user’s scroll position on page load (for example, don’t force the user’s scroll position to the top of the page).
  • If you make history API calls or window.location.hash modifications on page load, make sure you don’t remove the hash fragment from the URL, as this breaks deep linking behavior.
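To illustrate the third point, here is a sketch of a hash-preserving state update; the helper name and parameters are hypothetical, not part of any documented API:

```javascript
// Hypothetical helper: update a query parameter on page load while keeping
// the URL's hash fragment intact, so deep links (e.g., from Read more) still work.
function buildStatePreservingHash(href, param, value) {
  const url = new URL(href);
  url.searchParams.set(param, value);
  // url.hash is left untouched; stripping it would break deep linking.
  return url.toString();
}

// In the browser you might then call:
// history.replaceState(null, "", buildStatePreservingHash(location.href, "tab", "reviews"));
```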

What it looks like. Google also posted an illustration of how these links appear in search results.

Why we care. These Read more links add an additional eye-catching link to search result snippets. Hopefully, that encourages more clicks to websites, not fewer.

More clicks to websites is a good thing, so review the best practices to make sure your site can benefit.

Rand Fishkin: Zero-click search began long before AI

20 April 2026 at 17:20

Rand Fishkin didn’t get into SEO because he saw the future.

He got into it because he had no choice.

In the early 2000s, Fishkin helped run a small web business with his mom in Seattle. They hired another company to do SEO until they couldn’t afford to pay them anymore.

That moment pushed him into search marketing. More than 20 years later, Fishkin has become one of the best-known voices in SEO — and one of Google’s biggest critics.

In this interview, he looks back at how search has changed, what went wrong, and what may happen next.

Early SEO was wild

SEO today can feel messy. But in the early days, it was even more chaotic.

“There was no social media” is how Fishkin described that era, when forums like WebmasterWorld and Search Engine Watch were the center of the industry.

People shared tactics openly. Many of those tactics were risky. Buying links was common — and effective.

Fishkin did it, too. Then Google’s Matt Cutts called him out in public.

That moment changed how he approached SEO. He spent years focusing on “white hat” practices and following Google’s guidelines.

Looking back, though, Fishkin now questions whether that shift went too far. He believes Google’s own behavior over time has made those guidelines harder to trust.

The early industry wasn’t just chaotic — it was also full of strange and memorable moments. Fishkin recalled massive conference parties with huge budgets and over-the-top ideas, including a staged “retirement” of the Ask Jeeves mascot.

But what stood out most to him wasn’t the tactics or the parties.

“My favorite thing… is people,” he said, pointing to the relationships and friendships built over decades in search.

When Google stopped sending traffic

Many people think AI is the big turning point in search.

Fishkin says the shift started much earlier — around 2011.

That’s when the idea of “zero-click search” first appeared. Google began answering more queries directly on the results page instead of sending users to websites.

At first, it was small features like weather boxes and calculators.

Then it grew:

  • Around 2016–2017: nearly half of searches ended without a click
  • By 2018: more than half
  • Today: more than two-thirds

Fishkin emphasized that this trend didn’t start with AI — it has been building for more than a decade.

Publishers had a chance — and missed it

Fishkin believes publishers could have taken action early — but didn’t.

“The time to fight back… was 15 or 20 years ago,” he said.

In his view, large media companies should have worked together to push back against Google’s growing control. They could have demanded payment for content or limited how Google used it.

Instead, they allowed Google to crawl and use their content freely.

At the same time, Google expanded its influence through lobbying and policy.

“Publishers just missed that opportunity,” Fishkin said.

Now, he argues, the focus has to shift to adapting:

  • Build subscription businesses
  • Monetize attention, not just traffic
  • Learn how to operate within platform ecosystems

Some companies have already made that shift. Fishkin pointed to The New York Times as an example of a business evolving beyond traditional news consumption.

Did Google change?

Fishkin does not believe Google has become worse for users.

“If it was easier or better to search on Bing… people would go to those places,” he said.

But he does believe Google has become much harder for publishers and creators.

The change, he said, was gradual. As Google grew, went public, and aligned with investor expectations, its priorities shifted toward growth and revenue.

“They became the people that they spent time with,” Fishkin said.

The biggest AI mistake people make

Fishkin says most people misunderstand how AI works.

They treat AI answers like search results — consistent and reliable.

But they aren’t.

If you ask the same question multiple times, the answers can vary widely.

“You will get completely different answers. And if you do that 10 times, you will get 10 incredibly unique different answers,” he said.

His advice is simple: don’t rely on a single response. Ask multiple times and look for patterns. If the same answer shows up consistently, it’s more likely to be trustworthy.

This matters most for important decisions, like health or finance, where relying on one answer could be risky.

What he misses about the early days of SEO

Fishkin doesn’t miss a specific tactic or tool.

He misses the level of opportunity that existed in the early web.

Back then, smaller creators and independent sites had a better chance to succeed. Traffic was more evenly distributed.

“The world of clicks and traffic… was so… flat compared to… today,” he said.

What’s next?

Fishkin believes the future of media and search may look more like the past.

He expects a smaller number of powerful platforms to control most of the flow of information.

At the same time, individual creators will still produce much of the content — but within those systems.

Still, he hopes the web can evolve again.


Is Google Ads Asset Studio a game changer? Not so fast

20 April 2026 at 17:00

If you know anything about Google Ads Asset Studio, you’ve heard the hype:

  • “Google just killed every excuse for not running video ads.”
  • “Total game changer! You don’t need a production budget anymore.”
  • “Upload a few product images and get campaign-ready video in minutes.”

From Google Ads > Tools > Asset Studio, you can build, manage, and scale images and videos across ad formats.

The recent addition of Veo (Google’s AI video generation model) and Nano Banana Pro means you can now turn a handful of product images into full-motion video ads, for free, in no time.

Apparently, video creative is no longer a constraint. But does Asset Studio actually change the game? Read on to find out if it’s worth your time.

A tale of two Veos

Google is its own biggest cheerleader for the power of its AI images and video.

A recent Think with Google article showcases AI-generated ads for Cosmorama, a Greek travel agency. The videos are genuinely imaginative: think a flamenco dancer in the clouds, not just close-ups of headphones and sneakers.

As part of learning Asset Studio, I set out to reverse-engineer their approach. I wasn’t trying to match the quality. I just wanted a proof of concept using Nano Banana and Veo.

What I got instead was a series of dead ends.

  • No scene-level control: I’d read that prompting plays a major role in video output. But there’s actually no prompt function for scenes in Asset Studio. You select an image from your Asset Library, and that’s it. Google decides how to animate it. There’s no way to direct motion, pacing, or narrative.
  • Human performer restrictions: Video generation repeatedly failed with errors about “specific individuals.” I assumed that meant celebrities or real people. In practice, anything that resembled a human face — even AI-generated — triggered issues. The only assets that consistently worked were tightly cropped: hands, partial torsos, and abstract scenes.
  • No real audio control: The Cosmorama video featured cinematic music. In Asset Studio, you’re limited to a small set of preloaded audio. There’s no way to upload custom music or meaningfully shape the sound layer.

Veo vs. Veo in Asset Studio

After so many false starts, I returned to the article. It mentioned Nano Banana and Veo by name. It never said they were used inside Asset Studio.

When Veo 3 became available in Asset Studio, I didn’t realize how many limitations it would have, resulting in a completely different experience from the stand-alone version.

| Capability | Veo (Full Version) | Veo (Asset Studio) |
| --- | --- | --- |
| Control level | Advanced control (API, model tiers, audio support) | Simplified UI with fixed constraints |
| Text-to-video prompting | Full prompt control: scene, camera movement, lighting, style, subject/action | None |
| Use cases | Production-ready pipelines | Lightweight asset generation |
| Scene stitching | Multi-scene/narrative workflows (stitching and extensions) | None |
| Human generation | Supported (with policy constraints) | Limited/often restricted |

What’s available may still help you create some great 10-second motion ads, but don’t go into it expecting flamenco dancing.

Does Asset Studio actually save time and effort?

That depends: Whose time? Whose effort?

For years, paid search managers had one move for visual assets: push back.

  • “I need a vertical version.”
  • “The first five seconds need to be more engaging.”
  • “Can you remove the text overlay?”

Creative’s been a constraint, but always someone else’s constraint to solve. Asset Studio changes that. You can edit, adapt, and post YouTube video ads, even without access to the brand’s YouTube channel.

But the constraint doesn’t disappear. It just changes hands.

Expectation vs. reality

Managing creative strategy and production — even within Asset Studio — takes more time than not owning that role.

Using Asset Studio, I’ve manually adapted logos to new aspect ratios, generated variations that need further edits, and written voiceover scripts I never would have been involved in creating before.

And since production can’t exist without a strategy, I’m spending more time on that too. This is definitely game-changer territory, but maybe not the way you’d hoped:

  • If you’re a brand that would otherwise need a production team: This is likely faster and more affordable than the alternative, satisfying the velocity mandate.
  • If you’re an agency absorbing this work on top of an existing scope: You’re likely taking on a new responsibility that wasn’t priced in.

It removes a bottleneck and replaces it with ownership. If that shifts what your role actually covers, it’s worth revisiting your contract scope.

Will this get me in trouble? AI ad compliance explained

No federal laws in the U.S. prohibit the use of AI in ads. But that’s starting to change.

New York recently passed a law requiring advertisers to clearly disclose when an ad includes a “synthetic performer,” and it’s set to take effect in June 2026. (Hat tip to Sam Tomlinson for his LinkedIn post flagging this.)

Asset Studio doesn’t generate a visible watermark (such as the Gemini sparkle), and there’s no way to add an AI disclosure in Google Ads.

A couple of things worth knowing if you’re using Asset Studio specifically:

  • You’re likely covered for now. Asset Studio can’t generate content with human performers. As mentioned above, anything resembling a face consistently triggers errors. That means the New York law’s “synthetic performer” provision wouldn’t apply to what Asset Studio actually produces today.
  • There’s a watermarking layer. Google uses SynthID to invisibly tag AI-generated images. If disclosure requirements become more explicit, that infrastructure already exists to support it.

Asset Studio’s limitations may actually insulate you from the most immediate compliance concerns, but if you want to proactively disclose AI use for ethical reasons, there’s no built-in way to do that.

AI without the slop

Josh Spanier, Google’s VP of AI and Marketing Strategy, has this advice for marketers running AI-generated ads:

“Stop fearing ‘AI slop.’ Humans made bad ads long before robots.”

Interesting suggestion, but not all of our clients and stakeholders will be quite so enthusiastic about paying to run AI slop ads.

Fortunately, tight control of Asset Studio images and video is easier than you might think. Unlike AI Max, where AI-generated assets can run before you’ve reviewed them, Asset Studio output isn’t automatically published. From your Asset Library, you choose which assets to run. The rest never see the light of day.

What you can produce in Asset Studio is somewhat limited, but here are some of the non-sloppy features I’m most excited about.

Image fidelity: Product images that actually look like your product

Asset Studio’s Nano Banana 2 is built specifically for product integrity. Unlike general-purpose AI image tools like Midjourney, it lets you add up to five reference images and effectively “locks” the product. Only the surrounding environment is up for reinterpretation.


Trim: Cut right to the action

Client-produced video is rarely built for YouTube. Long intros and slow builds lose viewers before the message lands. Trim lets you jump straight to the action, without going back to the client for a new cut.

Voiceovers and templates: Sleeper tools

For a tool suite that promises to replace a production department, Asset Studio’s constrained voiceover and template options may seem underwhelming. Voiceover only works with audio ads or pre-existing video, and templates feel like glorified slide decks.

But the more I reviewed the landscape of YouTube video ads, the more I realized: most companies struggle with messaging more than production quality. Low budget isn’t limiting sales, but bad scripts and concepts are.

Templates and voiceovers let you test the right words faster than waiting for a new creative brief and a published video.


In one campaign I’m running, an Asset Studio video I built in under 30 minutes using a template is already showing 10x the CTR of the client’s best-performing video.

Beating the control may not be the highest bar to clear, but it’s a start.

The output isn’t the outcome

Is Asset Studio a game changer? Not yet. But I’m not sure it needs to be.

Positioning it as real competition against global creative brands sets everyone up for disappointment.

The more useful frame: it’s a tool suite that makes creative faster and more accessible for accounts that couldn’t justify a production budget before.

It does shift some of that strategy and production work onto the paid search manager who didn’t traditionally live in that role.

But the bigger question is: what does any of this actually lead to? The point of digital marketing creative isn’t to produce more assets. It’s to drive conversions and sales. That’s still what needs to be proven.

Tests are running now. I’ll share what holds up, and what doesn’t.

How to use the three-act structure for data storytelling

20 April 2026 at 16:00

You’ve audited your client’s website and compiled performance data. You’ve identified what’s working, what can be improved, and your recommendations for future strategies. But how do you turn that data into a presentation that’s easy to explain and builds trust? 

Start with stories. Storytelling isn’t just for entertainment. It’s how people make sense of information. That’s what makes it so effective for data presentation. 

One of the simplest ways to structure that story is the three-act structure. It’s a familiar framework used everywhere, from Aristotle’s Poetics to Star Wars.

What is the three-act structure?

The three-act structure is a simple framework that moves a story from beginning to middle to end, following a protagonist from their starting point to a meaningful change.

Applied to data storytelling, it helps you organize your insights, position your client as the main character (the protagonist), and clearly show what happens next.

While similar to the five-point narrative arc, this framework is organized into three manageable sections: what the story is about, what happens when the main character is introduced to conflict, and how that conflict is resolved.

Act 1: The beginning

This is where the protagonist’s norm and conflict — the issue the main character is meant to face, also known as the antagonist — are established. The protagonist wants something, and the conflict is holding them back from what they want. 

An event or circumstance occurs that incites the protagonist into action. The background is established, the goals are defined, and the audience is invested in the protagonist’s success.

Act 2: The middle 

The story is developed, and tension builds. The protagonist experiences roadblocks caused by the conflict/antagonist that hinder them from their ultimate goal. Conflict arises until it can no longer be ignored, causing a pivotal moment that leads into the final act.

Act 3: The end

The narrative is affected by the change in Act 2, bringing the story to a final showdown between the protagonist and the conflict/antagonist, ultimately resulting in a resolution. The protagonist may find closure or know what path lies ahead (this may set the stage for a sequel).

The three-act structure helps you understand website data on a deeper level. It also prepares the data to be presented to your client in a way that places them at the center of the story.

Using the three-act structure to identify your data’s narrative

Why bother using the three-act structure as a framework for strategy analysis? It builds trust, showing your client that you’re going on a journey alongside them. 

You and your client are on the same team, with the same destination in mind: their success, even if the data isn’t communicating immediate results.

The application of the three-act structure to data storytelling happens in three steps.

  • Step 1: Briefly recap the existing strategies, establish previous wins, and identify the challenge currently affecting performance. This sets the baseline of Act 1.
  • Step 2: Explain the roadblocks and how they stand in the way of the overall strategy’s success. This parallels the growing conflict found in the structure’s Act 2.
  • Step 3: Recommend the next steps and how you plan to address the conflict. Show what success looks like by providing examples of how your recommendations fit the narrative of your client’s goals. This is Act 3, the resolution of the structure.

Where is your client’s story in the three-act structure?

Your client is the protagonist of their story. To work more effectively together, you need to communicate to your client that you’re invested in the story of their success. 

At the heart of each data set is the story of how your client is impacted. When you communicate what the data is saying, position yourself as the guide who helps the main character get where they need to go.

An example of applying the three-act structure framework to data analysis and presenting the data’s narrative would look like this:

Act 1

  • Goal: Set the stage, centering your client as the protagonist and introducing the challenge as the antagonist.
  • Scenario: Your client’s website has received a substantial increase in organic traffic as a result of your most recent strategy, but is experiencing a high bounce rate on select pages.
  • Approach: Recap the strategy that led to the traffic increase and summarize the outcome from a high-level perspective.

Act 2

  • Goal: Identify the conflict, potential roadblocks, and related stakes.
  • Scenario: The high bounce rate is preventing the website from experiencing consistent traffic flow.
  • Approach: Explain why a high bounce rate is detrimental to overall performance, and connect the affected pages to the overall strategy.

Act 3

  • Goal: Recommend strategies and outline next steps.
  • Scenario: Your client’s high bounce rate indicates low page speed due to large images that take a long time to load.
  • Approach: Help the client visualize how best practices lead to better outcomes. Recommend image compression as a next step.

The conclusion doesn’t always mean the end of the story

Finding the story in your data — and communicating it clearly — is how you build trust with clients.

Clients don’t want industry jargon. They want to feel seen, understood, and that they’ve entrusted their digital marketing success to the right person. Stories, and the connections they form, get them there.

Reaching the conclusion of your data’s narrative isn’t the end, but the beginning: the start of strategy implementation, of collaborative partnerships, and of greater results. 

When looking at data, you and your client are on a journey together. A downward trend in your data doesn’t mean your story is over, and an upward trend doesn’t mean there’s no hope for a sequel. In either case, a new journey (your next strategy) can begin.

Is your AI readiness a mirage? by AtData

20 April 2026 at 15:00

AI has quickly become the most overconfident line item in the modern marketing roadmap.

Budgets are shifting. Teams are being restructured. Vendors are being evaluated almost exclusively through the lens of how “AI-powered” they appear. There is a growing assumption that once the right models are in place, performance will follow. Better targeting. Smarter segmentation. Higher conversion. More efficient spend.

It sounds almost inevitable.

But there is a quieter reality beneath the momentum. One that rarely makes it into boardroom conversations or conference keynotes.

Most organizations are not struggling to use AI. They are struggling to feed it.

And what they are feeding it is far less reliable than they think.

The uncomfortable truth about inputs

AI does not create truth. It scales whatever it is given.

If the underlying data is fragmented, outdated or manipulated, the model does not correct it. It operationalizes it. At speed. At scale. With confidence.

This is where the gap begins.

Marketers have spent years investing in data infrastructure, pipelines and orchestration layers. On paper, the foundation looks strong. There is more data available than ever before. There are more signals, more touchpoints, more attributes tied to every customer.

The assumption is that this abundance translates into readiness. But volume is not the same as validity.

A customer profile built from five disconnected identifiers is not a unified identity. An email address that exists in a CRM is not necessarily active, reachable or even tied to a real person. Engagement signals that appear recent may be the result of automated activity, privacy shielding or bot interaction.

AI models are not designed to question these inputs. They are designed to find patterns within them.

So, when the inputs are flawed, the outputs become convincingly wrong.

Identity is the fault line

At the center of this problem is identity.

Every AI-driven use case in marketing depends on the assumption that you know who you are analyzing, targeting or predicting. Whether it is propensity modeling, churn prediction, audience creation or personalization, identity is the anchor.

Yet identity remains one of the least stable components of the data stack.

Consumers move across devices, channels and environments constantly. They use different email addresses. They share accounts. They create new profiles. They disengage and re-engage in ways that are difficult to track cleanly. Over time, what appears to be a single customer often becomes a composite of partial truths.

Even within authenticated environments, identity degrades. Touchpoints go inactive. Behavioral signals lose relevance. Records persist long after the underlying reality has shifted.

Most systems are not built to continuously reconcile these changes. They capture identity at a moment in time and treat it as durable.

And AI inherits that assumption.

Which means many models are making decisions based on identities that no longer exist in the way they are represented.

The hidden impact of fraud and synthetic activity

Another layer complicates the picture further. Not all data is simply outdated. Some of it is intentionally misleading.

Fraud is evolving alongside marketing technology. The barriers to creating accounts, generating engagement, or exploiting promotional systems have decreased significantly. Automated tools and AI itself have made it easier to simulate legitimate behavior at scale.

Fake accounts are not always obvious. They can pass basic validation checks. They can engage with content. They can move through funnels in ways that resemble real users.

From a model’s perspective, they are indistinguishable unless additional context is applied.

This creates a subtle but meaningful distortion.

Acquisition models begin to optimize toward patterns that include fraudulent behavior. Lifecycle strategies adapt to engagement that is not human. Performance metrics improve on the surface while underlying efficiency erodes.

The result is a feedback loop where AI reinforces the very issues it should be helping to solve.

And because the outputs look sophisticated, the problem becomes harder to detect.
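The distortion is easy to see in miniature. In this toy sketch (every number is invented for illustration), a batch of synthetic "conversions" makes one acquisition channel look like the winner, so a naive optimizer would shift budget toward exactly the traffic that isn't human:

```python
# Toy sessions for two acquisition channels. "is_bot" is the ground truth
# that the measurement pipeline does not have. All numbers are invented.
sessions = (
    [{"channel": "A", "converted": True,  "is_bot": False}] * 20 +
    [{"channel": "A", "converted": False, "is_bot": False}] * 80 +
    [{"channel": "B", "converted": True,  "is_bot": False}] * 10 +
    [{"channel": "B", "converted": False, "is_bot": False}] * 90 +
    [{"channel": "B", "converted": True,  "is_bot": True}]  * 30   # synthetic "wins"
)

def conv_rate(rows, channel, include_bots=True):
    """Conversion rate for a channel, with or without the bot traffic."""
    rows = [r for r in rows if r["channel"] == channel
            and (include_bots or not r["is_bot"])]
    return sum(r["converted"] for r in rows) / len(rows)

print(conv_rate(sessions, "A"))                      # 0.2
print(conv_rate(sessions, "B"))                      # ~0.31 -- apparent winner
print(conv_rate(sessions, "B", include_bots=False))  # 0.1  -- the human reality
```

With the bots included, channel B appears to convert at roughly 31% against A's 20%; strip them out and B's real rate is 10%. The surface metric and the underlying efficiency point in opposite directions.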

Why traditional data strategies fall short

Most organizations are aware that data quality matters. Significant effort goes into cleansing, deduplication and normalization. Records are standardized. Fields are filled. Duplicates are merged.

These steps are necessary, but they are not sufficient. Clean data is not the same as accurate data.

A perfectly formatted email address can still be inactive. A deduplicated profile can still represent multiple individuals. A normalized dataset can still be missing critical context about behavior, risk or authenticity.
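The gap between clean and accurate fits in a few lines. In this sketch (the record fields and the 180-day freshness window are invented assumptions, not any vendor's schema), an address sails through the structural check most cleansing pipelines stop at, yet fails any test of substance:

```python
import re
from datetime import datetime, timedelta

# Hypothetical record: the address is syntactically valid, but the
# fields below are what format-level cleansing never looks at.
record = {
    "email": "jane.doe@example.com",
    "last_human_open": datetime(2023, 1, 15),  # stale engagement
    "mailbox_active": False,                   # no longer deliverable
}

def is_well_formed(email: str) -> bool:
    """Structure check only -- the kind most cleansing pipelines stop at."""
    return re.match(r"^[^@\s]+@[^@\s]+\.[^@\s]+$", email) is not None

def is_usable(rec: dict, max_age_days: int = 180) -> bool:
    """Substance check: deliverable AND recently engaged by a human."""
    fresh = datetime.now() - rec["last_human_open"] < timedelta(days=max_age_days)
    return rec["mailbox_active"] and fresh

print(is_well_formed(record["email"]))  # True  -- passes the "clean" test
print(is_usable(record))                # False -- fails the "accurate" test
```

A pipeline that stops at the first function reports a healthy database; only the second tells you whether the record should feed a model.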

Traditional data practices tend to focus on structure. AI requires substance.

It requires an understanding of whether an identity is real, whether it is active, whether it is behaving in ways that align with genuine consumer patterns.

Without that layer, even the most sophisticated models are operating on incomplete information.

The illusion of readiness

This is how the mirage takes shape.

Dashboards show high match rates. Databases contain millions of records. Models produce outputs that appear precise. Campaigns are executed with increasing automation.

From the outside, it looks like progress.

But underneath, there are unresolved questions.

  • How many of those identities are actually reachable today?
  • How many represent real individuals versus synthetic or low-quality accounts?
  • How often are behavioral signals refreshed and validated?
  • How much of the model’s learning is influenced by noise?

These questions are no longer rare. They are foundational.

And yet they are often overlooked because they sit below the level where most AI initiatives begin.

A different way to think about AI readiness

True AI readiness does not start with model selection. It starts with input integrity.

It requires a shift in focus from how much data you have to how much of it you can trust.

That trust is built on a few critical dimensions.

First, identity accuracy. Not just the ability to match records, but to ensure that those records reflect real, current individuals. This includes understanding when identities change, when they become inactive and when they should no longer be used as the basis for decisioning.

Second, activity validation. Knowing that a signal occurred is not enough. You need confidence that it represents meaningful human behavior. This is where distinguishing between genuine engagement and automated or manipulated activity becomes essential.

Third, risk awareness. Every dataset contains some level of fraud or abuse. The question is whether it is visible and accounted for. Without that visibility, models will absorb and propagate those patterns.

When these elements are in place, AI begins to operate on a different plane. Predictions become more reliable. Segments become more actionable. Optimization aligns more closely with real outcomes.
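One way to operationalize those three dimensions is a gate that every record must clear before it enters modeling. The field names and thresholds below are illustrative assumptions, not a reference to any real product or scoring standard:

```python
from dataclasses import dataclass

@dataclass
class Record:
    identity_verified: bool         # identity accuracy: resolves to a real, current person
    days_since_human_activity: int  # activity validation: freshness of genuine engagement
    fraud_risk: float               # risk awareness: 0.0 (clean) to 1.0 (certain fraud)

def model_ready(rec: Record, max_idle_days: int = 180, max_risk: float = 0.3) -> bool:
    """Admit a record to training or targeting only if all three
    dimensions pass. Thresholds are illustrative, not recommendations."""
    return (rec.identity_verified
            and rec.days_since_human_activity <= max_idle_days
            and rec.fraud_risk <= max_risk)

records = [
    Record(True, 30, 0.05),   # passes all three gates
    Record(True, 400, 0.05),  # real person, but stale -> excluded
    Record(False, 10, 0.90),  # likely synthetic -> excluded
]
print([model_ready(r) for r in records])  # [True, False, False]
```

The point of the gate is suppression before modeling: stale and synthetic identities never reach the training set, so the model's patterns are learned from the records that survive all three checks.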

Where this creates advantage

Organizations that address these foundational issues are creating a structural advantage.

They are able to suppress low-value or risky identities before they enter the modeling process. They can prioritize outreach to individuals who are both reachable and likely to engage. They can detect and mitigate fraudulent behavior before it distorts performance metrics.

Over time, this compounds.

Models trained on higher-quality inputs learn faster and generalize better. Campaigns become more efficient. Measurement becomes more trustworthy.

Perhaps most importantly, decision-making becomes more grounded in reality.

This is where AI begins to deliver on its promise.

The path forward

There is no question that AI will continue to reshape marketing. The capabilities are real, and the pace of innovation is not slowing down.

But the idea that AI alone will solve underlying data challenges is a misconception. If anything, it raises the stakes.

Because AI does not just expose weaknesses in your data. It amplifies them.

The organizations that recognize this early are taking a more deliberate approach. They are investing in understanding their identity layer. They are prioritizing the validation of activity and the detection of risk. They are treating data not as a static asset, but as a dynamic system that requires continuous refinement.

They are not asking, “How do we apply AI to our data?”

They are asking, “Is our data worthy of AI?”

It is a more difficult question. It requires a deeper level of introspection. It challenges assumptions that have been in place for years.

But it is also the question that separates real readiness from the illusion of it.

And in a landscape where everyone is accelerating toward AI, clarity at the foundation is what ultimately determines who moves forward, and who simply moves faster in the wrong direction.

The latest jobs in search marketing

17 April 2026 at 21:33
Search marketing jobs

Looking to take the next step in your search marketing career?

Below, you will find the latest SEO, PPC, and digital marketing jobs at brands and agencies. We also include positions from previous weeks that are still open.

Newest SEO Jobs

(Provided to Search Engine Land by SEOjobs.com)

  • We Are: NoGood is an award-winning, tech-enabled growth consultancy that has fueled the success of some of the most iconic brands. We are a team of growth leads, creatives, engineers and data scientists who help unlock rapid measurable growth for some of the world’s category-defining brands. We bring together the art and science of strategy, […]
  • Job Description American Humane Society (AHS) is seeking a dynamic and strategic Vice President, Marketing to steward and elevate the integrity of both the American Humane Society (AHS) and Global Humane Society (GHS) brands. This leader will drive the development and execution of integrated marketing strategies that advance critical organizational priorities, strengthen national leadership as […]
  • Job Description Council & Associates is one of Atlanta’s fastest-growing PI firms — handling serious cases across truck accidents, premises liability, daycare injury, negligent security, and wrongful death. The firm is led by a nationally recognized trial attorney and built on a brand that goes beyond the courtroom into the community. We need a marketer […]
  • Job Description Content Marketing Specialist Malta Dynamics | Malta, OH (Hybrid) About the Role Malta Dynamics is seeking a Content Marketing Specialist to own the execution and consistency of Malta Dynamics’ brand voice across all channels. This role is responsible for producing, publishing, and optimizing high-quality content that drives inbound leads, supports sales, and reinforces […]
  • We are seeking an intermediate-level SEO Specialist for Hive Digital, a cutting-edge and award-winning agency that prides itself on helping change the world for the better. We offer a highly collaborative team that works together to deliver the best possible outcomes for our clients in a fast-paced, fun work environment. Are you ready to bring […]
  • About The Company goop is a lifestyle platform dedicated to exploration, curation, and groundbreaking conversation. From its award-winning beauty and fashion lines to its expansive editorial lens, goop invites women to embrace the process of becoming, and to discover deep joy in the pursuit of pleasure, beauty, and growth in all phases of life. Gwyneth […]
  • Job Description LK Distribution is a leading distributor of several brands and products offered on both e-commerce and wholesale. Specialized in the Alternative Product category in the CBD/Hemp Industry ranging from a large category of products. We are seeking a creative and dynamic individual with experience with independent online storefronts for each of our brands […]
  • Job Description Benefits: 401(k) Paid time off Dental insurance Health insurance Vision insurance A Digital Marketing Specialist at a leading real estate company requires high-energy, creative, and data-driven team member who helps elevate our brand and our agents’ digital presence. As the Digital Marketing Specialist, you will be the “engine room” of our online strategy. […]
  • Director, Global Digital Marketing, Integrated Marketing Communication (IMC) Team Position Overview The Director of Digital Marketing is at the center of 10x Genomics’ digital marketing engine, delivering measurable business impact and innovating across channels to ensure leadership in scientific markets. This position reports to the Vice President of Integrated Marketing Communications as is responsible for […]
  • Job Description Digital Marketing Specialist OURCU is looking for a Digital Marketing Specialist who is equal parts data-driven strategist and collaborative teammate. This role is ideal for someone excited to build and optimize HubSpot from the ground up, create meaningful campaigns, and clearly demonstrate the why behind marketing performance. If you love blending creativity with […]

Newest PPC and paid media jobs

(Provided to Search Engine Land by PPCjobs.com)

  • About Us: Naadam is redefining luxury by delivering the world’s finest cashmere at an accessible price. Founded in 2013, with a vision to bring premium, sustainably made cashmere to the everyday wardrobe, we’ve built a brand that values innovation, transparency, and connection with our customers. At Naadam, we are dedicated to pushing limits, nailing the […]
  • About the Role You’ll play a key role in driving Kashable’s customer activation, acquisition and retention. You’ll begin owning the execution and performance of one paid media channel and as you demonstrate results expand your scope into broader strategic decision-making and greater channel ownership. We’re looking for someone who combines strong strategic thinking with hands-on […]
  • Job Description Job Description Our client, an elite national Am Law firm, is seeking a Regional Marketing Specialist to support its New York office. This role offers the opportunity to work closely with firm leadership to ensure local marketing initiatives align seamlessly with firmwide and practice‐specific priorities. You will lead marketing efforts for the New […]
  • Job Description Job Description Salary: $85K-$110K Mason Interactive | Hybrid (3 days in office) | $85K-$110K Who We Are Mason Interactive is a 30-person full-service digital agency with offices in Brooklyn and Charlotte. We work with clients in education, fashion, wellness, and luxury across all channels: paid search, paid social, SEO, programmatic, creative, and affiliate. […]
  • A property management firm in New York is seeking a Leasing Coordinator to manage marketing, leasing, and renewal strategies. This position involves performing all activities related to leasing to new residents, ensuring resident satisfaction, and executing lease renewals. The ideal candidate will be responsible for conducting tours, processing applications, and developing marketing plans. This role […]

Other roles you may be interested in

SEO Manager, Veracity Insurance Solutions, LLC, (Remote)

  • Salary: $100,000 – $135,000
  • Lead, coach, and develop a high-performing team of SEO Specialists
  • Set clear expectations, quality standards, workflows, and growth paths across the team

Senior SEO Manager, Lunar Solar Group (Remote)

  • Salary: $80,000 – $100,000
  • Lead strategy, execution, and deliverables across 4–6 client accounts independently
  • Own end-to-end SEO strategy and execution across all core deliverables and processes

Performance Marketing Manager, Recruitics (Hybrid, Lafayette, CA)

  • Salary: $70,000 – $90,000
  • Work in platform to configure campaigns – set up budgets, targeting, creative, and run data
  • Monitor ongoing performance to identify areas of opportunity

Marketing, Social Media & PR Manager, PARTNERS Staffing (Fort Myers, FL)

  • Salary: $75,000 – $85,000
  • Develop and execute integrated marketing campaigns for shows, content releases, events, and brand initiatives
  • Identify target audiences and create strategies to grow reach and engagement

Local Search & Listings Manager, TurnPoint Services (Remote)

  • Salary: $80,000 – $90,000
  • Own the strategy and governance for local search visibility across all business locations.
  • Develop optimization frameworks and standards for Google Business Profiles and other listing platforms.

Senior Branding Manager, rednote (Hybrid, New York, US)

  • Salary: $228,000 – $320,000
  • Define and drive rednote’s global brand strategy, shaping its positioning across key international markets
  • Lead integrated marketing initiatives end-to-end, ensuring alignment across creative development and media execution

Performance Marketing Manager, Hirewell (Remote)

  • Salary: $85,000 – $95,000
  • Paid Search: Lead daily execution and management of Google Ads. This is a “hands-on” role requiring deep platform expertise.
  • Multi-Channel Management: Oversee and optimize campaigns across Meta, LinkedIn, and Programmatic channels.

Senior Paid Media Manager, Brightly Media Lab (Remote)

  • Salary: $70,000 – $100,000
  • Directly build, manage, and optimize campaigns within Google Ads, Microsoft Ads, and Facebook Ads (Meta).
  • Serve as the lead point of contact for your book of clients, taking full ownership of their success and growth.

Marketing Specialist, The Bradford group (Hybrid, The Greater Chicago area)

  • Salary: $60,000 – $62,000
  • Launch and manage paid social campaigns primarily on Meta platforms.
  • Oversee daily budgets and performance optimizations against revenue and ROI goals, using data-driven insights to continuously improve results.

Paid Search Specialist, Maui Jim Sunglasses (Peoria, IL)

  • Salary: $65,000 – $70,000
  • Plan, set up, and manage paid search, display, and shopping campaigns on Google Ads.
  • Manage and optimize advertising budgets to achieve revenue and efficiency targets.

Note: We update this post weekly. So make sure to bookmark this page and check back.
