Google tests video ads in local search results

Google is experimenting with video ads inside the local pack, signaling a shift toward more immersive, visual formats in location-based search.

Driving the news. The test was spotted by Anthony Higman, who shared that Google is integrating “immersive map view videos” into PPC ads tied to local results.

These video ads appear within the local pack — the map-based listings that show businesses near a user’s search.

What’s new. Instead of static listings or text-based ads, some advertisers may now have the option to surface video content directly in local search results.

  • The feature appears tied to settings within Google Ads’ Location Manager.
  • It may be enabled through a pre-opted setting in the shared library.
  • The format blends paid ads with Google Maps-style immersive experiences.

Why we care. This update could significantly increase visibility and engagement in high-intent local searches. Video ads in the local pack offer a new way to stand out and showcase locations, products, or services more effectively than static listings. It could also mean advertisers need to start investing in video creative to stay competitive in local listings.

Yes, but. The feature appears to be in early testing, and it’s unclear how widely it’s available or how performance compares to traditional local ads.

There’s also the question of creative requirements, as video production adds complexity for advertisers.

The bottom line. Google is bringing video into one of its most intent-driven surfaces — local search — as it looks to make ads more immersive and engaging.

First spotted. This update was spotted by Adsquire founder Anthony Higman, who shared the new local listing ad type on LinkedIn.

The digital PR duplication method: Rinse, reuse, repeat

Every digital PR (DPR) team’s been there: New data drops and the team huddles while someone stares at a blank Google doc spiraling over angles and journalist targets. Eventually, a pitch limps out the door just in time to hit “Send” before end of day.

The pitch then lands in a top-tier publication, everyone celebrates, and the next month the whole team does the exact same thing over again, like it never happened.

But here’s the thing nobody talks about: That winning pitch is a valuable asset, and most teams will just leave it sitting in their sent folder collecting virtual dust.

Whether it was a data study, a product launch, or an expert quote, that pitch is a template. And with AI, you can clone its DNA onto every new campaign rather than reinventing the wheel every single time.

By the numbers

The stakes for getting this right have never been higher. About 46% of journalists receive six or more pitches every single workday, and of those, 49% seldom or never respond to a pitch, per Muck Rack’s State of Journalism report. 

Pitch volume keeps climbing while relevance drops, with 47% of journalists saying they seldom or never receive pitches relevant to what they cover, Cision’s 2025 State of the Media Report found.

The volume problem is real, and AI is making it worse by enabling everyone to quickly and easily generate pitches. This means journalist inboxes are quickly filling up with content that sounds more generic than ever. 

So how do you get your pitches in front of as many journalists as possible while actually getting noticed? The answer is deceptively simple: Rather than blindly scaling your pitch generation, scale what you already know lands.

Meet the DPR duplication method

I call it the “DPR duplication method,” and the idea behind it is simple: rinse, reuse, repeat.

The process is straightforward. You take a pitch that generated coverage previously, determine exactly what made it work structurally, and then use AI to replicate that structure for your next campaign rather than prompting from a blank slate.

It works across pitch types, too, which is the part I love most about it. Data studies, product launches, expert quotes, reactive commentary — it doesn’t matter. If the structure worked once, it can work again, and if it worked 10 times, it can work 20.

One of my favorite pitches to use with this method is one I sent to an editor at PR Daily, and the subject line read: “Your basset hound is the cutest [New SEO study for PR Daily].”

The pitch was built around a data study on YouTube thumbnail performance, with findings that were specific, visual, and easy for a journalist to turn into a standalone story without much heavy lifting on their end. It landed. Same-day response.

Anatomy of a winning pitch: What made it work?

So why did it work? There are four reasons, and you can replicate every single one:

  • The subject line led with a personal connection before it ever mentioned the pitch, directly referencing the editor’s dog before dropping the study hook in brackets. This made it impossible to ignore because initially it didn’t feel like a pitch. Instead, it felt like a personal message from someone who actually knew them.
  • The opening hook built rapport before it built a case, acknowledging their pet and sharing something personal before naturally transitioning into the actual reason for the email. By the time the data showed up, they were already reading and receptive.
  • The stat sequencing moved from the broadest behavioral finding down to the most specific and visual. This gave them multiple angles to work with, depending on what their audience needed most. It didn’t force them to figure out the story themselves. Plus, it was also about a topic they were already covering.
  • The CTA was framed entirely around their readers and not around my study or client. It asked whether their audience of growing businesses interested in videography would benefit from the findings. The CTA wasn’t simply, “Would you like to cover this?” Instead, it was, “Would your readers benefit?” That’s a very different ask, and journalists immediately feel the difference.

Steal the structure: Prompt by prompt

Don’t describe your best pitch to the AI. Instead, give it the pitch by pasting in the full text. Then, ask it to mirror the specific parts that made the pitch work rather than having it write something new from scratch.

Here’s how that looks using a hypothetical campaign. Say you are pitching a new survey for a financial wellness company that shows one in three Americans have skipped a doctor’s appointment in the last year because of cost. This is strong data with a clear emotional hook that a lot of journalists covering personal finance or healthcare would care about.

You need to pitch it, and you need it to land. So you open the PR Daily pitch above, and you use it as your blueprint, duplicating each component that made it work for the new campaign.

Duplicate the subject line

That PR Daily subject line worked because it opened with something personal to the journalist before it ever mentioned the study, and you want that same energy in every new pitch you send:

  • “Create seven headlines with each provided stat. For example: [paste your winning subject line format].”
  • “Make this subject line more focused on [new topic]: [paste winning subject line].”
  • “Make this subject line more newsworthy based on the articles I provided: [paste current subject line draft].”
  • “Make this statistic into a newsworthy headline: [paste stat].”
  • “Make this headline more personal to a journalist covering [beat]: [paste headline].”

Duplicate the opening hook

The opening worked because it felt human before it felt like a pitch, and injecting that same warmth and specificity into a new campaign is as simple as showing the AI exactly what you mean rather than trying to describe it:

  • “Love this opening. Make the new opening mimic more of this: [paste opening from winning pitch].”
  • “Here is some trending news. Highlight this in the opening hook: [paste URL].”
  • “Make this opening more [inflation/healthcare/financially] focused: [paste current opening].”
  • “Here is another example of what is happening right now. Let’s incorporate it: [paste URL].”
  • “Make this intro feel more like a journalist would write it and less like a press release: [paste current intro].”

Duplicate the stat sequencing

The stats in the PR Daily pitch moved from the broadest finding down to the most specific and surprising, which handed the journalist a ready-made narrative she could work with instead of a list of numbers she had to interpret herself:

  • “Here are my key statistics: [paste stats]. Make the stats mimic this verbiage: [paste stat section from winning pitch].”
  • “Make this statistic more clear and newsworthy but not misleading: [paste stat].”
  • “Rewrite these stats so they flow like a story, starting broad and getting more specific: [paste stats].”
  • “Make these stats feel more conversational and less like a press release: [paste stats].”

Duplicate the CTA

The CTA worked because it put the journalist’s readers at the center of the ask rather than the study or the client, and that shift in framing is something you want to carry into every pitch you send:

  • “Make the CTA more like this: [paste CTA from winning pitch]. New topic is [insert topic].”
  • “Make this CTA more [topic] focused: [paste current CTA].”
  • “Rewrite this CTA so it leads with what the journalist’s readers will get, and not what we want covered: [paste current CTA].”
  • “Make this feel less salesy and more like a genuine offer: [paste current CTA].”

Duplicate the follow-up

The follow-up gets the exact same treatment, because there is a version of your best follow-up already sitting in your sent folder. You should be using this winning follow-up as the model every time instead of writing a new one:

  • “Mimic this follow-up and add the link [paste URL]: [paste your winning follow-up].”
  • “Mention [insert trend] from [insert article] in this follow-up: [paste follow-up].”
  • “Rewrite this follow-up so that it leads with a new stat we did not include in the original pitch: [paste follow-up and new stat].”
  • “Make this follow-up shorter and punchier while keeping the same structure: [paste follow-up].”

Every component has a proven version already sitting in your sent folder, so use it. Re-prompting with the actual text of the original rather than describing it will consistently yield more faithful results, as the AI won’t need to guess at your voice. Instead, it has a blueprint.
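If your team scripts this workflow, the "paste, don't describe" step can be sketched in a few lines of Python. This is a generic illustration, not tied to any particular AI vendor or API; the function name and prompt wording are assumptions you would adapt to your own sent-folder material.

```python
# Sketch of the DPR duplication method's re-prompting step.
# The winning subject line below comes from the example in this article;
# the prompt wording and function name are illustrative assumptions.

WINNING_PITCH = {
    "subject_line": "Your basset hound is the cutest [New SEO study for PR Daily]",
    "cta": "Would your readers benefit from these findings?",
}

def duplication_prompt(component: str, new_topic: str, pitch: dict) -> str:
    """Build a prompt that pastes the winning text verbatim so the
    model mirrors its structure instead of guessing at your voice."""
    winning_text = pitch[component]
    return (
        f"Here is a {component.replace('_', ' ')} that earned coverage:\n"
        f"---\n{winning_text}\n---\n"
        f"Mimic its structure and tone for a new campaign about {new_topic}. "
        "Keep the same framing; change only the topic-specific details."
    )

prompt = duplication_prompt(
    "subject_line", "skipped doctor visits due to cost", WINNING_PITCH)
print(prompt)
```

The point of the helper is simply that the model always receives the original text, never a paraphrase of it.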

You can duplicate anything

Ask yourself what is preventing your current pitches from landing. The first answer that comes to mind probably isn’t the lack of a new AI tool. Rather, it’s likely a structural ingredient from something that already worked and that you stopped using the moment it landed coverage.

The DPR duplication method can apply to every part of your outreach (e.g., headlines, pitch intros, stat formatting, CTAs, sign-offs, and follow-ups). Every single component can be duplicated and evolved from a version that has already proven its effectiveness. 

I know what you might be thinking at this point: Won’t pitches start to sound the same if they all pull from the same structure? The answer is no, because the structure is yours, built from your wins, your voice, and your relationship with a specific editor about her specific dog. Nobody else has that blueprint.

Here are some questions worth considering before your next campaign:

  • What group of stats did you love from a past pitch, and how can you use them as a formatting model for new data?
  • What pitch generated an outsized amount of press, and what was the structural reason it actually worked?
  • What headlines received responses from journalists, and what was the pattern that made them land?
  • What in your past experience can be enhanced with AI rather than replaced by it?

Using AI doesn’t require sacrificing the secret sauce of what generates press — because the strategy is still yours. AI just helps you execute it faster and more consistently without losing the specific ingredients that made your best work actually work.

Your next pitch starts with your last win

Open the pitch that generated your best coverage in the last 12 months, whether it was a data study, product launch, or expert quote pitch. Identify the things that made it work, including the subject line, opening hook, stat or story sequence, and the CTA. Notice what made each one feel specific, human, and impossible to ignore.

Then prompt AI to duplicate each component individually using that pitch as the model. Add current news context where it fits, combine everything, refine as needed, and duplicate the follow-up, too.

You’re not copying. You’re compounding.

Rinse, reuse, repeat.

Utility news content: How to win beyond clicks in AI search

In 2026, news SEO content performance isn’t just defined by page views and clicks — brand awareness is taking center stage. With the emergence of multimodal search, digital editorial strategy is no longer just about the first page of Google. You have to meet readers anywhere and everywhere they consume content. 

Amid this industry shift, AI platforms are an increasingly important traffic source for publishers to consider. If publishers want to remain relevant, it’s critical to find ways to play ball with Google AI Overviews, chatbots, voice assistants, and other emerging technologies. 

Fortunately, utility news content is a key deliverable that can connect with audience needs across platforms throughout a variety of breaking news and evergreen windows. 

What is utility news content? 

Utility news content is service journalism that’s specifically crafted to provide simple and straightforward answers to topline questions. The recent rise of answer engine optimization (AEO) is driven by a similar methodology. 

Service journalism encourages readers to contemplate:

  • What does this topic mean?
  • Why does this angle connect with my interests and needs? 
  • How can I apply this information to my life? 

When constructing a utility content strategy, we must remember: Simple isn’t stupid. Don’t overcomplicate the process. Listen to the needs of your audience and let those signals guide you to the right places. 

In terms of execution, the “set it and forget it” days of evergreen content are fading in favor of more proactive audience engagement strategies. 

To maximize the impact of utility news content, it’s essential to:

  • Map out evergreen targets in advance with trend forecasting around seasonal events and recurring search patterns.
  • Track the breaking news cycle closely to pinpoint new areas of opportunity.
  • Refresh existing explainers when corresponding breakout queries arise.
  • Create new utility posts when content gaps exist.
  • Recirculate related resources across appropriate platforms in timely windows.
  • Track article performance to assess overall impact and share key takeaways with editorial stakeholders.
  • Consolidate related articles in a streamlined content library for easy access and regular review.

What are traditional utility news content examples? 

Helpful guides in this format show that simple and straightforward content can serve reader needs by covering breaking news within a crucial window of time, zoning in on evergreen themes of interest, and connecting with seasonal tentpole event calendars. 

ESPN utility news AI Overviews case study 

During my tenure as SEO Director at ESPN from 2022-2026, I spearheaded a utility content initiative that prioritized fan-forward queries throughout a variety of game and event windows. In managing that workflow, I picked up helpful dos and don’ts for making utility content shine within a newsroom. 

These examples demonstrate why utility news content can resonate in AI modules if you have a proper editorial strategy in place.

Create content that can maintain relevance throughout long-term event cycles

Which NBA teams have never won a championship - AI Overview

When the Indiana Pacers started trending for the “NBA teams that have never won an NBA championship” theme at the end of the 2025-26 NBA season, updating this evergreen piece of content to maintain accuracy secured consistent AI Overview placement through to the championship. 

Answer breaking news questions with evergreen resources 

How many titles did Hulk Hogan win - AI Overview

Following Hulk Hogan’s unexpected passing in July 2025, his wrestling titles were a major search topic that translated well into this breakout explainer. Its evergreen potential can resonate with audiences beyond the initial trending window. 

Create evergreen lists in advance that can spike off of breaking news

WNBA jersey retirements - AI Overview

Candace Parker’s 2025 jersey retirement gave this previously published evergreen roundup a fresh window to reach new fans and drive traffic. 

Recirculate guides that can benefit from frequent updates

Most successful father-son duos in NBA - AI Overview

With LeBron and Bronny James frequently in the news, this fun evergreen take reflects their evolving stats and provides a related link to feature complementary content. 

Lean into your brand 

Lee Corso's college game day record - Google Search

Whenever possible, it’s great to showcase in-house talent with breakout posts that spotlight unique elements synonymous with your brand.

Why is utility news content still relevant? 

With the rise of zero-click search, some concerns have been raised about investing in service journalism when related SERP modules regularly snatch up topline shelf space in time-sensitive windows. 

Though declining click-through rates are alarming, service journalism isn’t only about traffic. Publishers have a duty to showcase legitimate sourcing and provide accurate information that serves audience needs across top platforms. 

Among many recent studies, Ahrefs presented new data in December 2025 that showed how easy it is for LLMs to get confused and present inaccurate information to users. Google AI Overviews can also sometimes produce “predictions” that share outcomes on events that haven’t happened yet. 

Inaccuracies are especially concerning in breaking news windows, which AI Overviews have been increasingly staking their claim on (as noted by Glenn Gabe). 

Additionally, innocent online interactions can quickly turn dangerous, which The Guardian emphasized in a 2025 investigation that uncovered how Google’s AI Overviews gave “very dangerous mental health advice” to billions of searchers.

Though visibility challenges are frustrating, we can’t sit idly by and let the general public be led astray when seeking timely information. In 2026 (and beyond), AI-friendly utility news content is still worth championing in your newsroom.  

How can publishers pinpoint the best topics for utility news content? 

An ideal utility content workflow should function under a healthy combination of breaking news reaction and evergreen trend forecasting. 

A variety of tools and interfaces can help publishers during the ideation process:

Google Trends

‘Trending now’ section

  • Toggle upper navigational features to explore trends within different regions, date ranges, and content categories.
  • “Past 4 hours” is ideal for breaking news brainstorming.  
  • Use the “Search volume” and “Started” filters alongside the search interest activity chart to gauge the timing and format of potential pieces.
  • Older trends can be repurposed past the breaking news window in the form of timelines and “bigger picture” explainers.
  • Sift through the “Trend breakdown” section for angles of interest that could translate into breakout explainers. 
  • Tap into the “In the news” section for brand performance validation and competitor intel. 

Standalone topic searches

  • Discover trending questions, people, places, events, and things in the “Rising queries” section.
  • Determine essential phrases to target in headlines and subheadlines with the “Top queries” section. 
  • Conduct localized research with the “Interest by subregion” module in “Classic Explore” view. 
  • Use the comparison bar to narrow down potential topics of interest for breakout articles and establish the most search-friendly phrasing for headlines. 
  • Experiment with “YouTube,” “News,” and “Image” search filters to assess how searches on topics of interest may vary by platform. 
  • Regularly share related insights with external departments that can incorporate search-friendly angles into their workflows and deliverables (e.g., the video team with “YouTube” search, the photo team with “Image” search).
  • Use “Past hour” filter during breaking news windows to assess urgent audience queries and predict where search behavior may be going next. 
  • Use “Past 5 years” and “2004-present” filters to identify seasonal audience trends that can positively influence year-over-year content planning and “all-time highs” in search interest:
    • When do search interest spikes occur every single year? 
    • What content can you refresh on an annual basis to capitalize on cyclical audience behavior? 
    • How should you stagger your content rollout during a recurring event window?  
  • Experiment with the “Suggest search terms” Gemini feature for supplementary content research (Note: If you use AI tools in a prominent way during the content creation process, it’s important to be transparent with your audience and include a corresponding disclosure statement within the final deliverable).

Curated pages (ad hoc basis) 

  • Zone in on trending takeaways around tentpole events with mass interest.
  • Regularly check the Google Trends homepage for featured modules around elections, sporting events, awards shows, etc.  

Curated newsletter (typically Monday-Friday) 

  • Take the guesswork out of daily Google Trends analysis.
  • Receive top trends, breakout queries, data visualizations, and interesting stats from industry experts that can be applied to breakout articles.  
  • Sign up on the Google Trends homepage. 

Competitor research throughout all modules

  • Gauge where you’re winning and pick up on lingering content gaps where other publishers may have an edge. 

Google News

  • Explore primary topics of interest in the “Top stories” homepage section.
  • Dig into upper navigational bar content categories based on newsroom beats.
  • Discover regional opportunities in the “Local” section.
  • Access the platform regularly to receive a curated selection of articles in the “For you” section based on your personal user behavior and interests. 
  • “Follow” searches that would benefit from regular monitoring to streamline the daily research process. Press the star button on the upper right-hand side of an individual search to save a topic to your “Following” section.
  • Utilize standalone topic searches for targeted content ideation. 
  • Explore standalone source searches for validating brand performance and conducting competitor research.

People Also Ask

Converse with “AI Mode” to uncover topic clusters that extend beyond your initial search. 

Semrush

Pinpoint high-volume Q&A angles that maintain long-term relevance in seasonal windows. 

Alternative search platforms

Identify trends that can spark content with compatibility across a variety of formats, including articles, videos, and social posts:

  • Google Autocomplete: Ideal for research on high-intent long-tail keywords.  
  • YouTube search bar: Ideal with topics that can be enhanced by strong visuals and/or a video walk-through approach.
  • TikTok search bar: Ideal for targeting younger demographics. 

How should utility news content be constructed for success on AI platforms?  

Once the brainstorming process is complete, search strategists need to adopt the right techniques to make sure that corresponding content serves utility needs. 

Utility news content can be structured to be more “AI-friendly,” so to speak. Specifically, LLMs are more likely to cite content that contains: 

  • Simple and straightforward formatting
  • FAQ styling
  • Easily extractable answers 
  • Fresh updates
  • Objective stats that lean into substance  

Include AI-friendly tactics like bullet point lists, numbered steps, tables, keyword-targeted subheadings, and snackable paragraphs to better position your content for LLMs.

Don’t bury the lead 

Answer the most search-friendly questions in the top half of the article using the journalism staples: who, what, where, when, why, and how. 

Break out the buzziest angles from live blogs, rolling roundups, and extensive features into standalone articles. Quick-hit explainers and deep dive analyses can cover the same general topic and serve different audiences. 

Important themes can get lost in bigger pieces and fail to surface in related external searches, whereas separate articles with targeted headlines can increase search potential. 

Highlight E-E-A-T (experience, expertise, authoritativeness, and trustworthiness)

Promote need-to-know information that can appeal to the masses while elevating the unique value your brand can offer:

  • Quotes from brand experts.
  • In-house data.
  • Original reporting.
  • Regional angles.  
  • Historical context.

Be sure to create author pages to consolidate content from in-house experts, make articles more discoverable, and encourage ongoing followership.

Utilize timestamps to your advantage 

Implement a “Last updated” marker to produce the freshest search signal possible to readers and crawlers. Refresh articles with new and noticeable updates, such as content, headlines, photos, videos, and links.

Tweak and recirculate content across a variety of related news windows to get articles back into feeds and provide readers with related context. Small-scale updates can build up to substantial traffic and AI Overview placements throughout a calendar year. 

You should also be adding new links to supplementary stories as news cycles evolve. These updated links send a fresh signal to Google, provide essential context to readers, and reinforce your E-E-A-T on top priority topics for your brand.

Create short, concise titles and headlines that prioritize search-friendly entities

When crafting headlines, avoid conversational fluff (including quotes, which are better utilized elsewhere) and instead zone in on people, places, events, etc. 

Try to keep your headlines to 60 characters or fewer to stay on the safe side with Google’s roulette of SERP formatting. Google is increasingly randomizing the appearance of search results with the influx of multimodal sources flooding the scene.

For instance, “Top stories” carousels have been disappearing on more newsy searches, which can shift SERPs back to their traditional title tag structure (which can sometimes cut off titles and headlines at less than 60 characters) vs. a headline structure (which tends to have more wiggle room with character count). 

Though frontloading keywords isn’t a requirement in titles and headlines (variety and readability are helpful for UX), you should keep top-priority themes away from the 60-character cutoff point as a precaution. 
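That precaution is easy to automate in an editorial checklist. Below is a minimal sketch in Python; the 60-character threshold comes from the guidance above, and the function name is made up for illustration.

```python
def headline_check(headline: str, priority_terms: list[str], cutoff: int = 60):
    """Flag headlines that risk SERP truncation and priority terms
    that sit past the safe cutoff point."""
    issues = []
    if len(headline) > cutoff:
        issues.append(f"headline is {len(headline)} chars; may be cut at {cutoff}")
    for term in priority_terms:
        pos = headline.lower().find(term.lower())
        if pos == -1:
            issues.append(f"priority term missing: {term!r}")
        elif pos + len(term) > cutoff:
            issues.append(f"priority term past cutoff: {term!r}")
    return issues

# A 55-character headline with "NBA" well inside the cutoff passes cleanly.
print(headline_check(
    "NBA teams that have never won a championship, explained", ["NBA"]))
```

A check like this can run in a CMS plugin or a pre-publish script so priority entities never drift past the truncation point.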

Remember, it’s okay to tweak titles and headlines if readers aren’t connecting with them. Fresh angles can provide a late traffic surge, especially when paired with homepage and/or app placement. 

If SERPs lag in reflecting your latest updates, re-index articles in Google Search Console.

Be strategic with keyword placement

As Google tests out AI-generated headline rewrites, it’s increasingly important to optimize original headlines with essential terms that readers are searching for. 

You’ll also want to showcase supplementary keywords in meta descriptions. Utilize a call to action when appropriate, especially with guides in urgent windows like natural disasters, shootings, etc. 

Mirror top keywords from titles and headlines into URLs, but tread carefully with years and specific numbers in URLs to maintain evergreen status as news cycles evolve. 

Don’t forget to optimize your images! Include keyword-rich alt text and captions on any images in your news content, which help AI models better understand visuals within your content and improve your odds of discoverability. 

Implement sitemaps and structured data

Enable a news-specific sitemap to optimize delivery of timely content. This will emphasize freshness, streamline indexing, and boost overall search visibility.    
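For illustration, a Google News sitemap entry follows a fixed XML shape (a `news:news` block inside each `url`), which can be generated with Python's standard library. The URL and publication details below are placeholders:

```python
import xml.etree.ElementTree as ET

# Official namespaces for sitemaps and the Google News extension.
NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9",
      "news": "http://www.google.com/schemas/sitemap-news/0.9"}

def news_sitemap(entries):
    """Build a minimal news sitemap from a list of article dicts."""
    ET.register_namespace("", NS["sm"])
    ET.register_namespace("news", NS["news"])
    urlset = ET.Element(f"{{{NS['sm']}}}urlset")
    for e in entries:
        url = ET.SubElement(urlset, f"{{{NS['sm']}}}url")
        ET.SubElement(url, f"{{{NS['sm']}}}loc").text = e["loc"]
        news = ET.SubElement(url, f"{{{NS['news']}}}news")
        pub = ET.SubElement(news, f"{{{NS['news']}}}publication")
        ET.SubElement(pub, f"{{{NS['news']}}}name").text = e["publication"]
        ET.SubElement(pub, f"{{{NS['news']}}}language").text = e["language"]
        ET.SubElement(news, f"{{{NS['news']}}}publication_date").text = e["date"]
        ET.SubElement(news, f"{{{NS['news']}}}title").text = e["title"]
    return ET.tostring(urlset, encoding="unicode")

xml_out = news_sitemap([{
    "loc": "https://example.com/news/story",  # placeholder URL
    "publication": "Example News", "language": "en",
    "date": "2026-01-15", "title": "Example utility explainer",
}])
print(xml_out)
```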

Additionally, employ schema markup to help ensure the proper indexation of articles. “NewsArticle,” “LiveBlogPosting,” and “FAQPage” are especially relevant for surfacing utility news content.
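Schema markup is typically embedded as a JSON-LD script tag. Here is a minimal sketch of a `NewsArticle` payload built with Python's `json` module; the field values are placeholders, and a production page would add publisher, image, and URL properties:

```python
import json

def news_article_jsonld(headline, date_published, date_modified, author):
    """Build a minimal schema.org NewsArticle JSON-LD payload.
    Values are placeholders for illustration only."""
    return json.dumps({
        "@context": "https://schema.org",
        "@type": "NewsArticle",
        "headline": headline,
        "datePublished": date_published,
        "dateModified": date_modified,  # the "Last updated" freshness signal
        "author": {"@type": "Person", "name": author},
    }, indent=2)

snippet = news_article_jsonld(
    "Example utility explainer", "2026-01-10", "2026-01-15", "Staff Writer")
print(snippet)
```

Keeping `dateModified` current whenever an article is refreshed reinforces the timestamp strategy described earlier.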

How can you recirculate utility news content effectively? 

Once publishers have established a productive utility content workflow, it’s essential to employ a strategic recirculation strategy to maximize visibility in all appropriate channels. 

“SEO is dead” messaging has been spreading over the past year. Everyone’s entitled to their own opinion, but I believe that as long as people are searching for the information they need online, traditional search best practices are still very much alive. 

However, certain old-school SEO ideologies are dying off, chief among them being that search performance lives and dies with the first page of SERPs. With ongoing AI visibility challenges, search strategies must extend beyond Google’s digital walls. 

We must recirculate always, in all ways

In 2026, search strategists need to be audience strategists to surface content across all the places people visit online. 

Collaborate and find common ground with departments across your organization to be able to quickly elevate search-friendly angles and content within crucial news windows.

A strong strategy is essential to ensure your brand stays top of mind across platforms when timely audience needs arise. 

Channels that can benefit from cross-departmental search and distribution strategies include: 

  • Your website/homepage
  • Apps
  • Alerts 
  • Newsletters
  • Podcasts
  • Instagram/Threads
  • Facebook
  • X
  • Bluesky 
  • Reddit
  • TikTok
  • LinkedIn 
  • YouTube 
  • Google Discover 
  • News aggregators (Apple News, Smart News, etc.) 

How should newsrooms assess the performance of utility news content? 

With a growing list of platforms in the content recirculation mix, performance tracking is evolving with additional nuances for publishers to consider. 

Prior to Google’s Search Generative Experience and AI Overviews, utility news content was primed for placement in knowledge panels, featured snippets, and “Top stories” carousels. Google started to take up more of that top shelf space with its own bespoke charts and modules over time, minimizing publisher activity during top priority events. 

The public rollout of AI Overviews in May 2024 changed the playing field in a big way, but the development didn’t come out of nowhere. 

As AI Overviews have become increasingly prominent in search results, respected institutions such as the Pew Research Center have noted declining click-through rates across the news industry. This development has pushed publishers to place greater emphasis on overall brand visibility alongside their traditional prioritization of page views and clicks. 

Though standard metrics remain important, publishers should rethink what “successful content” means as audience engagement shifts.

In 2026 (and beyond), search strategists should pay extra attention to: 

  • AI Overview placements.
  • Featured snippet placements.
  • People Also Ask placements.
  • “Top stories” placements. 
  • Percentage of traffic from chatbots. 
  • Overall search impressions.
  • Organic search traffic across multiple utility pieces under one general topic. 
  • Year-over-year growth of evergreen SEO content. 

Other metrics that can indicate a positive editorial experience and encourage long-term brand loyalty include:

  • Scroll depth. 
  • Time spent on site.
  • Return visits on evergreen content. 
  • Bookmarked entry pages.
  • Newsletter, app, and other subscription signups from individual pages.  

Dedicated AI platforms from companies like Profound, Semrush, Similarweb, Ahrefs, and other industry vendors can help demystify the performance tracking process. 

Though every AI interaction may not lead to a click or page view, consistent placements in related modules can psychologically trigger trust and encourage long-term reader loyalty, as pointed out by Go Fish Digital. 

Since this performance ideology may differ from what some stakeholders are accustomed to, ongoing newsroom search training is critical to ensure that leadership understands the broader industry implications. 

For instance, positive performance snapshots should be regularly shared with editorial partners to reinforce the impact of the content investment. Postmortem reports can also provide performance insights following tentpole events, driving home key takeaways and reinforcing best practices for the future.

How does personalization play a role in surfacing utility news content? 

The recent rise in personalization features underscores the growing need for publishers to adopt brand-first editorial strategies. To maximize brand reach despite decreased visibility in traditional SERPs, it’s critical for publishers to leverage features that can strengthen brand loyalty. 

Preferred sources in Google “Top stories” carousels and new follow capabilities in Google Discover can increase the value of everyday interactions that are likely to trigger needs that utility content can address.

For example, Google shared that when someone picks a preferred source in “Top stories,” they click that site twice as often on average. With such benefits, publishers should demystify these offerings and encourage readers to curate their content consumption habits in their favor.  

Instructions to guide your readers to choose your brand as a preferred source in Google’s “Top stories” carousel: 

  • Log into your Google account. 
  • Search for a trending topic that would populate a “Top stories” carousel. 
  • Click on the star icon next to “Top stories.” 
  • Enter [source name] in the search bar and check the corresponding box. 
  • Reload results and watch the content shift based on your new selection.
  • Once you pick your favorite sources, they’ll appear more frequently in “Top stories” carousels or in a dedicated “From your sources” section on search results pages.

Instructions to guide your readers to enable the “Follow” feature in Google Discover:

  • Log into your Google account. 
  • Scroll through your Google Discover feed.
  • Find a story from your favorite source.
  • Click the “Follow” button in the upper right-hand corner. 
  • Once you follow sources, you’ll see more of their content in your feed.

To maximize visibility around these new features and simplify the signup process, publishers can install related buttons on their article pages and create standalone documentation that illustrates implementation.

How can utility news content benefit a newsroom and the industry at large? 

We know that readers benefit from personalized strategies, but there are also advantages to sharing the utility content ideation and creation process with more colleagues in your newsroom. Service journalism can have a positive internal impact by creating a pathway for more colleagues to participate in the content creation process. 

Opening up the utility workflow within your organization can encourage colleagues across the following departments to showcase unique expertise and encourage a culture of inclusivity that can elevate search-friendly coverage:

  • Editorial sections: In-house experts to loop in during the research process and support with content gap coverage.  
  • Audience engagement: Trend trackers who pinpoint which emerging topics are worth creating content around and featuring on the website, apps, and other spaces.  
  • Social media: Cross-platform collaborators to link up with on shared trends that can maximize brand visibility in multimodal search. 
  • Data and analytics: Methodical minds who can explain how performance insights should influence future content roadmaps. 
  • Design: Visual visionaries who can create bold new environments for search stats to live on, including maps and infographics. 
  • Product: Technical talents who can build proprietary tools that address reader needs in unique ways during timely windows. 
  • Features: Outside-the-box thinkers with strong sourcing who can highlight newsy angles in a narrative and/or investigative fashion. 
  • Copy editors: Streamlined strategists who ensure maximum accuracy in explainers, guides, and other objective resources.  
  • Freelance writers: External partners who can bolster internal efforts with outside expertise and supplemental bandwidth. 
  • PR and communications: Internal partners who can spotlight brand priorities that can be elevated through search-friendly content.

Visibility, trust, and why utility content still wins

Though the SEO industry faces unique challenges in 2026, publishers can still benefit from creating utility content. Amid LLM inaccuracies and AI growing pains, we must continue to serve our readers with accurate, authoritative articles in digestible formats that align with evolving content preferences. 

With Google testing out adding more links in AI Overviews, I remain cautiously optimistic that publishers and AI platforms can work in tandem to provide optimal editorial experiences to audiences in the future. 

In the meantime, keep these fundamental best practices in mind:

  • Prioritize audience needs.
  • Elevate newsroom expertise.
  • Forge a path forward that champions evolution while honoring lasting fundamentals.

Google adds Read more links best practices

Back in December, Google began showing Read more links on some search result snippets within Google Search. Today, Google published new documentation covering best practices for getting Read more links to show in the Google search results.

The best practices. The new documentation was posted in the snippets section of Google’s search documentation, and it lists three best practices:

  • Make sure content is immediately visible on the page to a human (and not hidden behind an expandable section or tabbed interface, for example).
  • Avoid using JavaScript to control the user’s scroll position on page load (for example, don’t force the user’s scroll position to the top of the page).
  • If you make history API calls or window.location.hash modifications on page load, make sure you don’t remove the hash fragment from the URL, as this breaks deep linking behavior.
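
The third best practice can be illustrated with a small sketch. This is a hypothetical helper (`buildUpdatedUrl` is an illustrative name, not a browser or Google API) showing how a page might rewrite its query string on load while leaving the hash fragment intact, so deep links keep working:

```typescript
// Hypothetical helper: update a URL's query string without touching its
// hash fragment. Stripping the hash on page load is what breaks
// deep-linking behavior, per Google's best practices.
function buildUpdatedUrl(currentUrl: string, newQuery: string): string {
  const url = new URL(currentUrl);
  url.search = newQuery; // change the query string as needed...
  // ...but never clear url.hash; the fragment must survive the rewrite.
  return url.toString();
}

// In a browser, the rewrite would then go through the history API, e.g.:
//   history.replaceState(null, "", buildUpdatedUrl(location.href, "?ref=serp"));
```

The point is simply that whatever URL manipulation a page does on load, the `#fragment` portion should come out the other side unchanged.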

What it looks like. Google also posted an illustration in its documentation showing how these links appear in search results.

Why we care. These Read more links add an extra, eye-catching link to search result snippets. Ideally, that encourages more clicks to websites, not fewer.

More clicks to websites is a good thing, so review the best practices to make sure your pages are eligible for these links.

Rand Fishkin: Zero-click search began long before AI

Rand Fishkin didn’t get into SEO because he saw the future.

He got into it because he had no choice.

In the early 2000s, Fishkin helped run a small web business with his mom in Seattle. They hired another company to do SEO until they couldn’t afford to pay them anymore.

That moment pushed him into search marketing. More than 20 years later, Fishkin has become one of the best-known voices in SEO — and one of Google’s biggest critics.

In this interview, he looks back at how search has changed, what went wrong, and what may happen next.

Early SEO was wild

SEO today can feel messy. But in the early days, it was even more chaotic.

“There was no social media,” Fishkin said of that era, when forums like WebmasterWorld and Search Engine Watch were the center of the industry.

People shared tactics openly. Many of those tactics were risky. Buying links was common — and effective.

Fishkin did it, too. Then Google’s Matt Cutts called him out in public.

That moment changed how he approached SEO. He spent years focusing on “white hat” practices and following Google’s guidelines.

Looking back, though, Fishkin now questions whether that shift went too far. He believes Google’s own behavior over time has made those guidelines harder to trust.

The early industry wasn’t just chaotic — it was also full of strange and memorable moments. Fishkin recalled massive conference parties with huge budgets and over-the-top ideas, including a staged “retirement” of the Ask Jeeves mascot.

But what stood out most to him wasn’t the tactics or the parties.

“My favorite thing… is people,” he said, pointing to the relationships and friendships built over decades in search.

When Google stopped sending traffic

Many people think AI is the big turning point in search.

Fishkin says the shift started much earlier — around 2011.

That’s when the idea of “zero-click search” first appeared. Google began answering more queries directly on the results page instead of sending users to websites.

At first, it was small features like weather boxes and calculators.

Then it grew:

  • Around 2016–2017: nearly half of searches ended without a click
  • By 2018: more than half
  • Today: more than two-thirds

Fishkin emphasized that this trend didn’t start with AI — it has been building for more than a decade.

Publishers had a chance — and missed it

Fishkin believes publishers could have taken action early — but didn’t.

  • “The time to fight back… was 15 or 20 years ago,” he said.

In his view, large media companies should have worked together to push back against Google’s growing control. They could have demanded payment for content or limited how Google used it.

Instead, they allowed Google to crawl and use their content freely.

At the same time, Google expanded its influence through lobbying and policy.

  • “Publishers just missed that opportunity,” Fishkin said.

Now, he argues, the focus has to shift to adapting:

  • Build subscription businesses
  • Monetize attention, not just traffic
  • Learn how to operate within platform ecosystems

Some companies have already made that shift. Fishkin pointed to The New York Times as an example of a business evolving beyond traditional news consumption.

Did Google change?

Fishkin does not believe Google has become worse for users.

  • “If it was easier or better to search on Bing… people would go to those places,” he said.

But he does believe Google has become much harder for publishers and creators.

The change, he said, was gradual. As Google grew, went public, and aligned with investor expectations, its priorities shifted toward growth and revenue.

  • “They became the people that they spent time with,” Fishkin said.

The biggest AI mistake people make

Fishkin says most people misunderstand how AI works.

They treat AI answers like search results — consistent and reliable.

But they aren’t.

If you ask the same question multiple times, the answers can vary widely.

  • “You will get completely different answers. And if you do that 10 times, you will get 10 incredibly unique different answers,” he said.

His advice is simple: don’t rely on a single response. Ask multiple times and look for patterns. If the same answer shows up consistently, it’s more likely to be trustworthy.

This matters most for important decisions, like health or finance, where relying on one answer could be risky.

What he misses about the early days of SEO

Fishkin doesn’t miss a specific tactic or tool.

He misses the level of opportunity that existed in the early web.

Back then, smaller creators and independent sites had a better chance to succeed. Traffic was more evenly distributed.

  • “The world of clicks and traffic… was so… flat compared to… today,” he said.

What’s next?

Fishkin believes the future of media and search may look more like the past.

He expects a smaller number of powerful platforms to control most of the flow of information.

At the same time, individual creators will still produce much of the content — but within those systems.

Still, he hopes the web can evolve again.


Fishkin also discussed AI’s unreliable answers, Google reducing organic visibility, and why early SEO offered more open opportunities.

Is Google Ads Asset Studio a game changer? Not so fast


If you know anything about Google Ads Asset Studio, you’ve heard the hype:

  • “Google just killed every excuse for not running video ads.”
  • “Total game changer! You don’t need a production budget anymore.”
  • “Upload a few product images and get campaign-ready video in minutes.”

From Google Ads > Tools > Asset Studio, you can build, manage, and scale images and videos across ad formats.

The recent addition of Veo (Google’s AI video generation model) and Nano Banana Pro means you can now turn a handful of product images into full-motion video ads, for free, in no time.

Apparently, video creative is no longer a constraint. But does Asset Studio actually change the game? Read on to find out if it’s worth your time.

A tale of two Veos

Google is its own biggest cheerleader for the power of its AI images and video.

A recent Think with Google article showcases AI-generated ads for Cosmorama, a Greek travel agency. The videos are genuinely imaginative: think a flamenco dancer in the clouds, not just close-ups of headphones and sneakers.

As part of learning Asset Studio, I set out to reverse-engineer their approach. I wasn’t trying to match the quality. I just wanted a proof of concept using Nano Banana and Veo.

What I got instead was a series of dead ends.

  • No scene-level control: I’d read that prompting plays a major role in video output. But there’s actually no prompt function for scenes in Asset Studio. You select an image from your Asset Library, and that’s it. Google decides how to animate it. There’s no way to direct motion, pacing, or narrative.
  • Human performer restrictions: Video generation repeatedly failed with errors about “specific individuals.” I assumed that meant celebrities or real people. In practice, anything that resembled a human face — even AI-generated — triggered issues. The only assets that consistently worked were tightly cropped: hands, partial torsos, and abstract scenes.
  • No real audio control: The Cosmorama video featured cinematic music. In Asset Studio, you’re limited to a small set of preloaded audio. There’s no way to upload custom music or meaningfully shape the sound layer.

Veo vs. Veo in Asset Studio

After so many false starts, I returned to the article. It mentioned Nano Banana and Veo by name. It never said they were used inside Asset Studio.

When Veo 3 became available in Asset Studio, I didn’t realize how many limitations it would have, resulting in a completely different experience from the stand-alone version.

| Capability | Veo (full version) | Veo (Asset Studio) |
| --- | --- | --- |
| Control level | Advanced control (API, model tiers, audio support) | Simplified UI with fixed constraints |
| Text-to-video prompting | Full prompt control: scene, camera movement, lighting, style, subject/action | None |
| Use cases | Production-ready pipelines | Lightweight asset generation |
| Scene stitching | Multi-scene/narrative workflows (stitching and extensions) | None |
| Human generation | Supported (with policy constraints) | Limited/often restricted |

What’s available may still help you create some great 10-second motion ads, but don’t go into it expecting flamenco dancing.

Does Asset Studio actually save time and effort?

That depends: Whose time? Whose effort?

For years, paid search managers had one move for visual assets: push back.

  • “I need a vertical version.”
  • “The first five seconds need to be more engaging.”
  • “Can you remove the text overlay?”

Creative’s been a constraint, but always someone else’s constraint to solve. Asset Studio changes that. You can edit, adapt, and post YouTube video ads, even without access to the brand’s YouTube channel.

But the constraint doesn’t disappear. It just changes hands.

Expectation vs. reality

Managing creative strategy and production — even within Asset Studio — takes more time than not owning that role.

Using Asset Studio, I’ve manually adapted logos to new aspect ratios, generated variations that need further edits, and written voiceover scripts I never would have been involved in creating before.

And since production can’t exist without a strategy, I’m spending more time on that too. This is definitely game-changer territory, but maybe not the way you’d hoped:

  • If you’re a brand that would otherwise need a production team: This is likely faster and more affordable than the alternative, satisfying the velocity mandate.
  • If you’re an agency absorbing this work on top of an existing scope: You’re likely taking on a new responsibility that wasn’t priced in.

It removes a bottleneck and replaces it with ownership. If that shifts what your role actually covers, it’s worth revisiting your contract scope.

Will this get me in trouble? AI ad compliance explained

No federal laws in the U.S. prohibit the use of AI in ads. But that’s starting to change.

New York recently passed a law requiring advertisers to clearly disclose when an ad includes a “synthetic performer,” and it’s set to take effect in June 2026. (Hat tip to Sam Tomlinson for his LinkedIn post flagging this.)

Asset Studio doesn’t generate a visible watermark (such as the Gemini sparkle), and there’s no way to add an AI disclosure in Google Ads.

A couple of things worth knowing if you’re using Asset Studio specifically:

  • You’re likely covered for now. Asset Studio can’t generate content with human performers. As mentioned above, anything resembling a face consistently triggers errors. That means the New York law’s “synthetic performer” provision wouldn’t apply to what Asset Studio actually produces today.
  • There’s a watermarking layer. Google uses SynthID to invisibly tag AI-generated images. If disclosure requirements become more explicit, that infrastructure already exists to support it.

Asset Studio’s limitations may actually insulate you from the most immediate compliance concerns, but if you want to proactively disclose AI use for ethical reasons, there’s no built-in way to do that.


AI without the slop

Josh Spanier, Google’s VP of AI and Marketing Strategy, has this advice for marketers running AI-generated ads:

  • “Stop fearing ‘AI slop.’ Humans made bad ads long before robots.”

Interesting suggestion, but not all of our clients and stakeholders will be quite so enthusiastic about paying to run AI slop ads.

Fortunately, tight control of Asset Studio images and video is easier than you might think. Unlike AI Max, where AI-generated assets can run before you’ve reviewed them, Asset Studio output isn’t automatically published. From your Asset Library, you choose which assets to run. The rest never see the light of day.

What you can produce in Asset Studio is somewhat limited, but here are some of the non-sloppy features I’m most excited about.

Image fidelity: Product images that actually look like your product

Asset Studio’s Nano Banana 2 is built specifically for product integrity. Unlike general-purpose AI image tools like Midjourney, it lets you add up to five reference images and effectively “locks” the product. Only the surrounding environment is up for reinterpretation.


Trim: Cut right to the action

Client-produced video is rarely built for YouTube. Long intros and slow builds lose viewers before the message lands. Trim lets you jump straight to the action, without going back to the client for a new cut.

Voiceovers and templates: Sleeper tools

For a tool suite that promises to replace a production department, Asset Studio’s constrained voiceover and template options may seem underwhelming. Voiceover only works with audio ads or pre-existing video, and templates feel like glorified slide decks.

But the more I reviewed the landscape of YouTube video ads, the more I realized: most companies struggle with messaging more than production quality. Low budget isn’t limiting sales, but bad scripts and concepts are.

Templates and voice-overs let you test the right words faster than waiting for a new creative brief and a published video.


In one campaign I’m running, an Asset Studio video I built in under 30 minutes using a template is already showing 10x the CTR of the client’s best-performing video.

Beating the control may not be the highest bar to clear, but it’s a start.

The output isn’t the outcome

Is Asset Studio a game changer? Not yet. But I’m not sure it needs to be.

Positioning it as real competition against global creative brands sets everyone up for disappointment.

The more useful frame: it’s a tool suite that makes creative faster and more accessible for accounts that couldn’t justify a production budget before.

It does shift some of that strategy and production work onto the paid search manager who didn’t traditionally live in that role.

But the bigger question is: what does any of this actually lead to? The point of digital marketing creative isn’t to produce more assets. It’s to drive conversions and sales. That’s still what needs to be proven.

Tests are running now. I’ll share what holds up, and what doesn’t.

How to use the three-act structure for data storytelling

How to use the three-act structure for data storytelling

You’ve audited your client’s website and compiled performance data. You’ve identified what’s working, what can be improved, and your recommendations for future strategies. But how do you turn that data into a presentation that’s easy to explain and builds trust? 

Start with stories. Storytelling isn’t just for entertainment. It’s how people make sense of information. That’s what makes it so effective for data presentation. 

One of the simplest ways to structure that story is the three-act structure. It’s a familiar framework used everywhere, from Aristotle’s Poetics to Star Wars.

What is the three-act structure?

The three-act structure is a simple framework that shows how a story moves from beginning to middle to end. It shows how a protagonist moves from their starting point to a meaningful change.

Applied to data storytelling, it helps you organize your insights, position your client as the main character (the protagonist), and clearly show what happens next.

While similar to the five-point narrative arc, this framework is organized into three manageable sections: what the story is about, what happens when the main character is introduced to conflict, and how that conflict is resolved.


Act 1: The beginning

This is where the protagonist’s norm and conflict — the issue the main character is meant to face, also known as the antagonist — are established. The protagonist wants something, and the conflict is holding them back from what they want. 

An event or circumstance occurs that incites the protagonist into action. The background is established, the goals are defined, and the audience is invested in the protagonist’s success.

Act 2: The middle 

The story is developed, and tension builds. The protagonist experiences roadblocks caused by the conflict/antagonist that hinder them from their ultimate goal. Conflict arises until it can no longer be ignored, causing a pivotal moment that leads into the final act.

Act 3: The end

The narrative is affected by the change in Act 2, bringing the story to a final showdown between the protagonist and the conflict/antagonist, ultimately resulting in a resolution. The protagonist may find closure or know what path lies ahead (this may set the stage for a sequel).

The three-act structure helps you understand website data on a deeper level. It also prepares the data to be presented to your client in a way that places them at the center of the story.

Using the three-act structure to identify your data’s narrative

Why bother using the three-act structure as a framework for strategy analysis? It builds trust, showing your client that you’re going on a journey alongside them. 

You and your client are on the same team, with the same destination in mind: their success, even if the data isn’t communicating immediate results.

The application of the three-act structure to data storytelling happens in three steps.

  • Step 1: Briefly recap the existing strategies, establish previous wins, and identify the challenge currently affecting performance. This sets the baseline of Act 1.
  • Step 2: Explain the roadblocks and how they stand in the way of the overall strategy’s success. This parallels the growing conflict found in the structure’s Act 2.
  • Step 3: Recommend the next steps and how you plan to address the conflict. Show what success looks like by providing examples of how your recommendations fit the narrative of your client’s goals. This is Act 3, the resolution of the structure.


Where is your client’s story in the three-act structure?

Your client is the protagonist of their story. To work more effectively together, you need to communicate to your client that you’re invested in the story of their success. 

At the heart of each data set is the story of how your client is impacted. When you communicate what the data is saying, position yourself as the guide who helps the main character get where they need to go.

An example of applying the three-act structure framework to data analysis and presenting the data’s narrative would look like this:

| Act | Goal | Scenario | Approach |
| --- | --- | --- | --- |
| 1 | Set the stage, centering your client as the protagonist while introducing the challenge as the antagonist. | Your client’s website has received a substantial increase in organic traffic as a result of your most recent strategy, but is experiencing a high bounce rate on select pages. | Recap the strategy that led to the traffic increase and summarize the outcome from a high-level perspective. |
| 2 | Identify the conflict, potential roadblocks, and related stakes. | The high bounce rate is preventing the website from experiencing consistent traffic flow. | Explain why a high bounce rate is detrimental to overall performance, and connect the affected pages to the overall strategy. |
| 3 | Recommend strategies and outline next steps. | Your client’s high bounce rate indicates low page speed due to large images that take a long time to load. | Help the client visualize how best practices lead to better outcomes. Recommend image compression as a next step. |

The conclusion doesn’t always mean the end of the story

Finding the story in your data — and communicating it clearly — is how you build trust with clients.

Clients don’t want industry jargon. They want to feel seen, understood, and that they’ve entrusted their digital marketing success to the right person. Stories, and the connections they form, get them there.

Reaching the conclusion of your data’s narrative isn’t the end, but the beginning: the start of strategy implementation, of collaborative partnerships, and of greater results. 

When looking at data, you and your client are on a journey together. A downward trend in your data doesn’t mean your story is over, and an upward trend doesn’t mean there’s no hope for a sequel. In either case, a new journey (your next strategy) can begin.

Is your AI readiness a mirage? by AtData

AI has quickly become the most overconfident line item in the modern marketing roadmap.

Budgets are shifting. Teams are being restructured. Vendors are being evaluated almost exclusively through the lens of how “AI-powered” they appear. There is a growing assumption that once the right models are in place, performance will follow. Better targeting. Smarter segmentation. Higher conversion. More efficient spend.

It sounds almost inevitable.

But there is a quieter reality beneath the momentum. One that rarely makes it into boardroom conversations or conference keynotes.

Most organizations are not struggling to use AI. They are struggling to feed it.

And what they are feeding it is far less reliable than they think.

The uncomfortable truth about inputs

AI does not create truth. It scales whatever it is given.

If the underlying data is fragmented, outdated or manipulated, the model does not correct it. It operationalizes it. At speed. At scale. With confidence.

This is where the gap begins.

Marketers have spent years investing in data infrastructure, pipelines and orchestration layers. On paper, the foundation looks strong. There is more data available than ever before. There are more signals, more touchpoints, more attributes tied to every customer.

The assumption is that this abundance translates into readiness. But volume is not the same as validity.

A customer profile built from five disconnected identifiers is not a unified identity. An email address that exists in a CRM is not necessarily active, reachable or even tied to a real person. Engagement signals that appear recent may be the result of automated activity, privacy shielding or bot interaction.

AI models are not designed to question these inputs. They are designed to find patterns within them.

So, when the inputs are flawed, the outputs become convincingly wrong.

Identity is the fault line

At the center of this problem is identity.

Every AI-driven use case in marketing depends on the assumption that you know who you are analyzing, targeting or predicting. Whether it is propensity modeling, churn prediction, audience creation or personalization, identity is the anchor.

Yet identity remains one of the least stable components of the data stack.

Consumers move across devices, channels and environments constantly. They use different email addresses. They share accounts. They create new profiles. They disengage and re-engage in ways that are difficult to track cleanly. Over time, what appears to be a single customer often becomes a composite of partial truths.

Even within authenticated environments, identity degrades. Touchpoints go inactive. Behavioral signals lose relevance. Records persist long after the underlying reality has shifted.

Most systems are not built to continuously reconcile these changes. They capture identity at a moment in time and treat it as durable.

And AI inherits that assumption.

Which means many models are making decisions based on identities that no longer exist in the way they are represented.

The hidden impact of fraud and synthetic activity

Another layer complicates the picture further. Not all data is simply outdated. Some of it is intentionally misleading.

Fraud is evolving alongside marketing technology. The barriers to creating accounts, generating engagement, or exploiting promotional systems have decreased significantly. Automated tools and AI itself have made it easier to simulate legitimate behavior at scale.

Fake accounts are not always obvious. They can pass basic validation checks. They can engage with content. They can move through funnels in ways that resemble real users.

From a model’s perspective, they are indistinguishable unless additional context is applied.

This creates a subtle but meaningful distortion.

Acquisition models begin to optimize toward patterns that include fraudulent behavior. Lifecycle strategies adapt to engagement that is not human. Performance metrics improve on the surface while underlying efficiency erodes.

The result is a feedback loop where AI reinforces the very issues it should be helping to solve.

And because the outputs look sophisticated, the problem becomes harder to detect.

Why traditional data strategies fall short

Most organizations are aware that data quality matters. Significant effort goes into cleansing, deduplication and normalization. Records are standardized. Fields are filled. Duplicates are merged.

These steps are necessary, but they are not sufficient. Clean data is not the same as accurate data.

A perfectly formatted email address can still be inactive. A deduplicated profile can still represent multiple individuals. A normalized dataset can still be missing critical context about behavior, risk or authenticity.

Traditional data practices tend to focus on structure. AI requires substance.

It requires an understanding of whether an identity is real, whether it is active, whether it is behaving in ways that align with genuine consumer patterns.

Without that layer, even the most sophisticated models are operating on incomplete information.

The illusion of readiness

This is how the mirage takes shape.

Dashboards show high match rates. Databases contain millions of records. Models produce outputs that appear precise. Campaigns are executed with increasing automation.

From the outside, it looks like progress.

But underneath, there are unresolved questions.

  • How many of those identities are actually reachable today?
  • How many represent real individuals versus synthetic or low-quality accounts?
  • How often are behavioral signals refreshed and validated?
  • How much of the model’s learning is influenced by noise?

These questions are no longer rare. They are foundational.

And yet they are often overlooked because they sit below the level where most AI initiatives begin.

A different way to think about AI readiness

True AI readiness does not start with model selection. It starts with input integrity.

It requires a shift in focus from how much data you have to how much of it you can trust.

That trust is built on a few critical dimensions.

First, identity accuracy. Not just the ability to match records, but to ensure that those records reflect real, current individuals. This includes understanding when identities change, when they become inactive and when they should no longer be used as the basis for decisioning.

Second, activity validation. Knowing that a signal occurred is not enough. You need confidence that it represents meaningful human behavior. This is where distinguishing between genuine engagement and automated or manipulated activity becomes essential.

Third, risk awareness. Every dataset contains some level of fraud or abuse. The question is whether it is visible and accounted for. Without that visibility, models will absorb and propagate those patterns.
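The three trust dimensions above amount to gates applied before a record ever reaches a model. A minimal sketch, assuming a hypothetical record shape (the field names and thresholds here are illustrative, not from any specific platform):

```python
from datetime import datetime, timedelta

def model_ready(record, now, max_inactive_days=180, max_risk_score=0.7):
    """Apply the three trust dimensions as pre-modeling gates.

    1. Identity accuracy: the identity is verified and still current.
    2. Activity validation: the last signal is recent and validated as human.
    3. Risk awareness: a fraud/abuse score is present and below threshold.
    """
    if not record.get("identity_verified"):
        return False
    last_seen = record.get("last_activity")
    if last_seen is None or now - last_seen > timedelta(days=max_inactive_days):
        return False
    if not record.get("activity_validated"):
        return False
    risk = record.get("risk_score")
    return risk is not None and risk <= max_risk_score

now = datetime(2026, 1, 1)
fresh = {"identity_verified": True,
         "last_activity": now - timedelta(days=30),
         "activity_validated": True,
         "risk_score": 0.2}
stale = dict(fresh, last_activity=now - timedelta(days=400))
```

Note the asymmetry: a missing risk score fails the gate, because "not visible" is not the same as "not risky."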

When these elements are in place, AI begins to operate on a different plane. Predictions become more reliable. Segments become more actionable. Optimization aligns more closely with real outcomes.

Where this creates advantage

Organizations that address these foundational issues are creating a structural advantage.

They are able to suppress low-value or risky identities before they enter the modeling process. They can prioritize outreach to individuals who are both reachable and likely to engage. They can detect and mitigate fraudulent behavior before it distorts performance metrics.

Over time, this compounds.

Models trained on higher-quality inputs learn faster and generalize better. Campaigns become more efficient. Measurement becomes more trustworthy.

Perhaps most importantly, decision-making becomes more grounded in reality.

This is where AI begins to deliver on its promise.

The path forward

There is no question that AI will continue to reshape marketing. The capabilities are real, and the pace of innovation is not slowing down.

But the idea that AI alone will solve underlying data challenges is a misconception. If anything, it raises the stakes.

Because AI does not just expose weaknesses in your data. It amplifies them.

The organizations that recognize this early are taking a more deliberate approach. They are investing in understanding their identity layer. They are prioritizing the validation of activity and the detection of risk. They are treating data not as a static asset, but as a dynamic system that requires continuous refinement.

They are not asking, “How do we apply AI to our data?”

They are asking, “Is our data worthy of AI?”

It is a more difficult question. It requires a deeper level of introspection. It challenges assumptions that have been in place for years.

But it is also the question that separates real readiness from the illusion of it.

And in a landscape where everyone is accelerating toward AI, clarity at the foundation is what ultimately determines who moves forward, and who simply moves faster in the wrong direction.

The latest jobs in search marketing

Looking to take the next step in your search marketing career?

Below, you will find the latest SEO, PPC, and digital marketing jobs at brands and agencies. We also include positions from previous weeks that are still open.

Newest SEO Jobs

(Provided to Search Engine Land by SEOjobs.com)

  • Job Description American Humane Society (AHS) is seeking a dynamic and strategic Vice President, Marketing to steward and elevate the integrity of both the American Humane Society (AHS) and Global Humane Society (GHS) brands. This leader will drive the development and execution of integrated marketing strategies that advance critical organizational priorities, strengthen national leadership as […]
  • Job Description Council & Associates is one of Atlanta’s fastest-growing PI firms — handling serious cases across truck accidents, premises liability, daycare injury, negligent security, and wrongful death. The firm is led by a nationally recognized trial attorney and built on a brand that goes beyond the courtroom into the community. We need a marketer […]
  • Job Description Content Marketing Specialist Malta Dynamics | Malta, OH (Hybrid) About the Role Malta Dynamics is seeking a Content Marketing Specialist to own the execution and consistency of Malta Dynamics’ brand voice across all channels. This role is responsible for producing, publishing, and optimizing high-quality content that drives inbound leads, supports sales, and reinforces […]
  • We are seeking an intermediate-level SEO Specialist for Hive Digital, a cutting-edge and award-winning agency that prides itself on helping change the world for the better. We offer a highly collaborative team that works together to deliver the best possible outcomes for our clients in a fast-paced, fun work environment. Are you ready to bring […]
  • About The Company goop is a lifestyle platform dedicated to exploration, curation, and groundbreaking conversation. From its award-winning beauty and fashion lines to its expansive editorial lens, goop invites women to embrace the process of becoming, and to discover deep joy in the pursuit of pleasure, beauty, and growth in all phases of life. Gwyneth […]
  • Job Description LK Distribution is a leading distributor of several brands and products offered on both e-commerce and wholesale. Specialized in the Alternative Product category in the CBD/Hemp Industry ranging from a large category of products. We are seeking a creative and dynamic individual with experience with independent online storefronts for each of our brands […]
  • Job Description Benefits: 401(k) Paid time off Dental insurance Health insurance Vision insurance A Digital Marketing Specialist at a leading real estate company requires high-energy, creative, and data-driven team member who helps elevate our brand and our agents’ digital presence. As the Digital Marketing Specialist, you will be the “engine room” of our online strategy. […]
  • Director, Global Digital Marketing, Integrated Marketing Communication (IMC) Team Position Overview The Director of Digital Marketing is at the center of 10x Genomics’ digital marketing engine, delivering measurable business impact and innovating across channels to ensure leadership in scientific markets. This position reports to the Vice President of Integrated Marketing Communications as is responsible for […]
  • Job Description Digital Marketing Specialist OURCU is looking for a Digital Marketing Specialist who is equal parts data-driven strategist and collaborative teammate. This role is ideal for someone excited to build and optimize HubSpot from the ground up, create meaningful campaigns, and clearly demonstrate the why behind marketing performance. If you love blending creativity with […]
  • This role offers you the opportunity to deepen your SEO expertise and develop your leadership skills within a tight-knit agency team. Sr. SEO Analysts lead our client relationships and bring our outcome-driven strategies to life. They are responsible for delivering value and results to our clients through their high-quality work, commitment to building deep SEO […]

Newest PPC and paid media jobs

(Provided to Search Engine Land by PPCjobs.com)

  • About Us: Naadam is redefining luxury by delivering the world’s finest cashmere at an accessible price. Founded in 2013, with a vision to bring premium, sustainably made cashmere to the everyday wardrobe, we’ve built a brand that values innovation, transparency, and connection with our customers. At Naadam, we are dedicated to pushing limits, nailing the […]
  • About the Role You’ll play a key role in driving Kashable’s customer activation, acquisition and retention. You’ll begin owning the execution and performance of one paid media channel and as you demonstrate results expand your scope into broader strategic decision-making and greater channel ownership. We’re looking for someone who combines strong strategic thinking with hands-on […]
  • Job Description Job Description Our client, an elite national Am Law firm, is seeking a Regional Marketing Specialist to support its New York office. This role offers the opportunity to work closely with firm leadership to ensure local marketing initiatives align seamlessly with firmwide and practice‐specific priorities. You will lead marketing efforts for the New […]
  • Job Description Job Description Salary: $85K-$110K Mason Interactive | Hybrid (3 days in office) | $85K-$110K Who We Are Mason Interactive is a 30-person full-service digital agency with offices in Brooklyn and Charlotte. We work with clients in education, fashion, wellness, and luxury across all channels: paid search, paid social, SEO, programmatic, creative, and affiliate. […]
  • A property management firm in New York is seeking a Leasing Coordinator to manage marketing, leasing, and renewal strategies. This position involves performing all activities related to leasing to new residents, ensuring resident satisfaction, and executing lease renewals. The ideal candidate will be responsible for conducting tours, processing applications, and developing marketing plans. This role […]

Other roles you may be interested in

SEO Manager, Veracity Insurance Solutions, LLC, (Remote)

  • Salary: $100,000 – $135,000
  • Lead, coach, and develop a high-performing team of SEO Specialists
  • Set clear expectations, quality standards, workflows, and growth paths across the team

Senior SEO Manager, Lunar Solar Group (Remote)

  • Salary: $80,000 – $100,000
  • Lead strategy, execution, and deliverables across 4–6 client accounts independently
  • Own end-to-end SEO strategy and execution across all core deliverables and processes

Performance Marketing Manager, Recruitics (Hybrid, Lafayette, CA)

  • Salary: $70,000 – $90,000
  • Work in platform to configure campaigns – set up budgets, targeting, creative, and run dates
  • Monitor ongoing performance to identify areas of opportunity

Marketing, Social Media & PR Manager, PARTNERS Staffing (Fort Myers, FL)

  • Salary: $75,000 – $85,000
  • Develop and execute integrated marketing campaigns for shows, content releases, events, and brand initiatives
  • Identify target audiences and create strategies to grow reach and engagement

Local Search & Listings Manager, TurnPoint Services (Remote)

  • Salary: $80,000 – $90,000
  • Own the strategy and governance for local search visibility across all business locations.
  • Develop optimization frameworks and standards for Google Business Profiles and other listing platforms.

Senior Branding Manager, rednote (Hybrid, New York, US)

  • Salary: $228,000 – $320,000
  • Define and drive rednote’s global brand strategy, shaping its positioning across key international markets
  • Lead integrated marketing initiatives end-to-end, ensuring alignment across creative development and media execution

Performance Marketing Manager, Hirewell (Remote)

  • Salary: $85,000 – $95,000
  • Paid Search: Lead daily execution and management of Google Ads. This is a “hands-on” role requiring deep platform expertise.
  • Multi-Channel Management: Oversee and optimize campaigns across Meta, LinkedIn, and Programmatic channels.

Senior Paid Media Manager, Brightly Media Lab (Remote)

  • Salary: $70,000 – $100,000
  • Directly build, manage, and optimize campaigns within Google Ads, Microsoft Ads, and Facebook Ads (Meta).
  • Serve as the lead point of contact for your book of clients, taking full ownership of their success and growth.

Marketing Specialist, The Bradford group (Hybrid, The Greater Chicago area)

  • Salary: $60,000 – $62,000
  • Launch and manage paid social campaigns primarily on Meta platforms.
  • Oversee daily budgets and performance optimizations against revenue and ROI goals, using data-driven insights to continuously improve results.

Paid Search Specialist, Maui Jim Sunglasses (Peoria, IL)

  • Salary: $65,000 – $70,000
  • Plan, set up, and manage paid search, display, and shopping campaigns on Google Ads.
  • Manage and optimize advertising budgets to achieve revenue and efficiency targets.

Note: We update this post weekly. So make sure to bookmark this page and check back.

Advertisers are testing ChatGPT ads — but uncertainty remains high

OpenAI is emerging as a new advertising channel, but early advertiser sentiment is mixed as brands grapple with limited data, unclear performance, and a rapidly evolving product.

Driving the news. Two months after launching ads in ChatGPT, advertisers are experimenting — but still lack clear measurement tools and performance benchmarks.

  • Early campaigns are largely impression-based, with little insight into outcomes.
  • CPMs have reportedly been high, with initial minimum spends in the six figures.
  • Some advertisers say the product feels early and slow to mature.

The vibe check. According to Ad Age reporting, advertiser sentiment sits somewhere between cautious optimism and frustration.

  • Optimism stems from ChatGPT’s position as a leading consumer AI platform.
  • Frustration centers on lack of transparency, targeting, and reporting.

Why we care. This report highlights both the opportunity and risk of investing in AI ad platforms early. While ChatGPT offers access to a fast-growing, high-intent audience, the lack of measurement and evolving product features make it a challenging channel to justify at scale.

It’s a signal to test thoughtfully and start building an AI strategy without overcommitting budget too soon.

The bigger picture. OpenAI’s ad push comes as it juggles multiple priorities — from AI development to enterprise growth — while facing rising competition from Google and Anthropic.

Some in the industry see OpenAI as having “cast too wide a net,” experimenting across video, commerce, and other products before refocusing. Its Instant Checkout commerce feature was quietly pulled back, while its video ambitions have lost ground to competitors.

How ads actually show up. Early tests suggest ads may influence user journeys — but not always directly.

In one example, a sponsored retailer appeared more prominently in recommendations, even when multiple options were listed. Still, platforms maintain that ads do not directly alter core answers.

Yes, but. There’s ongoing tension between consumer trust (keeping answers unbiased), and advertiser goals (increasing visibility and influence).

That balance will likely shape how AI ads evolve.

What marketers should do now. Experts say brands don’t need to rush in. Large brands may benefit from early testing, while others can focus on strategy development as the space matures. The priority is understanding how AI fits into broader media and search behavior.

The bottom line. ChatGPT ads are still in their infancy — promising, but unproven — leaving advertisers to experiment carefully while waiting for the platform to catch up to expectations.

Google Ads API to require multi-factor authentication

Google is tightening security across its ads ecosystem, requiring multi-factor authentication (MFA) for API users — a move that could impact how developers and advertisers access and manage accounts.

Driving the news. Google will begin rolling out mandatory MFA for the Google Ads API starting April 21, with full enforcement expected over the following weeks.

The update applies to users generating new OAuth 2.0 refresh tokens through standard authentication workflows.

What’s changing. Users will now need to verify their identity with a second factor — such as a phone or authenticator app — in addition to their password when authenticating.

  • Existing OAuth refresh tokens will continue to work without interruption.
  • New authentications will require MFA by default.
  • Users without 2-step verification enabled will be prompted to set it up.

Why we care. This change affects how you access and manage Google Ads data through APIs and connected tools. While it improves account security and reduces the risk of unauthorized access, it may also require updates to workflows, especially for teams that regularly generate new credentials. Preparing early can help avoid disruptions.

Who’s affected. The change primarily impacts apps and workflows using user-based authentication.

  • User authentication workflows: Will require MFA for new token generation.
  • Service account workflows: Not affected, and recommended for automated or offline use cases.

The requirement also extends beyond the API to tools like Google Ads Editor, Scripts, BigQuery Data Transfer, and Data Studio.

The big picture. As ad platforms handle more sensitive data and automation, security is becoming a bigger priority — especially as API access expands across teams, tools, and integrations.

Yes, but. While the update improves protection against unauthorized access, it may add friction for teams that frequently generate new credentials or rely on manual authentication flows.

The bottom line. Google is making MFA standard for Ads API access, signaling a broader shift toward stricter security across advertising tools and workflows.

OpenAI begins rolling out ads in select markets

OpenAI is continuing its push into ad-supported monetization — a strategy it began earlier this year — by expanding ads to more countries while keeping premium tiers ad-free.

Driving the news. OpenAI is starting to roll out ads for users on Free and Go plans in Australia, New Zealand, and Canada.

  • The rollout applies only to lower-tier plans.
  • Paid tiers — including Pro, Business, Enterprise, and Education — will remain ad-free.

Why we care. This opens up a new and rapidly growing channel to reach users inside AI-driven experiences. As OpenAI expands ads into more markets, it signals early opportunities to test and understand how advertising works in conversational interfaces. It could also shape how future search and discovery happens, making it important to get in early.

The big picture. AI platforms have largely avoided traditional advertising so far, relying instead on subscriptions and enterprise deals.

This move suggests OpenAI is:

  • testing new revenue streams,
  • exploring how ads fit into conversational interfaces,
  • and balancing monetization with user experience.

Yes, but: OpenAI is clearly drawing a line between free and paid experiences — signaling that ad-free usage will remain a premium benefit.

The bottom line: OpenAI is cautiously entering the ads business, starting with limited markets and tiers as it experiments with how advertising works inside AI-driven products.

Google Ads tests direct Google Tag Manager integration for conversion setup

Google may be streamlining one of the most error-prone parts of campaign setup — conversion tracking — by reducing the need for manual tag implementation.

Driving the news. Google Ads is testing a new “Set up in Google Tag Manager” option within its conversion setup flow, according to screenshots shared by Google Ads Specialist Natasha Kaurra.

The feature appears alongside existing installation methods and allows advertisers to push conversion tracking setups directly into Google Tag Manager.

What’s new. Instead of copying conversion IDs and labels between platforms, advertisers can click the new button to open a pre-filled tag setup inside GTM.

That means:

  • fewer manual steps,
  • less room for implementation errors,
  • and faster deployment across accounts.

Why we care. Conversion tracking is critical to measuring performance, and this update makes it faster and less error-prone to implement. By reducing manual steps between Google Ads and Google Tag Manager, it can help ensure data is set up correctly from the start. That means more reliable reporting and better optimization decisions.

How it works. Based on early screenshots, the flow prompts users to select a GTM container and then surfaces a suggested tag configuration ready to publish.

This could be especially useful for agencies managing multiple clients, teams working across multiple containers, or advertisers with complex tagging setups.

The bottom line. It’s a small UI change with outsized impact — making it easier for advertisers to get conversion tracking right the first time.

First seen. This update was shared by PPC News Feed who credited Google Ads Specialist Natasha Kaurra for spotting it.

Why bottom-of-funnel content is winning in AI search

Google search traffic is dropping. If you’ve spent years building organic strategies, watching it happen in real time is uncomfortable. But it’s also clarifying.

I started seeing the shift across SaaS clients. Pages that had driven steady traffic for years — educational, top-of-funnel (TOFU) content — were losing ground. Not because the content got worse, but because users no longer needed to click. AI Overviews were doing the job for them.

That forced a decision: keep defending the old model or adjust the strategy. I chose to adjust.

What became clear pretty quickly is that while informational content is losing clicks, bottom-of-funnel (BOFU) content is holding up — and in many cases, driving more qualified leads.

This isn’t just a trend. It’s a shift in how value is created through search.

The pivot: Making BOFU the priority

My approach now is straightforward: 60% to 80% of output goes toward bottom- and mid-funnel content, with the remainder covering supporting TOFU topics that fill content cluster gaps or address timely industry conversations.

When I pitched this shift to clients, the conversation was easier than I expected. I put it simply: 

  • “You have a choice between traffic and leads. If you want leads, here’s how we get there, even if it means less traffic.” 

I was upfront that overall traffic might dip. But whoever shows up is more likely to convert. That framing landed. Nobody argued for traffic when the alternative was a qualified pipeline.

The most effective bottom-of-funnel pieces are comprehensive comparison and listicle-style guides targeting high-intent queries.

One of the best examples is a guide to the best time-tracking software for construction. Before writing it, I built a reusable review methodology for the client. The guide called out pros and cons honestly, including the client’s own product, because that’s what builds credibility with readers evaluating their options.

It was factual, specific, and written for someone in the middle of a purchase decision, not someone casually browsing.

Within weeks, it became our most cited article in LLM responses. It’s now a cornerstone piece, regularly appearing in conversion paths and driving qualified leads. 

That single piece delivered more pipeline impact than a dozen informational posts from the previous quarter because it answers the question a buyer is actually asking, not the one that gets the most search volume.

Dig deeper: How to align your SEO strategy with the stages of buyer intent

TOFU isn’t dead. It just has a different job now.

I see many SEOs treating this as an either-or conversation. To be clear, I haven’t eliminated TOFU content. I’ve repositioned it.

TOFU’s job now is to build topical authority that helps BOFU pages rank. It’s the supporting structure, not the primary event. Guides and educational content:

  • Support the content cluster.
  • Establish expertise in Google’s eyes.
  • Pass internal link equity to BOFU pages.

For my clients’ content, we’ve revisited the best-performing TOFU pieces and made them work harder.

We added sections that connect the information directly to the client’s product, supported by screenshots and subject matter expert quotes. 

We also redesigned calls to action to match the context and placed them throughout the content, rather than just at the end. 

For several clients, this led to a measurable increase in visitors navigating to demo request pages, without changing the informational intent.

The key distinction: You should still produce a meaningful volume of TOFU content, but make sure it has a unique angle — something not widely known or discussed from your perspective. 

In a sea of AI-generated content, that specificity is what drives performance.

Why this works in AI-driven search

People arriving from AI platforms show up with context. They’ve already explored the problem. They’re evaluating options. This aligns with how AI Overviews are applied in search results.

AI Overviews still appear far more often for informational queries than commercial ones. Ecommerce searches trigger them far less frequently, which helps protect bottom-of-funnel content — at least for now, though coverage for commercial and transactional queries is rising quickly.

That shift in behavior changes what content performs. Informational content loses value when answers are summarized upfront, while decision-stage content becomes more useful because it helps users compare options, validate choices, and move forward.

That’s why bottom-of-funnel content holds up. It aligns with where the user is in the process, not just what they searched for.

The time tracking software comparison piece I mentioned is a clear example. It’s consistently cited when users ask about construction time tracking tools. That visibility doesn’t always show up as a click, but it appears later — in branded search, direct visits, and ultimately, leads.

The attribution problem you need to accept

Here’s the challenge: bottom-of-funnel content’s value is systematically underreported in traditional analytics.

Someone sees your solution mentioned in a ChatGPT response, researches your brand, and converts later through a direct visit or branded search. In GA4, that journey often shows up as direct traffic. It looks like SEO didn’t contribute — but it did.

That’s why I’ve shifted clients away from traffic as the primary success metric and toward a broader set of signals, including:

  • Brand search volume trends.
  • Citation frequency in LLM platforms.
  • Direct traffic movement after content publication.
  • Conversion rate changes, even when traffic stays flat.

The ROI of BOFU and LLM-optimized content is higher than what dashboards show. If you’re evaluating performance based only on immediate click attribution, you’re missing where SEO is actually creating value.

Your practical playbook for shifting to BOFU

Here’s how to turn this shift into a practical content strategy:

  • Audit your existing content for BOFU gaps: Before creating anything new, identify which high-intent, purchase-stage queries you have zero coverage on. These are often the easiest wins.
  • Build comparison content with real methodology: Create a review framework you can reuse. Be honest about pros and cons, including your client’s product. Credibility is what makes these pieces rank and get cited.
  • Retrofit your best TOFU pieces: Add product-connected sections, contextual CTAs, and subject matter expert input. Make the informational content do conversion work, too.
  • Build LLM tracking into GA4 now: A regex-based segment capturing ChatGPT, Perplexity, Claude, and other AI referrers gives you visibility into a channel most clients have zero data on.
  • Reset the success metrics conversation with clients: Traffic volume is increasingly a vanity metric. Lead quality, branded search growth, and conversion rate are what actually matter in this environment.

AI Overviews have fundamentally changed the economics of informational content.

But that disruption creates a strategic opening. Bottom-of-funnel content has always converted better. AI is simply removing the incentive to keep over-investing in content that drives traffic without driving revenue.

The window to shift strategy is still open. It won’t stay that way.

AI traffic converts better than non-AI visits for U.S. retailers: Report

Traffic from AI sources increased 393% year-over-year in Q1 and 269% in March. But the real surprise? AI traffic is converting better than last year.

  • AI-driven visits converted 42% better than non-AI traffic in March. A year ago, AI traffic was 38% less likely to result in a purchase.

By the numbers. Traffic from AI sources increased engagement by 12%, time on site by 48%, and pages per visit by 13%. Adobe also surveyed consumers and found that:

  • 39% have used AI for shopping. Of those, 85% said it improved the experience.
  • 66% believe AI tools provide accurate results.

What they’re saying. According to Vivek Pandya, director of Adobe Digital Insights:

  • “Notably, AI traffic continues to convert better (visits that result in purchases) than non-AI traffic, which covers channels such as paid search and email marketing.”

Yes, but. While consumer adoption is up, and traffic, engagement, and conversions are growing, many retail sites still aren’t fully optimized for AI visibility, especially on product pages, according to Adobe.

Why we care. Until now, reports have been mixed on whether AI traffic is better, equal to, or worse than organic search traffic (see our Dig deeper resources below). That may be changing, as we expected it would. Like generative AI, AI shopping today is as bad as it will ever be, meaning this channel’s value will only increase.

About the data. Adobe’s findings are based on direct transaction data from more than 1 trillion visits to U.S. retail websites. The company also surveyed more than 5,000 U.S. consumers to understand how they use AI to shop.

The report. Adobe report: U.S. retailers see surge in AI traffic, but many websites are not entirely readable by machines.

Dig deeper.

U.S. search ad revenue reached $114.2 billion in 2025

Search remained the largest force in digital advertising in 2025. However, its growth slowed as total U.S. ad revenue climbed to a record $294.6 billion.

Search still dominates. Search generated $114.2 billion, accounting for 38.8% of total digital ad revenue, according to the latest IAB/PwC Internet Advertising Revenue Report. But growth slowed to 11%, down from 15.9% in 2024, as advertisers shifted more budget into faster-growing formats and as AI began reshaping how users discover information.

Overall market growth accelerated as the year went on. It climbed from 12.2% in Q1 to 15.4% in Q4. The fourth quarter alone brought in $85 billion, even without major cyclical events like the U.S. election or the Olympics, which boosted 2024.

Video, social, and programmatic all grew faster than search. Digital video revenue jumped 25.4% to $78 billion, making it the fastest-growing major format. Social rose 32.6% to $117.7 billion, while programmatic increased 20.5% to $162.4 billion — continuing the shift toward automated, performance-driven buying.

The market is more concentrated. The top 10 companies now control 84.1% of U.S. digital ad revenue, up from 80.8% a year ago, reflecting the advantages of scale, first-party data, and AI-driven platforms.

AI is no longer just a tool layered onto campaigns. AI is increasingly shaping discovery, media buying, and measurement as consumer journeys fragment across platforms.

Why we care. Search still delivers the most scale, but it’s no longer growing the fastest. More budget is flowing into video, social, and programmatic, where automation and AI are more deeply embedded. That means more competition for budget, less visibility into performance, and a greater need to prove incrementality.

About the data. The IAB/PwC report is based on U.S. internet advertising revenue data compiled across the industry.

The report. Internet Advertising Revenue Report Full-year 2025 results (PDF)

No-JavaScript fallbacks in 2026: Less critical, still necessary

Google can render JavaScript. That’s no longer up for debate. But that doesn’t mean it always does — or that it does so instantly or perfectly.

Since Google’s 2024 comments suggesting it renders all HTML pages, many developers have questioned whether no-JavaScript fallbacks are still necessary. Two years later, the answer is clearer and more nuanced.

Google’s stance on JavaScript rendering

In July 2024, Google sparked debate during an episode of Search Off the Record titled “Rendering JavaScript for Google Search.” Discussing how Google decides which pages to render, Martin Splitt asked:

  • “If it’s so expensive, how do we decide which page should get rendered and which one doesn’t?” 

Zoe Clifford, from Google’s rendering team, replied: 

  • “We just render all of them, as long as they’re HTML, and not other content types like PDFs.”

That comment quickly led developers, especially those building JavaScript-heavy or single-page applications, to argue that no-JavaScript fallbacks were no longer necessary.

Many SEOs weren’t convinced. The remark was informal, untested at scale, and lacking detail. It wasn’t clear:

  • How rendering fit into Googlebot’s process.
  • Whether pages were queued for later execution.
  • How the system behaved under resource constraints.
  • Whether Google might fall back to non-rendered crawling under load.

Without clarity on timing, consistency, and limits, removing fallbacks entirely still felt risky.


What Google’s documentation actually says

Google’s documentation now gives us a much clearer picture of how JavaScript rendering actually works. Let’s start with the “JavaScript SEO basics” page:

What Google says:

  • “Googlebot queues all pages with a 200 HTTP status code for rendering, unless a robots meta tag or header tells Google not to index the page. The page may stay on this queue for a few seconds, but it can take longer than that. Once Google’s resources allow, a headless Chromium renders the page and executes the JavaScript. Googlebot parses the rendered HTML for links again and queues the URLs it finds for crawling. Google also uses the rendered HTML to index the page.”

Google clearly states that JavaScript rendering doesn’t necessarily happen on the initial crawl. Once resources allow, a headless browser is used to execute the JavaScript. 

Googlebot likely won’t click on all JavaScript elements, so this probably only includes scripts that don’t require user interactions to fire.

This is important because it tells us Google may make some basic determinations before JavaScript is rendered, via subsequent execution queues. 

If content is generated behind elements (content tabs, etc.) that Google doesn’t click, it likely won’t be discovered without no-JavaScript fallbacks.

Looking at Google’s “How Search works” documentation:

The language is much simpler. Google states it will attempt, at some point, to execute any discovered JavaScript. There’s nothing here that directly contradicts what we’ve seen so far in other Google documentation.

On March 31, Google published a post titled “Inside Googlebot: demystifying crawling, fetching, and the bytes we process,” which further clarifies JavaScript crawling.

The notes on partial fetching are particularly interesting. Google will only crawl up to 2MB of HTML. If a page exceeds this, Google won’t discard it entirely, but instead examines only the first 2MB of returned code.

Google explicitly states that extreme resource bloat, including large JavaScript modules, can still be a problem for indexing and ranking. 

If your JavaScript approaches 2MB and appears at the top of the page, it may push HTML content far enough down that Google won’t see it. The 2MB limit also applies to individual resources pulled into a page. If a CSS file, image, or JavaScript module exceeds 2MB, Google will ignore it.
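The 2MB cap described above is easy to sanity-check. This is a minimal sketch, assuming the 2MB figure from the article; the page snippets and the "Critical content" marker are illustrative stand-ins, not real pages.

```python
# Sketch: check whether critical markup appears within the first 2MB of a
# page's HTML -- the fetch cap described above. The pages and the marker
# string below are illustrative examples, not real URLs.

FETCH_CAP = 2 * 1024 * 1024  # 2MB, per the Googlebot post cited above


def within_fetch_cap(html_bytes: bytes, marker: bytes) -> bool:
    """Return True if `marker` appears inside the first 2MB of the payload,
    i.e. within the portion a capped fetch would actually examine."""
    return marker in html_bytes[:FETCH_CAP]


# A page where a bloated inline script pushes the content past the cap.
bloated_page = (
    b"<html><head><script>" + b"x" * (2 * 1024 * 1024) + b"</script></head>"
    b"<body><h1>Critical content</h1></body></html>"
)
lean_page = b"<html><body><h1>Critical content</h1></body></html>"

print(within_fetch_cap(bloated_page, b"Critical content"))  # False
print(within_fetch_cap(lean_page, b"Critical content"))     # True
```

The same check works per resource: run it against any CSS or JavaScript file that might be approaching the cap.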

We’re beginning to see that Google’s claim that it renders all pages comes with important caveats. 

In practice, it seems unlikely that a page with no consideration for server-side rendering (SSR) or no-JavaScript fallbacks would be handled optimally. This highlights why it’s risky to take comments from Googlers at face value without following how the details evolve over time.

The question we opened with is also evolving. It’s less “Do I need blanket no-JavaScript fallbacks in 2026?” and more “Do I still need critical-path fallbacks and resilient HTML within my application?”

Google’s recent search documentation updates add more context:

Google has recently softened its language around JavaScript. It now says it has been rendering JavaScript for “multiple years” and has removed earlier guidance that suggested JavaScript made things harder for Search. 

It also notes that more assistive technologies now support JavaScript than in the past. 

Within that same documentation, Google still recommends pre-rendering approaches, such as server-side rendering and edge-side rendering.

So while the language is softer, Google isn’t suggesting developers can ignore how JavaScript affects SEO.

Looking again at the December 2025 updates:

Google states that non-200 pages may not receive JavaScript execution. This suggests no-JavaScript fallbacks for internal linking within custom 404 pages may still be important.

Google also notes that canonical tags are processed both before and after JavaScript rendering. If source HTML canonicals and JavaScript-modified canonicals don’t match, this can cause significant issues. Google suggests either omitting canonical directives from the source HTML so they’re only evaluated after rendering, or ensuring JavaScript doesn’t modify them.

These updates reinforce an important point: even as Google becomes more capable at rendering JavaScript, the initial HTML response and status code still play a critical role in discovery, canonical handling, and error processing.

Dig deeper: Google removes accessibility section from JavaScript SEO section

What the data shows

JavaScript rendering is introducing new inconsistencies across the web, according to recent HTTP Archive data:

We can see that since November 2024, the percentage of crawled pages with valid canonical links has dropped.

Via the HTTP Archive’s 2025 Almanac:

About 2-3% of rendered pages exhibit a “changed” canonical URL, something Google’s documentation explicitly states can be confusing for its indexing and ranking systems. That 2-3% doesn’t explain the larger drop in valid canonical deployment since November 2024.

Other factors are likely at play, such as the adoption of new CMS platforms that don’t properly handle canonicals. The rise of vibe-coded websites using tools like Cursor and Claude Code may also be contributing to these issues across the web.

In July 2024, Vercel published a study to help demystify Google’s JavaScript rendering process:

It analyzed more than 100,000 Googlebot fetches and found that all resulted in full-page renders, including pages with complex JavaScript. However, 100,000 fetches is a relatively small sample given Googlebot’s scale. 

The study was also limited to sites built on specific frameworks, so it’s unwise to assume Google always renders pages perfectly. It’s also unclear how deeply those renders were analyzed.

It does suggest that Google attempts to fully render most pages it encounters. Broadly speaking, Google can generate JavaScript-modified renders, but the quality of those renders is still up for debate. As noted earlier, the 2MB page and resource limits still apply.

Because this study dates to mid-2024, any contradictions with Google’s updated 2025–2026 documentation should take precedence.

Vercel also published a notable finding:

  • “Most AI crawlers don’t execute JavaScript. We tested the major ones (ChatGPT, Claude, and others), and the results were consistent: none of them render client-side content. If your Next.js site ships critical pages as JavaScript-dependent SPAs, those pages are inaccessible to the systems shaping how people discover information.”

So even if Google is far more capable with JavaScript than it used to be, that’s not true across the broader web ecosystem. Many systems still rely on HTML-first delivery. That’s why you shouldn’t rush to remove no-JavaScript fallbacks — they may still be critical to your future visibility.
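Checking what a non-rendering crawler "sees" only requires the raw HTML response, before any JavaScript runs. A minimal sketch; the page snippets and phrases are illustrative, and in practice you would fetch your own URLs.

```python
# Sketch: audit whether critical content is visible to crawlers that never
# execute JavaScript -- i.e., whether it exists in the raw HTML response.

def visible_without_js(raw_html: str, critical_phrases: list[str]) -> list[str]:
    """Return the critical phrases missing from the raw (unrendered) HTML."""
    return [p for p in critical_phrases if p not in raw_html]


# A client-side-rendered shell: content only appears after JS executes.
spa_shell = '<html><body><div id="root"></div><script src="/app.js"></script></body></html>'
# A server-rendered page: content is in the initial response.
ssr_page = "<html><body><h1>Winter Sale</h1><p>Free shipping over $50</p></body></html>"

print(visible_without_js(spa_shell, ["Winter Sale"]))  # ['Winter Sale'] is missing
print(visible_without_js(ssr_page, ["Winter Sale"]))   # [] -- nothing missing
```

Anything the first call reports as missing is invisible to the AI crawlers Vercel tested, regardless of how well Google renders the page.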

Cloudflare’s 2025 review is also worth noting:

Cloudflare reported that Googlebot alone accounted for 4.5% of HTML request traffic. While this doesn’t directly explain how Google handles JavaScript, it does highlight the scale at which Google continues to crawl the web.

Dig deeper: How the DOM affects crawling, rendering, and indexing

No-JavaScript fallbacks in 2026

The question we set out to answer was whether no-JavaScript fallbacks are required in 2026.

Google is far more capable with JavaScript than in previous years. Its documentation shows that pages are queued for rendering, and that JavaScript is executed and used for indexing. For many sites, heavy reliance on JavaScript is no longer the red flag it once was.

However, the details of Google’s rendering process still matter. Rendering isn’t always immediate. There are resource constraints, and not all behaviors are supported.

At the same time, the broader web ecosystem hasn’t necessarily kept pace with Google. The risk of removing all no-JavaScript fallbacks hasn’t disappeared — it’s just changed shape.

Key takeaways:

  • Google doesn’t necessarily render JavaScript on the first crawl. There’s a rendering queue, and execution happens when resources allow.
  • Technical limits still exist, including a 2MB HTML and resource cap, and limited interaction with user-triggered elements.
  • Non-200 responses may not receive rendering treatment, which keeps basic HTML and linking important in some cases.
  • Differences between raw HTML and rendered output still exist at scale across the web.
  • Google’s guidance still leans toward SSR (server-side rendering), pre-rendering, and resilient HTML for critical content.
  • Other crawlers, especially AI-driven ones, often don’t execute JavaScript at all. As these systems become more important, the need for fallbacks may increase again.
  • Blanket, site-wide no-JavaScript fallbacks aren’t universally required in 2026, but critical content, links, and signals shouldn’t depend entirely on JavaScript. Many modern crawlers still rely on HTML-first delivery.

For now, no-JavaScript fallbacks for critical architecture, links, and content are still strongly recommended, if not required going forward.

Your ROAS looks great — but is it actually driving growth?

An ecommerce company hires your PPC agency to explore paid search. A solid plan follows, and after approval, the campaigns go live. Soon, you’re seeing stellar results: high conversion volumes and a healthy ROAS.

On the surface, the strategy is a resounding success.

But look closer.

Some of these conversions might have occurred anyway via direct or organic search traffic — meaning the campaigns may not be driving real growth. Too often, this goes unmeasured.

To truly understand performance, you need to look at incremental lift and marginal ROAS.

The truth about ROAS

Perhaps you’ve heard about eBay’s paid search experiment? They were spending heavily on brand PPC ads. Then they ran a controlled test, turning those ads off for a portion of users to measure impact.

Organic traffic picked up most of those conversions, with minimal impact on revenue. But guess what? Despite the clear results, eBay turned the branded ads back on. Fear, or smart? You tell me.

With search becoming increasingly automated, and the customer journey spreading across more surfaces than ever, attributing conversions to the right channels keeps getting harder. Advertising platforms are quick to claim credit for these conversions, but be skeptical.

What most platforms report is attributed return, not causal lift. In other words, ROAS tells you how much revenue the platform says it influenced; it doesn’t tell you how much of that revenue would have happened without the ads.

When it comes to black-box automation like Performance Max and Advantage+, platforms have become exceptionally good at one thing: finding the path of least resistance to a conversion. They aren’t necessarily finding new customers. They’re often just becoming the most expensive touchpoint in a journey that was already destined to convert.

Without measuring incrementality, automation simply amplifies non-incremental signals, such as:

  • Brand search campaigns capturing existing demand.
  • Retargeting campaigns hitting users who were seconds away from purchasing.
  • Reporting that makes “safe” channels appear more valuable than they truly are.

Dig deeper: Paid media efficiency: How to cut waste and improve ROAS

Incrementality tells you whether marketing created something extra

Incrementality is causal lift — what changed because the campaign existed, typically measured by comparing exposed groups with holdout or control groups. So what did this campaign actually drive that wouldn’t have happened otherwise?

Even though you may not want to admit it, this is a much more useful lens for budget allocation than platform attribution alone.

A channel can have a fantastic in-platform ROAS and still generate a weak incremental impact. Why? Because it might be harvesting demand rather than creating it.

If you want to know whether a campaign genuinely drove growth, the better question is incrementality.

But it’s still not the full answer.

To decide what to do next, you also need marginal ROAS.

Dig deeper: Why incrementality is the only metric that proves marketing’s real impact

Marginal ROAS tells you what to do next

A channel may be incremental. But that still doesn’t tell you where the next $10,000 should go. That’s a marginal ROAS question.

Marginal ROAS measures the return on the next unit of spend, not the average return across all spend. Here’s how it works: the first tranche of budget often performs well, then the next performs worse.

Keep going, and the final dollars become dramatically less efficient than the average suggests. The same applies to CPA metrics: a blended CPA may look acceptable, while the last dollars spent were far less efficient, leaving many advertisers bidding beyond where they should.

Imagine you spend $10,000 and generate $50,000 in revenue (500% ROAS). You decide to scale and spend an additional $5,000. This extra spend generates only $5,000 in additional revenue.

  • Your new average ROAS: 366% 
  • Your marginal ROAS: 100% (You essentially traded $1 for $1.)

In this scenario, the last $5,000 you spent was entirely wasted, even though the total “average” performance still looks decent on your dashboard.
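The arithmetic above is worth making explicit, since dashboards only surface the average. All figures come from the worked example in the text:

```python
# The worked example above, as arithmetic: average ROAS hides how weak
# the last tranche of spend actually was.

def roas(revenue: float, spend: float) -> float:
    """ROAS as a percentage."""
    return revenue / spend * 100


initial_spend, initial_revenue = 10_000, 50_000  # 500% ROAS
extra_spend, extra_revenue = 5_000, 5_000        # the scaled-up tranche

average = roas(initial_revenue + extra_revenue, initial_spend + extra_spend)
marginal = roas(extra_revenue, extra_spend)

print(f"Average ROAS:  {average:.1f}%")   # 366.7%
print(f"Marginal ROAS: {marginal:.1f}%")  # 100.0%
```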

This is the trap of average ROAS. It makes a channel look scalable when it may only be efficient at lower spend levels, and it hides the difference between profitable core demand capture and weak incremental expansion.

To make better decisions, you need to look further. Platform ROAS helps with in-platform optimization, incrementality shows whether campaigns actually created value, and marginal ROAS tells you whether more budget should go there.

A strong ROAS can signal true efficiency, or it can mean the platform is capturing demand that would have converted anyway. That’s why you should focus more on incrementality tests.

Don’t ask whether the channel has been efficient. Ask whether the next dollar is efficient enough — that’s what determines smart scaling.

Dig deeper: The marketing measurement flywheel: A 4-step framework for proving impact

Options for incrementality testing

You don’t need a perfect measurement lab before you start. Geo tests, holdouts, audience exclusions, and controlled spend reductions can all teach you more than another month of attribution debates.

  • Geo-split testing: Divide your markets into two comparable geographic groups, keep your ads running in the “test” group, and turn them off in the “control” group. The difference in total revenue between the two regions reveals the true incremental lift of your ads.
  • Search lift tests (holdouts): Use platform tools to create holdout groups, a small percentage of users who are intentionally not shown your ads. By comparing their behavior to the exposed group, you can see the direct impact of your (for example) Search or YouTube campaigns.

Beyond these, you can also test the impact of remarketing, branding, awareness campaigns, or additional social channels.
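Reading out a geo-split test reduces to a simple comparison, assuming the two regions are genuinely comparable. The revenue and spend figures below are hypothetical:

```python
# Sketch: estimate incremental lift from a geo-split test by comparing
# test-region revenue (ads on) with control-region revenue (ads off).
# Assumes the regions are comparable; all figures are hypothetical.

def incremental_lift(test_revenue: float, control_revenue: float) -> float:
    """Revenue attributable to the ads: test minus control."""
    return test_revenue - control_revenue


def incremental_roas(lift: float, ad_spend: float) -> float:
    """Return per ad dollar, counting only incremental revenue, as a %."""
    return lift / ad_spend * 100


lift = incremental_lift(test_revenue=120_000, control_revenue=95_000)
print(f"Incremental lift: ${lift:,.0f}")                           # $25,000
print(f"Incremental ROAS: {incremental_roas(lift, 10_000):.0f}%")  # 250%
```

If platform-reported ROAS is far above the incremental ROAS this yields, the gap is demand the ads harvested rather than created.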

The real shift: From reporting performance to allocating capital

Too many marketing teams still use measurement to explain what happened. The better use of measurement is to decide what should happen next.

Incrementality helps you understand whether a channel created value. Marginal ROAS helps you understand whether more investment is justified. Together, they move marketing measurement out of the reporting function and into capital allocation.

ROAS tells you who gets credit. Incrementality tells you what actually moved. Marginal ROAS tells you where the next budget should go. But be aware: incrementality is not the same as attribution. Attribution tells you who, or which channel, should get the credit, while incrementality shows you whether or not it was worth it.

Dig deeper: How to take your marketing measurement from crawl to sprint
