
Today — 20 April 2026 — Search Engine Land

How to use the three-act structure for data storytelling

20 April 2026 at 16:00

You’ve audited your client’s website and compiled performance data. You’ve identified what’s working, what can be improved, and your recommendations for future strategies. But how do you turn that data into a presentation that’s easy to explain and builds trust? 

Start with stories. Storytelling isn’t just for entertainment. It’s how people make sense of information. That’s what makes it so effective for data presentation. 

One of the simplest ways to structure that story is the three-act structure. It’s a familiar framework used everywhere, from Aristotle’s Poetics to Star Wars.

What is the three-act structure?

The three-act structure is a simple framework that moves a story from beginning to middle to end, following the protagonist from their starting point to a meaningful change.

Applied to data storytelling, it helps you organize your insights, position your client as the main character (the protagonist), and clearly show what happens next.

While similar to the five-point narrative arc, this framework is organized into three manageable sections: what the story is about, what happens when the main character is introduced to conflict, and how that conflict is resolved.


Act 1: The beginning

This is where the protagonist’s norm and the conflict — the issue the main character must face, also known as the antagonist — are established. The protagonist wants something, and the conflict stands between them and that goal. 

An event or circumstance occurs that incites the protagonist into action. The background is established, the goals are defined, and the audience is invested in the protagonist’s success.

Act 2: The middle 

The story is developed, and tension builds. The protagonist experiences roadblocks caused by the conflict/antagonist that hinder them from their ultimate goal. Conflict arises until it can no longer be ignored, causing a pivotal moment that leads into the final act.

Act 3: The end

The narrative is affected by the change in Act 2, bringing the story to a final showdown between the protagonist and the conflict/antagonist, ultimately resulting in a resolution. The protagonist may find closure or know what path lies ahead (this may set the stage for a sequel).

The three-act structure helps you understand website data on a deeper level. It also prepares the data to be presented to your client in a way that places them at the center of the story.

Using the three-act structure to identify your data’s narrative

Why bother using the three-act structure as a framework for strategy analysis? It builds trust, showing your client that you’re going on a journey alongside them. 

You and your client are on the same team, with the same destination in mind: their success, even when the data doesn’t show immediate results.

The application of the three-act structure to data storytelling happens in three steps.

  • Step 1: Briefly recap the existing strategies, establish previous wins, and identify the challenge currently affecting performance. This sets the baseline of Act 1.
  • Step 2: Explain the roadblocks and how they stand in the way of the overall strategy’s success. This parallels the growing conflict found in the structure’s Act 2.
  • Step 3: Recommend the next steps and how you plan to address the conflict. Show what success looks like by providing examples of how your recommendations fit the narrative of your client’s goals. This is Act 3, the resolution of the structure.



Where is your client’s story in the three-act structure?

Your client is the protagonist of their story. To work more effectively together, you need to communicate to your client that you’re invested in the story of their success. 

At the heart of each data set is the story of how your client is impacted. When you communicate what the data is saying, position yourself as the guide who helps the main character get where they need to go.

An example of applying the three-act structure framework to data analysis and presenting the data’s narrative would look like this:

  • Act 1. Goal: Set the stage, centering your client as the protagonist while introducing the challenge as the antagonist. Scenario: Your client’s website has received a substantial increase in organic traffic as a result of your most recent strategy, but is experiencing a high bounce rate on select pages. Approach: Recap the strategy that led to the traffic increase and summarize the outcome from a high-level perspective.
  • Act 2. Goal: Identify the conflict, potential roadblocks, and related stakes. Scenario: The high bounce rate is preventing your website from experiencing consistent traffic flow. Approach: Explain why a high bounce rate is detrimental to overall performance, and connect the affected pages to the overall strategy.
  • Act 3. Goal: Recommend strategies and outline next steps. Scenario: Your client’s high bounce rate indicates low page speed due to large images that take a long time to load. Approach: Help the client visualize how best practices lead to better outcomes. Recommend image compression as a next step.
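The Act 3 recommendation (image compression to cut page weight) can be made tangible for a client with back-of-envelope arithmetic. The sketch below is purely illustrative: the image sizes, compression ratio, and connection speed are assumptions for the sake of the example, not measurements from any real audit.

```python
def transfer_seconds(payload_bytes: float, mbps: float) -> float:
    """Seconds to move a payload over a link of the given speed (in Mbps)."""
    return (payload_bytes * 8) / (mbps * 1_000_000)

# Hypothetical page: three large images (sizes in KB are assumed, not measured).
images_kb = [1800, 950, 1200]
compression_ratio = 0.35  # assumed; actual savings vary by format and quality setting

before_bytes = sum(images_kb) * 1024
after_bytes = before_bytes * compression_ratio

print(f"before: {before_bytes / 1024:.0f} KB, ~{transfer_seconds(before_bytes, 4):.1f}s at 4 Mbps")
print(f"after:  {after_bytes / 1024:.0f} KB, ~{transfer_seconds(after_bytes, 4):.1f}s at 4 Mbps")
```

Under these assumed numbers, image payload drops from roughly 4 MB to under 1.5 MB, cutting transfer time from about 8 seconds to under 3 on a slow connection — a concrete "what success looks like" figure for the client conversation.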

The conclusion doesn’t always mean the end of the story

Finding the story in your data — and communicating it clearly — is how you build trust with clients.

Clients don’t want industry jargon. They want to feel seen, understood, and that they’ve entrusted their digital marketing success to the right person. Stories, and the connections they form, get them there.

Reaching the conclusion of your data’s narrative isn’t the end, but the beginning: the start of strategy implementation, of collaborative partnerships, and of greater results. 

When looking at data, you and your client are on a journey together. A downward trend in your data doesn’t mean your story is over, and an upward trend doesn’t mean there’s no hope for a sequel. In either case, a new journey (your next strategy) can begin.

Is your AI readiness a mirage? by AtData

20 April 2026 at 15:00

AI has quickly become the most overconfident line item in the modern marketing roadmap.

Budgets are shifting. Teams are being restructured. Vendors are being evaluated almost exclusively through the lens of how “AI-powered” they appear. There is a growing assumption that once the right models are in place, performance will follow. Better targeting. Smarter segmentation. Higher conversion. More efficient spend.

It sounds almost inevitable.

But there is a quieter reality beneath the momentum. One that rarely makes it into boardroom conversations or conference keynotes.

Most organizations are not struggling to use AI. They are struggling to feed it.

And what they are feeding it is far less reliable than they think.

The uncomfortable truth about inputs

AI does not create truth. It scales whatever it is given.

If the underlying data is fragmented, outdated or manipulated, the model does not correct it. It operationalizes it. At speed. At scale. With confidence.

This is where the gap begins.

Marketers have spent years investing in data infrastructure, pipelines and orchestration layers. On paper, the foundation looks strong. There is more data available than ever before. There are more signals, more touchpoints, more attributes tied to every customer.

The assumption is that this abundance translates into readiness. But volume is not the same as validity.

A customer profile built from five disconnected identifiers is not a unified identity. An email address that exists in a CRM is not necessarily active, reachable or even tied to a real person. Engagement signals that appear recent may be the result of automated activity, privacy shielding or bot interaction.

AI models are not designed to question these inputs. They are designed to find patterns within them.

So, when the inputs are flawed, the outputs become convincingly wrong.

Identity is the fault line

At the center of this problem is identity.

Every AI-driven use case in marketing depends on the assumption that you know who you are analyzing, targeting or predicting. Whether it is propensity modeling, churn prediction, audience creation or personalization, identity is the anchor.

Yet identity remains one of the least stable components of the data stack.

Consumers move across devices, channels and environments constantly. They use different email addresses. They share accounts. They create new profiles. They disengage and re-engage in ways that are difficult to track cleanly. Over time, what appears to be a single customer often becomes a composite of partial truths.

Even within authenticated environments, identity degrades. Touchpoints go inactive. Behavioral signals lose relevance. Records persist long after the underlying reality has shifted.

Most systems are not built to continuously reconcile these changes. They capture identity at a moment in time and treat it as durable.

And AI inherits that assumption.

Which means many models are making decisions based on identities that no longer exist in the way they are represented.

The hidden impact of fraud and synthetic activity

Another layer complicates the picture further. Not all data is simply outdated. Some of it is intentionally misleading.

Fraud is evolving alongside marketing technology. The barriers to creating accounts, generating engagement, or exploiting promotional systems have decreased significantly. Automated tools and AI itself have made it easier to simulate legitimate behavior at scale.

Fake accounts are not always obvious. They can pass basic validation checks. They can engage with content. They can move through funnels in ways that resemble real users.

From a model’s perspective, they are indistinguishable unless additional context is applied.

This creates a subtle but meaningful distortion.

Acquisition models begin to optimize toward patterns that include fraudulent behavior. Lifecycle strategies adapt to engagement that is not human. Performance metrics improve on the surface while underlying efficiency erodes.

The result is a feedback loop where AI reinforces the very issues it should be helping to solve.

And because the outputs look sophisticated, the problem becomes harder to detect.

Why traditional data strategies fall short

Most organizations are aware that data quality matters. Significant effort goes into cleansing, deduplication and normalization. Records are standardized. Fields are filled. Duplicates are merged.

These steps are necessary, but they are not sufficient. Clean data is not the same as accurate data.

A perfectly formatted email address can still be inactive. A deduplicated profile can still represent multiple individuals. A normalized dataset can still be missing critical context about behavior, risk or authenticity.

Traditional data practices tend to focus on structure. AI requires substance.

It requires an understanding of whether an identity is real, whether it is active, whether it is behaving in ways that align with genuine consumer patterns.

Without that layer, even the most sophisticated models are operating on incomplete information.

The illusion of readiness

This is how the mirage takes shape.

Dashboards show high match rates. Databases contain millions of records. Models produce outputs that appear precise. Campaigns are executed with increasing automation.

From the outside, it looks like progress.

But underneath, there are unresolved questions.

  • How many of those identities are actually reachable today?
  • How many represent real individuals versus synthetic or low-quality accounts?
  • How often are behavioral signals refreshed and validated?
  • How much of the model’s learning is influenced by noise?

These questions are no longer rare. They are foundational.

And yet they are often overlooked because they sit below the level where most AI initiatives begin.

A different way to think about AI readiness

True AI readiness does not start with model selection. It starts with input integrity.

It requires a shift in focus from how much data you have to how much of it you can trust.

That trust is built on a few critical dimensions.

First, identity accuracy. Not just the ability to match records, but to ensure that those records reflect real, current individuals. This includes understanding when identities change, when they become inactive and when they should no longer be used as the basis for decisioning.

Second, activity validation. Knowing that a signal occurred is not enough. You need confidence that it represents meaningful human behavior. This is where distinguishing between genuine engagement and automated or manipulated activity becomes essential.

Third, risk awareness. Every dataset contains some level of fraud or abuse. The question is whether it is visible and accounted for. Without that visibility, models will absorb and propagate those patterns.

When these elements are in place, AI begins to operate on a different plane. Predictions become more reliable. Segments become more actionable. Optimization aligns more closely with real outcomes.

Where this creates advantage

Organizations that address these foundational issues are creating a structural advantage.

They are able to suppress low-value or risky identities before they enter the modeling process. They can prioritize outreach to individuals who are both reachable and likely to engage. They can detect and mitigate fraudulent behavior before it distorts performance metrics.

Over time, this compounds.

Models trained on higher-quality inputs learn faster and generalize better. Campaigns become more efficient. Measurement becomes more trustworthy.

Perhaps most importantly, decision-making becomes more grounded in reality.

This is where AI begins to deliver on its promise.

The path forward

There is no question that AI will continue to reshape marketing. The capabilities are real, and the pace of innovation is not slowing down.

But the idea that AI alone will solve underlying data challenges is a misconception. If anything, it raises the stakes.

Because AI does not just expose weaknesses in your data. It amplifies them.

The organizations that recognize this early are taking a more deliberate approach. They are investing in understanding their identity layer. They are prioritizing the validation of activity and the detection of risk. They are treating data not as a static asset, but as a dynamic system that requires continuous refinement.

They are not asking, “How do we apply AI to our data?”

They are asking, “Is our data worthy of AI?”

It is a more difficult question. It requires a deeper level of introspection. It challenges assumptions that have been in place for years.

But it is also the question that separates real readiness from the illusion of it.

And in a landscape where everyone is accelerating toward AI, clarity at the foundation is what ultimately determines who moves forward, and who simply moves faster in the wrong direction.

Before yesterday — Search Engine Land

The latest jobs in search marketing

17 April 2026 at 21:33

Looking to take the next step in your search marketing career?

Below, you will find the latest SEO, PPC, and digital marketing jobs at brands and agencies. We also include positions from previous weeks that are still open.

Newest SEO Jobs

(Provided to Search Engine Land by SEOjobs.com)

  • Job Description Council & Associates is one of Atlanta’s fastest-growing PI firms — handling serious cases across truck accidents, premises liability, daycare injury, negligent security, and wrongful death. The firm is led by a nationally recognized trial attorney and built on a brand that goes beyond the courtroom into the community. We need a marketer […]
  • Job Description Content Marketing Specialist Malta Dynamics | Malta, OH (Hybrid) About the Role Malta Dynamics is seeking a Content Marketing Specialist to own the execution and consistency of Malta Dynamics’ brand voice across all channels. This role is responsible for producing, publishing, and optimizing high-quality content that drives inbound leads, supports sales, and reinforces […]
  • We are seeking an intermediate-level SEO Specialist for Hive Digital, a cutting-edge and award-winning agency that prides itself on helping change the world for the better. We offer a highly collaborative team that works together to deliver the best possible outcomes for our clients in a fast-paced, fun work environment. Are you ready to bring […]
  • About The Company goop is a lifestyle platform dedicated to exploration, curation, and groundbreaking conversation. From its award-winning beauty and fashion lines to its expansive editorial lens, goop invites women to embrace the process of becoming, and to discover deep joy in the pursuit of pleasure, beauty, and growth in all phases of life. Gwyneth […]
  • Job Description LK Distribution is a leading distributor of several brands and products offered on both e-commerce and wholesale. Specialized in the Alternative Product category in the CBD/Hemp Industry ranging from a large category of products. We are seeking a creative and dynamic individual with experience with independent online storefronts for each of our brands […]
  • Job Description Benefits: 401(k) Paid time off Dental insurance Health insurance Vision insurance A Digital Marketing Specialist at a leading real estate company requires a high-energy, creative, and data-driven team member who helps elevate our brand and our agents’ digital presence. As the Digital Marketing Specialist, you will be the “engine room” of our online strategy. […]
  • Director, Global Digital Marketing, Integrated Marketing Communication (IMC) Team Position Overview The Director of Digital Marketing is at the center of 10x Genomics’ digital marketing engine, delivering measurable business impact and innovating across channels to ensure leadership in scientific markets. This position reports to the Vice President of Integrated Marketing Communications and is responsible for […]
  • Job Description Digital Marketing Specialist OURCU is looking for a Digital Marketing Specialist who is equal parts data-driven strategist and collaborative teammate. This role is ideal for someone excited to build and optimize HubSpot from the ground up, create meaningful campaigns, and clearly demonstrate the why behind marketing performance. If you love blending creativity with […]
  • This role offers you the opportunity to deepen your SEO expertise and develop your leadership skills within a tight-knit agency team. Sr. SEO Analysts lead our client relationships and bring our outcome-driven strategies to life. They are responsible for delivering value and results to our clients through their high-quality work, commitment to building deep SEO […]
  • Job Summary We are seeking a versatile and data-driven Digital Marketing & Creative Specialist to join our team. In this multifaceted role, you will be responsible for the end-to-end marketing lifecycle, from high-quality visual content creation (design, photos, videos) to technical SEO and lead-generation strategy. In this role, creative and technical skills are blended to […]

Newest PPC and paid media jobs

(Provided to Search Engine Land by PPCjobs.com)

  • About Us: Naadam is redefining luxury by delivering the world’s finest cashmere at an accessible price. Founded in 2013, with a vision to bring premium, sustainably made cashmere to the everyday wardrobe, we’ve built a brand that values innovation, transparency, and connection with our customers. At Naadam, we are dedicated to pushing limits, nailing the […]
  • About the Role You’ll play a key role in driving Kashable’s customer activation, acquisition and retention. You’ll begin owning the execution and performance of one paid media channel and as you demonstrate results expand your scope into broader strategic decision-making and greater channel ownership. We’re looking for someone who combines strong strategic thinking with hands-on […]
  • Job Description Our client, an elite national Am Law firm, is seeking a Regional Marketing Specialist to support its New York office. This role offers the opportunity to work closely with firm leadership to ensure local marketing initiatives align seamlessly with firmwide and practice‐specific priorities. You will lead marketing efforts for the New […]
  • Job Description Salary: $85K-$110K Mason Interactive | Hybrid (3 days in office) | $85K-$110K Who We Are Mason Interactive is a 30-person full-service digital agency with offices in Brooklyn and Charlotte. We work with clients in education, fashion, wellness, and luxury across all channels: paid search, paid social, SEO, programmatic, creative, and affiliate. […]
  • A property management firm in New York is seeking a Leasing Coordinator to manage marketing, leasing, and renewal strategies. This position involves performing all activities related to leasing to new residents, ensuring resident satisfaction, and executing lease renewals. The ideal candidate will be responsible for conducting tours, processing applications, and developing marketing plans. This role […]

Other roles you may be interested in

SEO Manager, Veracity Insurance Solutions, LLC, (Remote)

  • Salary: $100,000 – $135,000
  • Lead, coach, and develop a high-performing team of SEO Specialists
  • Set clear expectations, quality standards, workflows, and growth paths across the team

Senior SEO Manager, Lunar Solar Group (Remote)

  • Salary: $80,000 – $100,000
  • Lead strategy, execution, and deliverables across 4–6 client accounts independently
  • Own end-to-end SEO strategy and execution across all core deliverables and processes

Performance Marketing Manager, Recruitics (Hybrid, Lafayette, CA)

  • Salary: $70,000 – $90,000
  • Work in platform to configure campaigns – set up budgets, targeting, creative, and run dates
  • Monitor ongoing performance to identify areas of opportunity

Marketing, Social Media & PR Manager, PARTNERS Staffing (Fort Myers, FL)

  • Salary: $75,000 – $85,000
  • Develop and execute integrated marketing campaigns for shows, content releases, events, and brand initiatives
  • Identify target audiences and create strategies to grow reach and engagement

Local Search & Listings Manager, TurnPoint Services (Remote)

  • Salary: $80,000 – $90,000
  • Own the strategy and governance for local search visibility across all business locations.
  • Develop optimization frameworks and standards for Google Business Profiles and other listing platforms.

Senior Branding Manager, rednote (Hybrid, New York, US)

  • Salary: $228,000 – $320,000
  • Define and drive rednote’s global brand strategy, shaping its positioning across key international markets
  • Lead integrated marketing initiatives end-to-end, ensuring alignment across creative development and media execution

Performance Marketing Manager, Hirewell (Remote)

  • Salary: $85,000 – $95,000
  • Paid Search: Lead daily execution and management of Google Ads. This is a “hands-on” role requiring deep platform expertise.
  • Multi-Channel Management: Oversee and optimize campaigns across Meta, LinkedIn, and Programmatic channels.

Senior Paid Media Manager, Brightly Media Lab (Remote)

  • Salary: $70,000 – $100,000
  • Directly build, manage, and optimize campaigns within Google Ads, Microsoft Ads, and Facebook Ads (Meta).
  • Serve as the lead point of contact for your book of clients, taking full ownership of their success and growth.

Marketing Specialist, The Bradford group (Hybrid, The Greater Chicago area)

  • Salary: $60,000 – $62,000
  • Launch and manage paid social campaigns primarily on Meta platforms.
  • Oversee daily budgets and performance optimizations against revenue and ROI goals, using data-driven insights to continuously improve results.

Paid Search Specialist, Maui Jim Sunglasses (Peoria, IL)

  • Salary: $65,000 – $70,000
  • Plan, set up, and manage paid search, display, and shopping campaigns on Google Ads.
  • Manage and optimize advertising budgets to achieve revenue and efficiency targets.

Note: We update this post weekly. So make sure to bookmark this page and check back.

Advertisers are testing ChatGPT ads — but uncertainty remains high

17 April 2026 at 21:26

OpenAI is emerging as a new advertising channel, but early advertiser sentiment is mixed as brands grapple with limited data, unclear performance, and a rapidly evolving product.

Driving the news. Two months after launching ads in ChatGPT, advertisers are experimenting — but still lack clear measurement tools and performance benchmarks.

  • Early campaigns are largely impression-based, with little insight into outcomes.
  • CPMs have reportedly been high, with initial minimum spends in the six figures.
  • Some advertisers say the product feels early and slow to mature.

The vibe check. According to Ad Age reporting, advertiser sentiment sits somewhere between cautious optimism and frustration.

  • Optimism stems from ChatGPT’s position as a leading consumer AI platform.
  • Frustration centers on lack of transparency, targeting, and reporting.

Why we care. This report highlights both the opportunity and risk of investing in AI ad platforms early. While ChatGPT offers access to a fast-growing, high-intent audience, the lack of measurement and evolving product features make it a challenging channel to justify at scale.

It’s a signal to test thoughtfully and start building an AI strategy without overcommitting budget too soon.

The bigger picture. OpenAI’s ad push comes as it juggles multiple priorities — from AI development to enterprise growth — while facing rising competition from Google and Anthropic.

Some in the industry see OpenAI as having “cast too wide a net,” experimenting across video, commerce, and other products before refocusing. Its Instant Checkout commerce feature was quietly pulled back, while its video ambitions have lost ground to competitors.

How ads actually show up. Early tests suggest ads may influence user journeys — but not always directly.

In one example, a sponsored retailer appeared more prominently in recommendations, even when multiple options were listed. Still, platforms maintain that ads do not directly alter core answers.

Yes, but. There’s ongoing tension between consumer trust (keeping answers unbiased) and advertiser goals (increasing visibility and influence).

That balance will likely shape how AI ads evolve.

What marketers should do now. Experts say brands don’t need to rush in. Large brands may benefit from early testing, while others can focus on strategy development as the space matures. The priority is understanding how AI fits into broader media and search behavior.

The bottom line. ChatGPT ads are still in their infancy — promising, but unproven — leaving advertisers to experiment carefully while waiting for the platform to catch up to expectations.

Google Ads API to require multi-factor authentication

17 April 2026 at 21:06

Google is tightening security across its ads ecosystem, requiring multi-factor authentication (MFA) for API users — a move that could impact how developers and advertisers access and manage accounts.

Driving the news. Google will begin rolling out mandatory MFA for the Google Ads API starting April 21, with full enforcement expected over the following weeks.

The update applies to users generating new OAuth 2.0 refresh tokens through standard authentication workflows.

What’s changing. Users will now need to verify their identity with a second factor — such as a phone or authenticator app — in addition to their password when authenticating.

  • Existing OAuth refresh tokens will continue to work without interruption.
  • New authentications will require MFA by default.
  • Users without 2-step verification enabled will be prompted to set it up.

Why we care. This change affects how you access and manage Google Ads data through APIs and connected tools. While it improves account security and reduces the risk of unauthorized access, it may also require updates to workflows, especially for teams that regularly generate new credentials. Preparing early can help avoid disruptions.

Who’s affected. The change primarily impacts apps and workflows using user-based authentication.

  • User authentication workflows: Will require MFA for new token generation.
  • Service account workflows: Not affected, and recommended for automated or offline use cases.

The requirement also extends beyond the API to tools like Google Ads Editor, Scripts, BigQuery Data Transfer, and Data Studio.
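To make the user-auth vs. service-account distinction concrete, here is a hedged sketch of how the two options typically appear in the `google-ads.yaml` configuration file used by Google's Ads API client libraries. All values are placeholders, and the exact keys may vary by client library version; treat this as an illustration, not a definitive reference.

```yaml
# google-ads.yaml — illustrative placeholder values only
developer_token: INSERT_DEVELOPER_TOKEN

# Option A: user-based OAuth. Generating a NEW refresh token through this
# flow is what triggers the MFA requirement; existing tokens keep working.
client_id: INSERT_OAUTH2_CLIENT_ID
client_secret: INSERT_OAUTH2_CLIENT_SECRET
refresh_token: INSERT_REFRESH_TOKEN

# Option B: service account, not affected by the change and recommended
# for automated or offline workloads.
# json_key_file_path: /path/to/service-account-key.json
# impersonated_email: automation@example.com
```

Teams that frequently mint new credentials may find it simpler to migrate automated workloads to the service-account path than to handle MFA prompts in scripts.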

The big picture. As ad platforms handle more sensitive data and automation, security is becoming a bigger priority — especially as API access expands across teams, tools, and integrations.

Yes, but. While the update improves protection against unauthorized access, it may add friction for teams that frequently generate new credentials or rely on manual authentication flows.

The bottom line. Google is making MFA standard for Ads API access, signaling a broader shift toward stricter security across advertising tools and workflows.

OpenAI begins rolling out ads in select markets

17 April 2026 at 19:13

OpenAI is continuing its push into ad-supported monetization — a strategy it began earlier this year — by expanding ads to more countries while keeping premium tiers ad-free.

Driving the news. OpenAI is starting to roll out ads for users on Free and Go plans in Australia, New Zealand, and Canada.

  • The rollout applies only to lower-tier plans.
  • Paid tiers — including Pro, Business, Enterprise, and Education — will remain ad-free.

Why we care. This opens up a new and rapidly growing channel to reach users inside AI-driven experiences. As OpenAI expands ads into more markets, it signals early opportunities to test and understand how advertising works in conversational interfaces. It could also shape how future search and discovery happens, making it important to get in early.

The big picture. AI platforms have largely avoided traditional advertising so far, relying instead on subscriptions and enterprise deals.

This move suggests OpenAI is:

  • testing new revenue streams,
  • exploring how ads fit into conversational interfaces,
  • and balancing monetization with user experience.

Yes, but: OpenAI is clearly drawing a line between free and paid experiences — signaling that ad-free usage will remain a premium benefit.

The bottom line: OpenAI is cautiously entering the ads business, starting with limited markets and tiers as it experiments with how advertising works inside AI-driven products.

Google Ads tests direct Google Tag Manager integration for conversion setup

17 April 2026 at 19:01

Google may be streamlining one of the most error-prone parts of campaign setup — conversion tracking — by reducing the need for manual tag implementation.

Driving the news. Google Ads is testing a new “Set up in Google Tag Manager” option within its conversion setup flow, according to screenshots shared by Google Ads specialist Natasha Kaurra.

The feature appears alongside existing installation methods and allows advertisers to push conversion tracking setups directly into Google Tag Manager.

What’s new. Instead of copying conversion IDs and labels between platforms, advertisers can click the new button to open a pre-filled tag setup inside GTM.

That means:

  • fewer manual steps,
  • less room for implementation errors,
  • and faster deployment across accounts.

Why we care. Conversion tracking is critical to measuring performance, and this update makes it faster and less error-prone to implement. By reducing manual steps between Google Ads and Google Tag Manager, it can help ensure data is set up correctly from the start. That means more reliable reporting and better optimization decisions.

How it works. Based on early screenshots, the flow prompts users to select a GTM container and then surfaces a suggested tag configuration ready to publish.

This could be especially useful for agencies managing multiple clients, teams working across multiple containers, or advertisers with complex tagging setups.

The bottom line. It’s a small UI change with outsized impact — making it easier for advertisers to get conversion tracking right the first time.

First seen. This update was shared by PPC News Feed, which credited Google Ads specialist Natasha Kaurra with spotting it.

Why bottom-of-funnel content is winning in AI search

17 April 2026 at 19:00

Google search traffic is dropping. If you’ve spent years building organic strategies, watching it happen in real time is uncomfortable. But it’s also clarifying.

I started seeing the shift across SaaS clients. Pages that had driven steady traffic for years — educational, top-of-funnel (TOFU) content — were losing ground. Not because the content got worse, but because users no longer needed to click. AI Overviews were doing the job for them.

That forced a decision: keep defending the old model or adjust the strategy. I chose to adjust.

What became clear pretty quickly is that while informational content is losing clicks, bottom-of-funnel (BOFU) content is holding up — and in many cases, driving more qualified leads.

This isn’t just a trend. It’s a shift in how value is created through search.

The pivot: Making BOFU the priority

My approach now is straightforward: 60% to 80% of output goes toward bottom- and mid-funnel content, with the remainder covering supporting TOFU topics that fill content cluster gaps or address timely industry conversations.

When I pitched this shift to clients, the conversation was easier than I expected. I put it simply: 

  • “You have a choice between traffic and leads. If you want leads, here’s how we get there, even if it means less traffic.” 

I was upfront that overall traffic might dip. But whoever shows up is more likely to convert. That framing landed. Nobody argued for traffic when the alternative was a qualified pipeline.

The most effective bottom-of-funnel pieces are comprehensive comparison and listicle-style guides targeting high-intent queries.

One of the best examples is a guide to the best time-tracking software for construction. Before writing it, I built a reusable review methodology for the client. The guide called out pros and cons honestly, including the client’s own product, because that’s what builds credibility with readers evaluating their options.

It was factual, specific, and written for someone in the middle of a purchase decision, not someone casually browsing.

Within weeks, it became our most cited article in LLM responses. It’s now a cornerstone piece, regularly appearing in conversion paths and driving qualified leads. 

That single piece delivered more pipeline impact than a dozen informational posts from the previous quarter because it answers the question a buyer is actually asking, not the one that gets the most search volume.

Dig deeper: How to align your SEO strategy with the stages of buyer intent

TOFU isn’t dead. It just has a different job now.

I see many SEOs treating this as an either-or conversation. To be clear, I haven’t eliminated TOFU content. I’ve repositioned it.

TOFU’s job now is to build topical authority that helps BOFU pages rank. It’s the supporting structure, not the primary event. Guides and educational content:

  • Support the content cluster.
  • Establish expertise in Google’s eyes.
  • Pass internal link equity to BOFU pages.

For my clients’ content, we’ve revisited the best-performing TOFU pieces and made them work harder.

We added sections that connect the information directly to the client’s product, supported by screenshots and subject matter expert quotes. 

We also redesigned calls to action to match the context and placed them throughout the content, rather than just at the end. 

For several clients, this led to a measurable increase in visitors navigating to demo request pages, without changing the informational intent.

The key distinction: You should still produce a meaningful volume of TOFU content, but make sure it has a unique angle — something not widely known or discussed from your perspective. 

In a sea of AI-generated content, that specificity is what drives performance.

Get the newsletter search marketers rely on.


Why this works in AI-driven search

People arriving from AI platforms show up with context. They’ve already explored the problem. They’re evaluating options. This aligns with how AI Overviews are applied in search results.

AI Overviews still appear far more often for informational queries than commercial ones. Ecommerce searches trigger them far less frequently, which helps protect bottom-of-funnel content — at least for now, though coverage for commercial and transactional queries is rising quickly.

That shift in behavior changes what content performs. Informational content loses value when answers are summarized upfront, while decision-stage content becomes more useful because it helps users compare options, validate choices, and move forward.

That’s why bottom-of-funnel content holds up. It aligns with where the user is in the process, not just what they searched for.

The time tracking software comparison piece I mentioned is a clear example. It’s consistently cited when users ask about construction time tracking tools. That visibility doesn’t always show up as a click, but it appears later — in branded search, direct visits, and ultimately, leads.

The attribution problem you need to accept

Here’s the challenge: bottom-of-funnel content’s value is systematically underreported in traditional analytics.

Someone sees your solution mentioned in a ChatGPT response, researches your brand, and converts later through a direct visit or branded search. In GA4, that journey often shows up as direct traffic. It looks like SEO didn’t contribute — but it did.

That’s why I’ve shifted clients away from traffic as the primary success metric and toward a broader set of signals, including:

  • Brand search volume trends.
  • Citation frequency in LLM platforms.
  • Direct traffic movement after content publication.
  • Conversion rate changes, even when traffic stays flat.

The ROI of BOFU and LLM-optimized content is higher than what dashboards show. If you’re evaluating performance based only on immediate click attribution, you’re missing where SEO is actually creating value.

Your practical playbook for shifting to BOFU

Here’s how to turn this shift into a practical content strategy:

  • Audit your existing content for BOFU gaps: Before creating anything new, identify which high-intent, purchase-stage queries you have zero coverage on. These are often the easiest wins.
  • Build comparison content with real methodology: Create a review framework you can reuse. Be honest about pros and cons, including your client’s product. Credibility is what makes these pieces rank and get cited.
  • Retrofit your best TOFU pieces: Add product-connected sections, contextual CTAs, and subject matter expert input. Make the informational content do conversion work, too.
  • Build LLM tracking into GA4 now: A regex-based segment capturing ChatGPT, Perplexity, Claude, and other AI referrers gives you visibility into a channel most clients have zero data on.
  • Reset the success metrics conversation with clients: Traffic volume is increasingly a vanity metric. Lead quality, branded search growth, and conversion rate are what actually matter in this environment.
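
As a starting point for the regex-based GA4 segment mentioned above, here is a minimal Python sketch of the matching logic. The domain list is illustrative, not exhaustive, and needs maintaining as AI platforms add or change referrer hostnames; the same pattern string can be pasted into GA4’s segment builder as a regex condition.

```python
import re

# Illustrative list of AI-platform referrer hostnames; extend it as new
# assistants appear or existing ones change domains.
AI_REFERRER_REGEX = (
    r"chatgpt\.com|chat\.openai\.com|perplexity\.ai|"
    r"claude\.ai|copilot\.microsoft\.com|gemini\.google\.com"
)
pattern = re.compile(AI_REFERRER_REGEX, re.IGNORECASE)

def is_ai_referral(referrer: str) -> bool:
    """Return True if a session referrer matches a known AI platform."""
    return bool(pattern.search(referrer))

# Quick sanity check against sample referrer strings.
sessions = [
    "https://chatgpt.com/",
    "https://www.perplexity.ai/search?q=best+crm",
    "https://www.google.com/",
]
ai_sessions = [s for s in sessions if is_ai_referral(s)]
print(ai_sessions)  # the first two URLs match; google.com does not
```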

AI Overviews have fundamentally changed the economics of informational content.

But that disruption creates a strategic opening. Bottom-of-funnel content has always converted better. AI is simply removing the incentive to keep over-investing in content that drives traffic without driving revenue.

The window to shift strategy is still open. It won’t stay that way.

AI traffic converts better than non-AI visits for U.S. retailers: Report

17 April 2026 at 18:49

Traffic from AI sources increased 393% year-over-year in Q1 and 269% in March. But the real surprise? AI traffic is converting better than last year.

  • AI-driven visits converted 42% better than non-AI traffic in March. A year ago, AI traffic was 38% less likely to result in a purchase.

By the numbers. Visits from AI sources showed 12% higher engagement, 48% more time on site, and 13% more pages per visit. Adobe also surveyed consumers and found that:

  • 39% have used AI for shopping. Of those, 85% said it improved the experience.
  • 66% believe AI tools provide accurate results.

What they’re saying. According to Vivek Pandya, director of Adobe Digital Insights:

  • “Notably, AI traffic continues to convert better (visits that result in purchases) than non-AI traffic, which covers channels such as paid search and email marketing.”

Yes, but. While consumer adoption is up, and traffic, engagement, and conversions are growing, many retail sites still aren’t fully optimized for AI visibility, especially on product pages, according to Adobe.

Why we care. Until now, reports have been mixed on whether AI traffic is better, equal to, or worse than organic search traffic (see our Dig deeper resources below). That may be changing, as we expected it would. Like generative AI, AI shopping today is as bad as it will ever be, meaning this channel’s value will only increase.

About the data. Adobe’s findings are based on direct transaction data from more than 1 trillion visits to U.S. retail websites. The company also surveyed more than 5,000 U.S. consumers to understand how they use AI to shop.

The report. Adobe report: U.S. retailers see surge in AI traffic, but many websites are not entirely readable by machines.

Dig deeper.

U.S. search ad revenue reached $114.2 billion in 2025

17 April 2026 at 18:42

Search remained the largest force in digital advertising in 2025. However, its growth slowed as total U.S. ad revenue climbed to a record $294.6 billion.

Search still dominates. Search generated $114.2 billion, accounting for 38.8% of total digital ad revenue, according to the latest IAB/PwC Internet Advertising Revenue Report. But growth slowed to 11%, down from 15.9% in 2024, as advertisers shifted more budget into faster-growing formats and as AI began reshaping how users discover information.

Overall market growth accelerated as the year went on. It climbed from 12.2% in Q1 to 15.4% in Q4. The fourth quarter alone brought in $85 billion, even without major cyclical events like the U.S. election or the Olympics, which boosted 2024.

Video, social, and programmatic all grew faster than search. Digital video revenue jumped 25.4% to $78 billion, making it the fastest-growing major format. Social rose 32.6% to $117.7 billion, while programmatic increased 20.5% to $162.4 billion — continuing the shift toward automated, performance-driven buying.

The market is more concentrated. The top 10 companies now control 84.1% of U.S. digital ad revenue, up from 80.8% a year ago, reflecting the advantages of scale, first-party data, and AI-driven platforms.

AI is no longer just a tool layered onto campaigns. AI is increasingly shaping discovery, media buying, and measurement as consumer journeys fragment across platforms.

Why we care. Search still delivers the most scale, but it’s no longer growing the fastest. More budget is flowing into video, social, and programmatic, where automation and AI are more deeply embedded. That means more competition for budget, less visibility into performance, and a greater need to prove incrementality.

About the data. The IAB/PwC report is based on U.S. internet advertising revenue data compiled across the industry.

The report. Internet Advertising Revenue Report Full-year 2025 results (PDF)

No-JavaScript fallbacks in 2026: Less critical, still necessary

17 April 2026 at 18:00

Google can render JavaScript. That’s no longer up for debate. But that doesn’t mean it always does — or that it does so instantly or perfectly.

Since Google’s 2024 comments suggesting it renders all HTML pages, many developers have questioned whether no-JavaScript fallbacks are still necessary. Two years later, the answer is clearer and more nuanced.

Google’s stance on JavaScript rendering

In July 2024, Google sparked debate during an episode of Search Off the Record titled “Rendering JavaScript for Google Search.” Raising the question of how Google decides which pages to render, Martin Splitt asked: 

  • “If it’s so expensive, how do we decide which page should get rendered and which one doesn’t?” 

Zoe Clifford, from Google’s rendering team, replied: 

  • “We just render all of them, as long as they’re HTML, and not other content types like PDFs.”

That comment quickly led developers, especially those building JavaScript-heavy or single-page applications, to argue that no-JavaScript fallbacks were no longer necessary.

Many SEOs weren’t convinced. The remark was informal, untested at scale, and lacking detail. It wasn’t clear:

  • How rendering fit into Googlebot’s process.
  • Whether pages were queued for later execution.
  • How the system behaved under resource constraints.
  • Whether Google might fall back to non-rendered crawling under load.

Without clarity on timing, consistency, and limits, removing fallbacks entirely still felt risky.

What Google’s documentation actually says

Google’s documentation now gives us a much clearer picture of how JavaScript rendering actually works. Let’s start with the “JavaScript SEO basics” page:

What Google says:

  • “Googlebot queues all pages with a 200 HTTP status code for rendering, unless a robots meta tag or header tells Google not to index the page. The page may stay on this queue for a few seconds, but it can take longer than that. Once Google’s resources allow, a headless Chromium renders the page and executes the JavaScript. Googlebot parses the rendered HTML for links again and queues the URLs it finds for crawling. Google also uses the rendered HTML to index the page.”

Google clearly states that JavaScript rendering doesn’t necessarily happen on the initial crawl. Once resources allow, a headless browser is used to parse JavaScript. 

Googlebot doesn’t interact with page elements the way a user does, so this likely only includes scripts that fire without user interaction.

This is important because it tells us Google may make some basic determinations before JavaScript is rendered, via subsequent execution queues. 

If content is generated behind elements (content tabs, etc.) that Google doesn’t click, it likely won’t be discovered without no-JavaScript fallbacks.

Looking at Google’s “How Search works” documentation:

The language is much simpler. Google states it will attempt, at some point, to execute any discovered JavaScript. There’s nothing here that directly contradicts what we’ve seen so far in other Google documentation.

On March 31, Google published a post titled “Inside Googlebot: demystifying crawling, fetching, and the bytes we process,” which further clarifies JavaScript crawling.

The notes on partial fetching are particularly interesting. Google will only crawl up to 2MB of HTML. If a page exceeds this, Google won’t discard it entirely, but instead examines only the first 2MB of returned code.

Google explicitly states that extreme resource bloat, including large JavaScript modules, can still be a problem for indexing and ranking. 

If your JavaScript approaches 2MB and appears at the top of the page, it may push HTML content far enough down that Google won’t see it. The 2MB limit also applies to individual resources pulled into a page. If a CSS file, image, or JavaScript module exceeds 2MB, Google will ignore it.
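
A quick way to gauge exposure to the cap is to measure the raw HTML payload. The following Python sketch is illustrative only; Google’s actual fetcher behavior is more involved than a simple byte count.

```python
GOOGLEBOT_FETCH_CAP = 2 * 1024 * 1024  # 2 MB, per Google's Googlebot documentation

def check_fetch_cap(html_bytes: bytes) -> dict:
    """Report whether an HTML payload exceeds the 2 MB fetch cap and
    how much of it Googlebot would actually examine."""
    size = len(html_bytes)
    return {
        "size_bytes": size,
        "exceeds_cap": size > GOOGLEBOT_FETCH_CAP,
        # Only the first 2 MB of an oversized page gets parsed.
        "bytes_examined": min(size, GOOGLEBOT_FETCH_CAP),
    }

# Example: a synthetic 3 MB page loses everything past the 2 MB mark.
report = check_fetch_cap(b"<html>" + b"x" * (3 * 1024 * 1024))
print(report["exceeds_cap"], report["bytes_examined"])
```

The same check can be pointed at individual CSS or JavaScript resources, since the cap applies per resource as well.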

We’re beginning to see that Google’s claim that it renders all pages comes with important caveats. 

In practice, it seems unlikely that a page with no consideration for server-side rendering (SSR) or no-JavaScript fallbacks would be handled optimally. This highlights why it’s risky to take comments from Googlers at face value without following how the details evolve over time.

The question we opened with is also evolving. It’s less “Do I need blanket no-JavaScript fallbacks in 2026?” and more “Do I still need critical-path fallbacks and resilient HTML within my application?”

Google’s recent search documentation updates add more context:

Google has recently softened its language around JavaScript. It now says it has been rendering JavaScript for “multiple years” and has removed earlier guidance that suggested JavaScript made things harder for Search. 

It also notes that more assistive technologies now support JavaScript than in the past. 

Within that same documentation, Google still recommends pre-rendering approaches, such as server-side rendering and edge-side rendering.

So while the language is softer, Google isn’t suggesting developers can ignore how JavaScript affects SEO.

Looking again at the December 2025 updates:

Google states that non-200 pages may not receive JavaScript execution. This suggests no-JavaScript fallbacks for internal linking within custom 404 pages may still be important.

Google also notes that canonical tags are processed both before and after JavaScript rendering. If source HTML canonicals and JavaScript-modified canonicals don’t match, this can cause significant issues. Google suggests either omitting canonical directives from the source HTML so they’re only evaluated after rendering, or ensuring JavaScript doesn’t modify them.

These updates reinforce an important point: even as Google becomes more capable at rendering JavaScript, the initial HTML response and status code still play a critical role in discovery, canonical handling, and error processing.

Dig deeper: Google removes accessibility section from JavaScript SEO section


What the data shows

JavaScript rendering is introducing new inconsistencies across the web, according to recent HTTP Archive data:

We can see that since November 2024, the percentage of crawled pages with valid canonical links has dropped.

Via the HTTP Archive’s 2025 Web Almanac:

About 2-3% of rendered pages exhibit a “changed” canonical URL, something Google’s documentation explicitly states can be confusing for its indexing and ranking systems. That 2-3% doesn’t explain the larger drop in valid canonical deployment since November 2024.

Other factors are likely at play, such as the adoption of new CMS platforms that don’t properly handle canonicals. The rise of vibe-coded websites using tools like Cursor and Claude Code may also be contributing to these issues across the web.

In July 2024, Vercel published a study to help demystify Google’s JavaScript rendering process:

It analyzed more than 100,000 Googlebot fetches and found that all resulted in full-page renders, including pages with complex JavaScript. However, 100,000 fetches is a relatively small sample given Googlebot’s scale. 

The study was also limited to sites built on specific frameworks, so it’s unwise to assume Google always renders pages perfectly. It’s also unclear how deeply those renders were analyzed.

It does suggest that Google attempts to fully render most pages it encounters. Broadly speaking, Google can generate JavaScript-modified renders, but the quality of those renders is still up for debate. As noted earlier, the 2MB page and resource limits still apply.

Because this study dates to mid-2024, any contradictions with Google’s updated 2025–2026 documentation should take precedence.

Vercel also published a notable finding:

  • “Most AI crawlers don’t execute JavaScript. We tested the major ones (ChatGPT, Claude, and others), and the results were consistent: none of them render client-side content. If your Next.js site ships critical pages as JavaScript-dependent SPAs, those pages are inaccessible to the systems shaping how people discover information.”

So even if Google is far more capable with JavaScript than it used to be, that’s not true across the broader web ecosystem. Many systems still rely on HTML-first delivery. That’s why you shouldn’t rush to remove no-JavaScript fallbacks — they may still be critical to your future visibility.

Cloudflare’s 2025 review is also worth noting:

Cloudflare reported that Googlebot alone accounted for 4.5% of HTML request traffic. While this doesn’t directly explain how Google handles JavaScript, it does highlight the scale at which Google continues to crawl the web.

Dig deeper: How the DOM affects crawling, rendering, and indexing

No-JavaScript fallbacks in 2026

The question we set out to answer was whether no-JavaScript fallbacks are required in 2026.

Google is far more capable with JavaScript than in previous years. Its documentation shows that pages are queued for rendering, and that JavaScript is executed and used for indexing. For many sites, heavy reliance on JavaScript is no longer the red flag it once was.

However, the details of Google’s rendering process still matter. Rendering isn’t always immediate. There are resource constraints, and not all behaviors are supported.

At the same time, the broader web ecosystem hasn’t necessarily kept pace with Google. The risk of removing all no-JavaScript fallbacks hasn’t disappeared — it’s just changed shape.

Key takeaways:

  • Google doesn’t necessarily render JavaScript on the first crawl. There’s a rendering queue, and execution happens when resources allow.
  • Technical limits still exist, including a 2MB HTML and resource cap, and limited interaction with user-triggered elements.
  • Non-200 responses may not receive rendering treatment, which keeps basic HTML and linking important in some cases.
  • Differences between raw HTML and rendered output still exist at scale across the web.
  • Google’s guidance still leans toward SSR (server-side rendering), pre-rendering, and resilient HTML for critical content.
  • Other crawlers, especially AI-driven ones, often don’t execute JavaScript at all. As these systems become more important, the need for fallbacks may increase again.
  • Blanket, site-wide no-JavaScript fallbacks aren’t universally required in 2026, but critical content, links, and signals shouldn’t depend entirely on JavaScript. Many modern crawlers still rely on HTML-first delivery.

For now, no-JavaScript fallbacks for critical architecture, links, and content are still strongly recommended, and they may become outright necessary as non-rendering crawlers grow in importance.

Your ROAS looks great — but is it actually driving growth?

17 April 2026 at 17:00

An ecommerce company hires your PPC agency to explore paid search. A solid plan follows, and after approval, the campaigns go live. Soon, you’re seeing stellar results: high conversion volumes and a healthy ROAS.

On the surface, the strategy is a resounding success.

But look closer.

Some of these conversions might have occurred anyway via direct or organic search traffic — meaning the campaigns may not be driving real growth. Too often, this goes unmeasured.

To truly understand performance, you need to look at incremental lift and marginal ROAS.

The truth about ROAS

Perhaps you’ve heard about eBay’s paid search experiment? They were spending heavily on brand PPC ads. Then they ran a controlled test, turning those ads off for a portion of users to measure impact.

Organic traffic picked up most of those conversions, with minimal impact on revenue. But guess what? Despite the clear results, eBay turned the branded ads back on. Fear, or smart? You tell me.

With search becoming increasingly automated, and the customer journey spreading across more surfaces than ever, attributing conversions to the right channels is harder than ever. Advertising platforms are quick to claim credit for these conversions, but be skeptical.

What most platforms report is attributed return, not causal lift. In other words, ROAS tells you how much revenue the platform says it influenced; it doesn’t tell you how much of that revenue would have happened without the ads.

When it comes to black-box automation like Performance Max and Advantage+, platforms have become exceptionally good at one thing: finding the path of least resistance to a conversion. They aren’t necessarily finding new customers. They’re often just becoming the most expensive touchpoint in a journey that was already destined to convert.

Without measuring incrementality, automation simply amplifies non-incremental signals, such as:

  • Brand search campaigns capturing existing demand.
  • Retargeting campaigns hitting users who were seconds away from purchasing.
  • Reporting that makes “safe” channels appear more valuable than they truly are.

Dig deeper: Paid media efficiency: How to cut waste and improve ROAS

Incrementality tells you whether marketing created something extra

Incrementality is causal lift — what changed because the campaign existed, typically measured by comparing exposed groups with holdout or control groups. So what did this campaign actually drive that wouldn’t have happened otherwise?

Even though you may not want to admit it, this is a much more useful lens for budget allocation than platform attribution alone.

A channel can have a fantastic in-platform ROAS and still generate a weak incremental impact. Why? Because it might be harvesting demand rather than creating it.

If you want to know whether a campaign genuinely drove growth, the better question is incrementality.

But it’s still not the full answer.

To decide what to do next, you also need marginal ROAS.

Dig deeper: Why incrementality is the only metric that proves marketing’s real impact


Marginal ROAS tells you what to do next

A channel may be incremental. But that still doesn’t tell you where the next $10,000 should go. That’s a marginal ROAS question.

Marginal ROAS measures the return on the next unit of spend, not the average return across all spend. Here’s how it works: the first tranche of budget often performs well, then the next performs worse.

Keep going, and the final dollars become dramatically less efficient than the average suggests. The same applies to CPA metrics: a blended CPA may look acceptable, while the last dollars spent were far less efficient, leaving many advertisers bidding beyond where they should.

Imagine you spend $10,000 and generate $50,000 in revenue (500% ROAS). You decide to scale and spend an additional $5,000. This extra spend generates only $5,000 in additional revenue.

  • Your new average ROAS: 366% 
  • Your marginal ROAS: 100% (You essentially traded $1 for $1.)

In this scenario, the last $5,000 you spent was entirely wasted, even though the total “average” performance still looks decent on your dashboard.
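
The arithmetic above generalizes into a simple helper. Here is a Python sketch; the figures mirror the hypothetical example in the text, not real campaign data.

```python
def roas(revenue: float, spend: float) -> float:
    """Return ROAS as a percentage."""
    return revenue / spend * 100

def average_and_marginal_roas(rev_before: float, spend_before: float,
                              rev_after: float, spend_after: float) -> tuple[float, float]:
    """Return (new average ROAS, marginal ROAS of the incremental spend)."""
    extra_rev = rev_after - rev_before
    extra_spend = spend_after - spend_before
    return roas(rev_after, spend_after), roas(extra_rev, extra_spend)

# The example from the text: $10k -> $50k, then an extra $5k buys only $5k more.
average, marginal = average_and_marginal_roas(50_000, 10_000, 55_000, 15_000)
print(round(average), round(marginal))  # ~367 average vs 100 marginal
```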

This is the trap of average ROAS. It makes a channel look scalable when it may only be efficient at lower spend levels, and it hides the difference between profitable core demand capture and weak incremental expansion.

To make better decisions, you need to look further. Platform ROAS helps with in-platform optimization, incrementality shows whether campaigns actually created value, and marginal ROAS tells you whether more budget should go there.

A strong ROAS can signal true efficiency, or it can mean the platform is capturing demand that would have converted anyway. That’s why you should focus more on incrementality tests.

Don’t ask whether the channel has been efficient. Ask whether the next dollar is efficient enough — that’s what determines smart scaling.

Dig deeper: The marketing measurement flywheel: A 4-step framework for proving impact

Options for incrementality testing

You don’t need a perfect measurement lab before you start. Geo tests, holdouts, audience exclusions, and controlled spend reductions can all teach you more than another month of attribution debates.

  • Geo-split testing: Divide your markets into two comparable geographic groups, keep your ads running in the “test” group, and turn them off in the “control” group. The difference in total revenue between the two regions reveals the true incremental lift of your ads.
  • Search lift tests (holdouts): Use platform tools to create holdout groups, a small percentage of users who are intentionally not shown your ads. By comparing their behavior to the exposed group, you can see the direct impact of your (for example) Search or YouTube campaigns.

Beyond these, you can also test the impact of remarketing, branding, awareness campaigns, or additional social channels.
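
Under heavy simplification, the readout of a geo-split test reduces to subtraction. The Python sketch below deliberately omits matched-market selection and statistical significance testing, and the revenue figures are made up.

```python
def geo_split_lift(test_revenue: float, control_revenue: float,
                   test_ad_spend: float) -> dict:
    """Estimate incremental lift from a geo-split test: ads ON in the test
    markets, OFF in comparable control markets."""
    incremental_revenue = test_revenue - control_revenue
    return {
        "incremental_revenue": incremental_revenue,
        # Incremental ROAS: return on spend counting only revenue the ads caused.
        "incremental_roas_pct": incremental_revenue / test_ad_spend * 100,
    }

# Made-up example: test markets earned $120k on $20k of ads; control earned $100k.
result = geo_split_lift(120_000, 100_000, 20_000)
print(result)  # $20k incremental revenue -> 100% incremental ROAS
```

Note that an in-platform ROAS for the same campaign would likely claim far more than $20k, which is exactly the gap between attributed return and causal lift.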

The real shift: From reporting performance to allocating capital

Too many marketing teams still use measurement to explain what happened. The better use of measurement is to decide what should happen next.

Incrementality helps you understand whether a channel created value. Marginal ROAS helps you understand whether more investment is justified. Together, they move marketing measurement out of the reporting function and into capital allocation.

ROAS tells you who gets credit. Incrementality tells you what actually moved. Marginal ROAS tells you where the next budget should go. But be aware: incrementality is not the same as attribution. Attribution tells you who, or which channel, should get the credit, while incrementality shows you whether or not it was worth it.

Dig deeper: How to take your marketing measurement from crawl to sprint
