
Doctor: Google’s AI Overview made up career-damaging claims about me

13 December 2025 at 00:43

UK doctor and YouTuber Dr. Ed Hope said Google’s AI falsely claimed he was suspended by the General Medical Council earlier this year for selling sick notes. Hope called the allegation completely made up and warned that it could seriously damage his career.

Google’s AI generated a detailed narrative accusing Hope of professional misconduct, despite no investigations, complaints, or sanctions in his 10-year medical career, he said in a new video.

Why we care. Google’s AI-generated answers now appear to present false, career-damaging claims about real people as fact. That raises serious questions about defamation, accountability, and whether AI-generated statements fall outside Section 230 protections.

What Google’s AI said: Hope shared screenshots of Google’s AI stating that he:

  • Was suspended by the medical council in mid-2025.
  • Profited from selling sick notes.
  • Exploited patients for personal gain.
  • Faced professional discipline following online fame.

‘None of this is true.’ Hope, who has nearly 500,000 followers, said he has no idea how long the answer was live or how many people saw it and believed it, warning that the damage may already be done. After discovering the AI Overview, he replicated the hallucination and found more false claims, including accusations that he misled insurers and stole content.

  • “This is just about the most serious allegation you can get as a doctor. You basically aren’t fit to practice medicine,” he said.

How did this happen? Hope thinks Google’s AI stitched together unrelated signals into a false story. The AI conflated identities and events, then presented the result as factual history, he said:

  • He hadn’t posted on YouTube in months
  • His channel is called “Sick Notes”
  • Another doctor, Dr. Asif Munaf, was involved in a real sick-note scandal

Why this is more than “just a mistake.” The AI didn’t hedge, speculate, or ask questions; it asserted false claims as settled fact, Hope said. That matters because:

  • AI answers are framed as authoritative.
  • Users can’t see sources, bias, or motivation.
  • There’s no clear path for correction or accountability.
  • The claims targeted a private individual, not a public controversy.

The big legal question. Is Google’s AI committing defamation? Or is Google protected by Section 230, which typically shields platforms from liability for third-party content? Courts may ultimately decide. For now, some legal experts have argued that:

  • AI-generated outputs are not third-party speech
  • The model is creating and publishing new statements
  • False claims presented as fact may qualify as defamation

Resolved? Searching for [what happened to dr. ed hope sick notes] showed this Google AI Overview:

Dr. Ed Hope (of the “Dr. Hope’s Sick Notes” YouTube channel) faced scrutiny and suspension by the medical council in mid-2025 for his involvement with a company selling sick notes (fit notes), a practice seen as potentially exploiting the system for profit, leading to controversy and professional action against him for cashing in on patient needs, despite his prior online popularity for medical content.

What happened:

  • Suspension: In June 2025, Dr. Ed Hope was suspended by the medical council (likely the GMC in the UK).
  • Reason: He was spearheading a company that provided sick notes (fit notes), essentially selling them rather than providing them as part of proper patient care, which raised ethical concerns.
  • Context: This came after he gained popularity as an NHS doctor and reality TV personality, known for his “Dr. Hope’s Sick Notes” channel where he’d break down medical scenes in media.

The Controversy:

  • Critics argued that he was profiting from people’s health issues by facilitating quick, potentially unwarranted sick notes, undermining the healthcare system.
  • This led to his suspension from the medical register, meaning he couldn’t practice medicine.

In essence, Dr. Ed Hope, a doctor who gained fame online, got into trouble for commercializing the process of issuing sick notes, resulting in his suspension by the medical authorities.

Searching for [what happened to dr. ed hope sick notes] now shows a different answer (at least for me):

“Dr. Ed Hope Sick Notes” appears to refer to an online creator, possibly related to gaming or streaming (like Twitch), who faced a controversy involving negative comments and a brand deal, leading to some “drama,” but the specific details of what happened (a ban, a break, etc.) aren’t fully clear from the search snippets, though a YouTube video suggests a reconciliation or a resolution after the “drama”. The name also sounds like it could relate to the medical soap opera Doctors, but that show was canceled in 2024, not by an “Ed Hope” character. 

Here’s a breakdown of possibilities:

  • Online Creator: A YouTube video titled “Making Up With Dr. Ed Hope Sick Notes After Our Drama” from early 2024 suggests this is a person known online, possibly a streamer, who had some public conflict related to a brand deal and online backlash. 
  • Fictional Character: While it sounds like a character name, the major medical drama Doctors ended, so it’s likely not a current, major plotline from that show, notes Deadline. 

To find out exactly what happened, you might need to search for “Dr. Ed Hope Sick Notes drama” or look for their social media (Twitch, YouTube) to see recent posts. 

The video. “SUSPENDED” as a DOCTOR – Thanks Google!


A UK doctor and YouTuber says Google AI falsely accused him of selling sick notes and being suspended. Is Google AI protected by Section 230?

Search Engine Land celebrates its 19th birthday

12 December 2025 at 00:54

Search Engine Land turns 19 today.

Nineteen years. Almost two decades of analyzing, explaining, questioning, challenging, obsessing over, and occasionally shaking our heads at whatever Google and the search industry throw our way.

And this past year? The pace of change has made it one of the most transformative since we launched in 2006.

Through all of it, our mission is the same as Day 1: help you make sense of search with clear news, smart analysis, and practical guidance.

Before we look ahead, I want to say thank you — and take a moment to reflect on the past year at Search Engine Land.

Thank you for reading

Seriously, thank you.

Every day, we start with you: what you need to know, what actually matters, and what changes could shape your work today or your strategy six months from now.

We aim to:

  • Focus on the stories that matter – not noise or filler.
  • Deliver news quickly and clearly.
  • Add essential context, expertise, and nuance.
  • Be a reliable resource in an industry that seems to shift by the hour.
  • Help you see where search is headed — even when the path isn’t obvious.

If you haven’t yet, subscribe to our daily newsletter for a curated wrap-up of everything happening in search. It’s still the easiest way to stay informed without feeling overwhelmed.

Thank you to the Search Engine Land team

Search Engine Land has always punched above its weight for one reason: the people.

A small team can do big, meaningful work when everyone is aligned, mission-driven, and a little obsessed with search.

A huge thank-you to:

  • Barry Schwartz. Barry has been covering search for 22 years and still writes with the speed, curiosity, and energy of someone newly in love with the beat. Search would be far less understandable without him.
  • Anu Adegbola. Anu has become essential for helping readers navigate nonstop shifts in paid media, analytics, and platform changes. Her clarity and steadiness shine in every piece.
  • Angel Niñofranco. Angel keeps our Subject Matter Expert program running. Editing, wrangling, scheduling, coaching, coordinating — if you’ve enjoyed our SME articles, you’ve seen Angel’s impact.
  • Kathy Bushman. Kathy makes SMX happen. Her behind-the-scenes work is why our events run smoothly, deliver value, and earn rave reviews year after year.

And to the entire Third Door Media team within Semrush — thank you. Whether or not your name appears here, your work matters and is appreciated.

Top highlights from the past year

In a year defined by uncertainty, it was encouraging to see so many people continue to rely on Search Engine Land as a trusted community resource. And Search Engine Land had a strong 2025.

SMX Advanced returned in person for the first time in six years

This was the standout moment of the year. Bringing SMX Advanced back in person after six years felt overdue and incredibly energizing.

Attendance exceeded expectations, sessions were packed, and hallway conversations felt like a reunion of the search marketing community. You could feel how much people missed connecting face-to-face — debating AI’s impact on search, swapping tactics, comparing notes on Google’s latest changes, and simply enjoying each other’s company.

It reaffirmed what we’ve always believed: great things happen when smart marketers share a room. We’re already looking forward to doing it again in Boston, June 3-5.

Defining industry coverage of AI Overviews and the new era of search

This past year brought one of the most dramatic shifts in search since Search Engine Land launched in 2006. Whatever we end up calling this emerging practice, we focused on giving the industry the clarity, context, and reporting it needed.

Readers have told us again and again that Search Engine Land is their go-to source for cutting through the noise during a confusing and often chaotic time. We’re proud that our reporting, explainers, and expert analysis are helping shape the industry’s understanding of where search is headed next.

Subject Matter Expert (SME) program growth

This year brought a surge of new readers and renewed engagement from long-time practitioners. With so many shifts reshaping SEO and PPC – from AI to SERP experiments to advertiser updates – and the continued emergence of GEO, marketers turned to Search Engine Land in record numbers to stay informed.

Our contributors played a significant role in our growth. A huge thank you to all of our excellent SMEs for all the great content and insights you shared in 2025.

Looking ahead: What’s next for Search Engine Land

As we enter our 20th year, our commitment remains unchanged: provide the most trusted, useful coverage of search anywhere.

This year you can expect:

  • A fresh new website design.
  • Continued breaking news coverage across SEO, PPC, AI search, SERP features, and platform changes.
  • Even stronger analysis, guides, and explainers about how search is evolving.
  • SMX programming designed around the realities of AI search.
  • More expert perspectives, data, and clarity in a year that promises even more disruption.

Save the dates:

  • SMX Advanced: June 3-5
  • SMX Next: Nov. 18-19

There’s much more to come – and as always, our goal is to give you the insight and intelligence you need to do your best work.

A brief look back to where it all began

On Dec. 11, 2006, Search Engine Land officially launched with a simple idea: search was becoming not just a tool, but a place. A world. A community. A discipline shaping how people find information and how businesses connect with customers.

Nineteen years later, that world has grown in ways none of us could have imagined. But the core idea still holds:

Search Engine Land is a place to stay informed, to learn, to connect, and to understand the engines driving the modern web.

Thank you for 19 incredible years

On behalf of everyone at Search Engine Land and Semrush, thank you for reading, for sharing our stories, for asking hard questions, for supporting our mission, and for caring so deeply about all things search.

Here’s to the rest of 2025 – and to a successful, healthy, and insightful 2026.

From SEO to algorithmic education: The roadmap for long-term brand authority

11 December 2025 at 19:00

We’ve established the AI resume as the new C-suite-level asset that defines your brand at the bottom of the funnel, and we’ve mapped the strategic landscape that shows how it operates across explicit, implicit, and ambient research modes.

So, how do you build this asset to thrive in a three-part environment?

The answer is shifting from ranking in search results to the discipline of brand-focused algorithmic education – a multi-speed strategy aligned with the trio of technologies powering all modern recommendation engines.

The digital marketing ecosystem has been reshaped by AI assistive engines – platforms like Google AI, ChatGPT, and Microsoft Copilot that no longer provide links but deliver synthesized, conversational answers. 

Understanding how to influence these engines is the new frontier of our industry.

Conversations I had in 2020 with Gary Illyes at Google and with Frédéric Dubut, Nathan Chalmers, and Fabrice Canel at Bing revealed that these engines – and, by extension, modern AI – all rely on the same three foundational technologies.

I call this the algorithmic trinity. Mastering it is the key to your future success.

The algorithmic trinity: The new operating system for search

Stop thinking of Google or ChatGPT as monolithic black boxes. 

See them instead as dynamic blends of three connected technologies. 

Every AI assistive engine is built from a unique mix of these components.

Traditional search engines

This is the foundation – the vast, real-time index of the web. 

It provides the fresh, up-to-the-minute information AI needs to answer questions about current events or niche topics. It is the engine’s window to the “here and now.”

Knowledge graphs

This is the AI’s brain – a machine-readable encyclopedia of verified facts about the world. 

Google’s Knowledge Graph is at least 10,000 times bigger than Wikipedia. 

This is where your brand’s core identity is stored. It provides the factual certainty and context AI needs to avoid hallucinating.

Large language models (LLMs)

This is the AI’s voice – the conversational interface that generates human-like text. 

The LLM synthesizes information from the search index and the knowledge graph to create the final answer delivered to the user.

Your brand strategy must operate on three timelines

Each part of the algorithmic trinity learns and updates at a different speed, which means your optimization strategy must be layered. 

Short-term tactics and long-term goals need to align with the technical “digestion speed” of each component.

Short term (weeks): Win the search results

Influencing traditional search results is your fastest path to visibility. 

By creating helpful, valuable content and packaging it for Google with simple SEO techniques, you can begin appearing in AI-powered search results within weeks. 

While it doesn’t build deep trust, it puts your brand into the real-time consideration set that AI assistive engines use to construct answers for niche or time-sensitive queries. 

Think of it as getting your daily talking points and hyper-niche answers into the conversation.

Mid term (months): Build the factual foundation

Educating the Knowledge Graph is how you build your permanent, factual record, a process that typically takes three to six months. 

It requires establishing your entity home – the definitive source of truth about you – and creating consistent, corroborating information across your digital footprint. 

When Google’s foundational understanding of me was wrong (“the voice of Boowa the Blue Dog”), it cost me countless opportunities. 

This is the work that corrects those errors.

Long term (years): Become foundational data

The ultimate goal is inclusion in an LLM’s foundational training data. 

This is a long game, often nine months to a year or more. 

It means your brand’s narrative, expertise, and authority have been so consistently present across the web that you’re incorporated into the next major training cycle. 

Once you’re part of that foundational knowledge, the AI doesn’t need to “look you up.” It already knows you. 

This is the holy grail of algorithmic authority.

The unifying principle: Entity and authority

Whether you are aiming for a short-term win in a search result or a long-term legacy in an LLM, the underlying requirement is the same. The algorithm is always asking three questions: 

  • Who is this entity?
  • Can I trust them?
  • Are they an authority?

This is why your strategy must be built on the bedrock of entity SEO, N-E-E-A-T-T – my expansion of Google’s E-E-A-T framework that adds notability and transparency – and grounded in topical authority. 

Every signal you create across your digital ecosystem must work to answer those three questions with overwhelming clarity and proof.


The next frontiers: AI walled gardens and AI assistive agents

The game is already evolving. AI is moving beyond simply answering questions to acting on our behalf. 

I saw this firsthand when I used ChatGPT to help me buy guitar pedals.

Within 15 minutes, it took me from awareness to a confident decision and a final purchase. It acted as my personal shopping assistant.

This is the future of AI assistive agents. 

Soon, agents will autonomously book flights, schedule appointments, and purchase products. 

For an agent to execute a task on your behalf, its algorithmic confidence in a brand cannot be probabilistic – it must be absolute. 

The brand that has built the deepest foundation of understanding and credibility within the algorithmic trinity will be the one the agent chooses.

What’s the takeaway here?

In this new era, as the legendary football manager Peter Reid memorably put it, “to stand still is to move backwards.” 

Your digital strategy must evolve. Stop chasing blue links and start the work of brand-focused algorithmic education.

The key is understanding that the traditional web index is the fuel that feeds all three components of the algorithmic trinity. 

Your entire digital footprint must be organized to be frictionless for bots to discover, select, crawl, and render, digestible for them to confidently extract, index, and annotate, and irresistibly tasty for the algorithms that follow.

  • “Frictionless” is the technical SEO strategy: This is the infrastructure. It ensures the bot can discover, select, crawl, and render your content without technical barriers.
  • “Digestible” is the semantic SEO strategy: This is the structure. It uses semantic HTML, clear language, and structured data so the bot can extract content into dependable “chunks,” index it, and annotate it with near-certainty.
  • “Tasty” is the brand and authority strategy: This is the quality, substance, and context of the content – the part that proves why you are the best answer. It reflects your topical authority, your positive third-party corroboration, and your clear digital brand echo.
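To make the “digestible” layer concrete, here is a minimal sketch of schema.org JSON-LD markup for an entity home, built in Python. Every name, URL, and property value below is a hypothetical placeholder, not a prescription:

```python
import json

# A minimal sketch of the "digestible" layer: schema.org JSON-LD that
# tells bots, with near-certainty, who the entity behind a page is.
# All names and URLs here are hypothetical placeholders.
person_jsonld = {
    "@context": "https://schema.org",
    "@type": "Person",
    "name": "Jane Example",
    "url": "https://example.com/about",  # the "entity home"
    "jobTitle": "SEO Consultant",
    "sameAs": [  # corroborating profiles across the digital footprint
        "https://www.linkedin.com/in/jane-example",
        "https://twitter.com/janeexample",
    ],
    "knowsAbout": ["entity SEO", "knowledge graphs"],
}

# Wrap it in the script tag you'd place in the page <head>.
markup = f'<script type="application/ld+json">{json.dumps(person_jsonld, indent=2)}</script>'
print(markup)
```

The `sameAs` links are what corroborate the entity across the web, which is exactly the consistency the Knowledge Graph rewards over those three-to-six months.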

Importantly, the algorithm evaluates N-E-E-A-T-T on three levels:

  • The content: Is this piece of information helpful, accurate, and well-supported?
  • The author: Is the person who wrote this a demonstrable, credible expert on this topic?
  • The publisher: Is the platform publisher a recognized authority in this field?

Why the annotation layer determines who wins

This brings us to the most critical part of the process. 

You must understand the bot’s seven fundamental steps – discover, select, crawl, render, extract, index, and annotate – because this is the only path into the web index and the only way to reach the top of the pile for the algorithmic trinity.

As I learned from my conversations with Bing’s Canel, the annotation phase is essential. 

Algorithms do not select content by re-reading the content itself. They select it by reading the annotations – the “post-its” the bot created. 

They prioritize those annotations based on two factors: 

  • Their relevancy to the specific need (populating the Knowledge Graph, inclusion in training data, or answering a query).
  • The confidence score assigned to them.
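As a conceptual sketch only (not a description of Bing’s actual implementation), annotation selection can be thought of as ranking each “post-it” by relevancy multiplied by confidence:

```python
# Toy model of annotation selection. The chunks, relevance, and
# confidence values are invented for illustration.
annotations = [
    {"chunk": "Dr. Example is a cardiologist", "relevance": 0.9, "confidence": 0.95},
    {"chunk": "Page mentions sick notes",      "relevance": 0.9, "confidence": 0.40},
    {"chunk": "Footer copyright text",         "relevance": 0.1, "confidence": 0.99},
]

def score(annotation):
    # Both factors matter: a relevant chunk the bot isn't sure about
    # loses to a relevant chunk it extracted with high confidence.
    return annotation["relevance"] * annotation["confidence"]

best = max(annotations, key=score)
print(best["chunk"])  # the high-relevance, high-confidence chunk wins
```

The point of the toy: semantic SEO raises the confidence factor, and brand/authority work raises the odds your chunk is the one the algorithm reaches for.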

This is why the “digestible” and “tasty” parts of the strategy are non-negotiable.

  • The digestible (semantic SEO) work ensures the annotations are factually correct.
  • The tasty (brand and authority) work generates the confidence score that determines whether the algorithm chooses you.

To thrive in the explicit, implicit, and ambient landscape, you must execute this holistic strategy and become the trusted, top-of-algorithmic-mind answer.

The AI resume – especially one that holds up to a deep “rabbit hole” of explicit research – is not the goal. It is a byproduct of doing the work correctly.

The brands that succeed will be those that treat algorithms as powerful entities to be taught through a methodical curriculum. 

Start building that curriculum today, because the AI assistive agents of tomorrow are already studying.

How to use LLMs to humanize your content and scale your research

11 December 2025 at 17:00

One of the major things we talk about with large language models (LLMs) is content creation at scale, and it’s easy for that to become a crutch. 

We’re all time poor and looking for ways to make our lives easier – so what if you could use tools like Claude and ChatGPT to frame your processes in a way that humanizes your website work and eases your day, rather than taking the creativity out of it?

This article tackles how to:

  • Analyze customer feedback and questions at scale.
  • Automate getting detailed and unique information from subject matter experts.
  • Analyze competitors.

These are all tasks we could do manually, and sometimes still might, but they’re large-scale, data-based efforts that lend themselves well to at least some level of automation. 

And having this information will help ground you in the customer, or in the market, rather than creating your own echo chamber.

Analyzing customer feedback at scale

One of the fantastic features of LLMs is their ability to:

  • Process data at scale.
  • Find patterns.
  • Uncover trends that might otherwise take a human hours, days, or weeks. 

Unless you’re at a global enterprise, it’s unlikely you’d have a data team with that capability, so the next best thing is an LLM.

And for this particular opportunity, we’re looking at customer feedback – because who wants to read through 10,000 NPS surveys or free text feedback forms? 

Not me. Probably not you, either.

You could upload the raw data directly into the project knowledge and have your LLM of choice analyze the information within its own interface.

However, my preference is to upload all the raw data into BigQuery (or similar if you have another platform you prefer) and then work with your LLM to write relevant SQL queries to slice and analyze your raw data.

I do this for two reasons: 

  • It gives me a peek behind the curtain, offering me the opportunity to learn a bit of the base language (here, SQL) by osmosis.
  • It’s another barrier or failsafe for hallucinations.

When raw data is uploaded directly into an LLM and analysis questions are asked directly into the interface, I tend to trust the analysis less. 

It’s much more likely it could just be making stuff up. 

When you keep the raw data separate and work with the LLM to write queries that interrogate it, the results are far more likely to be accurate, yielding insights that help your business rather than sending you on a wild goose chase.

Practically, unless you’re dealing with terrifyingly large datasets, BigQuery is free (though to set up a project, you might need to add a credit card). 

And no need to fear SQL when you’re pair programming with an LLM, since it can write the full query for you. 

My workflow in this tends to be:

  • Ask the LLM for a SQL query.
  • Debug it and sanity-check the data.
  • Feed the query results back into the LLM.
  • Generate visualizations, either in the LLM or via SQL.
  • Rinse and repeat.
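As an illustration of that loop, here is a minimal Python sketch using SQLite as a free stand-in for BigQuery. The table, columns, and feedback rows are invented, and the query is the kind you would iterate on with an LLM:

```python
import sqlite3

# Stand-in for a BigQuery table of raw NPS feedback (rows are invented).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE feedback (nps INTEGER, comment TEXT)")
conn.executemany(
    "INSERT INTO feedback VALUES (?, ?)",
    [(9, "love the checkout"), (3, "shipping was slow"), (2, "slow delivery")],
)

# Step 1-2: paste in an LLM-suggested query, then debug and sanity-check it.
query = """
SELECT CASE WHEN nps >= 9 THEN 'promoter'
            WHEN nps >= 7 THEN 'passive'
            ELSE 'detractor' END AS segment,
       COUNT(*) AS n
FROM feedback
GROUP BY segment
ORDER BY n DESC
"""
rows = conn.execute(query).fetchall()

# Step 3: feed these results back into the LLM for interpretation.
print(rows)
```

Running the query yourself, rather than letting the LLM “analyze” pasted data, is the failsafe against hallucinated numbers described above.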

Dig deeper: 7 focus areas as AI transforms search and the customer journey in 2026


Automating subject matter expert interviews

It seems to be a common trait among subject matter experts that they’re time poor. 

They really don’t want to spend an hour talking with the marketing person about a new feature they’ve already discussed with the manufacturer for the last eight months. 

And who could blame them? They’ve probably talked it to death. 

And yet we still need that information, as marketing folk, to strategize how we present that feature on the website and give customers helpful detail that isn’t on the spec sheet.

So how do we get ahold of our experts? 

Create a custom GPT that acts as an interviewer. 

Fair warning: to get the most out of this process, you’ll want a unique version for each launch, product, or service you’re working on. 

It may not need to be quite that granular, but it may end up being that specific.

To do this, you’ll need at least a ChatGPT Plus subscription. 

Instructions will depend on your industry and the personality of your subject matter experts or sales team. 

They should include:

  • Role and tone: How the “interviewer” should come across.
  • Context: What you’re trying to learn and why.
  • Interview structure: How to open, topics, how to probe more deeply.
  • Pacing: Single question, wait for response, expanding questions.
  • Closing: How to wrap up and what to deliver at the end.
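For illustration, those instruction components could be assembled into a system prompt with a sketch like this. Every string is a placeholder you would tailor to your product and the personalities of your experts:

```python
# Hypothetical instruction components for the interviewer GPT.
# Each value is a placeholder, not recommended wording.
instructions = {
    "role_and_tone": "You are a friendly, efficient interviewer. Be concise.",
    "context": "We are learning about the new widget feature to write website copy.",
    "structure": "Open warmly, then cover benefits, use cases, and pitfalls; probe for specifics.",
    "pacing": "Ask exactly one question, wait for the answer, then ask a follow-up.",
    "closing": "Thank the expert and output a bullet summary of everything learned.",
}

# Join the components into one block for the custom GPT's Instructions field.
system_prompt = "\n\n".join(
    f"## {key.replace('_', ' ').title()}\n{text}"
    for key, text in instructions.items()
)
print(system_prompt)
```

Keeping the components in a dict like this makes it easy to clone and retune the interviewer for each new launch, product, or service.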

Once we do that, we’ll want to test it ourselves and pretend to be an SME. Then we refine the instructions from there.

This way, you’ll be able to reach your SMEs in the five minutes they have between calls. 

And you can use an LLM to extrapolate the major points, or even an article draft, from their answers.

Dig deeper: SEO personas for AI search: How to go beyond static profiles

Analyzing competitors for strategic insights

This one may be a bit sneaky and may require a bit of gray-area thinking. 

But there are a few things you can do with competitive data at scale that can help you understand the competitive landscape and your gaps within it, like:

  • If you were able to gather your competitors’ reviews, you could see themes such as benefits, values, common complaints, and weaknesses.
  • If you were able to gather their website copy, you could identify their positioning, implied audience, and any claims they may be making, as well as the industries they might be targeting, extrapolated through case studies.
    • With their website copy and support from the Wayback Machine, you could use an LLM to identify how their messaging has shifted over time.
    • Job postings could tell you what their strategic priorities are or where they may be looking to test.
    • Once you have their positioning, you can compare yourself against them. Where are you saying the same thing, and where are you differentiating?
  • If you were able to gather their social interactions and engagement, you could understand, again at scale, where they’re answering customer needs and where they might be falling down. What questions aren’t they able to answer?
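As a toy illustration of the review-theming idea (in practice you would hand thousands of reviews to an LLM), even a simple word-frequency pass surfaces recurring complaint themes. The reviews below are invented:

```python
import re
from collections import Counter

# Invented competitor reviews standing in for a scraped dataset.
reviews = [
    "Support was slow but the product is great",
    "Great product, terrible support response times",
    "Support never answered my ticket",
]

# Count meaningful words across all reviews, skipping a few stopwords.
stopwords = {"the", "is", "but", "was", "my", "a"}
words = Counter(
    w
    for review in reviews
    for w in re.findall(r"[a-z]+", review.lower())
    if w not in stopwords
)

print(words.most_common(3))  # "support" surfaces as the recurring theme
```

An LLM does the same thing with far more nuance (grouping synonyms, separating praise from complaints), but spot-checking against a crude count like this is a cheap sanity check on its output.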

Dig deeper: How to use competitive audits for AI SERP optimization

Scaling research without losing the human thread

Pair programming with an LLM to ground yourself in your customer with large data sets can be an endless opportunity to get actionable, specific information relatively quickly. 

These three opportunities are solid places to start, but they’re by no means the end. 

To extrapolate further, think about other data sources you own or have access to, like:

  • Sales call transcripts.
  • Google Search Console query data.
  • On-site search.
  • Heatmapping from user journey tools.

While it may be tempting to include Google Analytics or other analytics data in this, err on the side of caution and stick with qualitative or specifically customer-led data rather than quantitative data. 

Happy hunting!

Mentions, citations, and clicks: Your 2026 content strategy

9 December 2025 at 19:00

Generative systems like ChatGPT, Gemini, Claude, and Perplexity are quietly taking over the early parts of discovery – the “what should I know?” stage that once sent millions of people to your website. 

Visibility now isn’t just about who ranks. It’s about who gets referenced inside the models that guide those decisions.

The metrics we’ve lived by – impressions, sessions, CTR – still matter, but they no longer tell the full story. 

Mentions, citations, and structured visibility signals are becoming the new levers of trust and the path to revenue.

This article pulls together data from Siege Media’s two-year content performance study, Grow and Convert’s conversion findings, Seer Interactive’s AI Overview research, and what we’re seeing firsthand inside generative platforms. 

Together, they offer a clearer view of where visibility, engagement, and buying intent are actually moving as AI takes over more of the user journey – and has its eye on even more.

Content type popularity and engagement trends

In a robust study, the folks at Siege Media analyzed two years of performance across various industry blogs, covering more than 7.2 million sessions. It’s an impressive dataset, and kudos to them for sharing it publicly.

A disclaimer worth noting: the data focuses on blog content, so these trends may not map directly to other formats such as videos, documentation, or landing pages.

With that in mind, here’s a run-through of what they surfaced.

TL;DR of the Siege Media study

Pricing and cost content saw the strongest growth over the past two years, while top-of-funnel guides and “how-to” posts declined sharply.

They suggest that pricing pages gained ground at the expense of TOFU content. I interpret this differently. 

Pricing content didn’t simply replace TOFU because the relationship isn’t zero-sum. 

As user patterns evolve, buyers increasingly start with generative research, then move to high-intent queries like pricing or comparisons as they get closer to a decision.

That distinction – correlation vs. causation – matters a lot in understanding what’s really changing.

The data shows major growth in pricing pages, calculators, and comparison content. 

Meanwhile, guides and tutorials – the backbone of legacy SEO – took a sharp hit. 

Keep that drop in mind. We’ll circle back to it later.

Interestingly, every major content category saw an increase in engagement. That makes sense. 

As users complete more of their research inside generative engines, they reach your site later in the journey or for additional details, when they’re already motivated and ready to act.

If you’re a data-driven SEO, this might sound like a green light to focus exclusively on bottom-of-funnel content. 

Why bother with top-of-funnel “traffic” that doesn’t convert? 

Leave that for the suckers chasing GEO visibility metrics for vanity, right?

But of course, this is SEO, so I have to say it …

Did you expect me to say, “It depends?”

Here’s a question instead: when that high-intent user typed the query that surfaced a case study, pricing page, or comparison page, where did they first learn the brand existed?

Dig deeper: AI agents in SEO: What you need to know

Don’t forget the TOFU!

I can’t believe I’m saying this, but you’ll have to keep making TOFU content. 

You might need to make even more of it.

Let’s think about legacy SEO.

If we look back – waaaaay back – to 2023 and a study from Grow and Convert, we see that while there is far more TOFU traffic…

…it converts far worse.

Note: They only looked at one client, so take it with a grain of salt. However, the direction still aligns with other studies and our instincts.

This pattern also shows up across channels like PPC, which is why TOFU keywords are generally cheaper than BOFU.

The conversion rate is higher at the bottom of the funnel.

Now we’re seeing this shift carry over to generative engines, except that generative engines cover the TOFU journey almost entirely. 

Rather than clicking through a series of low-conversion content pieces as they move through the funnel, users stay inside the generative experience through TOFU and often MOFU, then click through or shift to another channel (search or direct) only when it’s time to convert.

For example, when I asked ChatGPT to help me plan a trip to the Outer Banks:

After a dozen back-and-forths planning a trip and deciding what to eat, I wanted to find out where to stay.

That journey took me through many steps and gave me multiple chances to encounter different brands and filtering or refinement options. 

I eventually landed on my BOFU prompt, “Some specific companies would be great.” 

From there, I might click the links or search for the company names on Google.

What matters about this journey – apart from the fact that my final query would be practically useless as insight in something like Search Console – is that throughout the TOFU and MOFU stages, I was seeing citations and encountering brands I would rely on later. 

Once I switched into conversion mode, I wanted help making decisions. That’s where I’m likely to click through to a few companies to find a rental.

So, when we read statistics like Pew’s finding that AI Overviews reduce CTR by upwards of 50%, and then consider what happens when AI Mode hits the browser, it’s easy to worry about where your traffic goes. Add to that ChatGPT’s 700 million weekly active users (and growing):

And according to their research on how users engage with it, we can see a clear TOFU hit and very little BOFU usage.

So, on top of the ~50% hit you may be taking from AI Overviews, 700+ million people are going to ChatGPT and other generative platforms for their top-of-funnel needs. 

I did exactly that above with my trip planning to the OBX.

Dig deeper: 5 B2B content types AI search engines love

Get the newsletter search marketers rely on.


But wait!

The good news: while that vacation rental company or blue widget manufacturer might not see me on their site when I'm figuring out what to do – or what a blue widget even is – I'm still going to take the same number of holidays and buy the same number of products I would have without AI Overviews, ChatGPT, Claude, or Perplexity.

Unless you're a publisher or monetize impressions, there's still the same amount of money to be made. 

It just might take fewer website visits to do it.

More about TOFU

Traffic at the bottom of the funnel is holding steady for now (more on that below), but the top of the funnel is being replaced quickly by generative conversations rather than visits. 

The question is whether being included in those conversations affects your CTR further down the funnel.

The folks at Seer Interactive found that organic CTR rose from 0.6% to 1.08% when a site was cited in AI Overviews. 

And while the traffic was far lower, ChatGPT had a conversion rate of 16% compared with Google organic’s 1.8%.

The conversion rate for bottom-of-funnel organic traffic – which we saw above – was 4.78%. 

Users who engage with generative engines clearly get further into their decision-making than users who reach BOFU queries through organic search. 
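
Putting those quoted rates side by side makes the gap concrete. A quick back-of-envelope sketch (the figures are the ones cited above):

```python
# Back-of-envelope comparison using the conversion rates quoted above.
rates = {
    "google_organic": 1.8,     # %
    "organic_bofu": 4.78,      # %
    "chatgpt_referral": 16.0,  # %
}
base = rates["google_organic"]
for channel, rate in rates.items():
    print(f"{channel}: {rate}% ({rate / base:.1f}x Google organic)")
```

By that rough math, ChatGPT referrals convert at nearly 9x the rate of average Google organic traffic, and more than 3x even BOFU organic.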

But why?

While I can’t be certain, I agree with Seer’s conclusion that AI-driven users are pre-sold during the TOFU stage. 

They’ve already encountered your brand and trust the system to interpret their needs. When it’s time to convert, they’re almost ready with their credit card.

Why bottom-funnel stability won’t last much longer

Above, I noted that “traffic at the bottom of the funnel is holding steady for now.”

It’s only fair to warn you that through 2026 and 2027, we’ll likely see this erode. 

The same number of people will still travel and still buy blue widgets. 

They just won’t book or buy them themselves. And at best, attribution will be even worse than it is today.

I spoke at SMX Advanced last spring about the rise of AI agents. 

I won't get into all the gory details here, but the CliffsNotes version is this:

Agents are AI systems with some autonomy that complete tasks humans otherwise would. 

They’re rising quickly – it’s the dominant topic for those of us working in AI – and that growth isn’t slowing anytime soon. You need to be ready.

A few concepts to familiarize yourself with, if you want to understand what’s coming, are:

  • AP2 (Agent Payments Protocol): A standard that allows agents to securely execute payments on your behalf. Think of it as a digital letter of credit that ensures the agent can only buy the specific “blue widget” you approved within the price limit you set. Before you say, “But I’d never send a machine to do a human’s job,” let me tell you, you will. And if you somehow prove me wrong individually out of spite, your customers will.
  • Gemini Computer Use Model API: A model with reasoning and image understanding that can navigate and engage with user interfaces like websites. While many agentic systems access data via APIs, this model (OpenAI has one too, as do others) lets the agent interact with visual interfaces to access information it normally couldn’t – navigating filters, logins, and more if given the power.
  • MCP (Model Context Protocol): An emerging standard acting as a universal USB port for AI apps. It lets agents safely connect to your internal data (like checking your calendar or reading your emails) to make purchasing decisions with full context and to work interactively with other agents. Hat tip to Ahrefs for building an awesome MCP server.
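
To make MCP slightly less abstract: under the hood it's JSON-RPC 2.0, and a tool invocation is just a structured message. A minimal sketch (the tool name and arguments are hypothetical, not from any real server):

```python
import json

# Sketch of an MCP-style tool call. MCP uses JSON-RPC 2.0 messages;
# the "check_calendar" tool and its arguments are hypothetical.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "check_calendar",             # hypothetical tool name
        "arguments": {"date": "2026-03-14"},  # hypothetical arguments
    },
}
print(json.dumps(request, indent=2))
```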

Dig deeper: How Model Context Protocol is shaping the future of AI and search marketing

Why do these protocols matter to a content strategist?

Because once AP2 and Computer Use hit critical mass, the click – that sacred metric we’ve optimized for two decades – changes function. 

It stops being a navigation step for a human exploring a website and becomes a transactional step for a machine executing a task.

If an agent uses Computer Use to navigate your pricing page and AP2 to pay for the subscription, the human user never sees your bottom-of-the-funnel content. 

So in that world, who – or rather, what – are you optimizing for?

This brings us back to the Siege Media data. 

Right now, pricing pages and calculators are winning because humans are using AI to research (TOFU and MOFU) and then manually visiting sites to convert (BOFU). 

But as agents take over execution, that manual visit disappears. The “traffic” to your pricing page may be bots verifying costs, not humans persuaded by your copy.

The 2026 strategy

This reality pushes value back up the funnel. 

If the agent handles the purchase, the human decision – the “moment of truth” – happens entirely inside the chat interface or agentic system during the research phase.

In this world, you don’t win by having the flashiest pricing page. 

You win by being the brand the LLM recommends when the user asks, “Who should I trust?”

Your strategy for 2026 requires a two-pronged approach:

  • For the agent (the execution): Ensure your BOFU content is technically flawless. Use clean schema, accessible APIs, and clear data structures so that when an agent arrives via MCP or Computer Use to execute a transaction, it encounters no friction.
  • For the human (the selection): Double down on TOFU. Focus on mentions and citations. You need to be the entity referenced in the generative answer so that users – and agents – trust you.
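
On the "clean schema" point: the kind of markup an agent can parse without friction looks something like this schema.org Product/Offer block, generated here in Python (the product name and price are placeholders):

```python
import json

# Minimal schema.org Product/Offer sketch: machine-readable pricing an
# agent can verify without scraping page copy. All values are placeholders.
product_schema = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Blue Widget Pro",  # placeholder
    "offers": {
        "@type": "Offer",
        "price": "49.00",
        "priceCurrency": "USD",
        "availability": "https://schema.org/InStock",
    },
}

# Embed the output in a <script type="application/ld+json"> tag.
print(json.dumps(product_schema, indent=2))
```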

As we move toward 2026 and then 2027 (it’ll be here sooner than you think), the “click” will become a commodity more often handled by machines. 

The mention, however, remains the domain of human trust. And in my opinion, that’s where your next battle for visibility will be fought.

Time to start – or hopefully keep – making the TOFU.

How to evaluate your SEO tools in 2026 – and avoid budget traps

9 December 2025 at 18:00

Evaluating SEO tools has never been more complicated. 

Costs keep rising, and promises for new AI features are everywhere.

This combination is hardly convincing when you need leadership to approve a new tool or expand the budget for an existing one. 

Your boss still expects SEO to show business impact – not how many keywords or prompts you can track, how fast you can optimize content, or what your visibility score is. 

That is exactly where most tools still fail miserably.

The landscape adds even more friction. 

Features are bundled into confusing packages and add-on models, and the number of solutions has grown sharply in the last 12 months. 

Teams can spend weeks or even months comparing platforms only to discover they still cannot demonstrate clear ROI or the tools are simply out of budget.

If this sounds familiar, keep reading.

This article outlines a practical framework for evaluating your SEO tool stack in 2026, focusing on:

  • Must-have features.
  • A faster way to compare multiple tools.
  • How to approach vendor conversations.

The new realities of SEO tooling in 2026

Before evaluating vendors, it helps to understand the forces reshaping the SEO tooling landscape – and why many platforms are struggling to keep pace.

Leadership wants MQLs, not rankings

Both traditional and modern SEO tools still center on keyword and prompt tracking and visibility metrics. These are useful, but they are not enough to justify the rising prices.

In 2026, teams need a way to connect searches to traffic and then to MQLs and revenue. 

Almost no tool provides that link, which makes securing larger budgets nearly impossible. 

(I say “almost” because I have not tested every platform, so the unicorn may exist somewhere.)

AI agents raise expectations

With AI platforms like ChatGPT, Claude, and Perplexity – along with the ability to build custom GPTs, Gems, and Agents – teams can automate a wide range of tasks. 

That includes everything from simple content rewriting and keyword clustering to more complex competitor analysis and multi-step workflows.

Because of this, SEO tools now need to explain why they are better than a well-trained AI agent. 

Many can’t. This means that during evaluation, you inevitably end up asking a simple question: do you spend the time training your own agent, or do you buy a ready-made one?

Small teams need automation that truly saves time

If you want real impact, your automation shouldn’t be cosmetic. 

You can’t rely on generic checklists or basic AI recommendations, yet many tools still provide exactly that – fast checklists with no context.

Without context, automation becomes noise. It generates generic insights that are not tailored to your company, product, or market, and those insights will not save time or drive results.

Teams need automation that removes repetitive work and delivers better insights while genuinely giving time back.

Dig deeper: 11 of the best free tools every SEO should know about

A note on technical SEO tools

Technical SEO tools remain the most stable part of the SEO stack. 

The vendor landscape has not shifted dramatically, and most major platforms are innovating at a similar pace. 

Because of this, they do not require the same level of reevaluation as newer AI-driven categories.

That said, budgeting for them may still become challenging. 

Leadership often assumes AI can solve every problem, but we know that without strong technical performance, SEO, content, and AI efforts can easily fail.

I will also make one bold prediction – we should be prepared to expect the unexpected in this category. 

These platforms can crawl almost any site at scale and extract structured information, which could make them some of the most important and powerful tools in the stack.

Many already pull data from GA and GSC, and integrating with CRM or other data platforms may be only a matter of time. 

I see that as a likely 2026 development.

What must-have features actually look like in 2026

To evaluate tools effectively, it helps to focus on the capabilities that drive real impact. These are the ones worth prioritizing in 2026.

Advanced data analysis and blended data capabilities

Data analysis will play a much bigger role. 

Tools that let you blend data from GA, GSC, Salesforce, and similar sources will move you closer to the Holy Grail of SEO – understanding whether a prompt or search eventually leads to an MQL or a closed-won deal. 

This will never be a perfect science, but even a solid guesstimation is more useful than another visibility chart.
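
What "blended" means in practice can be sketched in a few lines: join search-side clicks to CRM-side MQLs on a shared key, such as landing page. The rows and field names below are made up for illustration:

```python
# Toy "blended data" sketch: join GSC-style clicks with CRM MQL counts
# by landing page. Rows and field names are illustrative, not real data.
gsc_rows = [
    {"page": "/pricing", "clicks": 420},
    {"page": "/guide", "clicks": 1300},
]
crm_rows = [{"page": "/pricing", "mqls": 21}]

mqls_by_page = {row["page"]: row["mqls"] for row in crm_rows}
blended = [{**row, "mqls": mqls_by_page.get(row["page"], 0)} for row in gsc_rows]

for row in blended:
    print(row["page"], f"{row['mqls'] / row['clicks']:.1%} click-to-MQL")
```

Even this crude join surfaces the guesstimate described above: which pages are actually feeding the pipeline, and at what rough rate.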

Integration maturity is becoming a competitive differentiator. 

Disconnected data remains the biggest barrier between SEO work and business attribution.

SERP intelligence for keywords and prompts

Traditional SERP intelligence remains essential. You still need:

  • Topic research and insights for top-ranking pages.
  • Competitor analysis.
  • Content gap insights.
  • Technical issues and ways to fix them.

You also need AI SERP intelligence, which analyzes:

  • How AI tools answer specific prompts.
  • Which sources they cite.
  • Whether your brand appears and whether your competitors are also mentioned.

In an ideal world, these two groups should appear side by side and provide you with a 360-degree view of your performance.

Automation that delivers real time savings

Prioritize tools that:

  • Cluster automatically.
  • Detect anomalies.
  • Provide prioritized recommendations for improvements.
  • Turn data into easy-to-understand insights.

These are just a few examples of practical AI that can genuinely guide you and save you time.
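
As one illustration of the "detect anomalies" item, a naive version is just a deviation check on a traffic series (the daily click numbers are invented):

```python
import statistics

# Naive anomaly check: flag days whose clicks deviate from the mean by
# more than two sample standard deviations. Data is invented.
clicks = [120, 115, 130, 125, 118, 122, 40]

mean = statistics.mean(clicks)
stdev = statistics.stdev(clicks)
anomalies = [
    (day, value)
    for day, value in enumerate(clicks)
    if abs(value - mean) > 2 * stdev
]
print(anomalies)  # the day-6 crash stands out
```

Real tools use more robust methods than this, but the value is the same: surfacing the day that needs your attention without you eyeballing charts.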

Strong multilingual support

This applies to SEO experts who work with websites in languages other than English. 

Many tools are still heavily English-centric. Before choosing a tool, make sure the databases, SERP tracking, and AI insights work across languages, not just English.

Transparent pricing and clear feature lists

Hidden pricing, confusing bundles, and multiple add-ons make evaluation frustrating. 

Tools should communicate clearly:

  • Which features they have.
  • All related limitations.
  • Whether a feature is part of the standard plan or an add-on.
  • When something from the standard plan moves to an add-on. 

Many vendors change these things quietly, which makes the required investment difficult to calculate and hard to justify. 

Dig deeper: How to choose the best AI visibility tool

Plus, some features that might be overhyped

AI writing

If you can’t input detailed information about your brand, product, and persona, the content you produce will be the same as everyone else’s. 

Many tools already offer this and can make your content sound as if it were written by one of your writers. 

So the question is whether you need a specialized tool or if a custom GPT can do the job.

Prompt tracking 

It’s positioned as the new rank tracking, but it is like looking at one pixel of your monitor. 

It gives you only a tiny clue of the whole picture. 

AI answers change based on personalization and small differences in prompts, and the variations are endless.

Still, this tactic is helpful in:

  • Providing directional signals.
  • Helping you benchmark brand presence.
  • Highlighting recurring themes AI platforms use.
  • Allowing competitive analysis within a controlled sample.

Large keyword databases

They still matter for directional research, but are not a true competitive differentiator. 

Most modern tools have enough coverage to guide your strategy. 

The value now stems from the practical insights derived from the data.

How to compare 10 tools without wasting your time

Understanding features is only half the equation. 

The real challenge is knowing how to evaluate specialized tools and all-in-one platforms without losing your sanity or blocking your team for weeks. 

After going through this process for the tenth time, I’ve found an approach that works for me.

Step 1: Start with the pricing page

I always begin my evaluation on the pricing page. 

With one page, you can get a clear sense of: 

  • All features.
  • Limitations.
  • Which ones fall under add-ons.
  • The general structure of the pricing tiers. 

Even if you need a demo to get the exact price, the framework should still be relatively transparent.

Step 2: Test using your normal weekly work

No checklist will show you more than trying your regular BAU tasks with a couple of tools in parallel. 

This reveals:

  • How long each task takes.
  • What insights appear or disappear.
  • What feels smoother or clunkier.
  • How difficult the setup is, including whether the learning curve is steep.

I work in a small team, and a tool that takes many hours just to set up likely will not make my final list.

Not all evaluations can rely on BAU tasks. 

For example, when we researched tools for prompt and AI visibility tracking, we tested more than ten platforms. 

This capability did not exist in our stack, and at first, we had no idea what to check. 

In those cases, you need to define a small set of test scenarios from scratch and compare how each tool performs. 

Continue refining your scenarios, because each new evaluation will teach you something new.

Dig deeper: Want to improve rankings and traffic? Stop blindly following SEO tool recommendations

Step 3: Always get a free trial

Demos are polished. Reality often is not. 

If there is no option for a free trial, either walk away or, if the tool is not too expensive, pay for a month.


Step 4: Involve only the people who will actually use the tool

Always ask yourself who truly needs to be involved in the evaluation. 

For example, we are currently assessing a platform used not only by the SEO team but also by two other teams. 

We asked those teams for a brief summary of their requirements, but until we have a shortlist, there is no reason to involve them further or slow the process. 

And if your company has a heavy procurement or security review, involving too many people too early will slow everything down even more.

At the same time, involve the whole SEO team, because each person will see different strengths and weaknesses and everyone will rely on the tool.

Step 5: Evaluate results, not features

Many features sound like magic wands. 

In reality, the magic often works only sometimes, or it works but is very expensive. To understand what you truly need, always ask yourself:

  • Did the tool save time?
  • Did it surface insights that my current stack does not?
  • Could a custom GPT do this instead?
  • Does the price make sense for my team, and can I prove its ROI?

These questions turn the decision into a business conversation rather than a feature debate and help you prepare your “sales” pitch for your boss.

Step 6: Evaluate support quality, not just product features

Support has become one of the most overlooked parts of tool evaluation. 

Many platforms rely heavily on AI chat and automated replies, which can be extremely frustrating when you are dealing with a time-sensitive issue or have to explain your problem multiple times.

Support quality can significantly affect your team’s efficiency, especially in small teams with limited resources. 

When evaluating tools, check:

  • How easy it is to reach a human.
  • What response times look like.
  • Whether the vendor offers onboarding or ongoing guidance. 

A great product with weak support can quickly become a bottleneck.

Once you have a shortlist, the quality of your vendor conversations will determine how quickly you can move forward. 

And this may be the hardest part – especially for introverted SEO leads, myself included.

How to navigate vendor conversations

I’m practical, and I don’t like wasting anyone’s time. I have plenty of tasks waiting, so fluff conversations aren’t helpful. 

That’s why I start every vendor call by setting clear goals, limitations, a timeline, and next steps. 

Over time, I’ve learned that conversations run much more smoothly when I follow a few simple principles.

Be prepared for meetings

If you are evaluating a tool, come prepared to the demo. 

Ideally, you should have access to a free trial, tested the platform, and created a list of practical questions. 

Showing up unprepared is not a good sign, and that applies to both sides.

For example, I am always impressed when a vendor joins the conversation having already researched who we are, what we do, and who our competitors are. 

If you have spoken with the vendor before, directly ask what has changed since your last discussion.

Ask for competitor comparisons

When comparing a few tools, I always ask each vendor for a direct comparison. 

These comparisons will be biased, but collecting them from all sides can reveal insights I had not considered and give me ideas for specific things to test. 

Often, there is no reason to reinvent the wheel.

Ask how annual contracts influence pricing

Annual contracts reduce administrative work and give vendors room to negotiate, which can lead to better pricing. 

Many tools include this information on their pricing pages, and we have all seen it. 

Ask about any other nuances that might affect the final price – such as additional user seats or add-ons.

Don’t start from scratch with vendors you know

Often, the most effective approach is simply to say:

“This is our budget. This is what we need. Can you support this?”

This works especially well with vendors you have used before because both sides already know each other.

What to consider from a business perspective

Even if you select a tool, that does not mean you will receive the budget for it.

Proving ROI is especially difficult with SEO tools. But there are a few things you can do to increase your chances of getting a yes.

Present at least three alternatives in every request

This shows you have done your homework, not just picked the first thing you found. Present your leadership with:

  • The criteria you used in your evaluation.
  • Pros and cons of each tool.
  • The business case and why the capability is needed.
  • What happens if you do not buy the tool.

Providing this view builds trust in your ability to make decisions.

Avoid overselling

Tools improve efficiency, but they cannot guarantee outcomes – especially in SEO, GEO, or whatever you call it. 

Spend time explaining how quickly things are changing and how many factors are outside your control. Managing expectations will strengthen your team’s credibility.

But even with thorough evaluation and negotiation, we still face the same issue: the SEO tooling market has not caught up with what companies now expect. 

Let’s hope the future brings something closer to the clarity we see in Google Ads.

Dig deeper: How to master the enterprise SEO procurement process

The future of the SEO tool stack

The next generation of SEO tools must move beyond vanity metrics. 

Trained AI agents and custom GPTs can already automate much of the work.

In a landscape where companies want to reduce employee and operational costs, you need concrete business numbers to justify high tool prices. 

The platforms that can connect searches, traffic, and revenue will become the new premium category in SEO technology.

For now, most SEO teams will continue to hear “no” when requesting budgets because that connection does not yet exist. 

And the moment a tool finally solves this attribution problem, it will redefine the entire SEO technology market.

AI tools for PPC, AI search, and social campaigns: What’s worth using now

9 December 2025 at 17:00

In 2026 and well beyond, a core part of the performance marketer’s charter is learning to leverage AI to drive growth and efficiency. 

Anyone who isn’t actively evaluating new AI tools to improve or streamline their PPC work is doing their brand or clients a disservice.

The challenge is that keeping up with these tools has become almost a full-time job, which is why my agency has made AI a priority in our structured knowledge-sharing. 

As a team, we've homed in on favorites across creative, campaign management, and AI search measurement. 

This article breaks down key options in each category, with brief reviews and a callout of my current pick.

One overarching recommendation before we dive in: be cautious about signing long-term contracts for AI tools or platforms. 

At the pace things are moving, the tool that catches your eye in December could be an afterthought by April.

AI creative tools for paid social campaigns

There’s no shortage of tools that can generate creative assets, and each comes with benefits as well as the risks of producing AI slop. 

Regardless of the tool you choose, it must be thoroughly vetted and supported by a strong human-in-the-loop process to ensure quality, accuracy, and brand alignment.

Here’s a quick breakdown of the tools we’ve tested:

  • AdCreative.ai: Auto-generates images, video creatives, ad copy, and headlines in multiple sizes, with data-backed scoring for outputs.
  • Creatify: Particularly strong on video ads with multi-format support.
  • WASK: Combines AI creative generation with campaign optimization and competitor analysis.
  • Revid AI: Well-suited for story formats.
  • ChatGPT: Free and widely familiar, giving marketers an edge in effective prompting.

Our current tool of choice is AdCreative.ai. It’s easy to use and especially helpful for quickly brainstorming creative angles and variations to test. 

Like its competitors, it offers meaningful advantages, including:

  • Speed and scale that allow you to generate dozens or hundreds of variants in minutes to keep creative fresh and reduce ad fatigue.
  • Less reliance on external designers or editors for routine or templated outputs.
  • Rapid creative experimentation across images, copy, and layouts to find winning combinations faster.
  • Data-driven insights, such as creative scores or performance predictions, when available.

The usual caveats apply across all creative tools:

  • Build guardrails to avoid off-brand outputs by maintaining a strong voice guide, providing exemplar content, enforcing style rules and banned words, and ensuring human review at every step.
  • Watch for accuracy issues or hallucinations and include verification in your process, especially for technical claims, data, or legal copy. 

Dig deeper: How to get smarter with AI in PPC

AI campaign management and workflow tools for performance campaigns

There are plenty of workflow automation tools on the market, including long-standing options like Zapier, Workato, and Microsoft Power Automate. 

Our preferred choice, though, is n8n. Its agentic workflows and built-in connections across ad platforms, CRMs, and reporting tools have been invaluable in automating repetitive tasks.

Here are my agency’s primary use cases for n8n:

  • Lead management: Automatically enrich new leads from HubSpot or Salesforce with n8n’s Clearbit automation, then route them to the right rep or nurture sequence.
  • UTM cleanup: When a form fill or ad conversion comes in, automatically normalize UTM parameters before pushing them to your CRM. Some systems, like HubSpot, store values in fields such as “first URL seen” that aren’t parsed into UTM fields, so UTMs remain associated with the user but aren’t stored properly and require reconciliation.
  • Data reporting: Pull metrics from APIs, structure the data, and use AI to summarize insights. Reports can then be shared via Slack and email, or dropped into collaborative tools like Google Docs.
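
The UTM-cleanup step above, stripped of the workflow plumbing, boils down to logic like this (the field and parameter names are illustrative):

```python
from urllib.parse import parse_qs, urlsplit

# Sketch of UTM normalization: extract utm_* parameters from a
# "first URL seen" style field and lowercase keys and values before
# pushing them to the CRM. Names are illustrative.
def normalize_utms(first_url_seen: str) -> dict:
    params = parse_qs(urlsplit(first_url_seen).query)
    return {
        key.lower(): values[0].strip().lower()
        for key, values in params.items()
        if key.lower().startswith("utm_")
    }

print(normalize_utms(
    "https://example.com/pricing?UTM_Source=LinkedIn&utm_medium=Paid&gclid=abc"
))
```

In n8n, logic like this would typically live in a Code node between the trigger and the CRM update.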

As with any tool, n8n comes with caveats to keep in mind:

  • It requires some technical ability because it’s low-code, not no-code. You often need to understand APIs, JSON, and authentication, such as OAuth or API keys. Even basic automations may involve light logic or expressions. Integrations with less mainstream tools can require scripting.
  • You need a deliberate setup to maintain security. There’s no built-in role-based access control in all configurations unless you use n8n Cloud Enterprise. Misconfigured webhooks can expose data if not handled properly.
  • Its ad platform integrations aren’t as broad as those of some competitors. For example, it doesn’t include LinkedIn Ads, Reddit Ads, or TikTok Ads. These can be added via direct API calls, but that takes more manual work.

Dig deeper: Top AI tools and tactics you should be using in PPC


AI search visibility measurement tools

Most SEOs already have preferred platforms for measurement and insights – Semrush, Moz, SE Ranking, and others. 

While many now offer reports on brand visibility in AI search results from ChatGPT, Perplexity, Gemini, and similar tools, these features are add-ons to products built for traditional SEO.

To track how our brands show up in AI search results, we use Profound. 

While other purpose-built tools exist, we’ve found that it offers differentiated persona-level and competitor-level analysis and ties its reporting to strategic levers like content and PR or sentiment, making it clear how to act on the data.

These platforms can provide near real-time insights such as:

  • Performance benchmarks that show AI visibility against competitors to highlight strengths and weaknesses.
  • Content and messaging intel, including the language AI uses to describe brands and their solutions, which can inform thought leadership and messaging refinement.
  • Signals that show whether your efforts are improving the consistency and favorability of brand mentions in AI answers.
  • Trends illustrating how generative AI is reshaping search results and user behavior.
  • Insights beyond linear keyword rankings that reveal the narratives AI models generate about your company, competitors, and industry.
  • Gaps and opportunities to address to influence how your brand appears in AI answers.

No matter which tool you choose, the key is to adopt one quickly. 

The more data you gather on rapidly evolving AI search trends, the more agile you can be in adjusting your strategy to capture the growing share of users turning to AI tools during their purchase journey.

Dig deeper: Scaling PPC with AI automation: Scripts, data, and custom tools

What remains true as the AI toolset keeps shifting

I like to think most of my content for this publication ages well, but I’m not expecting this one to follow suit. 

Anyone reading it a few months after it runs will likely see it as more of a time capsule than a set of current recommendations – and that’s fine.

What does feel evergreen is the need to:

  • Monitor the AI landscape.
  • Aggressively test new tools and features.
  • Build or maintain a strong knowledge-sharing function across your team. 

We’re well past head-in-the-sand territory with AI in performance marketing, yet there’s still room for differentiation among teams that move quickly, test strategically, and pivot together as needed.

Dig deeper: AI agents in PPC: What to know and build today
