
The latest jobs in search marketing

20 February 2026 at 20:23
Search marketing jobs

Looking to take the next step in your search marketing career?

Below, you will find the latest SEO, PPC, and digital marketing jobs at brands and agencies. We also include positions from previous weeks that are still open.

Newest SEO Jobs

(Provided to Search Engine Land by SEOjobs.com)

  • Overview You will be working with an internal team that functions as an SEO helpdesk for a large international client in the hospitality sector with over 120 locations worldwide. The helpdesk receives a wide range of SEO-related requests from internal senior members and locations via email daily, and it is their responsibility to triage, process, […]
  • Job Description Salary: Up to 80K Position: Senior SEO Analyst Company: Mason Interactive Job Overview: As a Senior SEO Analyst at Mason Interactive, you will take a leading role in optimizing search engine performance for our diverse clientele. This position requires an individual who can combine deep technical SEO knowledge with creative problem-solving to enhance […]
  • Company Description Thought Industries powers the Business of Value – enabling enterprises to unlock growth across the customer lifecycle. From our Boston headquarters, we help organizations drive measurable impact, maximize customer lifetime value, and fuel innovation through our leading enterprise solutions. Unlock growth with us – where your potential meets boundless possibilities. Job Description We’re […]
  • Job Description Marketing Strategist (SEO, ORM & Marketing Automation, Mortgage Industry) Location: Hybrid – Irvine, CA Job Type: Full-Time   Mutual of Omaha is a Fortune 300 Company. Mutual of Omaha Mortgage is inspired by hometown values and a commitment to being responsible and caring for each other. We exist for the benefit of our customers […]
  • Nectiv is an organic growth consultancy focused on technical SEO, content strategy, information architecture, and optimization for AI answer engines (AEO/GEO). We work with SaaS companies, marketplaces, and enterprise brands to solve complex organic search challenges — from site architecture and crawl optimization to structured content systems and AI search visibility.   We prefer systemic […]
  • Job Title: Off-Page SEO Specialist Experience: 5+ Years Schedule: 8 AM to 5 PM CST Compensation: $15/hour base minimum (based on experience) Location: Fully Remote Job Type: Full-Time Contract Position Job Overview We’re looking for a seasoned Off-Page SEO Specialist to own and scale our off-page SEO operations across multiple HVAC, plumbing, and electrical service […]
  • Join the Tilt team At Tilt (formerly Empower), we see a side of people that traditional lenders miss. Our mobile-first products and machine learning-powered credit models look beyond outdated credit scores, using over 250 real-time financial signals to recognize real potential. Named among the next billion-dollar startups, we’re not just changing how people access financial […]
  • Nectiv is an SEO and AI search agency focused on helping brands grow their visibility across search engines and AI-driven discovery platforms. We work with fast-growing SaaS companies, marketplaces, and enterprise brands to solve complex organic search challenges — from technical architecture and crawl optimization to content strategy and search performance analysis. The Role We’re […]
  • The Role We’re seeking a Senior Content Marketing Manager dedicated exclusively to Southwest Airlines, helping one of America’s most iconic companies redefine travel inspiration and organic discoverability. This role will act as the content lead within an embedded, cross-functional Earned & Owned team, working onsite at Southwest Airlines’ office in Dallas, TX for key meetings […]
  • About the Role Zarifa USA is adding a flexible, resourceful teammate to help with everything from content writing and design work to website tweaks, email campaigns, and customer support. You don’t need to know it all on day one—bring curiosity, initiative, and solid Photoshop skills, and we’ll provide structured, on‐the‐job training so you can grow […]

Newest PPC and paid media jobs

(Provided to Search Engine Land by PPCjobs.com)

  • Job Description DEL Records, Inc. is on the lookout for a dynamic Social Media Manager to elevate our Social Media team! If you’re passionate about music, events, and digital storytelling, we want you on board. Join us and play a pivotal role in connecting fans with their favorite artists and events. Responsibilities: Develop and implement […]
  • About Us AirSculpt® is a next-generation body contouring treatment designed to optimize both comfort and precision, available exclusively at AirSculpt offices. The minimally invasive procedure removes fat and tightens skin, while sculpting targeted areas of the body, allowing for quick healing with minimal bruising, tighter skin, and precise results. More than 75,000 AirSculpt cases have […]
  • Job Description Drive our digital strategy across all performance marketing campaigns, channels and touch-points. Champion performance thinking throughout the organization to establish a centralized hub experience within the agency, then deploy precise, coordinated, and measurable engagement strategies that deliver the right message to the right person through the right channel. Help understand what […]
  • Job Description This is a remote position. We are looking for a strategic and results-oriented Performance Marketing Manager to lead and optimize our performance marketing campaigns. This role requires a strong command of B2C paid social advertising, creative strategy, team leadership, and conversion optimization. The ideal candidate is a data-driven marketing expert with proven experience […]
  • Job Description Position Title: Performance Marketing Manager Reports To: SVP of Marketing Location: Remote: Some Travel Required Compensation: $85,000-$105,000 Annual Salary Position Overview: The Performance Marketing Manager owns demand quality and performance channels. Accountable for Pay Per Click governance, reviews performance, and OPP-level visibility. This role is a primary owner of the demand pillar of […]

Other roles you may be interested in

Meta Ads Manager, Cardone Ventures (Scottsdale, AZ)

  • Salary: $85,000 – $100,000
  • Develop, execute, and optimize cutting-edge digital campaigns from conception to launch
  • Provide ongoing actionable insights into campaign performance to relevant stakeholders

Senior Manager of Marketing (Paid, SEO, Affiliate), What Goes Around Comes Around (Jersey City, NJ)

  • Salary: $125,000
  • Develop and execute paid media strategies across channels (Google Ads, social media, display, retargeting)
  • Lead organic search strategy to improve rankings, traffic, and conversions

Search Engine Optimization Manager, Method Recruiting, a 3x Inc. 5000 company (Remote)

  • Salary: $95,000 – $105,000
  • Lead planning and execution of SEO and AEO initiatives across assigned digital properties
  • Conduct content audits to identify optimization, refresh, pruning, and gap opportunities

Senior Manager, SEO, Kennison & Associates (Hybrid, Boston MA)

  • Salary: $150,000 – $180,000
  • You’ll own high-visibility SEO and AI initiatives, architect strategies that drive explosive organic and social visibility, and push the boundaries of what’s possible with search-powered performance.
  • Every day, you’ll experiment, analyze, and optimize, elevating rankings, boosting conversions across the customer journey, and delivering insights that influence decisions at the highest level.

Backlink Manager (SEO Agency), SEOforEcommerce (Remote)

  • Salary: $60,000
  • Managing and overseeing backlink production across multiple campaigns
  • Reviewing and approving backlink opportunities (guest posts, niche edits, outreach-based links, etc.)

PPC Specialist, BrixxMedia (Remote)

  • Salary: $80,000 – $115,000
  • Manage day-to-day PPC execution, including campaign builds, bid strategies, budgets, and creative rotation across platforms
  • Develop and refine audience strategies, remarketing programs, and lookalike segments to maximize efficiency and scale

Performance Marketing Manager, Mailgun, Sinch (Remote)

  • Salary: $100,000 – $125,000
  • Manage and optimize paid campaigns across various channels, including YouTube, Google Ads, Meta, Display, LinkedIn, and Connected TV (CTV).
  • Drive scalable growth through continuous testing and optimization while maintaining efficiency targets (CAC, ROAS, LTV)

SEO and AI Search Optimization Manager, Big Think Capital (New York)

  • Salary: $100,000
  • Own and execute Big Think Capital’s SEO and AI search (GEO) strategy
  • Optimize website architecture, on-page SEO, and technical SEO

Senior Copywriter, Viking (Hybrid, Los Angeles Metropolitan Area)

  • Salary: $95,000 – $110,000
  • Editorial features and travel articles for onboard magazines
  • Seasonal web campaigns and themed microsites

Paid Search Marketing Manager, LawnStarter (Remote)

  • Salary: $90,000 – $125,000
  • Manage and optimize large-scale, complex SEM campaigns across Google Ads, Bing Ads, Meta Ads and other search platforms
  • Activate and optimize Local Services Ads (LSAs) efficiently at scale

Senior Manager, SEO, Turo (Hybrid, San Francisco, CA)

  • Salary: $168,000 – $210,000
  • Define and execute the SEO strategy across technical SEO, content SEO, on-page optimization, internal linking, and authority building.
  • Own business and operations KPIs for organic growth and translate them into clear quarterly plans.

Search Engine Optimization Manager, NoGood (Remote)

  • Salary: $80,000 – $100,000
  • Act as the primary strategic lead for a portfolio of enterprise and scale-up clients.
  • Build and execute GEO/AEO strategies that maximize brand visibility across LLMs and AI search surfaces.

Note: We update this post weekly. So make sure to bookmark this page and check back.

Merchant Center flags feeds disruption

20 February 2026 at 20:04
Google Shopping Ads - Google Ads

Google Merchant Center is investigating an issue affecting Feeds, according to its public status dashboard.

The details:

  • Incident began: Feb. 4, 2026 at 14:00 UTC
  • Latest update (Feb. 20, 14:43 UTC): “We’re investigating reports of an issue with Feeds. We will provide more information shortly.”
  • Status: Service disruption

The alert appears on the official Merchant Center Status Dashboard, which tracks availability across Merchant Center services.

Why we care. Feeds power product listings across Shopping ads and free listings. Any disruption can impact product approvals, updates, or visibility in campaigns tied to retail inventory.

What to watch. Google has not yet shared scope, root cause, or estimated time to resolution. Advertisers experiencing feed processing delays or disapprovals may want to monitor the dashboard closely.

Bottom line. When feeds stall, ecommerce performance can follow. Retail advertisers should keep an eye on diagnostics and campaign delivery until more details emerge.

Dig Deeper. Merchant Center Status Dashboard

What’s next for PPC: AI, visual creative and new ad surfaces

20 February 2026 at 20:00

PPC is evolving beyond traditional search. Those who adopt new ad formats, smarter creative strategies, and the right use of AI will gain a competitive edge.

Ginny Marvin, Google’s Ads Product Liaison, and Navah Hopkins, Microsoft’s Product Liaison, joined me for a conversation about what’s next for PPC. Here’s a recap of this special keynote from SMX Next.

Emerging ad formats and channels

When discussing what lies beyond search, both speakers expressed excitement about AI-driven ad formats.

Hopkins highlighted Microsoft’s innovation in AI-first formats, especially showroom ads:

  • “Showroom ads allow users to engage and interact with a showroom where the advertiser provides the content, and Copilot provides the brand security.”

She also pointed to gaming as a major emerging ad channel. As a gamer, she noted that many users “justifiably hate the ads that serve on gaming surfaces,” but suggested more immersive, intelligent formats are coming.

Marvin agreed that the landscape is shifting, driven by conversational AI and visual discovery tools. These changes “are redefining intent” and making conversion journeys “far more dynamic” than the traditional keyword-to-click model.

Both stressed that PPC marketers must prepare for a landscape where traditional search is only one of many ad surfaces.

Importance of visual content

A major theme throughout the discussion was the growing importance of visual content. Hopkins summed up the shift by saying:

  • “Most people are visual learners… visual content belongs in every stage of the funnel.”

She urged performance marketers to rethink the assumption that visuals belong only at the top of the funnel or in remarketing.

Marvin added that leading with brand-forward visuals is becoming essential, as creatives now play “a much more important role in how you tell your stories, how you drive discovery, and how you drive action.” Marketers who understand their brand’s positioning and reflect it consistently in their creative libraries will thrive across emerging channels.

Both noted that AI-driven ad platforms increasingly rely on strong creative libraries to assemble the right message at the right moment.

Myths about AI and creative

The conversation also addressed misconceptions about AI-generated creative.

Hopkins cautioned against overrelying on AI to build entire creative libraries, emphasizing:

  • “AI is not the replacement for our creativity… you should not be delegating full stop your creative to AI.”

Instead, she said marketers should focus on how AI can amplify their work. Campaigns must perform even when only a single asset appears, such as a headline or image. Creatives need to “stand alone” and clearly communicate the brand.

Marvin reinforced the need for a broader range of visual assets than most advertisers maintain. “You probably need more assets than you currently have,” she noted, especially as cross-channel campaigns like Demand Gen depend on testing multiple combinations.

Both positioned AI as an enabler, not a replacement, stressing that human creativity drives differentiation.

Strategic use of assets

Both liaisons emphasized the need for a diverse, adaptable asset library that works across formats and surfaces.

Marvin explained that AI systems now evaluate creative performance individually:

  • “Underperforming assets should be swapped out, and high-performing niche assets can tell you something about your audience.”

Hopkins added that distinct creative assets reduce what she called “AI chaos moments,” when the system struggles because assets overlap too closely. Distinctiveness—visual and textual—helps systems identify which combinations perform best.

Both urged marketers to rethink creative planning, treating assets as both brand-building and performance-driving rather than separating the two.

Partnering with AI for measurement

The conversation concluded with a deep dive into what it means to measure performance in an AI-first world.

Hopkins listed the key strategic inputs AI relies on:

  • “First-party data, creative assets, ad copy, website content, goals and targets, and budget. These are the things AI uses to optimize towards your business outcomes.”

She also highlighted that incrementality — understanding the true added value of ads — is becoming more important than ever.

Marvin acknowledged the challenges marketers face in letting go of old control patterns, especially as measurement shifts from granular data to privacy-protective models. However, she stressed that modern analytics still provide meaningful signals, just in a different form:

  • “It’s not about individual queries anymore… it’s about understanding the themes that matter to your audience.”

Both encouraged marketers to think more strategically and holistically in their analysis rather than getting stuck in granular metrics.


How to vibe-code an SEO tool without losing control of your LLM

20 February 2026 at 19:00

We all use LLMs daily. Most of us use them at work. Many of us use them heavily.

People in tech — yes, you — use LLMs at twice the rate of the general population. Many of us spend more than a full day each week using them — yes, me.

LLM usage amount

Even those of us who rely on LLMs regularly get frustrated when they don’t respond the way we want.

Here’s how to communicate with LLMs when you’re vibe coding. The same lessons apply if you find yourself in drawn-out “conversations” with an LLM UI like ChatGPT while trying to get real work done.

Choose your vibe-coding environment

Vibe coding is building software with AI assistants. You describe what you want, the model generates the code, and you decide whether it matches your intent.

That’s the idea. In practice, it’s often messier.

The first thing you’ll need to decide is which code editor to work in. This is where you’ll communicate with the LLM, generate code, view it, and run it.

I’m a big fan of Cursor and highly recommend it. I started on the free Hobby plan, and that’s more than enough for what we’re doing here. 

Fair warning – it took me about two months to move up two tiers and start paying for the Pro+ account. As I mentioned above, I’m firmly in the “over a day a week of LLM use” camp, and I’d welcome the company.

 A few options are:

  • Cursor: This is the one I use, as do most vibe coders. It has an awesome interface and is easily customized.
  • Windsurf: The main alternative to Cursor. It can run its own terminal commands and self-correct without hand-holding.
  • Google Antigravity: Unlike Cursor, it moves away from the file-tree view and focuses on letting you direct a fleet of agents to build and test features autonomously.

In my screenshots, I’ll be using Cursor, but the principles apply to any of them. They even apply when you’re simply communicating with LLMs in depth.


Why prompting alone isn’t enough

You might wonder why you need a tutorial at all. You tell the LLM what you want, and it builds it, right? That may work for a meta description or a superhero SEO image of yourself, but it won’t cut it for anything moderately complex — let alone a tool or agentic system spanning multiple files.

One key concept to understand is the context window. That’s the amount of content an LLM can hold in memory. It’s typically split across input and output tokens.

GPT-5.2 offers a 400,000-token context window, and Gemini 3 Pro comes in at 1 million. That’s roughly 50,000 lines of code or 1,500 pages of text.

The challenge isn’t just hitting the limit, especially with large codebases. It’s that the more content you stuff into the window, the worse models get at retrieving what’s inside it.

Attention mechanisms tend to favor the beginning and end of the window, not the middle. In general, the less cluttered the window, the better the model can focus on what matters.

If you want a deeper dive into context windows, Matt Pocock has a great YouTube video that explains it clearly. For now, it’s enough to understand placement and the cost of being verbose.
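
If you want to gauge how much of the window a given prompt will consume, you can count tokens locally before pasting. A minimal sketch using OpenAI’s tiktoken library; the file name is a stand-in, and which encoding a newer model actually uses is an assumption worth verifying:

import tiktoken  # pip install tiktoken

# o200k_base is the encoding used by recent OpenAI models; newer models
# may ship their own, so treat this choice as an assumption.
enc = tiktoken.get_encoding("o200k_base")

context = open("my_long_prompt.txt").read()  # whatever you plan to paste
print(f"{len(enc.encode(context)):,} tokens")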

A few other tips:

  • One team, one dream. Break your project into logical stages, as we’ll do below, and clear the LLM’s memory between them.
  • Do your own research. You don’t need to become an expert in every implementation detail, but you should understand the directional options for how your project could be built. You’ll see why shortly.
  • When troubleshooting, trust but verify. Have the model explain what’s happening, review it carefully, and double-check critical details in another browser window.

Dig deeper: How vibe coding is changing search marketing workflows

Tutorial: Let’s vibe-code an AI Overview question extraction system

How do you create content that appears prominently in an AI Overview? Answer the questions the overview answers.

In this tutorial, we’ll build a tool that extracts questions from AI Overviews and stores them for later use. While I hope you find this use case valuable, the real goal is to walk through the stages of properly vibe coding a system. This isn’t a shortcut to winning an AI Overview spot, though it may help.

Step 1: Planning

Before you open Cursor — or your tool of choice — get clear on what you want to accomplish and what resources you’ll need. Think through your approach and what it’ll take to execute.

While I noted not to launch Cursor yet, this is a fine time to use a traditional search engine or a generative AI.

I tend to start with a simple sentence or two in Gemini or ChatGPT describing what I’m trying to accomplish, along with a list of the steps I think the system might need to go through. It’s OK to be wrong here. We’re not building anything yet.

For example, in this case, I might write:

I’m an SEO, and I want to use the current AI Overviews displayed by Google to inspire the content our authors will write. The goal is to extract the implied questions answered in the AI Overview. Steps might include:

1 – Select a query you want to rank for.
2 – Conduct a search and extract the AI Overview.
3 – Use an LLM to extract the implied questions answered in the AI Overview.
4 – Write the questions to a saveable location.

With this in hand, you can head to your LLM of choice. I prefer Gemini for UI chats, but any modern model with solid reasoning capabilities should work.

Start a new chat. Let the system know you’ll be building a project in Cursor and want to brainstorm ideas. Then paste in the planning prompt.

The system will immediately provide feedback, but not all of it will be good or in scope. For example, one response suggested tracking the AI Overview over time and running it in its own UI. That’s beyond what we’re doing here, though it may be worth noting.

It’s also worth noting that models don’t always suggest the simplest path. In one case, it proposed a complex method for extracting AI Overviews that would likely trigger Google’s bot detection. This is where we go back to the list we created above.

Step 1 will be easy. We just need a field to enter keywords.

Step 2 could use some refinement. What’s the most straightforward and reliable way to capture the content in an AI Overview? Let’s ask Gemini.

Reverse-engineering Google AI Overviews

I’m already familiar with these services and frequently use SerpAPI, so I’ll choose that one for this project. The first time I did this, I reviewed options, compared pricing, and asked a few peers. Making the wrong choice early can be costly.
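
To make Step 2 concrete, here’s a minimal sketch using SerpAPI’s Python client. The ai_overview field name follows SerpAPI’s documentation at the time of writing; verify it against the current response format:

import os
from serpapi import GoogleSearch  # pip install google-search-results

def fetch_ai_overview(query: str):
    results = GoogleSearch({
        "engine": "google",
        "q": query,
        "api_key": os.environ["SERPAPI_API_KEY"],
    }).get_dict()
    # Not every query triggers an AI Overview, so handle the miss explicitly.
    # Some responses return only a page_token that needs a follow-up request;
    # check SerpAPI's AI Overview docs for the current contract.
    return results.get("ai_overview")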

Step 3 also needs a closer look. Which LLMs are best for question extraction?

Which LLMs are best for question extraction

That said, I don’t trust an LLM blindly, and for good reason. In one response, Claude 4.6 Opus, which had recently been released, wasn’t even considered.

After a couple of back-and-forth prompts, I told Gemini:

  • “Now, be critical of your suggestions and the benchmarks you’ve selected.”
  • “The text will be short, so cost isn’t an issue.”

We then came around to:

AI Mode - comparisons

For this project, we’re going with GPT-5.2, since you likely have API access or, at the very least, an OpenAI account, which makes setup easy. Call it a hunch. I won’t add an LLM judge in this tutorial, but in the real world, I strongly recommend it.

Now that we’ve done the back-and-forth, we have more clarity on what we need. Let’s refine the outline:

I’m an SEO, and I want to use the current AI Overviews displayed by Google to inspire the content our authors will write. The idea is to extract the implied questions answered in the AI Overview. Steps might include:

1 – Select a query you want to rank for.
2 – Conduct a search and extract the AI Overview using SerpAPI.
3 – Use GPT-5.2 Thinking to extract the implied questions answered in the AI Overview.
4 – Write the query, AI Overview, and questions to W&B Weave.
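
To make Step 3 concrete, here’s a minimal sketch of the extraction call using the OpenAI Python SDK. The model name is a placeholder for whichever model you settle on, and the prompt wording is mine, not a recommendation:

import json
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def extract_questions(ai_overview_text: str, model: str = "gpt-5.2") -> list[str]:
    # "gpt-5.2" is a placeholder model name, not a confirmed API identifier.
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system",
             "content": "List the implied questions this text answers as a JSON array of strings."},
            {"role": "user", "content": ai_overview_text},
        ],
    )
    # Validate this in practice; models sometimes wrap JSON in prose.
    return json.loads(response.choices[0].message.content)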

Before we move on, make sure you have access to the three services you’ll need for this:

  • SerpAPI: The free plan will work.
  • OpenAI API: You’ll need to pay for this one, but $5 will go a long way for this use case. Think months. 
  • Weights & Biases: The free plan will work. (Disclosure: I’m the head of SEO at Weights & Biases.)

Now let’s move on to Cursor. I’ll assume you have it installed and a project set up. It’s quick, easy, and free. 

The screenshots that follow reflect my preferred layout in Editor Mode.

Cursor - Editor Mode

Step 2: Set the groundwork

If you haven’t used Cursor before, you’re in for a treat. One of its strengths is access to a range of models. You can choose the one that fits your needs or pick the “best” option based on leaderboards.

I tend to gravitate toward Gemini 3 Pro and Claude 4.6 Opus.

Cursor - LLM options

If you don’t have access to all of them, you can select the non-thinking models for this project. We also want to start in Plan mode.

Cursor - Plan mode

Let’s begin with the project prompt we defined above.

Cursor - project prompt

Note: You may be asked whether you want to allow Cursor to run queries on your behalf. You’ll want to allow that.

Cursor - project integrations

Now it’s time to go back and forth to refine the plan that the model developed from our initial prompt. Because this is a fairly straightforward task, you might think we could jump straight into building it. You’d be wrong, both for this tutorial and in practice: humans like me don’t always communicate clearly or fully convey our intent. This planning stage is where we clarify that.

When I enter the instructions into the Cursor chat in Planning mode, using Sonnet 4.5, it kicks off a discussion. One of the great things about this stage is that the model often surfaces angles I hadn’t considered at the outset. Below are my replies, where I answer each question with the applicable letter. You can add context after the letter if needed.

An example of the model suggesting angles I hadn’t considered appears in question 4 above. It may be helpful to pass along the context snippets. I opted for B in this case. There are obvious cases for C, but for speed and token efficiency, I retrieve as little as possible. Intent and related considerations are outside the scope of this article and would add complexity, as they’d require a judge.

The system will output a plan. Read it carefully, as you’ll almost certainly catch issues in how it interpreted your instructions. Here’s one example.

Cursor - model selection

I’m told there is no GPT-5.2 Thinking. There is, and it’s noted in the announcement. I have the system double-check a few details I want to confirm, but otherwise, the plan looks good. Claude also noted the format the system will output to the screen, which is a nice touch and something I hadn’t specified. That’s what partners are for.

Cursor - output format

Finally, I always ask the model to think through edge cases where the system might fail. I did, and it returned a list. From that list, I selected the cases I wanted addressed. Others, like what to do if an AI Overview exceeds the context window, are so unlikely that I didn’t bother.

A few final tweaks addressed those items, along with one I added myself: what happens if there is no AI Overview?

Cursor - what happens if there is no AI Overview?

I have to give credit to Tarun Jain, whom I mentioned above, for this next step. I used to copy the outline manually, but he suggested simply asking the model to generate a file with the plan. So let’s direct it to create a markdown file, plan.md, with the following instruction:

Build a plan.md including the reviewed plan and plan of action for the implementation. 

Remember the context window issue I discussed above? If you start building from your current state in Cursor, the initial directives may end up in the middle of the window, where they’re least accessible, since your project brainstorming occupies the beginning.

To get around this, once the file is complete, review it and make sure it accurately reflects what you’ve brainstormed.

Step 3: Building

Now we get to build. Start a new chat by clicking the + in the top right corner. This opens a new context window.

This time, we’ll work in Agent mode, and I’m going with Gemini 3 Pro.

Cursor - Agent mode

Arguably, Claude 4.6 Opus might be a technically better choice, but I find I get more accurate responses from Gemini based on how I communicate. I work with far smarter developers who prefer Claude and GPT. I’m not sure whether I naturally communicate in a way that works better with Gemini or if Google has trained me over the years.

First, tell the system to load the plan. It immediately begins building the system, and as you’ll see, you may need to approve certain steps, so don’t step away just yet.

Cursor - Load the plan

Once it’s done, there are only a couple of steps left, hopefully. Thankfully, it tells you what they are.

First, install the required libraries. These include the packages needed to run SerpAPI, GPT, Weights & Biases, and others. The system has created a requirements.txt file, so you can install everything in one line.

Note: It’s best to create a virtual environment. Think of this as a container for the project, so downloaded dependencies don’t mix with those from other projects. This only matters if you plan to run multiple projects, but it’s simple to set up, so it’s worth doing.

Open a terminal:

Cursor - terminal

Then enter the following lines, one at a time:

  • python3 -m venv .venv
  • source .venv/bin/activate
  • pip install -r requirements.txt

You’re creating the environment, activating it, and installing the dependencies inside it. Keep the second command handy, since you’ll need it any time you reopen Cursor and want to run this project.

You’ll know you’re in the correct environment when you see (.venv) at the beginning of the terminal prompt.

When you run the requirements.txt installation, you’ll see the packages load.

Cursor - packages

Next, rename the .env.example file to .env and fill in the variables.

The system can’t create a .env file, and it won’t be included in GitHub uploads if you go that route, which I did and linked above. It’s a hidden file used to store your API keys and related credentials, meaning information you don’t want publicly exposed. By default, mine looks like this.

API keys and related credentials
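
Since I can’t show the real file, here’s an illustrative version. The variable names are assumptions; match whatever names the generated code actually reads:

SERPAPI_API_KEY=your-serpapi-key
OPENAI_API_KEY=your-openai-key
WANDB_API_KEY=your-weights-and-biases-key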

I’ll fill in my API keys (sorry, can’t show that screen), and then all that’s left is to run the script.

To do that, enter this in the terminal:

python main.py "your search query"

If you forget the command, you can always ask Cursor.

Oh no … there’s a problem!

I’m building this as we go, so I can show you how to handle hiccups. When I ran it, I hit a critical one.

Cursor - no AI Overview found

It’s not finding an AI Overview, even though the phrase I entered clearly generates one.

Google - what is SEO

Thankfully, I have a wide-open context window, so I can paste:

  • An image showing that the output is clearly wrong.
  • The code output illustrating what the system is finding.
  • A link (or sometimes simply text) with additional information to direct the solution. 

Fortunately, it’s easy to add terminal output to the chat. Select everything from your command through the full error message, then click “Add to Chat.”

Cursor - Add to Chat.

It’s important not to rely solely on LLMs to find the information you need. A quick search took me to the AI Overview documentation from SerpAPI, which I included in my follow-up instructions to the model.

My troubleshooting comment looks like this.

Cursor - troubleshooting comment

Notice I tell Cursor not to make changes until I give the go-ahead. We don’t want to fill up the context window or train the model to assume its job is to make mistakes and try fixes in a loop. We reduce that risk by reviewing the approach before editing files.

Glad I did. I had a hunch it wasn’t retrieving the code blocks properly, so I added one to the chat for additional review. Keep in mind that LLMs and bots may not see everything you see in a browser. If something is important, paste it in as an example.

Now it’s time to try again.

Cursor - troubleshooting executed

Excellent, it’s working as we hoped.

Now we have a list of all the implied questions, along with the result chunks that answer them.

Dig deeper: Inspiring examples of responsible and realistic vibe coding for SEO

Logging and tracing your outputs

It’s a bit messy to rely solely on terminal output, and it isn’t saved once you close the session. That’s what I’m using Weave to address.

Weave is, among other things, a tool for logging prompt inputs and outputs. It gives us a permanent place to review our queries and extracted questions. At the bottom of the terminal output, you’ll find a link to Weave.
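
Instrumenting this from Python is light-touch. A minimal sketch, assuming the weave package; the project name is illustrative, and the body reuses the hypothetical helpers sketched earlier:

import weave  # pip install weave

weave.init("ai-overview-questions")  # illustrative project name

@weave.op()
def analyze_query(query: str, model: str = "gpt-5.2") -> dict:
    # Inputs and the returned dict of every call to a @weave.op-decorated
    # function are logged as a trace in the Weave UI.
    overview = fetch_ai_overview(query)  # SerpAPI sketch above
    questions = extract_questions(str(overview), model) if overview else []
    return {"query": query, "ai_overview": overview, "questions": questions}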

There are two traces to watch. The first is what this was all about: the analyze_query trace.

W&B Weave

In the inputs, you can see the query and model used. In the outputs, you’ll find the full AI Overview, along with all the extracted questions and the content each question came from. You can view the full trace here, if you’re interested.

Now, when we’re writing an article and want to make sure we’re answering the questions implied by the AI Overview, we have something concrete to reference.

The second trace logs the prompt sent to GPT-5.2 and the response.

W&B Weave second trace

This is an important part of the ongoing process. Here you can easily review the exact prompt sent to GPT-5.2 without digging through the code. If you start noticing issues in the extracted questions, you can trace the problem back to the prompt and get back to vibing with your new friend, Cursor.

See the complete picture of your search visibility.

Track, optimize, and win in Google and AI search from one platform.

Start Free Trial
Get started with
Semrush One Logo

Structure beats vibes

I’ve been vibe coding for a couple of years, and my approach has evolved. It gets more involved when I’m building multi-agent systems, but the fundamentals above are always in place.

It may feel faster to drop a line or two into Cursor or ChatGPT. Try that a few times, and you’ll see the choice: give up on vibe coding — or learn to do it with structure.

Keep the vibes good, my friends.

Emina Demiri talks surviving firing your biggest client

20 February 2026 at 18:14

On episode 352 of PPC Live The Podcast, I spoke to Emina Demiri Watson, Head of Digital at Brighton-based Vixen Digital, where she shared one of the most candid stories in agency life: deliberately firing a client that accounted for roughly 70% of their revenue — and what they learned the hard way in the process.

The decision to let go

The client relationship had been deteriorating for around three months before the leadership team made their move. The decision wasn’t about the client being difficult from day one — it was a relationship that had slowly soured over time. By the end, the toxic dynamic was affecting the entire team, and leadership decided culture had to come first.

The mistake they didn’t see coming

Here’s where it got painful. When Vixen sat down to run the numbers, they realized they had a serious customer concentration problem — one client holding a disproportionately large share of total revenue. It’s the kind of thing that gets lost when you’re busy and don’t have sophisticated financial systems. A quick Excel formula later, and the reality hit harder than expected.

Warning signs agencies should watch for

Emina outlined the signals that a client relationship is shifting — beyond the obvious drop in campaign performance. External factors inside the client’s business matter too: company restructuring, team changes, even a security breach that prevents leads from converting downstream. The lesson? Don’t just watch your Google Ads dashboard — understand what’s happening on the client’s side of the fence.

How they clawed back

Recovery came down to three things: tracking client concentration properly going forward, returning to their company values as a decision-making compass, and accepting that rebuilding revenue simply takes time. Losing the client freed up the mental bandwidth to pitch new business and re-engage with the industry community — things that had quietly fallen by the wayside.

Common account mistakes still haunting audits in 2026

When asked about errors she sees in audited accounts, Emina didn’t hold back. Broad match without proper audience guardrails remains a persistent problem, as does the absence of negative keyword lists entirely. Over-narrow targeting is another — particularly for clients chasing high-net-worth audiences, where the data pool becomes too thin for Smart Bidding to function.

The right way to think about AI

Emina’s take on AI is pragmatic: the biggest mistake is believing the hype. PPC practitioners are actually better positioned than most to navigate AI skeptically, given they’ve been working with automation and black-box systems for years. Her preferred approach — and the one she quietly enforces with junior team members via a robot emoji — is to treat Claude and other LLMs as a first stop for research, not a replacement for critical thinking.

The takeaway

If you’re sitting on a deteriorating client relationship and nervous about pulling the trigger, Emina’s advice is simple: go back to your values. If commercial survival sits at the top of the list, keep the client. If culture and team wellbeing matter more, it might be time.

AI agents in SEO: A practical workflow walkthrough

20 February 2026 at 18:00

Automation has long been part of the discipline, helping teams structure data, streamline reporting, and reduce repetitive work. Now, AI agent platforms combine workflow orchestration with large language models to execute multi-step tasks across systems.

Among them, n8n stands out for its flexibility and control. Here’s how it works – and where it fits in modern SEO operations.

Understanding how n8n AI agents are deployed

If you think of modern AI agent platforms as an AI-powered Zapier, you’re not far off. The difference is that tools like n8n don’t just pass data between steps. They interpret it, transform it, and determine what happens next.

Getting started with n8n means choosing between cloud-hosted and self-hosted deployment. You can have n8n host your environment, but there are drawbacks:

  • The environment is more sandboxed.
  • You can’t recode the server to interact with n8n workflows in custom ways, such as de-sandboxing the saving of certain file types to a database.
  • You can’t install or use community nodes.
  • Costs tend to be higher.

There are advantages, too:

  • You don’t have to be as hands-on managing the n8n environment or applying patches after core engine updates.
  • Less technical expertise is required, and you don’t need a developer to set it up.
  • Although customization and control are reduced, maintenance is less frequent and less stressful.

There are also multiple license packages available. If you run n8n self-hosted, you can use it for free. However, that can be challenging for larger teams, as version control and change attribution are limited in the free tier.

How n8n workflows run in practice

Regardless of the package you choose, using AI models and LLMs isn’t free. You’ll need to set up API credentials with providers such as Google, OpenAI, and Anthropic.

Once n8n is installed, the interface presents a simple canvas for designing processes, similar to Zapier.

n8n workflow in practice

You can add nodes and pull in data from external sources. Webhook nodes can trigger workflows, whether on a schedule, through a contact form, or via another system.

Executed workflows can then deliver outputs to destinations such as Gmail, Microsoft Teams, or HTTP request nodes, which can trigger other n8n workflows or communicate with external APIs.

In the example above, a simple workflow scrapes RSS feeds from several search news publishers and generates a summary. It doesn’t produce a full news article or blog post, but it significantly reduces the time needed to recap key updates.

Dig deeper: Are we ready for the agentic web?

Building AI agent workflows in n8n

Below, you can see the interior of a webhook trigger node. This node generates a webhook URL. When Microsoft Teams calls that URL through a configured “Outgoing webhook” app, the workflow in n8n is triggered.

Users can request a search news update directly within a specific Teams channel, and n8n handles the rest, including the response.

n8n webhook URL
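
Teams is only one possible caller. Anything that can make an HTTP request can trigger the same workflow; here’s a minimal sketch with a placeholder URL and payload:

import requests

# n8n generates the real URL inside the webhook trigger node; this is a placeholder.
resp = requests.post(
    "https://your-n8n-host/webhook/search-news-update",
    json={"requested_by": "seo-team", "topic": "latest search news"},
    timeout=30,
)
print(resp.status_code, resp.text)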

Once you begin building AI agent nodes, which can communicate with LLMs from OpenAI, Google, Anthropic, and others, the platform’s capabilities become clearer.

 AI agent nodes communicating with LLMs

In the image above, the left side shows the prompt creation view. You can dynamically pass variables from previously executed nodes. On the right, you’ll see the prompt output for the current execution, which is then sent to the selected LLM. 

In this case, data from the scraping node, including content from multiple RSS feeds, is passed into the prompt to generate a summary of recent search news. The prompt is structured using Markdown formatting to make it easier for the LLM to interpret.
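
As an illustration (not the workflow’s actual prompt), a skeleton of that kind of Markdown-structured prompt might look like this, with an n8n expression supplying the dynamic data; the field name is hypothetical:

# Role
You are an editor summarizing search industry news.

# Input
Articles scraped from RSS feeds:
{{ $json.rss_items }}

# Task
Summarize the most significant updates in under 200 words, grouped by topic.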

Returning to the main AI agent node view, you’ll see that two prompts are supported.

The user prompt defines the role and handles dynamic data mapping by inserting and labeling variables so the AI understands what it’s processing. The system prompt provides more detailed, structured instructions, including output requirements and formatting examples. Both prompts are extensive and formatted in markdown.

On the right side of the interface, you can view sample output. Data moves between n8n nodes as JSON. In this example, the view has been switched to “Schema” mode to make it easier to read and debug. The raw JSON output is available in the “JSON” tab.

This project required two AI agent nodes.

n8n project nodes

The short news summary needed to be converted to HTML so it could be delivered via email and Microsoft Teams, both of which support HTML.

The first node handled summarizing the news. However, when the prompt became large enough to generate the summary and perform the HTML conversion in a single step, performance began to degrade, likely due to LLM memory constraints.

To address this, a second AI agent node converts the parsed JSON summary into HTML for delivery. In practice, a dual AI agent node structure often works well for smaller, focused tasks.

Finally, the news summary is delivered via Teams and Gmail. Let’s look inside the Gmail node:

n8n news summary delivered

The Gmail node constructs the email using the HTML output generated by the second AI agent node. Once executed, the email is sent automatically.

n8n news summary delivered via Gmail

The example shown is based on a news summary generated in November 2025.

Dig deeper: The AI gold rush is over: Why AI’s next era belongs to orchestrators

n8n SEO automations and other applications

In this article, we’ve outlined a relatively simple project. However, n8n has far broader SEO and digital applications, including:

  • Generating in-depth content and full articles, not just summaries.
  • Creating content snippets such as meta and Open Graph data.
  • Reviewing content and pages from a CRO or UX perspective.
  • Generating code.
  • Building simple one-page SEO scanners.
  • Creating schema validation tools.
  • Producing internal documents such as job descriptions.
  • Reviewing inbound CVs, or resumes, and applications.
  • Integrating with other platforms to support more complex, connected systems.
  • Connecting to platforms with API access that don’t have official or community n8n nodes, using custom HTTP request nodes.

The possibilities are extensive. As one colleague put it, “If I can think it, I can build it.” That may be slightly hyperbolic.

Like any platform, n8n has limitations. Still, n8n and competing tools such as MindStudio and Make are reshaping how some teams approach automation and workflow design.

How long that shift will last is unclear.

Some practitioners are exploring locally hosted tools such as Claude Code, Cursor, and others. Some are building their own AI “brains” that communicate with external LLMs directly from their laptops. Even so, platforms like n8n are likely to retain a place in the market, particularly for those who are moderately technical.

Drawbacks of n8n

There are several limitations to consider:

  • It’s still an immature platform, and core updates can break nodes, servers, or workflows.
  • That instability isn’t unique to n8n. AI remains an emerging space, and many related platforms are still evolving. For now, that means more maintenance and oversight, likely for the next couple of years.
  • Some teams may resist adoption due to concerns about redundancy or ethics.
  • n8n shouldn’t be positioned as a replacement for large portions of someone’s role. The technology is supplementary, and human oversight remains essential.
  • Although multiple LLMs can work together, n8n isn’t well-suited to thorough technical auditing across many data sources or large-scale data analysis.
  • Connected LLMs can run into memory limits or over-apply generic “best practice” guidance. For example, an AI might flag a missing meta description on a URL that turns out to be an image, which doesn’t support metadata.
  • The technology doesn’t yet have the memory or reasoning depth to handle tasks that are both highly subjective and highly complex.

It’s often best to start by identifying tasks your team finds repetitive or frustrating and position automation as a way to reduce that friction. Build around simple functions or design more complex systems that rely on constrained data inputs.

SEO’s shift toward automation and orchestration

AI agents and platforms like n8n aren’t a replacement for human expertise. They provide leverage. They reduce repetition, accelerate routine analysis, and give SEOs more time to focus on strategy and decision-making. This follows a familiar pattern in SEO, where automation shifts value rather than eliminating the discipline.

The biggest gains typically come from small, practical workflows rather than sweeping transformations. Simple automations that summarize data, structure outputs, or connect systems can deliver meaningful efficiency without adding unnecessary complexity. With proper human context and oversight, these tools become more reliable and more useful.

Looking ahead, the tools will evolve, but the direction is clear. SEO is increasingly intertwined with automation, engineering, and data orchestration. Learning how to build and collaborate with these systems is likely to become a core competency for SEOs in the years ahead.

Dig deeper: The future of SEO teams is human-led and agent-powered

Google now attributes app conversions to the install date

20 February 2026 at 17:46

Google is updating how it attributes conversions in app campaigns, shifting from the date of the ad click to the date of the actual install.

What’s changing. Previously, conversions were logged against the original ad interaction date. Now, they’re assigned to the day the app was actually installed — bringing Google’s methodology closer in line with how Mobile Measurement Partners (MMPs) like AppsFlyer and Adjust report data.

Why this helps:

  • It should meaningfully reduce discrepancies between Google Ads and MMP dashboards — a persistent headache for mobile marketers reconciling two different numbers.
  • Google’s default 30-day attribution window meant many conversions were being reported too late to be useful for campaign learning, effectively starving Smart Bidding of timely signals.
  • Tying conversions to install date gives the algorithm fresher, more accurate data — which should translate to faster optimization cycles and more stable performance.

Why we care. The change sounds technical, but its impact is significant. Attribution timing directly affects how Google’s machine learning optimizes campaigns — and a 30-day lag between ad click and conversion credit has long been a silent drag on performance. This change means Google’s machine learning will finally receive conversion signals at the right time — tied to when a user actually installed the app, not when they clicked an ad weeks earlier.

That shift should lead to smarter bidding decisions, faster campaign optimization, and fewer frustrating discrepancies between Google Ads and MMP reporting. If you’ve ever wondered why your Google numbers don’t match AppsFlyer or Adjust, this update is a direct response to that problem.

Between the lines. Most advertisers never touch their attribution window settings, leaving Google’s 30-day default in place. That default has quietly been working against them — delaying the conversion signals that machine learning depends on to make better bidding decisions.

The bottom line. A small change in attribution logic could have an outsized impact on app campaign performance. Mobile advertisers should monitor their data closely in the coming weeks for shifts in reported conversions and optimization behavior.

First spotted. This update was first spotted by David Vargas, who shared the notification he received on LinkedIn.

How to use GA4 and Looker Studio for smarter PPC reporting

20 February 2026 at 17:00

Data isn’t just a report card. It’s your performance marketing roadmap. Following that roadmap means moving beyond Google Analytics 4’s default tools.

If you rely only on built-in GA4 reports, you’re stuck juggling interfaces and struggling to tell a clear story to stakeholders.

This is where Looker Studio becomes invaluable. It allows you to transform raw GA4 and advertising data into interactive dashboards that deliver decision-grade insights and drive real campaign improvements.

Here’s how GA4 and Looker Studio work together for PPC reporting. We’ll compare their roles, highlight recent updates, and walk through specific use cases, from budget pacing visualizations to waste-reduction audits.

GA4 vs. Looker Studio: How they differ for PPC reporting

GA4 is your source of truth for website and app interactions. It tracks user behavior, clicks, page views, and conversions with a flexible, event-based model. It even integrates with Google Ads to pull key ad metrics into its Advertising workspace. However, GA4 is primarily designed for data collection and analysis, not polished, client-facing reporting.

Looker Studio, on the other hand, serves as your one-stop shop for reporting. It connects to more than 800 data sources, allowing you to build interactive dashboards that bring everything together.

Here’s how they compare functionally in 2026.

Data sources

GA4 focuses on on-site analytics. In late 2025, Google finally rolled out native integration for Meta and TikTok, allowing automatic import of cost, clicks, and impressions without third-party tools. 

However, the feature is still rigid. It requires strict UTM matching and lacks the ability to clean campaign names or import platform-specific conversion values, such as Facebook Leads vs. GA4 Conversions. 

Looker Studio excels here, allowing you to blend these data sources more flexibly or connect to platforms GA4 still doesn’t support natively, such as LinkedIn or Microsoft Ads.

Metrics and calculations

GA4’s reporting UI has improved significantly, now allowing up to 50 custom metrics per standard property, up from the previous limit of five. However, these are often static. 

Looker Studio allows calculated fields, meaning you can perform calculations on your data in real time, such as calculating profit by subtracting cost from revenue, without altering the source data.
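
For example, a profit field takes one line in the calculated-field editor; the field names here are hypothetical and depend on your data source:

Profit: SUM(Revenue) - SUM(Cost)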

Data blending

Looker Studio lets you blend multiple data sources, essentially joining tables, to create richer insights. While enterprise users on Looker Studio Pro can now use LookML models for robust data governance, the standard free version still offers flexible data blending capabilities to match ad spend with downstream conversions.

Sharing and collaboration

Sharing insights in GA4 often means granting property access or exporting static files. Looker Studio reports are live web links that update automatically. You can also schedule automatic email delivery of PDF reports for free.

Enterprise features in Looker Studio Pro add options for delivery to Google Chat or Slack, but standard email scheduling is available to everyone.

Dig deeper: How to use GA4 predictive metrics for smarter PPC targeting

Why you need Looker Studio

Here’s where Looker Studio moves from helpful to essential for PPC teams.

1. Unified, cross-channel view of PPC performance

You don’t rely on just one ad platform. A Looker Studio dashboard becomes your single source of truth, pulling in intent-based Google Ads data and blending it with awareness-based Meta and Instagram Ads for a holistic view.

Instead of just comparing clicks, use Looker Studio to normalize your data. For instance, you might discover that X Ads drove 17.9% of users, while Microsoft Ads drove 16.1%, allowing you to allocate budget based on actual blended performance.

2. Visualizing creative performance

In industries like real estate, the image sells the click. A spreadsheet saying “Ad_Group_B performed well” means nothing to a client.

Use the IMAGE function in Looker Studio. If you use a connector that pulls the Ad Image URL, you can display the actual photo of that luxury condo or HVAC promotion directly in the report table alongside the CTR. This lets clients see exactly which creative is driving results, without translation.
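
As a rough sketch, the calculated field would look like the line below, where Ad Image URL stands in for whatever your connector names that dimension:

IMAGE(Ad Image URL, "ad creative")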

3. Deeper insight into post-click behavior

Reporting shouldn’t stop at the click. By bringing GA4 data into your Looker Studio report, you connect the ad to the subsequent action.

You might discover that a Cheap Furnace Repair campaign has a high CTR but a 100% bounce rate. Looker Studio lets you visualize engaged sessions per click alongside ad spend, proving lead quality matters more than volume.

4. Custom metrics for business goals

Every business has unique KPIs. A real estate company might track tour-to-close ratio, while an HVAC company focuses on seasonal efficiency. 

Looker Studio lets you build these formulas once and have them update automatically. You can even bridge data gaps to calculate return on ad spend (ROAS) by creating a formula that divides your CRM revenue by your Google Ads cost.
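
A hedged example, assuming a blended source where CRM revenue and ad cost are already joined; both field names are placeholders:

ROAS: SUM(CRM Revenue) / SUM(Google Ads Cost)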

5. Storytelling and narrative

Raw data needs context. Looker Studio allows you to add text boxes, dynamic date ranges, and annotations that turn numbers into narratives.

Use annotations to explain spikes or drops. Highlight the “so what” behind the metrics. If cost per lead spiked in July, add a text note directly on the chart: “Seasonal demand surge + competitor aggression.” This preempts client questions and transforms a static report into a strategic tool.

Dig deeper: How to leverage Google Analytics 4 and Google Ads for better audience targeting

Use cases: PPC dashboards that drive real insights

These dashboards go beyond surface metrics and surface insights you can act on immediately.

The budget pacing dashboard

Anxious about overspending? Standard reports show what you’ve spent, but not how it relates to your monthly cap.

Use bullet charts in Looker Studio. Set your target to the linear spend for the current day of the month. For example, if you’re 50% through the month, the target line is 50% of the budget.

This visual instantly shows stakeholders whether you’re overpacing and need to pull back, or underpacing and need to push harder, ensuring the month ends on budget.

The zero-click audit report

High spend with zero conversions is the silent budget killer in service industries.

Create a dedicated table filtered for waste. Set it to show only keywords where conversions = 0 and cost > $50, or whatever threshold makes sense for you, sorted by cost in descending order.
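
In Looker Studio’s filter editor, that translates to a condition roughly like this, with the threshold adjusted to your account:

Include: Conversions Equal to (=) 0 AND Cost Greater than (>) 50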

This creates an immediate hit list of keywords to pause. Showing this to a client proves you’re actively managing their budget and cutting waste, or you can use it internally.

Geographic performance maps

For local services, location is everything. GA4 provides location reports, but Looker Studio visualizes them in ways that matter.

Build a geo performance page that shades regions by cost per lead rather than traffic volume.

You might find that while City A drives the most traffic, City B generates leads at half the cost. This allows you to adjust bid modifiers by ZIP code or city to maximize ROI.

Dig deeper: 5 things your Google Looker Studio PPC Dashboard must have

Getting the most out of GA4 and Looker Studio in 2026

To ensure success with this combination, keep these final tips in mind.

Watch your API quotas

One of today’s biggest technical challenges is GA4 API quotas. If your dashboard has too many widgets or gets viewed by too many people at once, charts may break or fail to load.

If you have heavy reporting needs, consider extracting your GA4 data to Google BigQuery first, then connecting Looker Studio to BigQuery. This bypasses API limits and significantly speeds up your reports.
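
Once the GA4 export is flowing into BigQuery, you can pre-aggregate before Looker Studio ever touches the data. A minimal sketch, assuming the standard GA4 export schema and a hypothetical project and dataset name:

    # Pre-aggregate GA4 export data in BigQuery so Looker Studio reads a
    # small result set instead of hammering the GA4 Data API. Project and
    # dataset names are hypothetical.
    from google.cloud import bigquery

    client = bigquery.Client()
    query = """
        SELECT event_date,
               COUNT(DISTINCT user_pseudo_id) AS users,
               COUNTIF(event_name = 'session_start') AS sessions
        FROM `my-project.analytics_123456789.events_*`
        WHERE _TABLE_SUFFIX BETWEEN '20260101' AND '20260131'
        GROUP BY event_date
        ORDER BY event_date
    """
    for row in client.query(query).result():
        print(row.event_date, row.users, row.sessions)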

Enable optional metrics

Different clients have different needs. In your charts, enable the “optional metrics” feature. This adds a toggle that lets viewers swap metrics, for example, changing a chart from clicks to impressions, without editing the report each time.

Validate and iterate

When you first build a report, spot-check the numbers against the native GA4 interface. Make sure your attribution settings are correct.

Once you’ve established trust in the data, treat the dashboard as a living product, and keep iterating on the design based on what your stakeholders actually use and need.

From reactive reporting to proactive PPC strategy

Master Looker Studio to unlock GA4’s full potential for PPC reporting. GA4 gives you granular behavioral metrics; Looker Studio is where you combine, refine, and present them.

Move beyond basic metrics and use advanced visualizations — budget pacing, bullet charts, and ad creative tables — to deliver the transparency that builds real trust.

The result? You’ll shift from reactive reporting to proactive strategy, ensuring you’re always one step ahead in the data-driven landscape of 2026.

Dig deeper: Why click-based attribution shouldn’t anchor executive dashboards

Google Ads shows how landing page images power PMax ads

20 February 2026 at 16:36

Google Ads is now displaying examples of how “Landing Page Images” can be used inside Performance Max (PMax) campaigns — offering clearer visibility into how website visuals may automatically become ad creatives.

How it works. If advertisers opt in, Google can pull images directly from a brand’s landing pages and dynamically turn them into ads. Now, during campaign creation and before anything goes live, Google Ads shows you the automated creatives it plans to serve.

Why we care. For PMax campaigns, your site is part of your asset library. Any banner, hero image, or product visual could surface across Search, Display, YouTube, or Discover placements — whether you designed it for ads or not. Google Ads is now showing clearer examples of how Landing Page Images may be used inside those PMax campaigns — giving much-needed visibility into what automated creatives could look like.

Instead of guessing how Google might transform site visuals into ads, brands can better anticipate, audit, and control what’s eligible to serve. That visibility makes it easier to refine landing pages proactively and avoid unwanted surprises in live campaigns.

Between the lines. Automation is expanding — but so is creative risk. This update is useful precisely because it keeps advertisers aware of what will be set live before they hit the go-live button.

Bottom line. In PMax, your website is no longer just a landing page. It’s part of the ad engine.

First seen. This update was spotted by digital marketer Thomas Eccel, who shared an example on LinkedIn.

This press release strategy actually earns media coverage

20 February 2026 at 16:00

I stopped using press releases several years ago. I thought they had lost most of their impact.

Then a conversation with a good friend and mentor changed my perspective.

She explained that the days of expecting organic features from simply publishing a press release were long gone. But she was still getting strong results by directly pitching relevant journalists once the release went live, using its key points and a link as added leverage.

I reluctantly tried her approach, and the results were phenomenal, earning my client multiple organic features.

My first thought was, “If it worked this well with a small tweak, I can make it even more effective with a comprehensive strategy.”

The strategy I’m about to share is the result of a year of experiments and refinements to maximize the impact of my press releases.

Yes, it requires more research, planning, and execution. But the results are exponentially greater, and well worth the extra effort.

Research phase

You already know what your client wants the world to know — that’s your starting point.

From there:

  • Map out tangential topics, such as its economic impact, related technology, legislation, and key industry players.
  • Find media coverage from the past three months on those topics in outlets where you want your client featured.
    • Your list should include a link to each piece, its key points, and the journalist’s contact information. Also include links to any related social media posts they’ve published.
  • Sort the list by relevance to your client’s message.

Planning phase

As you write your client’s press release, look for opportunities to cite articles from the list you compiled, including links to the pieces you reference.

Make sure each citation is highly relevant and adds data, clarity, or context to your message. Aim for three to five citations. More won’t add value and will dilute your client’s message.

At the same time, draft tailored pitches to the journalists whose articles you’re citing, aligned with their beat and prior coverage.

Mention their previous work subtly — one short quote they’ll recognize is enough. Include links to a few current social media threads that show active public interest in the topic. Close with a link to your press release (once it’s live) and a clear call to action.

The goal isn’t to win favor by citing them. It’s to show the connection between your client’s message and their previous coverage. Because they’ve already covered the topic, it’s an easy transition to approach it from a new angle — making a media feature far more likely.

Execution phase 

Start by engaging with the journalists on your list through social media for a few days. Comment on their recent posts, especially those covering topics from your list. This builds name recognition and begins the relationship.

Then publish your press release. As soon as it goes live, send the pitches you wrote earlier to the three to five journalists you cited. Include the live link to your press release. (I prefer linking to the most authoritative syndication rather than the wire service version.)

After that, pitch other relevant journalists.

As with the first group, tailor each pitch to the journalist. Reference relevant points from their previous articles that support your client’s message. The difference is that because you didn’t cite these journalists in your press release, the impact may be lower than with the first group.

Track all organic features you secure. You may earn some simply from publishing the press release, though that’s less common now. You’re more likely to earn them through direct pitches, and each one creates new opportunities.

Review each new feature for references to other articles, especially from the list you compiled earlier. Then pitch the journalist who wrote the original article, citing the new piece that references or reinforces their work.

The psychology behind why this works

This strategy leverages two powerful psychological principles:

  • We all have an ego, so when a journalist sees their work cited, it validates their perspective.
  • We look for ways to make life easier, and expanding on a topic they’ve already covered is far easier than starting from scratch.

Follow this framework for your next press release, and you’ll earn more media coverage, keep your clients happier, and create more impact with less effort — while looking like a rockstar.

ChatGPT ads spotted and they are quite aggressive

19 February 2026 at 23:40

OpenAI is serving ads inside ChatGPT, and new findings suggest the experience looks quite different from what the company originally envisioned.

What’s happening. Research from AI ad intelligence firm Adthena has identified the first confirmed ads appearing on ChatGPT for signed-in desktop users in the U.S.

The big surprise. Early speculation suggested ads would only surface after extended back-and-forth conversations. That’s not what’s happening. When a user asked “What’s the best way to book a weekend away?”, sponsored placements appeared immediately — on the very first response.

What they look like. The ads feature a prominent brand favicon and a clear “Sponsored” label, a design that differs slightly from the concepts OpenAI had previously shared publicly.

Why we care. ChatGPT is one of the most visited sites on the internet. Ads appearing in its responses marks a significant moment for the future of AI monetization — and a potential shift in how brands reach consumers at the point of inquiry.

Between the lines. The immediacy of the ad trigger suggests OpenAI is treating single, high-intent prompts — not just sustained conversations — as viable ad inventory. That’s a meaningful strategic signal for advertisers evaluating where to place budget.

The bottom line. ChatGPT’s ad era has quietly begun. For marketers, the question is no longer if they need an AI search strategy — it’s whether they’re already late.

First spotted. Adthena CMO Ashley Fletcher shared his team’s discovery of the ads on LinkedIn.

Reddit tests AI shopping carousels in search results

19 February 2026 at 23:22

Reddit is piloting a new AI-powered shopping experience that transforms its famously trusted community recommendations into shoppable product carousels — a move that could reshape how the platform monetizes its search traffic.

What’s happening. A small group of U.S.-based users are seeing interactive product carousels appear in search results when their queries signal purchase intent — think “best noise-canceling headphones” or “top budget laptops.”

  • The carousels sit at the bottom of search results and include pricing, images and direct retailer links.
  • Products are surfaced from items actually mentioned in Reddit posts and comments — not just ad inventory.
  • For consumer electronics queries, Reddit is also pulling from select Dynamic Product Ads (DPA) partner catalogs.

How it works. The AI identifies purchase-intent queries, scans relevant Reddit conversations for product mentions, and assembles them into structured, shoppable cards. Users can tap a card to get more details and link out to retailers.

Why we care. Reddit’s shopping carousels give advertisers a rare opportunity to reach consumers at peak purchase intent — at the exact moment they’re seeking peer validation for a buying decision. Unlike traditional display ads, products surfaced here benefit from the implicit trust of Reddit’s community context, making them feel less like ads and more like recommendations.

For brands already running Dynamic Product Ads on Reddit, this is a direct pipeline from community buzz to conversion.

Between the lines. Reddit is doing something its competitors haven’t quite cracked — using organic, peer-driven content as the foundation of a commerce experience rather than pure ad targeting.

That’s a meaningful distinction. Consumers increasingly distrust sponsored recommendations, and Reddit’s entire value proposition is built on authentic community voice. Formalizing that into a shopping layer could give it a credibility edge over traditional retail media networks.

The big picture. Retail media is a fast-growing business, and platforms with high-intent audiences are racing to claim their share. Reddit’s search traffic has grown significantly since its Google search partnership, making this a natural next frontier.

The bottom line. Reddit is experimenting with turning intent-driven search into commerce, aiming to make it easier for users to move from recommendation to transaction — without leaving the community context that drives trust.

Dig deeper. In Case You Saw It: We are Testing a New Shopping Product Experience in Search

Google Analytics adds AI insights and cross-channel budgeting to Home page

19 February 2026 at 22:15

Google Analytics is adding AI-powered Generated insights to the Home page and rolling out cross-channel budgeting (beta), moves designed to help marketers spot performance shifts faster and manage paid spend more strategically.

What’s happening. Generated insights now appear directly on the Google Analytics Home screen, summarizing the top three changes since a user’s last visit. That includes notable configuration updates, anomalies in performance and emerging seasonality trends — all without digging into detailed reports.

The feature is built for speed. Instead of manually scanning dashboards, marketers get a quick snapshot of what changed and why it may matter.

Cross-channel budgeting (Beta). Google is also introducing cross-channel budgeting in beta. The feature helps advertisers track performance across paid channels and optimize investments based on results.

Access is currently limited, with broader availability expected over time.

Why we care. These updates make it faster to spot performance shifts and easier to connect insights to budget decisions. Generated insights surface key changes automatically, reducing the time spent digging through reports, while cross-channel budgeting helps marketers allocate spend more strategically across paid channels.

Together, they streamline analysis and improve how quickly teams can act on performance changes.

Bottom line. Together, Generated insights and cross-channel budgeting aim to reduce reporting friction and improve decision-making — giving marketers faster answers and more control over how they allocate budget across channels.

LLM consistency and recommendation share: The new SEO KPI

19 February 2026 at 20:00

Search is no longer a blue-links game. Discovery increasingly happens inside AI-generated answers – in Google AI Overviews, ChatGPT, Perplexity, and other LLM-driven interfaces. Visibility isn’t determined solely by rankings, and influence doesn’t always produce a click.

Traditional SEO KPIs like rankings, impressions, and CTR don’t capture this shift. As search becomes recommendation-driven and attribution grows more opaque, SEO needs a new measurement layer.

LLM consistency and recommendation share (LCRS) fills that gap. It measures how reliably and competitively a brand appears in AI-generated responses – serving a role similar to keyword tracking in traditional SEO, but for the LLM era.

Why traditional SEO KPIs are no longer enough

Traditional SEO metrics are well-suited to a model where visibility is directly tied to ranking position and user interaction largely depends on clicks.

In LLM-mediated search experiences, that relationship weakens. Rankings no longer guarantee that a brand appears in the answer itself.

A page can rank at the top of a search engine results page yet never appear in an AI-generated response. At the same time, LLMs may cite or mention another source with lower traditional visibility instead.

This exposes a limitation in conventional traffic attribution. When users receive synthesized answers through AI-generated responses, brand influence can occur without a measurable website visit. The impact still exists, but it isn’t reflected in traditional analytics.

At the core of this change is something SEO KPIs weren’t designed to capture:

  • Being indexed means content is available to be retrieved.
  • Being cited means content is used as a source.
  • Being recommended means a brand is actively surfaced as an answer or solution.

Traditional SEO analytics largely stop at indexing and ranking. In LLM-driven search, the competitive advantage increasingly lies in recommendation – a dimension existing KPIs fail to quantify.

This gap between influence and measurement is where a new performance metric emerges.

LCRS: A KPI for the LLM-driven search era

LLM consistency and recommendation share is a performance metric designed to measure how reliably a brand, product, or page is surfaced and recommended by LLMs across search and discovery experiences.

At its core, LCRS answers a question traditional SEO metrics can’t: When users ask LLMs for guidance, how often and how consistently does a brand appear in the answer?

This metric evaluates visibility across three dimensions:

  • Prompt variation: Different ways users ask the same question.
  • Platforms: Multiple LLM-driven interfaces.
  • Time: Repeatability rather than one-off mentions.

LCRS isn’t about isolated citations, anecdotal screenshots, or other vanity metrics. Instead, it focuses on building a repeatable, comparative presence. That makes it possible to benchmark performance against competitors and track directional change over time.

LCRS isn’t intended to replace established SEO KPIs. Rankings, impressions, and traffic still matter where clicks occur. LCRS complements them by covering the growing layer of zero-click search – where recommendation increasingly determines visibility.

Dig deeper: Rand Fishkin proved AI recommendations are inconsistent – here’s why and how to fix it

Breaking down LCRS: The two components

LCRS has two main components: LLM consistency and recommendation share.

LLM consistency

In the context of LCRS, consistency refers to how reliably a brand or page appears across similar LLM responses. Because LLM outputs are probabilistic rather than deterministic, a single mention isn’t a reliable signal. What matters is repeatability across variations that mirror real user behavior.

Prompt variability is the first dimension. Users rarely phrase the same question in exactly the same way. High LLM consistency means a brand surfaces across multiple, semantically similar prompts, not just one phrasing that happens to perform well.

For example, a brand may appear in response to “best project management tools for startups” but disappear when the prompt changes to “top alternatives to Asana for small teams.”

Temporal variability reflects how stable those recommendations are over time. An LLM may recommend a brand one week and omit it the next due to model updates, refreshed training data, or shifts in confidence weighting.

Consistency here means repeated queries over days or weeks produce comparable recommendations. That indicates durable relevance rather than momentary exposure.

Platform variability accounts for differences between LLM-driven interfaces. The same query may yield different recommendations depending on whether a conversational assistant, an AI-powered search engine, or an integrated search experience responds.

A brand demonstrating strong LLM consistency appears across multiple platforms, not just within a single ecosystem.

Consider a B2B SaaS brand that different LLMs consistently recommend when users ask for “CRM tools for small businesses,” “CRM software for sales teams,” and “HubSpot alternatives.” That repeatable presence indicates a level of semantic relevance and authority LLMs repeatedly recognize.

Recommendation share

While consistency measures repeatability, recommendation share measures competitive presence. It captures how frequently LLMs recommend a brand relative to other brands in the same category.

Not every appearance in an AI-generated response qualifies as a recommendation:

  • A mention occurs when an LLM references a brand in passing, for example, as part of a broader list or background explanation.
  • A suggestion positions the brand as a viable option in response to a user’s need.
  • A recommendation is more explicit, framing the brand as a preferred or leading choice. It’s often accompanied by contextual justification such as use cases, strengths, or suitability for a specific scenario.

When LLMs repeatedly answer category-level questions such as comparisons, alternatives, or “best for” queries, they consistently surface some brands as primary responses while others appear sporadically or not at all. Recommendation share captures the relative frequency of those appearances.

Recommendation share isn’t binary. Appearing among five options carries less weight than being positioned first or framed as the default choice.

In many LLM interfaces, response ordering and emphasis implicitly rank recommendations, even when no explicit ranking exists. A brand that consistently appears first or includes a more detailed description holds a stronger recommendation position than one that appears later or with minimal context.

Recommendation share reflects how much of the recommendation space a brand occupies. Combined with LLM consistency, it provides a clearer picture of competitive visibility in LLM-driven search.
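
There is no canonical formula for this yet, but one illustrative way to score it is to discount each appearance by its position in the response, then take each brand’s share of the weighted total:

    # Illustrative position-weighted recommendation share. Each observation
    # is the ordered list of brands one LLM response recommended; the 1/rank
    # weighting is an assumption, not a standard.
    from collections import defaultdict

    observations = [
        ["BrandA", "BrandB", "BrandC"],
        ["BrandB", "BrandA"],
        ["BrandA", "BrandC", "BrandB"],
    ]

    scores = defaultdict(float)
    for ranking in observations:
        for position, brand in enumerate(ranking, start=1):
            scores[brand] += 1.0 / position  # first mention counts most

    total = sum(scores.values())
    for brand, score in sorted(scores.items(), key=lambda kv: -kv[1]):
        print(f"{brand}: {score / total:.0%} of weighted recommendation space")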

To be useful in practice, this framework must be measured in a consistent and scalable way.

Dig deeper: What 4 AI search experiments reveal about attribution and buying decisions

How to measure LCRS in practice

Measuring LCRS demands a structured approach, but it doesn’t require proprietary tooling. The goal is to replace anecdotal observations with repeatable sampling that reflects how users actually interact with LLM-driven search experiences.

1. Select prompts

The first step is prompt selection. Rather than relying on a single query, build a prompt set that represents a category or use case. This typically includes a mix of:

  • Category prompts like “best accounting software for freelancers.”
  • Comparison prompts like “X vs. Y accounting tools.”
  • Alternative prompts like “alternatives to QuickBooks.”
  • Use-case prompts like “accounting software for EU-based freelancers.”

Phrase each prompt in multiple ways to account for natural language variation.

2. Choose your tracking level

Next, decide between brand-level and category-level tracking. Brand prompts help assess direct brand demand, while category prompts are more useful for understanding competitive recommendation share. In most cases, LCRS is more informative at the category level, where LLMs must actively choose which brands to surface.

3. Execute prompts and collect data

Tracking LCRS quickly becomes a data management problem. Even modest experiments involving a few dozen prompts across multiple days and platforms can generate hundreds of observations. That makes spreadsheet-based logging impractical.

As a result, LCRS measurement typically relies on programmatically executing predefined prompts and collecting the responses.

To do this, define a fixed prompt set and run those prompts repeatedly across selected LLM interfaces. Then parse the outputs to identify which brands are recommended and how prominently they appear.
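
As a minimal sketch of that pipeline, the loop below runs a fixed prompt set against one LLM API and logs which tracked brands each response mentions. The OpenAI client stands in for whichever interfaces you sample; the model name, prompts, and brand list are hypothetical placeholders.

    # Execute a fixed prompt set and log brand mentions over time.
    # Model, prompts, and brands are hypothetical placeholders.
    import csv
    from datetime import date
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
    prompts = [
        "best accounting software for freelancers",
        "alternatives to QuickBooks",
        "accounting software for EU-based freelancers",
    ]
    brands = ["BrandA", "BrandB", "BrandC"]

    with open("lcrs_log.csv", "a", newline="") as f:
        writer = csv.writer(f)
        for prompt in prompts:
            response = client.chat.completions.create(
                model="gpt-4o-mini",
                messages=[{"role": "user", "content": prompt}],
            )
            answer = response.choices[0].message.content.lower()
            for brand in brands:
                # Naive substring match; ambiguous mentions still need
                # the human review described below.
                writer.writerow([date.today(), prompt, brand,
                                 brand.lower() in answer])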

4. Analyze the results

You can automate execution and collection, but human review remains essential for interpreting results and accounting for nuances such as partial mentions, contextual recommendations, or ambiguous phrasing.

Early-stage analysis may involve small prompt sets to validate your methodology. Sustainable tracking, however, requires an automated approach focused on a brand’s most commercially important queries.

As data volume increases, automation becomes less of a convenience and more of a prerequisite for maintaining consistency and identifying meaningful trends over time.

Track LCRS over time rather than as a one-off snapshot because LLM outputs can change. Weekly checks can surface short-term volatility, while monthly aggregation provides a more stable directional signal. The objective is to detect trends and identify whether a brand’s recommendation presence is strengthening or eroding across LLM-driven search experiences.

With a way to track LCRS over time, the next question is where this metric provides the most practical value.

Use cases: When LCRS is especially valuable

LCRS is most valuable in search environments where synthesized answers increasingly shape user decisions.

Marketplaces and SaaS

Marketplaces and SaaS platforms benefit significantly from LCRS because LLMs often act as intermediaries in tool discovery. When users ask for “best tools,” “alternatives,” or “recommended platforms,” visibility depends on whether LLMs consistently surface a brand as a trusted option. Here, LCRS helps teams understand competitive recommendation dynamics.

Your money or your life

In “your money or your life” (YMYL) industries like finance, health, or legal services, LLMs tend to be more selective and conservative in what they recommend. Appearing consistently in these responses signals a higher level of perceived authority and trustworthiness.

LCRS can act as an early indicator of brand credibility in environments where misinformation risk is high and recommendation thresholds are stricter.

Comparison searches

LCRS is also particularly relevant for comparison-driven and early-stage consideration searches. LLMs often summarize and narrow choices when users explore options or seek guidance before forming brand preferences.

Repeated recommendations at this stage influence downstream demand, even if no immediate click occurs. In these cases, LCRS ties directly to business impact by capturing influence at the earliest stages of decision-making.

While these use cases highlight where LCRS can be most valuable, it also comes with important limitations.

Dig deeper: How to apply ‘They Ask, You Answer’ to SEO and AI visibility

Limitations and caveats of LCRS

LCRS is designed to provide directional insight, not absolute certainty. LLMs are inherently nondeterministic, meaning identical prompts can produce different outputs depending on context, model updates, or subtle changes in phrasing.

As a result, you should expect short-term fluctuations in recommendations and avoid overinterpreting them.

LLM-driven search experiences are also subject to ongoing volatility. Models are frequently updated, training data evolves, and interfaces change. A shift in recommendation patterns may reflect platform-level changes rather than a meaningful change in brand relevance.

That’s why you should evaluate LCRS over time and across multiple prompts rather than as a single snapshot.

Another limitation is that programmatic or API-based outputs may not perfectly mirror responses generated in live user interactions. Differences in context, personalization, and interface design can influence what individual users see.

However, API-based sampling provides a practical, repeatable reference point because direct access to real user prompt data and responses isn’t possible. When you use this method consistently, it allows you to measure relative change and directional movement, even if it can’t capture every nuance of user experience.

Most importantly, LCRS isn’t a replacement for traditional SEO analytics. Rankings, traffic, conversions, and revenue remain essential for understanding performance where clicks and user journeys are measurable. LCRS complements these metrics by addressing areas of influence that currently lack direct attribution.

Its value lies in identifying trends, gaps, and competitive signals, not in delivering precise scores or deterministic outcomes. Viewed in that context, LCRS also offers insight into how SEO itself is evolving.

What LCRS signals about the future of SEO

The introduction of LCRS reflects a broader shift in how search visibility is earned and evaluated. As LLMs increasingly mediate discovery, SEO is evolving beyond page-level optimization toward search presence engineering.

The objective is no longer ranking individual URLs. Instead, it’s ensuring a brand is consistently retrievable, understandable, and trustworthy across AI-driven systems.

In this environment, brand authority increasingly outweighs page authority. LLMs synthesize information based on perceived reliability, consistency, and topical alignment.

Brands that communicate clearly, demonstrate expertise across multiple touchpoints, and maintain coherent messaging are more likely to be recommended than those relying solely on isolated, high-performing pages.

This shift places greater emphasis on optimization for retrievability, clarity, and trust. LCRS doesn’t attempt to predict where search is headed. It measures the early signals already shaping LLM-driven discovery and helps SEOs align performance evaluation with this new reality.

The practical question for SEOs is how to respond to these changes today.

The shift from position to presence

As LLM-driven search continues to reshape how users discover information, SEO teams need to expand how they think about visibility. Rankings and traffic remain important, but they no longer capture the full picture of influence in search experiences where answers are generated rather than clicked.

The key shift is moving from optimizing only for ranking positions to optimizing for presence and recommendation. LCRS offers a practical way to explore that gap and understand how brands surface across LLM-driven search.

The next step for SEOs is to experiment thoughtfully by sampling prompts, tracking patterns over time, and using those insights to complement existing performance metrics.

ChatGPT ads collapse the wall between SEO and paid media

19 February 2026 at 19:00

Digital marketing teams have long debated the balance between SEO and PPC. Who owns the keyword? Who gets the budget? Who proves ROI most effectively?

For years, the division felt clear. SEO optimized for organic rankings, while paid media optimized for auctions. Both fought for visibility on the same results page, but operated under fundamentally different mechanics and incentives.

ChatGPT ads are beginning to erase that line. The separation between organic and paid isn’t just blurring; it’s breaking down inside conversational AI.

The new battleground isn’t the SERP. It’s the prompt. The intersection of PPC and SEO now lives inside ChatGPT ads.

From SERP-based strategy to prompt-based demand insights

Search marketing has always revolved around keywords: bidding strategies, landing page optimization, and even attribution modeling.

Generative AI doesn’t operate on keyword strings the same way. It operates on intent-rich, multi-variable prompts. 

“Best CRM” becomes “What’s the best CRM for a B2B SaaS company under 50 employees?” “Project management tool” becomes “What project management tool integrates with Slack and Notion?”

These prompts carry deeper layers of context and specificity that traditional keyword research often flattens to accommodate SERP coverage rather than answer an individualized question.

When ChatGPT introduces sponsored placements beneath its answers, ads don’t appear next to a head term. They show under a fully articulated need. That changes everything.

ChatGPT ads are structurally different. They:

  • Appear underneath an AI-generated response.
  • Are clearly labeled as “Sponsored.”
  • Don’t influence the answer itself.
  • Are primarily contextual and session-based.

This isn’t a classic auction layered over a keyword strategy. It’s contextual alignment layered over a conversational experience. For marketers, that means three things:

  • Intent is richer.
  • Context matters more.
  • SEO and PPC must coordinate at the prompt level.

Dig deeper: Ads in ChatGPT: Why behavior matters more than targeting

The new playbook: Prompt intelligence as the bridge

If ChatGPT ads represent a new demand capture environment, the first strategic question becomes, “How do we know which prompts to prioritize?”

The answer isn’t buried in Google Search Console, Keyword Planner, or any other SERP research or keyword mining tool. It surfaces in the LLM performance data your SEO counterparts have been analyzing for the past several months.

The first intersection of PPC and SEO begins with organic LLM visibility. We can start developing a ChatGPT ads strategy by mining high-performing LLM prompts. To do this, we’ll need to understand:

  • When does your brand appear organically in ChatGPT responses, and when do competitors appear?
  • What types of prompts surface the kinds of discussions we want to be part of?
  • Which use cases are most commonly referenced?

This is prompt intelligence. Instead of asking, “What keywords are we ranking for?” the question becomes, “Which conversational queries are surfacing our brand?”

When you analyze those prompts, you uncover something even more valuable: fanout keywords.

Fanout keywords: The new long tail

Fanout keywords are contextual signals embedded within prompts. For example, take this prompt: “Best CRM for B2B SaaS startups with under 50 employees that integrates with HubSpot.”

Traditional keyword tools might surface relevant targets as “CRM for SaaS,” “best CRM,” and “B2B CRM,” focusing on the root terms and the core subject of the prompt.

The fanout structure would include “SaaS startups with under 50 employees,” “HubSpot integration,” “budget sensitivity,” and “growth-stage scaling,” focusing not only on the root terms and core subject but also on factors like company size, growth trajectory, and pain-point considerations.

These aren’t simple keyword variations to cover semantic phrasing. They’re layered qualifiers that reveal nuance and support us as marketers in identifying additional high-intent segments, highlighting underserved or undiscovered audience segments, and identifying potential gaps in paid keyword coverage. This is an example of PPC and SEO converging.

Dig deeper: Why AI optimization is just long-tail SEO done right

Aligning fanout keywords with paid coverage

After extracting fanout keywords from high-performing LLM prompts, run a paid coverage audit to see whether your strategy addresses the nuanced variants that surfaced, whether you’re over-indexed on root terms while missing higher-intent expansions, and whether competitors dominate contextual areas you’ve overlooked.

You can prioritize where to activate paid media based on this audit:

  • If LLM organic presence is high and paid media coverage is high: Great. Continue reinforcing your strategy to dominate.
  • If LLM organic presence is high and paid media coverage is low: Consider testing ChatGPT ads to increase overall coverage.
  • If LLM organic presence is low and paid media coverage is high: Work on improving organic LLM and SEO visibility and strength.
  • If LLM organic presence is low and paid media coverage is low: This is a lower priority. Focus on building foundational marketing strategies to increase overall coverage.
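
Restated as a trivial lookup (the actions are the list above; the high/low thresholds are a per-account judgment call), the audit output can feed prioritization directly:

    # The LLM-presence x paid-coverage matrix as a simple lookup table.
    # "high"/"low" thresholds are left to per-account judgment.
    ACTIONS = {
        ("high", "high"): "Reinforce the strategy to dominate.",
        ("high", "low"): "Test ChatGPT ads to increase coverage.",
        ("low", "high"): "Improve organic LLM and SEO visibility.",
        ("low", "low"): "Lower priority; build foundational coverage.",
    }

    def prioritize(llm_presence: str, paid_coverage: str) -> str:
        return ACTIONS[(llm_presence, paid_coverage)]

    print(prioritize("high", "low"))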

The opportunity lies where organic LLM visibility and paid gaps intersect. If your brand frequently appears in conversational responses for “CRM for early-stage SaaS,” but you aren’t targeting that intent via paid placements, you’re leaving incremental demand on the table.

ChatGPT ads can become a mechanism for defending and amplifying organic AI authority.

Landing pages: An overlooked leverage point

Until now, PPC and SEO teams may have both sent traffic to the same landing pages, but each team optimized them based on independent factors. That approach won’t hold in conversational AI.

When prompts become hyper-specific, landing pages must mirror that specificity. Consider this group of queries: “Best CRM for 10-person SaaS team,” “Affordable CRM for startups,” and “CRM with simple onboarding for founders.”

If all of those drive to a generic “CRM software” page, conversion friction increases and conversion rates drop.

Instead, we can use these groups to build intent-specific landing pages, add content tied to common keyword fanout themes, adjust messaging to mirror conversational phrasing, and highlight deeper, relevant information for the customer.

The more your landing page reflects the nuance of the prompt, the stronger alignment becomes across ad relevance, user experience, conversion performance, and even LLM organic authority.

The critical loop is this: Improved landing page clarity doesn’t just increase conversion. It increases the likelihood that LLMs understand and surface your brand appropriately in future prompts.

This is the new feedback cycle between SEO and paid.

The closed loop between LLM visibility and paid media

In traditional search, SEO influenced PPC through factors like Quality Score and brand demand. Paid media influenced SEO indirectly through brand lift. With conversational AI, the loop tightens.

  • Organic LLM visibility surfaces prompt clusters.
  • Prompt clusters inform ChatGPT ad prioritization.
  • Paid performance identifies high-converting conversational segments.
  • Landing page optimizations improve both conversion and LLM clarity.
  • Improved clarity increases organic AI mentions.

This isn’t parallel channel management anymore. It has to be a unified system.

Dig deeper: SEO vs. PPC vs. AI: The visibility dilemma

Measurement: Moving beyond last click

One of the most common objections to emerging ad formats is the difficulty of accurately measuring performance and reporting ROI.

ChatGPT ads operate with privacy-forward controls and aggregate reporting. We won’t have pixel-level behavioral depth or cross-session tracking parity with traditional paid media.

This continues to force a shift in how marketing performance is evaluated, away from click-based attribution models. Instead of relying exclusively on click-based ROI, teams should prioritize:

  • Incrementality testing.
  • Assisted conversion analysis.
  • Prompt-level lift.
  • Brand search lift post-exposure.
  • LLM visibility shifts before and after paid media campaign coverage.

If ChatGPT ads reinforce high-intent conversational exposure, that impact might show up downstream in branded search, direct traffic, and higher close rates in assisted funnels.

We shouldn’t think of this as a purely demand capture channel, but as a hybrid of capture and demand influence or creation.

Organizational implications: SEO and PPC can’t be siloed

This shift is less about media buying and more about team structure. To execute effectively, marketing organizations need to prioritize three things.

1. Shared prompt taxonomies

SEO and paid teams must work together to group queries into prompt categories. For example:

  • Role-based queries (e.g., CMO, founder, or operations lead).
  • Industry-based queries (e.g., SaaS, healthcare, or ecommerce).
  • Constraint-based queries (e.g., budget, team size, or integrations).

These groupings should inform both content and paid media structure and bidding strategies.

2. Unified reporting dashboards

Instead of separate keyword and ranking reports, teams should see:

  • Query group performance.
  • LLM visibility by segment.
  • Paid coverage by segment or query group.
  • Landing page conversion by prompt type or category.

3. Integrated budget planning

Paid media budget allocation should consider where:

  • Organic AI authority is strongest.
  • Competitors dominate conversational mentions.
  • Incremental coverage via ChatGPT ads can defend or expand.

This isn’t about shifting dollars from Google Ads to ChatGPT. It’s about reallocating dollars based on a deeper understanding of user demand and behavior.

Dig deeper: Why 2026 is the year the SEO silo breaks and cross-channel execution starts

The bigger shift: AI as the primary discovery layer

Zoom out. Search engines were the gateway to information. Social feeds were the gateway to discovery. Conversational AI is becoming the gateway to decision-making.

If that trajectory continues, optimizing for LLM visibility becomes as critical as ranking on Google once was. Now that ads are layered into that experience, paid media and SEO become inseparable.

The future won’t be defined by organic rankings or paid media CPC efficiency alone. It will be defined by how effectively brands show a unified message and experience across:

  • Prompt intelligence.
  • Contextual ad placement.
  • Landing page alignment.
  • Conversational authority.

Think in systems, not channels

The introduction of ads into ChatGPT isn’t just another platform beta. It’s a structural signal.

The channel divide between SEO and paid media, a debate that has shaped marketing teams for as long as they’ve existed, is dissolving inside conversational AI.

The brands that win will:

  • Mine prompt data like they once mined keyword reports.
  • Extract fanout signals that reveal hidden demand.
  • Align paid media coverage to conversational intent.
  • Build landing pages that mirror prompt nuance.
  • Measure incrementally and holistically, not myopically.

The intersection of paid and SEO is no longer a shared SERP. It’s a shared intelligence system.

ChatGPT ads may be the first clear signal that conversational AI isn’t just changing how people search. It’s changing how we structure growth.

Google launches no-code Scenario Planner built on Meridian MMM

19 February 2026 at 18:00

Google is launching Scenario Planner, a no-code tool that lets you test budget scenarios and forecast ROI using its Meridian marketing mix model without needing data science expertise.

What’s new. Scenario Planner turns complex MMM outputs into actionable marketing insights:

  • Intuitive, code-free interface: You can test different budget allocations and view ROI estimates without writing any code.
  • Forward-looking planning: The tool lets you simulate investment scenarios and stress-test strategies, moving beyond retrospective reporting.
  • Digestible insights: Technical model outputs are visualized in clear, easy-to-understand formats so you can leverage them for strategy decisions.

Why we care. With predictive marketing insights at your fingertips, you can test budgets, predict returns, and adjust campaigns in real time — so you plan smarter and make the most of every dollar.

Closing the MMM actionability gap. Scenario Planner bridges the long-standing “usability gap” in Marketing Mix Models, which traditionally required specialized skills. Nearly 40% of organizations struggle to turn MMM outputs into actionable decisions, according to Harvard Business Review.

Bottom line. By combining the rigor of MMM with an intuitive, interactive interface, Scenario Planner helps you plan smarter, optimize your spend, and make confident, data-driven decisions — without relying on technical experts.

Retire these 9 SEO metrics before they derail your 2026 strategy

19 February 2026 at 17:00

You’re tracking the wrong numbers – and so is almost everyone else in SEO right now.

We’ve all been there. You present a chart showing organic traffic up 47%, only to get blank stares from the CMO who wants to know why revenue hasn’t budged. Or you celebrate a top-three ranking for a keyword nobody’s actually searching for anymore.

The metrics that made you look good in 2019 are actively misleading your decision-making in 2026.

With AI Overviews dominating search results, zero-click searches becoming the norm, and personalized SERPs making traditional rankings less meaningful, sticking with outdated measurements puts your strategy and budget at risk.

Let’s walk through the exact metrics your SEO team needs to retire this year and what you should measure instead.

Traffic metrics

1. Organic traffic

Organic traffic has been the primary metric in SEO reporting since SEO began. But as a standalone KPI, it lacks context.

Not all traffic is created equal. A thousand visitors who bounce in three seconds aren’t helping your business. A hundred visitors who convert at 8%? That’s a different story.

I worked with a local HVAC company that saw traffic drop 22% year over year. Panic mode, right? Except revenue from organic actually increased by 31%. We’d pruned low-intent informational content and doubled down on high-intent service pages. Fewer visitors, better visitors.

Before you panic about any traffic drop, look at where you’re losing traffic. If it’s informational articles and customer login pages, that’s not a revenue problem. It’s noise leaving your dashboard.

2. Total impressions without intent segmentation 

This metric is equally misleading.

A million impressions from informational queries like “what is SEO” might generate awareness, but zero revenue. Ten thousand impressions from commercial queries like “best enterprise SEO agency” could fill your pipeline. Google Search Console gives you this data, but most teams don’t slice it intelligently.
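
One lightweight way to slice it is sketched below, with naive regex buckets you would tune to your own vertical; the column names assume a standard query export.

    # Illustrative intent segmentation of Search Console queries.
    # The regex buckets are naive placeholders; tune them per vertical.
    import re
    import pandas as pd

    gsc = pd.read_csv("gsc_queries.csv")  # columns: query, impressions, clicks

    def intent(query: str) -> str:
        if re.search(r"\b(what is|how to|why|guide)\b", query):
            return "informational"
        if re.search(r"\b(best|top|agency|pricing|vs)\b", query):
            return "commercial"
        return "other"

    gsc["intent"] = gsc["query"].map(intent)
    print(gsc.groupby("intent")[["impressions", "clicks"]].sum())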

3. Traffic growth without revenue correlation

This one gets SEO teams in trouble with executives. You walk into a quarterly review, proudly show a 35% increase in organic traffic, and the CFO asks, “Great, how much revenue did that drive?” If you can’t answer that question, you’re just showing noise.

Ranking metrics

4. Average keyword position 

This looks useful in a dashboard but falls apart under scrutiny. If you rank No. 1 for a keyword with 10 monthly searches and No. 50 for a keyword with 50,000 monthly searches, your average position might look decent, but you’re getting crushed where it actually matters. 

The metric treats all keywords as equal when they aren’t. And with personalized search results, “average position” varies widely by user and location.

5. Isolated keyword tracking

Searchers don’t think in isolated keywords. They ask questions, explore topics, and refine queries. Google has shifted to semantic search and topic modeling.

Tracking “lawyer” alone is useless without intent — criminal defense, divorce, or someone researching what lawyers do.

6. Share of top 10 rankings 

This metric sounds smart until you realize 80% of your top 10 rankings may be low-intent, low-volume informational queries. Meanwhile, competitors hold the top three spots for every high-intent commercial query in your niche.

One No. 1 ranking for a high-converting transactional keyword is worth more than 50 top-10 rankings for informational fluff.

Authority and engagement metrics

7. Domain authority and domain rating 

DA and DR aren’t Google metrics. They’re proprietary scores created by SEO tool companies. Yet I see teams setting goals like “increase DA from 42 to 50 by Q3.” 

You can have a DA of 65 and get crushed by a DA 35 competitor if that competitor’s content better matches search intent. Stop putting these in executive dashboards.

8. Total backlink volume 

This is another vanity metric. Google’s algorithm weighs link quality, relevance, and context.

A single link from a highly relevant, authoritative site in your niche is worth more than 500 spammy directory links. I’ve audited sites with 100,000+ backlinks that couldn’t rank for anything meaningful because 95% were junk.

9. Bounce rate 

This metric has been misunderstood for years. If someone searches “business hours for [your company],” lands on your contact page, finds the hours, and leaves, that’s a successful session with a 100% bounce rate. 

Google replaced bounce rate with “engagement rate” in GA4 for good reason. Similarly, session duration and pages per session need context. A high pages-per-session metric on your pricing page might mean users are confused rather than engaged. 

Why these SEO metrics are failing now

The search landscape has fundamentally shifted. Up to 58.5% of U.S. Google searches and 59.7% of EU searches now end without a click to any external website, according to SparkToro’s zero-click study. That means for every 1,000 searches, only 360 clicks go to the open web.

AI Overviews, ChatGPT, and Perplexity are pulling information and synthesizing answers without requiring a click. Your content can be highly visible and influential without generating a single session in Google Analytics.

In many verticals, AI is now the primary discovery layer.

Buyers are discovering vendors inside AI tools, then turning to Google to confirm what they’ve already heard. This means your SEO team’s goal is no longer just to “drive traffic.” It’s to make sure your brand shows up when buyers are deciding which options to consider.

Modern customer journeys are also messy. A prospect might discover you via organic search, return through a paid ad, sign up for your email list, and finally convert through direct traffic. If you’re using last-click attribution, SEO looks ineffective. But without that initial organic touchpoint, the conversion never would’ve happened.

Dig deeper: Measuring zero-click search: Visibility-first SEO for AI results

What to measure instead

Revenue and pipeline contribution from organic 

For ecommerce, track revenue from organic sessions by product category and landing page. For lead-gen businesses, track qualified leads from organic and how many convert to customers. Use CRM integration to connect the dots.

Nobody cares about your DA if you can show organic contributed $1.2 million in revenue last quarter.

Conversion-weighted visibility 

Track your visibility specifically for high-value terms that actually drive conversions.

A franchise client shifted to this metric and discovered they were dominating low-intent queries but barely visible for high-intent local service terms. We reallocated resources, and qualified leads doubled in four months.
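
There is no single standard formula here, but a simple version weights each keyword’s visibility by the conversions it drives. A hypothetical sketch:

    # Illustrative conversion-weighted visibility. The 1/rank visibility
    # proxy and the column names are assumptions.
    import pandas as pd

    df = pd.DataFrame({
        "keyword": ["hvac repair near me", "what is hvac", "ac install cost"],
        "rank": [8, 2, 15],
        "conversions": [40, 1, 22],
    })

    df["visibility"] = 1 / df["rank"]
    weighted = (df["visibility"] * df["conversions"]).sum() / df["conversions"].sum()
    print(f"Conversion-weighted visibility: {weighted:.3f}")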

Topic cluster performance 

This replaces individual keyword rankings. Track how well you rank across entire topic clusters, how many related keywords you rank for, average visibility across the cluster, and total traffic and conversions from that cluster. This gives you a holistic view of topical authority.

SERP real estate ownership 

Measure how much of the search results page you own, not just organic listings, but featured snippets, knowledge panels, local packs, and People Also Ask boxes. Owning multiple SERP features for a high-value query means you’ve effectively blocked out competitors.

AI platform visibility and brand mentions

How often is your brand mentioned or recommended in AI-generated responses? Brand recommendations now matter as much as clicks.

If you have a 90%+ recommendation rate across ChatGPT, Perplexity, and Google AI Overviews for your core topics, you’re winning, even if your click-through traffic looks flat.

Tools are emerging to track this, but you can also do manual spot checks. This visibility builds authority and awareness, leading to brand searches and conversions down the line.

Branded search and direct traffic as AI visibility proxies

Here’s something most teams miss: When buyers discover your brand through AI tools or zero-click searches, they don’t click through. They search your brand name directly or type your URL into their browser. That traffic shows up in your branded search and direct channels, not organic.

If your nonbranded organic traffic is flat but branded searches and direct visits are climbing, that’s often a sign your content is being cited in AI Overviews and LLM responses. Track these together.

A client of mine saw organic traffic plateau while brand search volume increased 40%. Their content was being cited in AI Overviews, building awareness without the click.

Dig deeper: 12 new KPIs for the generative AI search era

How to transition your reporting

Changing your reporting framework is scary. Stakeholders have stared at the same metrics for years.

Start by auditing your current dashboard. Does each metric connect to a business outcome, or is it just activity?

Retire vanity metrics gradually. If you’ve reported organic traffic as a standalone KPI, introduce “organic traffic by intent segment” and “organic-attributed revenue” alongside it. Over a few reporting cycles, shift focus to the new metrics and phase out the old.

When introducing new metrics, explain them in business terms. Don’t say “conversion-weighted visibility.” Say “visibility for the search terms that drive the most leads and revenue.”

Be transparent about why change is necessary. AI Overviews, zero-click results, and personalization have made old metrics less reliable. That’s not admitting failure. It’s demonstrating you’re evolving with the reality of search in 2026.

The metrics that prove SEO’s value

The metrics you retire this year — organic traffic as a standalone number, average keyword position, domain authority, and bounce rate — aren’t bad. They’re incomplete. Worse, they create the illusion of progress while competitors focus on metrics that drive revenue.

The metrics you adopt — revenue contribution, conversion-weighted visibility, topic authority, SERP real estate ownership, and AI platform mentions — connect SEO directly to business outcomes. They prove ROI, justify budget, and align your strategy with what matters.

Take a hard look at your dashboard. Identify the metrics that make you look busy instead of effective. Retire them. Replace them.

No one cares how much traffic you drove or your DA score. They care whether SEO drove growth. Make sure your metrics prove it.

The authority era: How AI is reshaping what ranks in search

19 February 2026 at 16:00

In the early days of SEO, authority was a crude concept. In the early 2000s, ranking well often came down to how effectively you could game PageRank. Buy enough links, repeat the right keywords, and visibility followed. It was mechanical, transactional, and remarkably easy to manipulate.

Two decades later, that version of search is largely extinct. Algorithms have matured. So has Google’s understanding of brands, people, and real-world reputation.

In a landscape increasingly shaped by AI-powered discovery, authority is no longer a secondary ranking factor – it’s the foundational principle. This is the logical conclusion of a long, deliberate evolution in search.

From links to legitimacy: How authority evolved

Google’s first major move against manipulation came with Penguin, which forced the industry to evolve. That’s when “digital PR” began emerging as a more palatable framing than link building.

Google also began experimenting with entity-based understanding. Author photos appeared in search results. Knowledge panels surfaced. Brands, authors, and organizations were treated less like URLs and more like connected entities.

Although experiments like Google authorship were eventually retired, the direction was clear. Google was redefining how it assessed website and brand authority.

Instead of asking, “Who links to this page?” the algorithms increasingly asked, “Who authored this content, and how are they recognized elsewhere?”

That shift has only accelerated over the past 12 months, as AI-driven search experiences have made the trend impossible to ignore.

Dig deeper: From SEO to algorithmic education: The roadmap for long-term brand authority

Helpful content and the end of synthetic authority

The integration of the helpful content system into Google’s core algorithm marked a turning point. Sites that built visibility through over-optimization saw organic performance erode almost overnight. In contrast, brands demonstrating depth, experience, and strong brand authority gained ground.

Search systems are now far better at evaluating whether content reflects lived expertise. Over-optimized sites – those with disproportionately high link metrics but limited brand recognition – have struggled as a result.

In recent core updates, larger, well-known brands have consistently outperformed smaller sites that were technically strong but lacked brand authority. Authority, not optimization, has become a key differentiator.

Authority in an AI-mediated search world

Large language models (LLMs) learn from the open web: journalism, reviews, forums, social platforms, video transcripts, and expert commentary. Reputation is inferred through the frequency, consistency, and context of brand mentions.

Perplexity results for “best B2B SaaS call tracking platforms”

This has profound implications for how brands approach SEO.

Reddit, Quora, LinkedIn, YouTube, and trusted review platforms such as G2 are among the most heavily cited sources in AI search responses. These aren’t environments you can fully control. They reflect what people actually say about your brand, not what you claim about yourself.

Top cited domains in ChatGPT

In other words, authority is now externally validated – and much harder to influence. Visibility is no longer driven solely by what happens on your website. It’s shaped by how convincingly your brand shows up across the wider digital ecosystem.

This doesn’t mean the end of Google

Market share data continues to show Google commanding over 90% of global search usage, with AI platforms accounting for a fraction of referral traffic. Even among heavy ChatGPT users, the vast majority still rely on Google as part of their search behavior.

Google is absorbing AI-style answers into its own interface through AI Overviews, AI Mode, and other generative enhancements. Users aren’t abandoning Google. They’re encountering AI within it.

The opportunity lies in building authority that performs across both traditional and AI-mediated search surfaces. I’ve previously written about the concept of building a total search strategy.

Brand building is the new SEO multiplier

One of the more uncomfortable realizations for SEO practitioners is that some of the most effective authority signals sit outside traditional search channels.

Digital PR, brand advertising, events, partnerships, and even offline activity increasingly influence organic performance. A physical event can generate listings on event platforms, coverage in local press, and organic social discussion – each feeding into a broader perception of legitimacy. This is where paid and organic disciplines begin to converge.

Brand awareness improves click-through rates. Familiar names attract citations. Mentions on YouTube or in long-form journalism reinforce topical authority in ways links alone never could. We’ve even seen a recent study showing YouTube comments to be a leading factor correlated with AI mentions.

ChatGPT, AI Mode and AI Overviews compared

As someone who works across both paid and organic strategy, I see this multiplier effect repeatedly. Strong brands don’t just convert better – they now perform better organically, too.

Dig deeper: The new SEO imperative: Building your brand

A practical framework: The three pillars of authority

Building authority requires a holistic approach – one that starts with brand strategy, category understanding, and a broader set of tactics than traditional SEO.

I’ve developed a simple framework that ensures consistent focus on three core pillars:

The three pillars of authority

1. Category authority: Owning the truth, not just the traffic

This is about defining how the category itself is understood, not merely competing within it. Authority begins upstream of content production, with a clear point of view on what matters, what’s outdated, and what’s misunderstood. 

Rather than chasing keywords, the goal is to become the reference point others defer to when making sense of the space. This is the layer search engines and LLMs increasingly reward because it signals genuine expertise rather than tactical optimization.

2. Canonical authority: Creating the definitive explanations

If category authority sets the belief system, canonical authority operationalizes it. This is where brands invest in explanation-first content that answers questions properly, not superficially. 

Canonical explanations are designed to be cited, reused, and paraphrased across the ecosystem: by journalists, analysts, creators, forums, and AI systems. They form the backbone of content infrastructure – hubs, guides, FAQs, and explainers that are structurally sound, consistently updated, and clearly authored. 

In an AI-mediated search environment, these assets become the raw material models learn from and reference, making them central to long-term visibility.

3. Distributed authority: Proving legitimacy beyond your website

What matters isn’t just what you publish, but how your brand shows up across platforms you don’t control. This includes:

  • PR coverage.
  • Social mentions.
  • Video platforms.
  • Communities.
  • Reviews.
  • Events.
  • Even product experiences. 

Distribution and amplification aren’t afterthoughts. They’re how authority is stress-tested in public. Consistent, credible presence across these surfaces feeds both human perception and algorithmic inference, reinforcing legitimacy at scale.

Dig deeper: How paid, earned, shared, and owned media shape generative search visibility

Building authority beats chasing algorithms

Every evolution in search presents the same choice. You can react – scrambling to interpret updates, tweaking tactics, and hoping the next change favors you.

Or you can invest in becoming the recognized authority in your space. This requires patience, cross-channel collaboration, and genuine investment. But it’s the only approach that’s proved durable across decades of algorithmic change.

The tactics influencing performance today feel less like legacy SEO and far more like classic marketing and PR: building authority, earning attention, and influencing demand rather than engineering visibility.

No doubt Google will continue to evolve. AI systems will mature. New discovery platforms will emerge. None of that changes the underlying truth: Authority has always been the hardest signal to earn – and the most valuable once established.

Google Ads shows PMax placements in “Where ads showed” report

19 February 2026 at 01:31

Google Ads now surfaces Performance Max (PMax) campaign data in the “Where ads showed” report, giving advertisers clearer insight into placements, networks, and impressions — data that was previously unavailable.

What’s new. The update makes it possible to see exactly where PMax ads are appearing across Google’s network, including search partners, display, and other placements. Advertisers can now track impressions by placement type and network, helping them understand how campaigns are performing in detail.

Why we care. This update finally gives visibility into where PMax campaigns are running, including Google Search Partners, display, and other networks. With placement, type, and impression data now available, marketers can better understand campaign performance, optimize budgets, and make informed decisions instead of relying on guesswork. It turns previously opaque PMax reporting into actionable insights.

User reaction. Digital marketer Thomas Eccel shared on LinkedIn that the report was historically empty, but now finally shows real data.

  • “I finally see where and how PMax is being displayed,” he wrote.
  • He also noted the clarity on Google Search Partners, previously a “blurry grey zone.”

The bottom line. This update gives marketers actionable visibility into PMax campaigns, helping them understand placement performance, optimize spend, and identify which networks are driving results — all in one report.

Paid search click share doubles as organic clicks fall: Study

19 February 2026 at 01:01

Organic search clicks are shrinking across major verticals — and it’s not just because of Google’s AI Overviews.

  • Classic organic click share fell sharply across headphones, jeans, greeting cards, and online games queries in the U.S., new Similarweb data comparing January 2025 to January 2026 shows.
  • The biggest winner: text ads.

Why we care. You aren’t just competing with AI Overviews. You’re competing with Google’s aggressive expansion of paid search real estate. Across every vertical analyzed, text ads gained more click share than any other measurable surface. In product categories, paid listings now capture roughly one-third of all clicks. As a result, several brands that are losing organic visibility are increasing their paid investment.

By the numbers. Across four verticals, text ads showed the most consistent, measurable click-share gains.

  • Classic organic lost 11 to 23 percentage points of click share year over year.
  • Text ads gained 7 to 13 percentage points in every case.
  • Paid click share doubled in major product categories.
  • AI Overviews SERP presence rose ~10 to ~30 percentage points, depending on the vertical.

Classic organic is down everywhere. Year-over-year classic organic click share declined across all four verticals. Headphones saw the steepest drop. Even online games — historically organic-heavy — lost double digits. In two verticals (headphones, jeans), total clicks also fell.

  • Headphones: Down from 73% to 50%
  • Jeans: Down from 73% to 56%
  • Greeting cards: Down from 88% to 75%
  • Online games: Down from 95% to 84%

Text ads are the biggest winner. Text ads gained share in every vertical; no other surface showed this level of consistent growth:

  • Headphones: Up from 3% to 16%
  • Online games: Up from 3% to 13%
  • Jeans: Up from 7% to 16%
  • Greeting cards: Up from 9% to 16%

In product categories, PLAs compounded the shift:

  • Headphones: Up from 16% to 36%
  • Jeans: Up from 18% to 34%
  • Greeting cards: Up from 10% to 19%

AI Overviews surged unevenly. The presence of Google AI Overviews expanded sharply, but varied by vertical:

  • Headphones: 2.28% → 32.76%
  • Online games: 0.38% → 29.80%
  • Greeting cards: 0.94% → 21.97%
  • Jeans: 2.28% → 12.06%

Zero-click searches are high — and mostly stable. Except for online games, zero-click rates didn’t change dramatically:

  • Headphones: 63% (flat)
  • Jeans: Down from 65% to 61%
  • Online games: Up from 43% to 50%
  • Greeting cards: Up from 51% to 53%

Brands losing organic traffic are buying it back. In headphones:

  • Amazon increased paid clicks 35% while losing organic volume.
  • Walmart nearly 6x’d paid clicks.
  • Bose boosted paid 49%.

In jeans:

  • Gap grew paid clicks 137% to become the top paid player.
  • True Religion entered the paid top tier without top-10 organic presence.

In online games:

  • CrazyGames quadrupled paid clicks while organic declined.
  • Arkadium entered paid after losing 68% of organic clicks.

The result? We’re seeing a self-reinforcing cycle, according to the study’s author, Aleyda Solis:

  • Organic share declines.
  • Competition intensifies.
  • More brands increase paid budgets.
  • Paid surfaces capture more clicks.

About the data. This analysis used Similarweb data to examine SERP composition and click distribution for the top 5,000 U.S. queries in headphones, jeans, and online games, and the top 956 queries in greeting cards and ecards. It compares January 2025 to January 2026, tracking how clicks shifted across classic organic results, organic SERP features, text ads, PLAs, zero-click searches, and AI Overviews.

The study. Search Isn’t Just Turning to AI, it’s being Re-Monetized: Text Ads Are Taking a Bigger Share of Google SERP Clicks (Data)

Microsoft Advertising adds a multi-image creative to Shopping ads

18 February 2026 at 22:22

Microsoft Advertising is rolling out multi-image ads for Shopping campaigns in Bing search results, giving ecommerce brands a richer way to showcase products and capture shopper attention before the click.

What’s new. Advertisers can now display multiple product images within a single Shopping ad, letting shoppers preview different angles, styles or variations directly in search.

The format is designed to make ads more visually engaging and informative, helping consumers compare options quickly without leaving the results page.

How it works:

  • Additional images are uploaded through the optional additional_image_link attribute in the product feed (see the sketch after this list).
  • Advertisers can include up to 10 images, separated by commas.
  • The images appear alongside pricing and retailer information in Shopping results.
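
For illustration, here’s a minimal sketch of how that attribute might be populated when a feed row is generated programmatically. The additional_image_link name and the comma-separated, 10-image limit come from the announcement above; the helper function, feed fields, and URLs are hypothetical.

    # Minimal sketch: build the additional_image_link value for a feed row.
    # Attribute name and 10-image limit per the announcement; the rest is illustrative.
    def build_additional_images(urls, limit=10):
        """Join up to `limit` image URLs into one comma-separated string."""
        if len(urls) > limit:
            raise ValueError(f"At most {limit} additional images are accepted")
        return ",".join(urls)

    row = {
        "id": "SKU-123",  # hypothetical product
        "title": "Trail Running Shoe",
        "image_link": "https://example.com/img/shoe-front.jpg",
        "additional_image_link": build_additional_images([
            "https://example.com/img/shoe-side.jpg",
            "https://example.com/img/shoe-sole.jpg",
        ]),
    }
    print(row["additional_image_link"])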

Why we care. Multi-image ads could increase engagement and purchase intent by presenting a fuller picture of a product. More visuals can highlight features, colors and design details that a single image might miss.

Discovery. The feature was first spotted by digital marketer Arpan Banerjee, who shared it on LinkedIn.

The bottom line. Multi-image Shopping ads give retailers more creative flexibility and shoppers more context at a glance — a shift that could improve ad performance and reshape how products compete in search results.

Microsoft rolls out applied Performance Max learning path

18 February 2026 at 22:05

A new applied learning path from Microsoft Advertising is designed to help marketers get more value from Performance Max campaigns through hands-on, scenario-based training — not just theory.

What’s happening. The new Performance Max learning path bundles three progressive courses that focus on real-world setup, optimization and troubleshooting. The structure is meant to let advertisers learn at their own pace while building practical skills they can immediately apply to live campaigns.

Each course targets a different stage of expertise, from beginner fundamentals to advanced strategy and credentialing.

What’s included:

Course 1: Foundations

  • Introducing Microsoft Advertising Performance Max campaigns covers the essentials.
  • Ideal for beginners who want to understand how PMax campaigns work.
  • Focuses on core concepts and terminology.

Course 2: Hands-on setup

  • Setting up Microsoft Advertising Performance Max campaigns provides a guided walkthrough.
  • Designed for advertisers launching their first PMax campaign or refreshing their skills.
  • Walks step-by-step through campaign creation and answers common setup questions.

Course 3: Advanced implementation

  • Implementing & optimizing Microsoft Advertising Performance Max centers on scenario-based applied learning.
  • Targets advanced users developing strategic and optimization skills.
  • Includes practical tools like checklists, videos and reusable reference materials.

How it works. The third course introduces embedded support features that let learners access targeted educational resources mid-assessment via a “Help me understand” option. Users can review specific concepts in context and return directly to their questions.

The benefit. Learners can spend more time on weak areas while quickly progressing through familiar material.

Credential payoff. Completing the advanced course unlocks the chance to earn a Performance Max badge. The credential signals proficiency in implementing and optimizing PMax campaigns and applying best practices in real-world scenarios.

The badge is digitally shareable and verifiable through Credly, making it easy to display on professional platforms like LinkedIn.

Why we care. This update from Microsoft Advertising makes it faster and easier to build real, job-ready skills for running Performance Max campaigns — not just theoretical knowledge. The applied, scenario-based training helps marketers avoid common setup mistakes, optimize campaigns more confidently, and improve performance in live accounts.

Plus, the shareable credential adds professional credibility, signaling proven expertise to clients and employers.

The bottom line. The new learning path aims to close the gap between training and execution. By combining applied scenarios, embedded support and credentialing, it offers a structured route for advertisers to build confidence — and prove it — in Performance Max campaign management.

44% of ChatGPT citations come from the first third of content: Study

18 February 2026 at 21:47

ChatGPT heavily favors the top of content when selecting citations, according to an analysis of 1.2 million AI answers and 18,012 verified citations by Kevin Indig, Growth Advisor.

Why we care. Traditional search rewarded depth and delayed payoff. AI favors immediate classification — clear entities and direct answers up front. If your substance isn’t surfaced early, it’s less likely to appear in AI answers.

By the numbers. Indig’s team found a consistent “ski ramp” citation pattern that held across randomized validation batches. He called the results statistically indisputable:

  • 44.2% of citations come from the first 30% of content.
  • 31.1% come from the middle (30–70%).
  • 24.7% come from the final third, with a sharp drop near the footer.

At the paragraph level, AI reads more deeply:

  • 53% of citations come from the middle of paragraphs.
  • 24.5% come from first sentences.
  • 22.5% come from last sentences.

The big takeaway. Front-load key insights at the article level. Within paragraphs, prioritize clarity and information density over forced first sentences.

Why this happens. Large language models are trained on journalism and academic writing that follow a “bottom line up front” structure. The model appears to weight early framing more heavily, then interpret the rest through that lens.

  • Modern models can process massive token windows, but they prioritize efficiency and establish context quickly.

What gets cited. Indig identified five traits of highly cited content; several are directly measurable, as the sketch after this list shows:

  • Definitive language: Cited passages were nearly twice as likely to use clear definitions (“X is,” “X refers to”). Direct subject-verb-object statements outperform vague framing.
  • Conversational Q&A structure: Cited content was 2x more likely to include a question mark. 78.4% of citations tied to questions came from headings. AI often treats H2s as prompts and the following paragraph as the answer.
  • Entity richness: Typical English text contains 5% to 8% proper nouns. Heavily cited text averaged 20.6%. Specific brands, tools, and people anchor answers and reduce ambiguity.
  • Balanced sentiment: Cited text clustered around a subjectivity score of 0.47 — neither dry fact nor emotional opinion. The preferred tone resembles analyst commentary: fact plus interpretation.
  • Business-grade clarity: Winning content averaged a Flesch-Kincaid grade level of 16 versus 19.1 for lower-performing content. Shorter sentences and plain structure beat dense academic prose.
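
Three of those traits (entity richness, sentiment balance, and reading grade) are straightforward to audit on your own pages. A minimal sketch, using spaCy, TextBlob, and textstat as stand-ins; the study doesn’t say which tools it used for these measurements:

    import spacy                   # pip install spacy; python -m spacy download en_core_web_sm
    import textstat                # pip install textstat
    from textblob import TextBlob  # pip install textblob

    nlp = spacy.load("en_core_web_sm")

    def citation_traits(text: str) -> dict:
        """Score a passage against the measurable traits from the study."""
        words = [t for t in nlp(text) if t.is_alpha]
        propn_share = sum(t.pos_ == "PROPN" for t in words) / max(len(words), 1)
        return {
            "proper_noun_pct": round(100 * propn_share, 1),         # cited text averaged 20.6%
            "subjectivity": TextBlob(text).sentiment.subjectivity,  # cited text clustered near 0.47
            "fk_grade": textstat.flesch_kincaid_grade(text),        # winning content averaged ~16
        }

    print(citation_traits("Semrush is an SEO platform that tracks keyword rankings."))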

About the data. Indig analyzed 3 million ChatGPT responses and 30 million citations, isolating 18,012 verified citations to examine where and why AI pulls content. His team used sentence-transformer embeddings to match responses to specific source sentences, then measured their page position and linguistic traits such as definitions, entity density, and sentiment.
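
The matching step can be reproduced in outline with the library the write-up names. A minimal sketch, assuming the sentence-transformers package and a generic MiniLM model; the specific model choice is an assumption, not a detail from the study:

    from sentence_transformers import SentenceTransformer, util  # pip install sentence-transformers

    model = SentenceTransformer("all-MiniLM-L6-v2")

    def locate_citation(cited_text: str, page_sentences: list[str]) -> dict:
        """Match a cited passage to its most similar source sentence and
        report where that sentence sits on the page (0.0 = top, 1.0 = bottom)."""
        scores = util.cos_sim(model.encode(cited_text), model.encode(page_sentences))[0]
        best = int(scores.argmax())
        return {
            "matched_sentence": page_sentences[best],
            "similarity": float(scores[best]),
            # Values below 0.3 fall in the heavily cited first-30% zone.
            "relative_position": best / max(len(page_sentences) - 1, 1),
        }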

Bottom line. Narrative “ultimate guide” writing may underperform in AI retrieval. Structured, briefing-style content performs better.

  • Indig argues this creates a “clarity tax.” Writers must surface definitions, entities, and conclusions early — not save them for the end.

The report. The science of how AI pays attention

Google Ads adds Results tab to show impact of applied recommendations

18 February 2026 at 21:08

Google Ads has launched a new Results tab inside its Recommendations section that shows advertisers the measured performance impact after they apply bid and budget suggestions.

How it works. After an advertiser applies a bid or budget recommendation, Google analyzes campaign performance one week later and compares it to an estimated baseline of what would have happened without the change. The system then highlights the incremental lift, such as additional conversions generated by raising a budget or adjusting targets.

Where to find it. Impact reporting appears in the Recommendations area of an account. A summary callout shows recent results on the main page, while a dedicated Results tab provides a deeper breakdown grouped by Budget and Target recommendations, with filtering options for each.

Why we care. Advertisers can now see whether Google’s automated recommendations actually drive incremental results — not just projected gains — helping teams evaluate the business value of platform guidance.

What to expect. Results are reported as a seven-day rolling average measured across a 28-day window after a recommendation is applied. Metrics focus on the campaign’s primary bidding objective — such as conversions, conversion value, or clicks.
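
Conceptually, the reported metric is a rolling average of daily lift over an estimated counterfactual. A minimal sketch of that arithmetic, with a simulated baseline standing in for Google’s own (non-public) estimate:

    import random

    random.seed(1)
    baseline = [30 + random.gauss(0, 2) for _ in range(28)]     # estimated daily conversions without the change
    actual = [b * 1.12 + random.gauss(0, 2) for b in baseline]  # observed conversions after applying it

    daily_lift = [a - b for a, b in zip(actual, baseline)]

    # Seven-day rolling average across the 28-day window, as described above.
    rolling = [sum(daily_lift[i - 6:i + 1]) / 7 for i in range(6, 28)]
    print(f"Incremental conversions, latest 7-day average: {rolling[-1]:.1f}")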

Between the lines. The feature adds a layer of accountability to automated recommendations at a time when advertisers are relying more heavily on platform-driven optimization.

Spotted by. Hana Kobzová, founder of PPCNewsFeed, who shared a screenshot of the help doc on LinkedIn.

Help doc. There isn’t a live Google help doc yet, but a Google spokesperson has confirmed that an early pilot is running.

AI search KPIs: Focus on inclusion, not position

18 February 2026 at 19:00
AI search is about the consideration set, not ranking first

We need to have a talk about KPIs and AI search.

I’ve observed numerous SEO professionals on LinkedIn and at conferences talking about “ranking No. 1 on ChatGPT” as if it’s the equivalent of a No. 1 ranking on Google:

LinkedIn post about ranking No. 1 on ChatGPT

On Google, being the first result is often a golden ticket.

Going from No. 2 to No. 1 in Google search will often result in 100%-300% increases in traffic and conversions.

This is almost certainly not the case with AI responses – even if they weren’t constantly changing.

Our team’s research shows AI users consider an average of 3.7 businesses before deciding who to contact.

Being the first result in that list on ChatGPT isn’t the golden ticket it is in Google search.

This being the case, the focus of AI search really should be on “inclusion in the consideration set” – not necessarily being “the first mentioned in that set” – as well as crafting what AI is saying about us.

User behavior on AI platforms differs from Google search

Over the past several months, my team has spent more than 100 hours observing people use ChatGPT and Google’s AI Mode to find services.

One thing came into focus within the first dozen or so sessions: User behavior on AI platforms differs from Google search in ways that extend far beyond using “natural language” and having conversations versus performing keyword searches.

That natural-language framing is overstated, by the way. About 75% of the sessions we observed included “keyword searches.”

One key difference: Users consider more businesses in AI responses than in organic search.

It makes sense — it’s much easier to compare multiple options in a chat window than to click through three to five search results and visit each site.

Dig deeper: From searching to delegating: Adapting to AI-first search behavior

AI users don’t stop at the first result

In both Google AI Mode and ChatGPT, users considered an average of 3.7 businesses from the results.

The average AI chat user considers 3.7 businesses

This has strong implications for the No. 1 result – as well as No. 4.

The value of appearing first drops sharply — and the value of appearing lower rises — when, in 75% of sessions, users also consider businesses in Positions 2 to 8.

What’s driving conversions isn’t your position in that list.

Why do businesses with lower rankings end up in the consideration set in LLMs?

First of all, these aren’t rankings.

They are a list of recommendations that will likely get shuffled, reformatted from a list to a table, and completely changed, given the probabilistic nature of AI.

That aside, AI chat makes it much easier to scan and consider more options than Google search does.

Let’s look at the Google search results for “fractional CMO.”

If a user wants to evaluate multiple fractional CMO options for their startup, it’s more work to do so in Google Search than in ChatGPT.

Only two options appear above the fold, and each requires a click-through to read their website content.

Contrast this with the experience on ChatGPT.

ChatGPT results for “fractional CMO”

The model gave the user eight options, along with information about each one.

It’s easy to read all eight blurbs and decide whom to explore further.

Which leads to the other thing we really need to focus on: what the model is saying about you.

A bigger driver than being first on ChatGPT: Being a good fit

Many search marketers focus on rankings and traffic, but rarely on messaging and positioning.

This needs to change.

In the case of the response for an ophthalmologist in southern New Jersey, you get an easily scannable list of providers.

Roughly 60% of users make their entire decision based on the response, without visiting the website or switching to Google, according to our study.

So how do you drive conversion?

Deliver the right message — and make sure the model shares it.

Dr. Lanciano may be the best glaucoma specialist in the area. But if the model highlights Ravi D. Goel and Bannett Eye Centers for glaucoma care, and that’s what the user needs, they’ll go there.

Bannett Eye Centers appears last in the AI response but may still win the conversion because of what the model says about it — something that rarely happens in Google Search.

Visibility doesn’t pay the bills. Conversions do. And conversions don’t happen when customers think someone else is a better fit.

Dig deeper: How to measure your AI search brand visibility and prove business impact

As SEOs shift toward AI search, a mindset shift needs to occur

We’re still thinking about AI search the way we’ve thought about SEO.

In SEO, the top result captures most of the traffic. In AI search, it doesn’t.

AI users consider more available options.

Responses — and their format — change dramatically with each request.

“Winning” in AI search means getting into the consideration set and being presented compellingly.

It’s not about being first on a list, especially if what’s said about you misses the mark.

In other words, SEOs who think like copywriters and salespeople will drive outcomes for their organizations.

Dig deeper: Is SEO a brand channel or a performance channel? Now it’s both

Perplexity stops testing advertising

18 February 2026 at 18:16

Perplexity is abandoning advertising, for now at least. The company believes sponsored placements — even labeled ones — risk undermining the trust on which its AI answer engine depends.

  • Perplexity phased out the ads it began testing in 2024 and has no plans to bring them back, the Financial Times reported.
  • The AI search company could revisit advertising or “never ever need to do ads,” the report said.

Why we care. If Perplexity remains ad-free, brands lose paid access to a fast-growing audience. The company previously reported that it gets 780 million monthly queries. With sponsored placements gone, brands have no way to get visibility inside Perplexity’s answers other than via organic citations.

What changed. Perplexity was one of the first AI search companies to test ads, placing sponsored answers beneath chatbot responses. It said at the time that ads were clearly labeled and didn’t influence outputs. Executives now say perception matters as much as policy.

  • “A user needs to believe this is the best possible answer,” one executive said, adding that once ads appear, users may second-guess response integrity.

Meanwhile. Perplexity’s exit comes as other AI platforms experiment with ads.

Perplexity says subscriptions are its core business. It offers a free tier and paid plans from $20 to $200 per month. It has more than 100 million users and about $200 million in annualized revenue, according to executives.

  • Perplexity also introduced shopping features, but doesn’t take a cut of transactions, another indication it’s cautious about revenue models that could create conflicts of interest.
  • “We are in the accuracy business, and the business is giving the truth, the right answers,” one executive said.

The report. Perplexity drops advertising as it warns it will hurt trust in AI (subscription required)

How to apply ‘They Ask, You Answer’ to SEO and AI visibility

18 February 2026 at 18:00
Why answering pricing, problems, and comparisons drives AI visibility

Search behavior is no longer just people typing keywords into Google. It’s people asking questions and, in some cases, outsourcing their thinking to LLMs.

As Google evolves from a traditional search engine into a more question-and-answer machine, businesses need a robust, time-tested way to respond to customer questions.

AI changes how people research and compare options. Tasks that once felt painful and time-consuming are now easy. But there’s a catch. The machine only knows what it can find about you.

If you want visibility across the widest possible range of questions, you need to understand your customers’ wants, needs, and concerns in depth.

That’s where the “they ask, you answer” framework comes in. It helps businesses identify the questions prospective customers already have in mind and create clear answers to them. Always useful, it’s a practical, actionable way forward in the age of AI.

An answer-first content strategy and why it matters now

“They Ask, You Answer” (TAYA) is a book by Marcus Sheridan. (I strongly recommend you read it.)

The concept is simple: buyers have questions, and businesses should answer them honestly, clearly, and publicly — especially the ones sales teams avoid.

No dodging. No “contact us for a quote.” No “it depends” – sorry, SEO folks.

TAYA isn’t just an inbound marketing strategy. It’s a practical way to map a customer-facing content strategy with an E-E-A-T mindset.

The framework centers on five core content categories:

  • Pricing and cost.
  • Problems.
  • Versus and comparisons.
  • Reviews.
  • Best in class.

These categories align with the moments when a buyer is seeking the best solution, reducing risk, and making a decision.

More of those moments now happen inside AI environments — on your smartphone, your PC, in apps like ChatGPT or Gemini, or anywhere else AI shows up, which at this point is nearly everywhere.

At their core, these are question-and-answer machines. You ask. The machine answers. That’s why the TAYA process fits so well.

The modern web is chaotic. Finding what you need can be exhausting — dodging ads, navigating layers of SERP features, and avoiding pop-ups on the site you finally click.

AI is gaining ground because it feels better. Easier. Faster. Cleaner. Less chaos. More order.

Turning E-E-A-T into a practical content strategy

You could argue we already have a north star for content creation in E-E-A-T. But have you ever tried to build a content strategy around it? Great in principle, harder in practice.

They Ask, You Answer puts an E-E-A-T-focused content strategy on rails:

  • Pricing supports trust, experience, and expertise.
  • Problems show experience and trust.
  • Versus content builds authority and expertise.
  • Reviews build experience and trust.
  • Best-in-class content builds authority and trust.

E-E-A-T can be difficult to target because there are many ways to build trust, show experience, and demonstrate authority. TAYA maps those signals across multiple areas within each category, helping you build a comprehensive database of people-first content that AI readily surfaces.

Dig deeper: How to build an effective content strategy for 2026

How to integrate TAYA with traditional SEO research

The skills and tools we use as SEOs already put us in a strong position for the AI era. They can help us build an integrated SEO, PPC, and AI strategy.

The action plan:

  • Google Search Console: Go to Google Search Console > Performance. Filter queries by question modifiers such as who, what, why, how, and cost. These are your raw TAYA topics (see the sketch after this list).
  • Google Business Profile: Review keywords and queries in your Google Business Profile for additional ideas.
  • The semantic map: Use AnswerThePublic or Also Asked. Look for secondary questions. If you’re writing about cost, you’ll often see related concerns such as financing or hidden fees.
  • The competitor gap: Use Semrush or Ahrefs Keyword Gap tools. Don’t focus on what competitors rank for. Look for “how-to” and “versus” keywords where they have no meaningful content. That’s your land grab.
  • Method marketing: Immerse yourself in the mindset of your ideal customer and start searching. What comes up? What does AI say? What’s missing? Tools like the Value Proposition Canvas and SCAMPER can help you evaluate these angles in structured ways.
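
A minimal sketch of that Search Console step, assuming you’ve exported the Performance report’s queries to CSV; the column names below match the standard export, but verify them against your own file:

    import pandas as pd  # pip install pandas

    df = pd.read_csv("gsc_queries.csv")  # export from Search Console > Performance

    # Keep queries containing question modifiers; (?:...) is a non-capturing
    # group, which avoids pandas' "match groups" warning in str.contains.
    pattern = r"\b(?:who|what|why|how|when|which|cost|price|vs)\b"
    questions = df[df["Top queries"].str.contains(pattern, case=False, regex=True)]

    # Highest-impression questions first: these are the raw TAYA topics.
    print(questions.sort_values("Impressions", ascending=False).head(20))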

Often, you won’t go wrong by simply searching for your own products and services. AI tools and search results will surface a wide range of questions, answers, and perspectives that can feed directly into your AI and SEO content strategy. 

Also consider the internal sources available to you:

  • Sales calls and sales teams.
  • Live chat transcripts.
  • Emails.
  • Customer service tickets.
  • Proposal feedback.
  • Complaints.

All of this helps you understand the question landscape. From there, you can begin organizing those insights within the five TAYA categories.

TAYA and your AI-era content marketing strategy

The framework centers on five core categories, reinterpreted for an answer-driven environment where Google, Gemini, and ChatGPT-like systems anticipate user needs.

For each, here’s what it is, why it matters now, and examples to get you started.

1. Pricing and cost: Why we must talk about money

Buyers want cost clarity early. Businesses avoid it because “it depends.” Both are true, but only one is useful.

AI systems will readily summarize typical costs, using someone else’s numbers if you don’t publish your own. If you fail to provide a credible range with context, you’re effectively handing the narrative to competitors, directories, or a generic blog with a stock photo handshake.

How to do it

  • Publish ranges, not unrealistic single prices.
  • Explain what drives costs up or down.
  • Include example packages, such as good, better, and best.
  • Be explicit about what’s included and excluded.
  • Add country-specific variables where relevant, such as tax or VAT in the UK.

Content examples

  • How much does [service] cost in the UK? Include price ranges and what influences them.
  • X vs. Y pricing: what you get at each level.
  • The hidden costs of [solution] and how to avoid them.
  • Budget checklist: what to prepare before you buy [product or service].

One of the most cited examples in the TAYA world is Yale Appliance. The company embraced transparent, buyer-focused content and saw inbound become its largest sales channel, alongside significant reported growth.

The takeaway isn’t “go sell fridges.” It’s to answer money questions more clearly and honestly than anyone else. Do that, and you build trust at scale.

2. Problems: Turning problems into strengths

This category focuses on being honest about drawbacks, limitations, risks, and who a product or service isn’t for. You have to think beyond pure SEO or GEO. 

A core communication strategy is taking a perceived weakness, such as being a small business, and reframing it as a strength, like a more personalized approach.

Own the areas that could be seen as problems. Present them clearly and constructively so customers understand the trade-offs and context.

The answer layer aims to provide balanced guidance. Pages that focus only on benefits read like marketing. Pages that acknowledge trade-offs read like advice.

People can spot spin quickly. Be direct. Own your limitations. When you do, credibility increases.

How to do it

  • Create problem-and-solution guides.
  • Include “avoid if …” sections.
  • Address common failure modes and misuses.
  • Be explicit about prerequisites, such as budget, timeline, skill, or access.

Content examples

  • The biggest problems with [solution] and how to mitigate them.
  • Is [product or service] worth it? When it’s a great choice and when it isn’t.
  • Common mistakes when buying or implementing [solution].
  • What can go wrong with [approach] and how to reduce risk.

This is where your “experience” in E-E-A-T becomes tangible. “We’ve seen this go wrong when …” carries far more weight than “we’re passionate about excellence.”

3. Versus and comparisons

People rely on comparisons to reduce cognitive load. They want clarity. What’s the difference?

Comparison queries are ideal for answer engines because they lend themselves to structured summaries, tables, and recommendations. If you don’t publish the clearest comparison, you won’t be the source used to generate the clearest answer.

How to do it

  • Compare by use case, not just features.
  • Use a consistent framework, such as price, setup, outcomes, risks, and who it suits.
  • Include clear guidance, such as “If you’re X, choose Y.”

Content examples

  • X vs. Y: which is better for [specific scenario]?
  • In-house vs. outsourced for [service]: cost, risk, and results.
  • Tool A vs. Tool B vs. Tool C: an honest comparison for UK teams.
  • Alternatives to [popular option]: when to choose each.

SEO bonus: These pieces tend to earn links because they’re genuinely useful and because many competitors hesitate to name alternatives directly.

Dig deeper: Chunk, cite, clarify, build: A content framework for AI search

4. Reviews, case studies, and credibility

This isn’t about asking for a five-star review. It’s about creating review-style content that helps buyers evaluate their options.

AI summaries often rely on review-style pages because they’re structured around evaluation. But generic affiliate reviews can be, at best, inconsistent in sincerity. Your advantage is first-hand experience and contextual truth.

How to do it

  • Review your own services honestly, including what clients value and where they struggle.
  • Review the tools you use with clear pros and cons.
  • Publish “what we’d choose and why” for different buyer types.

Content examples

  • Is [solution] worth it? Our honest take after implementing it for X clients.
  • Best [category] tools for [persona], including limitations.
  • The top questions to ask before choosing a [provider].
  • What good looks like: a checklist to evaluate [service].

If you want to be cited in AI answers, you have to sound like a source, not an ad.

5. Best in class – and the courage to recommend others

Sheridan’s view, and it’s a bold one, is that you should sometimes publish “best in class” recommendations even when the best option isn’t you. That’s how trust is built.

The answer layer rewards utility. If your page genuinely helps users choose well, it becomes the kind of resource systems are more likely to reference.

How to do it

  • Build “best for” lists based on clear criteria, not hype.
  • Explain how you evaluated the options.
  • Include scenarios where each option wins or loses.

Content examples

  • Best [solutions] for [use case] in 2026, including criteria and picks.
  • Best [service] providers for [industry] and what to look for.
  • Best budget, best premium, best for speed, best for compliance.
  • If I were buying this today: the decision framework I’d use.

The goal is for your brand to become a trusted educator, not just a vendor.

TAYA as the playbook for answer-first visibility

Strategic content marketing in the age of AI centers on middle-of-the-funnel content, where AI helps prospects make informed decisions. The content you publish and organize on your website remains the foundation, and SEO remains the backbone of AI visibility.

When leveraged effectively, TAYA is a powerful way to map what you should be addressing and to build a content strategy that ensures you’re represented across the AI landscape.

In practice, that means building an editorial program where:

  • Every piece begins with a real buyer question.
  • The five core categories prioritize decision-stage content, not just awareness content.
  • Traditional SEO research validates language and demand.
  • Content is written to satisfy both the human, through clarity and confidence, and the machine, through structure, specificity, evidence, and balanced trade-offs.

This shift also changes how success is measured.

In classic SEO, the win was rank, click, convert.

In the AI era, the win is often to be the source, earn trust, and be chosen, with or without the click.

If your content is the clearest, most in-depth, most honest, and most experience-backed explanation available for the questions buyers are already asking, then whether someone discovers it through Google, Gemini, ChatGPT, or elsewhere, you’ve built something durable.

Which is what strong SEO has always been about. The window has changed. The principles haven’t.

Dig deeper: Mentions, citations, and clicks: Your 2026 content strategy

How to build AI confidence inside your SEO team

18 February 2026 at 17:00

With more than two decades in SEO, I’ve lived through every major disruption the industry has faced — from stuffing meta keywords to rank on AltaVista, to Google reshaping search, to mobile-first indexing, and now AI.

What feels different today is the speed of change and the emotional weight it carries. I see growing pressure across teams, even among seasoned professionals who have weathered every major shift before this one.

Many have a legitimate concern: If AI can do this faster, where do I fit in? That’s not a technical question. It’s a human one.

That uncertainty affects morale and adoption. Productivity slows. Experimentation stalls. Teams either overuse AI without judgment or avoid it altogether.

The real leadership challenge is about building confidence, capability, and trust in AI-assisted teams.

4 tips for building AI confidence in SEO teams

Building real confidence in AI within an SEO team isn’t about deploying new tools. It’s about shifting the culture.

The most effective SEO teams aren’t the ones adopting the most tools. They use AI intentionally and with discipline. They automate data pulls, summarize research, and cluster keywords. This allows teams to focus on strategy, storytelling, and stakeholder alignment.
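
To make “cluster keywords” concrete, here’s a minimal sketch of one common automation: embedding keywords and grouping them with k-means. The libraries, model, and cluster count are illustrative assumptions, not a prescribed workflow:

    from sentence_transformers import SentenceTransformer  # pip install sentence-transformers
    from sklearn.cluster import KMeans                     # pip install scikit-learn

    keywords = [
        "running shoes for flat feet", "best trail running shoes",
        "how to clean running shoes", "marathon training plan",
        "couch to 5k schedule", "running shoe size guide",
    ]

    # Embed each keyword, then group semantically similar ones together.
    embeddings = SentenceTransformer("all-MiniLM-L6-v2").encode(keywords)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(embeddings)

    for cluster in sorted(set(labels)):
        print(cluster, [kw for kw, lab in zip(keywords, labels) if lab == cluster])

The judgment work the article describes, such as naming clusters, pruning mismatches, and mapping groups to strategy, stays with the team.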

Technology adoption is largely cultural, as Harvard Business School has noted. Tools alone don’t drive change. Trust does. That insight applies directly to SEO teams navigating AI today.

Below are four strategies for building AI confidence in your teams through clarity, participation, and shared ownership, not pressure or hype.

1. Earn trust by involving the team in AI tool selection and workflow design

A practical way to strengthen trust is to move from top-down implementation to shared ownership. People trust what they help create.

When AI is imposed on a team, resistance increases. Inviting people into evaluation and workflow design makes AI feel less intimidating and more empowering. Bringing teams in early also surfaces real-world insight into where AI reduces friction or introduces new risks.

Effective leaders:

  • Invite teams to test tools and share feedback.
  • Run small experiments before scaling adoption.
  • Communicate clearly about what you’re adopting, what you’re rejecting, and why.

When teams feel included, they’re more willing to experiment. They learn and stretch into new capabilities. That openness fuels growth and innovation.

Dig deeper: Why SEO teams need to ask ‘should we use AI?’ not just ‘can we?’

2. Meet people where they are – not where you want them to be

AI capability varies widely across SEO teams. Some practitioners experiment daily. Others feel overwhelmed or skeptical, often because they’ve seen past automation trends come and go.

Leaders who strengthen confidence understand that capability develops at different speeds. They create environments that encourage curiosity, where uncertainty is normal, and learning happens continuously, not just when it’s mandated.

That means:

  • Normalizing different comfort levels.
  • Creating psychological safety around “I don’t know yet.”
  • Avoiding shame or over-celebration of early adopters.
  • Offering multiple learning paths.

Recognizing different starting points makes progress feel achievable rather than threatening.

3. Celebrate wins and highlight champions

Confidence grows through visible success.

When someone uses AI to cut a task from hours to minutes, it’s more than a productivity gain. It proves AI can support real work without replacing human judgment.

Effective teams:

  • Share clear examples of AI improving quality and efficiency.
  • Highlight internal champions who can mentor others.
  • Create space for demos and knowledge sharing.
  • Reinforce a culture of experimentation, not judgment.

My agency formed AI focus groups with members from across the organization. One group focused on integrating AI into project management, with representatives from SEO, operations, and leadership.

That shared ownership made adoption more successful. Teams weren’t just implementing AI; they were shaping how it fit into real workflows. The result was stronger buy-in, better collaboration, and greater confidence across the team.

Each group shared its successes and lessons. This built awareness of what worked and why. Momentum builds when teams see their peers using AI responsibly and effectively.

Dig deeper: The future of SEO teams is human-led and agent-powered

4. Frame AI as a collaborative partner, not a replacement

Fear of replacement is real. Ignoring it doesn’t make it disappear. Teams need explicit clarity about where human expertise still matters.

Reframing AI as a partner means emphasizing:

  • AI handles volume. Humans handle nuance.
  • AI accelerates analysis. Humans interpret meaning.
  • AI drafts. Humans validate, refine and contextualize.
  • AI scales output. Humans build trust and influence.

AI can help with execution, but it can’t replace strategic instincts, contextual judgment, or cross-functional leadership. Those are the skills that ultimately move performance forward.

Why experience still matters in AI-driven SEO

AI has lowered the barrier to entry for many SEO tasks. With effective prompts, almost anyone can generate keyword lists, outlines, or summaries. With that accessibility, we see many short-lived tactics and recycled “quick wins.” 

Anyone who’s been in SEO long enough has seen this cycle before. The tactics change. The fundamentals don’t. This is where experience becomes the differentiator.

AI can generate outputs, not accountability

AI can produce content and analyze data, but it doesn’t own outcomes. It doesn’t carry responsibility for brand reputation, compliance, or long-term performance.

SEO professionals remain accountable for:

  • Deciding what to exclude from publication.
  • Assessing technical, reputational, and compliance risks.
  • Weighing long-term consequences against short-term gains.

AI executes. Humans decide. That distinction matters more than ever.

Pattern recognition is learned, not automated

AI excels at surfacing patterns. It struggles to explain why they matter or whether they apply in a specific context.

Experienced SEOs bring a depth of understanding that AI can’t replicate. Their historical background helps them distinguish true shifts from industry noise. 

Few industries have seen as many tactics rise and fall as SEO. Experience enables strategic thinking beyond what worked before and helps avoid repeating tactics that once succeeded but later failed.

AI suggests possibilities. Experience evaluates relevance.

Professional integrity remains a differentiator

In high-visibility search environments, mistakes scale quickly. AI can produce inaccuracies and hallucinations. These errors can put brands at risk of losing trust and facing compliance issues.

Teams with strong professional SEO foundations:

  • Validate AI output instead of assuming correctness.
  • Prioritize accuracy over speed.
  • Maintain ethical SEO standards.
  • Protect brand voice and credibility.

Integrity isn’t automated. It’s practiced. In a high-speed AI environment, that discipline matters even more.

Dig deeper: How to build and lead a successful remote SEO team

Growing the SEO profession in an AI era

AI is accelerating SEO execution.

As routine tasks become automated, the SEO professional’s role shifts to strategic oversight. Time once spent on manual analysis can be redirected to interpreting user intent, shaping search strategy, guiding stakeholders, and assessing risk.

This makes fundamentals more important. Teams still need sound judgment, technical expertise, and accountability. AI can support execution, but professionals remain responsible for decisions, quality, and long-term performance.

Developing the next generation of SEOs requires more than proficiency with tools. It requires teaching:

  • When to rely on AI.
  • When to challenge it.
  • How to apply experience and context to its output.
