Google DeepMind researchers have developed BlockRank, a new method for ranking and retrieving information more efficiently in large language models (LLMs).
BlockRank is designed to solve a challenge called In-context Ranking (ICR), or the process of having a model read a query and multiple documents at once to decide which ones matter most.
As far as we know, BlockRank is not being used by Google (e.g., Search, Gemini, AI Mode, AI Overviews) right now – but it could be used at some point in the future.
What BlockRank changes. ICR is expensive and slow. Models use a process called “attention,” where every word compares itself to every other word. Ranking hundreds of documents at once therefore gets dramatically more expensive for LLMs, because that comparison cost grows quadratically with the amount of text.
How BlockRank works. BlockRank restructures how an LLM “pays attention” to text. Instead of every document attending to every other document, each one focuses only on itself and the shared instructions.
The model’s query section has access to all the documents, allowing it to compare them and decide which one best answers the question.
This transforms the model’s attention cost from quadratic (very slow) to linear (much faster) growth.
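To make the attention change concrete, here is a minimal sketch of a block-structured attention mask. It is not DeepMind's implementation, just the idea: block sizes are made up, and the assumption that the instruction block attends only to itself is mine. Each document sees only itself plus the shared instructions, the query block sees everything, and counting the allowed token pairs shows how much of the dense quadratic cost disappears.

```python
import numpy as np

# Illustrative block sizes: shared instructions, three candidate documents, the query.
blocks = {"instructions": 8, "doc_1": 20, "doc_2": 20, "doc_3": 20, "query": 12}

# Label every token position with the block it belongs to.
labels = [name for name, size in blocks.items() for _ in range(size)]
n = len(labels)

# allowed[i, j] == True means token i may attend to token j.
allowed = np.zeros((n, n), dtype=bool)
for i, src in enumerate(labels):
    for j, dst in enumerate(labels):
        if src == "query":
            allowed[i, j] = True  # the query block can see all documents and instructions
        elif src == "instructions":
            allowed[i, j] = dst == "instructions"  # assumption: instructions see only themselves
        else:
            allowed[i, j] = dst in ("instructions", src)  # each doc sees itself + instructions

dense_pairs = n * n               # full attention: every token vs. every token
block_pairs = int(allowed.sum())  # structured attention: only the allowed pairs
print(f"dense pairs: {dense_pairs}, block-structured pairs: {block_pairs} "
      f"({block_pairs / dense_pairs:.0%} of dense)")
```

Because each document block stays a fixed size and only the query attends across blocks, adding more documents grows the allowed pairs roughly linearly rather than quadratically.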
By the numbers. In experiments using Mistral-7B, Google’s team found that BlockRank:
Ran 4.7× faster than standard fine-tuned models when ranking 100 documents.
Scaled smoothly to 500 documents (about 100,000 tokens) in roughly one second.
Matched or beat leading listwise rankers like RankZephyr and FIRST on benchmarks such as MSMARCO, Natural Questions (NQ), and BEIR.
Why we care. BlockRank could change how future AI-driven retrieval and ranking systems work to reward user intent, clarity, and relevance. That means (in theory) clear, focused content that aligns with why a person is searching (not just what they type) should increasingly win.
What’s next. Google/DeepMind researchers are continuing to redefine what it means to “rank” information in the age of generative AI. The future of search is advancing fast – and it’s fascinating to watch it evolve in real time.
Google has expanded the What’s happening feature within Google Business Profiles to multi-location restaurants and bars in the United States, United Kingdom, Canada, Australia, and New Zealand. Previously, it was available only to single-location restaurants.
The What’s happening feature launched back in May as a way for some businesses to highlight events, deals, and specials prominently at the top of their Google Business Profile. Now, Google is extending it to multi-location businesses.
What Google said. Google’s Lisa Landsman wrote on LinkedIn:
How do you promote your “Taco Tuesday” in Toledo and your “Happy Hour” in Houston… right when locals are searching for a place to go?
I’m excited to share that the Google Business Profile feature highlighting what’s happening at your business, such as timely events, specials and deals, has now rolled out for multi-location restaurants & bars across the US, UK, CA, AU & NZ! (It was previously only available for single-location restaurants)
This is a great option for driving real-time foot traffic. It automatically surfaces the unique specials, live music, or events you’re already promoting at a specific location, catching customers at the exact moment they’re deciding where to eat or grab a cocktail.
What it looks like. Here is a screenshot of this feature:
More details. Google’s Lisa Landsman added, “We’ve already seen excellent results from testing and look forward to hearing how this works for you!”
Availability. This feature is only available for restaurants & bars. Google said it hopes to expand to more categories soon. It is also only available in the United States, United Kingdom, Canada, Australia, and New Zealand.
The initial launch was for single-location Food and Drink businesses in the U.S., UK, Australia, Canada, and New Zealand. Multi-location restaurants and bars can now use it as well.
Why we care. If you manage restaurants or bars – and especially if you manage multiple locations – this may be a new way to get more attention and visitors to your business from Google Search.
Marketing, technology, and business leaders today are asking an important question: how do you optimize for large language models (LLMs) like ChatGPT, Gemini, and Claude?
LLM optimization is taking shape as a new discipline focused on how brands surface in AI-generated results and what can be measured today.
For decision makers, the challenge is separating signal from noise – identifying the technologies worth tracking and the efforts that lead to tangible outcomes.
The discussion comes down to two core areas – and the timeline and work required to act on them:
Tracking and monitoring your brand’s presence in LLMs.
Improving visibility and performance within them.
Tracking: The foundation of LLM optimization
Just as SEO evolved through better tracking and measurement, LLM optimization will only mature once visibility becomes measurable.
We’re still in a pre-Semrush/Moz/Ahrefs era for LLMs.
Tracking is the foundation of identifying what truly works and building strategies that drive brand growth.
Without it, everyone is shooting in the dark, hoping great content alone will deliver results.
The core challenges are threefold:
LLMs don’t publish query frequency or “search volume” equivalents.
Their responses vary subtly (or not so subtly) even for identical queries, due to probabilistic decoding and prompt context.
They depend on hidden contextual features (user history, session state, embeddings) that are opaque to external observers.
Why LLM queries are different
Traditional search behavior is repetitive – millions of identical phrases drive stable volume metrics. LLM interactions are conversational and variable.
People rephrase questions in different ways, often within a single session. That makes pattern recognition harder with small datasets but feasible at scale.
These structural differences explain why LLM visibility demands a different measurement model.
This variability requires a different tracking approach than traditional SEO or marketing analytics.
The leading method uses a polling-based model inspired by election forecasting.
The polling-based model for measuring visibility
A representative sample of 250–500 high-intent queries is defined for your brand or category, functioning as your population proxy.
These queries are run daily or weekly to capture repeated samples from the underlying distribution of LLM responses.
Tracking tools record when your brand and competitors appear as citations (linked sources) or mentions (text references), enabling share of voice calculations across all competitors.
Over time, aggregate sampling produces statistically stable estimates of your brand visibility within LLM-generated content.
Early tools providing this capability include:
Profound.
Conductor.
OpenForge.
Consistent sampling at scale transforms apparent randomness into interpretable signals.
Much like political polls deliver reliable forecasts despite individual variation, this repeated sampling turns noisy single responses into a stable estimate of your brand’s visibility in LLM-generated answers.
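As a minimal illustration of the polling approach, the sketch below counts how often each brand shows up across repeated LLM answers to a tracked query set and converts that into a share-of-voice figure. The brands, queries, and samples are made up, citations and mentions are collapsed into one count, and real tools work with far larger samples.

```python
from collections import Counter

# Hypothetical daily samples: for each tracked query, the brands an LLM
# cited or mentioned in its answer that day.
daily_samples = [
    {"best crm for small business": ["BrandA", "BrandB"],
     "top email marketing tools":   ["BrandB", "BrandC"]},
    {"best crm for small business": ["BrandB"],
     "top email marketing tools":   ["BrandA", "BrandB", "BrandC"]},
]

appearances = Counter()
total_answers = 0
for day in daily_samples:
    for query, brands_seen in day.items():
        total_answers += 1
        appearances.update(set(brands_seen))  # count each brand once per answer

# Share of voice here = % of sampled answers in which the brand appeared.
for brand, count in appearances.most_common():
    print(f"{brand}: {count / total_answers:.0%} share of voice")
```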
Building a multi-faceted tracking framework
While share of voice paints a picture of your presence in the LLM landscape, it doesn’t tell the complete story.
Just as keyword rankings show visibility but not clicks, LLM presence doesn’t automatically translate to user engagement.
Brands need to understand how people interact with their content to build a compelling business case.
Because no single tool captures the entire picture, the best current approach layers multiple tracking signals:
Share of voice (SOV) tracking: Measure how often your brand appears as mentions and citations across a consistent set of high-value queries. This provides a benchmark to track over time and compare against competitors.
Referral tracking in GA4: Set up custom dimensions to identify traffic originating from LLMs (see the sketch after this list). While attribution remains limited today, this data helps detect when direct referrals are increasing and signals growing LLM influence.
Branded homepage traffic in Google Search Console: Many users discover brands through LLM responses, then search directly in Google to validate or learn more. This two-step discovery pattern is critical to monitor. When branded homepage traffic increases alongside rising LLM presence, it suggests a strong connection between LLM visibility and user behavior. This metric captures the downstream impact of your LLM optimization efforts.
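For the referral-tracking item above, one simple starting point is classifying sessions by referrer hostname. This is not a GA4 API integration, just a generic classifier you could run over exported session data; the list of LLM referrer domains is illustrative and will need ongoing maintenance.

```python
from urllib.parse import urlparse

# Referrer hostnames commonly associated with LLM chat interfaces.
# Illustrative only; expand and maintain this list over time.
LLM_REFERRERS = {
    "chat.openai.com", "chatgpt.com", "perplexity.ai", "www.perplexity.ai",
    "gemini.google.com", "copilot.microsoft.com", "claude.ai",
}

def classify_referrer(referrer_url: str) -> str:
    """Return a coarse channel label for a session's referrer URL."""
    host = urlparse(referrer_url).hostname or ""
    if host in LLM_REFERRERS:
        return "llm"
    if host.endswith("google.com") or host.endswith("bing.com"):
        return "search"
    return "other" if host else "direct"

for ref in ["https://chatgpt.com/", "https://www.google.com/", ""]:
    print(ref or "(no referrer)", "->", classify_referrer(ref))
```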
Nobody has complete visibility into LLM impact on their business today, but these methods cover all the bases you can currently measure.
Be wary of any vendor or consultant promising complete visibility. That simply isn’t possible yet.
Understanding these limitations is just as important as implementing the tracking itself.
Because no perfect models exist yet, treat current tracking data as directional – useful for decisions, but not definitive.
Measuring LLM impact is one thing. Identifying which queries and topics matter most is another.
Compared to SEO or PPC, marketers have far less visibility. While no direct search volume exists, new tools and methods are beginning to close the gap.
The key shift is moving from tracking individual queries – which vary widely – to analyzing broader themes and topics.
The real question becomes: which areas is your site missing, and where should your content strategy focus?
To approximate relative volume, consider three approaches:
Correlate with SEO search volume
Start with your top-performing SEO keywords.
If a keyword drives organic traffic and has commercial intent, similar questions are likely being asked within LLMs. Use this as your baseline.
Layer in industry adoption of AI
Estimate what percentage of your target audience uses LLMs for research or purchasing decisions:
High AI-adoption industries: Assume 20-25% of users leverage LLMs for decision-making.
Slower-moving industries: Start with 5-10%.
Apply these percentages to your existing SEO keyword volume. For example, a keyword with 25,000 monthly searches could translate to 1,250-6,250 LLM-based queries in your category.
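Expressed as a quick calculation, using the same illustrative adoption ranges and keyword volume as above:

```python
# Rough estimate of LLM query demand from existing SEO keyword volume.
# The 25,000 figure and the adoption ranges are illustrative, not benchmarks.
monthly_search_volume = 25_000

adoption_ranges = {
    "high AI-adoption industries": (0.20, 0.25),
    "slower-moving industries":    (0.05, 0.10),
}

for segment, (low, high) in adoption_ranges.items():
    low_est = monthly_search_volume * low
    high_est = monthly_search_volume * high
    print(f"{segment}: ~{low_est:,.0f} to {high_est:,.0f} LLM-based queries/month")
```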
Using emerging inferential tools
New platforms are beginning to track query data through API-level monitoring and machine learning models.
Accuracy isn’t perfect yet, but these tools are improving quickly. Expect major advancements in inferential LLM query modeling within the next year or two.
The technologies that help companies identify what to improve are evolving quickly.
While still imperfect, they’re beginning to form a framework that parallels early SEO development, where better tracking and data gradually turned intuition into science.
Optimization breaks down into two main questions:
What content should you create or update, and should you focus on quality content, entities, schema, FAQs, or something else?
How should you align these insights with broader brand and SEO strategies?
Identify what content to create or update
One of the most effective ways to assess your current position is to take a representative sample of high-intent queries that people might ask an LLM and see how your brand shows up relative to competitors. This is where the Share of Voice tracking tools we discussed earlier become invaluable.
These same tools can help answer your optimization questions:
Track who is being cited or mentioned for each query, revealing competitive positioning.
Identify which queries your competitors appear for that you don’t, highlighting content gaps.
Show which of your own queries you appear for and which specific assets are being cited, pinpointing what’s working.
From this data, several key insights emerge:
Thematic visibility gaps: By analyzing trends across many queries, you can identify where your brand underperforms in LLM responses. This paints a clear picture of areas needing attention. For example, you’re strong in SEO but not in PPC content.
Third-party resource mapping: These tools also reveal which external resources LLMs reference most frequently. This helps you build a list of high-value third-party sites that contribute to visibility, guiding outreach or brand mention strategies.
Blind spot identification: When cross-referenced with SEO performance, these insights highlight blind spots: topics or sources where your brand’s credibility and representation could improve.
Understand the overlap between SEO and LLM optimization
LLMs may be reshaping discovery, but SEO remains the foundation of digital visibility.
Across five competitive categories, brands ranking on Google’s first page appeared in ChatGPT answers 62% of the time – a clear but incomplete overlap between search and AI results.
That correlation isn’t accidental.
Many retrieval-augmented generation (RAG) systems pull data from search results and expand it with additional context.
The more often your content appears in those results, the more likely it is to be cited by LLMs.
Brands with the strongest share of voice in LLM responses are typically those that invested in SEO first.
Strong technical health, structured data, and authority signals remain the bedrock for AI visibility.
What this means for marketers:
Don’t over-focus on LLMs at the expense of SEO. AI systems still rely on clean, crawlable content and strong E-E-A-T signals.
Keep growing organic visibility through high-authority backlinks and consistent, high-quality content.
Use LLM tracking as a complementary lens to understand new research behaviors, not a replacement for SEO fundamentals.
Redefine on-page and off-page strategies for LLMs
Just as SEO has both on-page and off-page elements, LLM optimization follows the same logic – but with different tactics and priorities.
Off-page: The new link building
Most industries show a consistent pattern in the types of resources LLMs cite:
Wikipedia is a frequent reference point, making a verified presence there valuable.
Reddit often appears as a trusted source of user discussion.
Review websites and “best-of” guides are commonly used to inform LLM outputs.
Citation patterns across ChatGPT, Gemini, Perplexity, and Google’s AI Overviews show consistent trends, though each engine favors different sources.
This means that traditional link acquisition strategies – guest posts, PR placements, or brand mentions in review content – will likely evolve.
Instead of chasing links anywhere, brands should increasingly target:
Pages already being cited by LLMs in their category.
Reviews or guides that evaluate their product category.
Articles where branded mentions reinforce entity associations.
The core principle holds: brands gain the most visibility by appearing in sources LLMs already trust – and identifying those sources requires consistent tracking.
On-page: What your own content reveals
The same technologies that analyze third-party mentions can also reveal which first-party assets – content on your own website – are being cited by LLMs.
This provides valuable insight into what type of content performs well in your space.
For example, these tools can identify:
What types of competitor content are being cited (case studies, FAQs, research articles, etc.).
Where your competitors show up but you don’t.
Which of your own pages exist but are not being cited.
From there, three key opportunities emerge:
Missing content: Competitors are cited because they cover topics you haven’t addressed. This represents a content gap to fill.
Underperforming content: You have relevant content, but it isn’t being referenced. Optimization – improving structure, clarity, or authority – may be needed.
Content enhancement opportunities: Some pages only require inserting specific Q&A sections or adding better-formatted information rather than full rewrites.
Leverage emerging technologies to turn insights into action
The next major evolution in LLM optimization will likely come from tools that connect insight to action.
Early solutions already use vector embeddings of your website content to compare it against LLM queries and responses (a minimal sketch follows this list). This allows you to:
Detect where your coverage is weak.
See how well your content semantically aligns with real LLM answers.
Identify where small adjustments could yield large visibility gains.
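Here is one way that comparison can look, using an open-source sentence-embedding model. The pages, queries, and the 0.4 “gap” threshold are all illustrative; commercial tools layer in much more signal.

```python
# pip install sentence-transformers
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

# Illustrative inputs: short summaries of your own pages and the LLM queries you track.
pages = {
    "/blog/technical-seo-checklist": "A step-by-step technical SEO audit checklist.",
    "/blog/what-is-geo": "An introduction to generative engine optimization.",
}
tracked_queries = [
    "how do I audit my site's technical SEO?",
    "best tools for tracking brand mentions in AI answers",
]

page_urls = list(pages)
page_emb = model.encode(list(pages.values()), convert_to_tensor=True)
query_emb = model.encode(tracked_queries, convert_to_tensor=True)

# For each tracked query, find the closest page; low similarity hints at a coverage gap.
scores = util.cos_sim(query_emb, page_emb)
for qi, query in enumerate(tracked_queries):
    best = scores[qi].argmax().item()
    sim = scores[qi][best].item()
    flag = "possible coverage gap" if sim < 0.4 else "covered"
    print(f"{query!r} -> {page_urls[best]} (similarity {sim:.2f}, {flag})")
```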
Current tools mostly generate outlines or recommendations.
The next frontier is automation – systems that turn data into actionable content aligned with business goals.
Timeline and expected results
While comprehensive LLM visibility typically builds over 6-12 months, early results can emerge faster than traditional SEO.
The advantage: LLMs can incorporate new content within days rather than waiting months for Google’s crawl and ranking cycles.
However, the fundamentals remain unchanged.
Quality content creation, securing third-party mentions, and building authority still require sustained effort and resources.
Think of LLM optimization as having a faster feedback loop than SEO, but requiring the same strategic commitment to content excellence and relationship building that has always driven digital visibility.
From SEO foundations to LLM visibility
LLM traffic remains small compared to traditional search, but it’s growing fast.
A major shift in resources would be premature, but ignoring LLMs would be shortsighted.
The smartest path is balance: maintain focus on SEO while layering in LLM strategies that address new ranking mechanisms.
Like early SEO, LLM optimization is still imperfect and experimental – but full of opportunity.
Brands that begin tracking citations, analyzing third-party mentions, and aligning SEO with LLM visibility now will gain a measurable advantage as these systems mature.
In short:
Identify the third-party sources most often cited in your niche and analyze patterns across AI engines.
Map competitor visibility for key LLM queries using tracking tools.
Audit which of your own pages are cited (or not) – high Google rankings don’t guarantee LLM inclusion.
Continue strong SEO practices while expanding into LLM tracking – the two work best as complementary layers.
Approach LLM optimization as both research and brand-building.
Don’t abandon proven SEO fundamentals. Rather, extend them to how AI systems discover, interpret, and cite information.
AI tools can help teams move faster than ever – but speed alone isn’t a strategy.
As more marketers rely on LLMs to help create and optimize content, credibility becomes the true differentiator.
And as AI systems decide which information to trust, quality signals like accuracy, expertise, and authority matter more than ever.
It’s not just what you write but how you structure it. AI-driven search rewards clear answers, strong organization, and content it can easily interpret.
This article highlights key strategies for smarter AI workflows – from governance and training to editorial oversight – so your content remains accurate, authoritative, and unmistakably human.
Your organization will benefit from clear boundaries and expectations. Creating policies for AI use ensures consistency and accountability.
Only 7% of companies using genAI in marketing have a full-blown governance framework, according to SAS.
However, 63% invest in creating policies that govern how generative AI is used across the organization.
Source: “Marketers and GenAI: Diving Into the Shallow End,” SAS
Even a simple, one-page policy can prevent major mistakes and unify efforts across teams that may be doing things differently.
As Cathy McPhillips, chief growth officer at the Marketing Artificial Intelligence Institute, puts it:
“If one team uses ChatGPT while others work with Jasper or Writer, for instance, governance decisions can become very fragmented and challenging to manage. You’d need to keep track of who’s using which tools, what data they’re inputting, and what guidance they’ll need to follow to protect your brand’s intellectual property.”
So drafting an internal policy sets expectations for AI use in the organization (or at least the creative teams).
When creating a policy, consider the following guidelines:
What the review process for AI-created content looks like.
When and how to disclose AI involvement in content creation.
How to protect proprietary information (not uploading confidential or client information into AI tools).
Which AI tools are approved for use, and how to request access to new ones.
How to log or report problems.
Logically, the policy will evolve as the technology and regulations change.
Keep content anchored in people-first principles
It can be easy to fall into the trap of believing AI-generated content is good because it reads well.
LLMs are great at predicting the next best sentence and making it sound convincing.
But reviewing each sentence, paragraph, and the overall structure with a critical eye is absolutely necessary.
Think: Would an expert say it like that? Would you normally write like that? Does it offer the depth of human experience that it should?
“People-first content,” as Google puts it, is really just thinking about the end user and whether what you are putting into the world is adding value.
Any LLM can create mediocre content, and any marketer can publish it. And that’s the problem.
People-first content aligns with Google’s E-E-A-T framework, which outlines the characteristics of high-quality, trustworthy content.
E-E-A-T isn’t a novel idea, but it’s increasingly relevant in a world where AI systems need to determine if your content is good enough to be included in search.
Evidence presented in U.S. v. Google LLC shows that quality remains central to ranking:
“RankEmbed and its later iteration RankEmbedBERT are ranking models that rely on two main sources of data: [redacted]% of 70 days of search logs plus scores generated by human raters and used by Google to measure the quality of organic search results.”
Source: U.S. v. Google LLC court documentation
It suggests that the same quality factors reflected in E-E-A-T likely influence how AI systems assess which pages are trustworthy enough to ground their answers.
So what does E-E-A-T look like practically when working with AI content? You can:
Review Google’s list of questions related to quality content: Keep these in mind before and after content creation.
Demonstrate firsthand experience through personal insights, examples, and practical guidance: Weave these insights into AI output to add a human touch.
Use reliable sources and data to substantiate claims: If you’re using LLMs for research, fact-check in real time to ensure the best sources.
Insert authoritative quotes either from internal stakeholders or external subject matter experts: Quoting internal folks builds brand credibility while external sources lend authority to the piece.
Create detailed author bios: Include:
Relevant qualifications, certifications, awards, and experience.
Links to social media, academic papers (if relevant), or other authoritative works.
Add schema markup to articles to clarify the content further: Schema can clarify content in a way that AI-powered search can better understand (a minimal example follows this list).
Become the go-to resource on the topic: Create a depth and breadth of material on the website that’s organized in a search-friendly, user-friendly manner. You can learn more in my article on organizing content for AI search.
Source: “Creating helpful, reliable, people-first content,” Google Search Central
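For the schema markup item above, here is a minimal, illustrative Article snippet that carries the kind of author detail E-E-A-T rewards. Every name and URL is a placeholder; adapt the properties to your own pages.

```python
import json

# Illustrative Article markup with author details that support E-E-A-T signals.
article = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "How to Edit AI-Assisted Content",
    "datePublished": "2025-11-01",
    "author": {
        "@type": "Person",
        "name": "Jane Doe",                      # placeholder author
        "jobTitle": "Senior Content Editor",
        "url": "https://example.com/authors/jane-doe",
        "sameAs": ["https://www.linkedin.com/in/janedoe"],
    },
    "publisher": {"@type": "Organization", "name": "Example Co."},
}

print('<script type="application/ld+json">')
print(json.dumps(article, indent=2))
print("</script>")
```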
Also consider documenting a style guide for AI-assisted content that covers:
The do’s and don’ts of phrases and language to use.
Formatting rules such as SEO-friendly headers, sentence length, paragraph length, bulleted list guidelines, etc.
You can refresh this as needed and use it to further train the model over time.
Build a prompt kit
Put together a packet of instructions that prompts the LLM. Here are some ideas to start with:
The style guide
This covers everything from the audience personas to the voice style and formatting.
If you’re training a custom GPT, you don’t need to do this every time, but it may need tweaking over time.
A content brief template
This can be an editable document that’s filled in for each content project and includes things like:
The goal of the content.
The specific audience.
The style of the content (news, listicle, feature article, how-to).
The role (who the LLM is writing as).
The desired action or outcome.
Content examples
Upload a handful of the best content examples you have to train the LLM. This can be past articles, marketing materials, transcripts from videos, and more.
If you create a custom GPT, you’ll do this at the outset, but additional examples of content may be uploaded, depending on the topic.
Sources
Train the model on the preferred third-party sources of information you want it to pull from, in addition to its own research.
For example, if you want it to source certain publications in your industry, compile a list and upload it to the prompt.
As an additional layer, prompt the model to automatically include any third-party sources after every paragraph to make fact-checking easier on the fly.
SEO prompts
Consider building SEO into the structure of the content from the outset.
Early observations of Google’s AI Mode suggest that clearly structured, well-sourced content is more likely to be referenced in AI-generated results.
With that in mind, you can put together a prompt checklist that includes:
Crafting a direct answer in the first one to two sentences, then expanding with context.
Covering the main question, but also potential subquestions (“fan-out” queries) that the system may generate (for example, questions related to comparisons, pros/cons, alternatives, etc.).
Chunking content into many subsections, with each subsection answering a potential fan-out query to completion.
Being an expert source of information in each individual section of the page, meaning it’s a passage that can stand on its own.
Providing clear citations and semantic richness (synonyms, related entities) throughout.
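Assembled, a prompt kit can be as simple as templated text. The sketch below stitches an illustrative style guide, brief, and SEO checklist into one prompt; the field names and wording are examples, not a prescribed format.

```python
# Minimal prompt-kit assembly: style guide + brief + SEO checklist -> one prompt.
STYLE_GUIDE = "Voice: plain, practical, no hype. Audience: in-house SEO leads."

SEO_CHECKLIST = """\
- Open with a direct one-to-two sentence answer, then expand with context.
- Cover likely fan-out subquestions (comparisons, pros/cons, alternatives).
- Make each subsection able to stand on its own.
- Cite sources and use related entities/synonyms naturally."""

def build_prompt(brief: dict) -> str:
    """Combine the brief, style guide, and checklist into a single prompt string."""
    return f"""You are writing as: {brief['role']}
Goal: {brief['goal']}
Audience: {brief['audience']}
Format: {brief['format']}
Desired outcome: {brief['outcome']}

Style guide:
{STYLE_GUIDE}

Structural checklist:
{SEO_CHECKLIST}
"""

brief = {
    "role": "a senior content marketer at a B2B SaaS company",
    "goal": "explain how to track brand visibility in LLM answers",
    "audience": "marketing directors new to LLM optimization",
    "format": "how-to article",
    "outcome": "the reader sets up a basic tracking workflow",
}
print(build_prompt(brief))
```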
A custom GPT is a personalized version of ChatGPT that’s trained on your materials so it can better create in your brand voice and follow brand rules.
It mostly remembers tone and format, but that doesn’t guarantee the accuracy of output beyond what’s uploaded.
Some companies are exploring RAG (retrieval-augmented generation) to further train LLMs on the company’s own knowledge base.
RAG connects an LLM to a private knowledge base, retrieving relevant documents at query time so the model can ground its responses in approved information.
While custom GPTs are easy, no-code setups, RAG implementation is more technical – but there are companies/technologies out there that can make it easier to implement.
That’s why GPTs tend to work best for small or medium-scale projects or for non-technical teams focused on maintaining brand consistency.
Create a custom GPT in ChatGPT
RAG, on the other hand, is an option for enterprise-level content generation in industries where accuracy is critical and information changes frequently.
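For teams weighing the RAG route, the skeleton is straightforward even though production systems are not. This sketch substitutes toy word-overlap retrieval for a real embedding model and vector store, and leaves the generation call as a placeholder for whichever approved model you use.

```python
# Minimal retrieval-augmented generation loop over an approved knowledge base.
# Retrieval here is simple word overlap purely for illustration; production RAG
# uses embeddings and a vector database.

knowledge_base = {
    "pricing-policy.md": "Pricing changes take effect on the first day of each quarter.",
    "brand-voice.md": "We avoid superlatives and always cite a named source.",
}

def retrieve(question: str, k: int = 1) -> list[str]:
    """Rank documents by how many question words they share (toy scoring)."""
    q_words = set(question.lower().split())
    ranked = sorted(
        knowledge_base,
        key=lambda doc: len(q_words & set(knowledge_base[doc].lower().split())),
        reverse=True,
    )
    return ranked[:k]

question = "When do pricing changes take effect?"
context = "\n".join(knowledge_base[doc] for doc in retrieve(question))
prompt = (
    "Answer using only the approved context below.\n\n"
    f"Context:\n{context}\n\n"
    f"Question: {question}"
)

# answer = llm_client.generate(prompt)  # placeholder: call your approved model here
print(prompt)
```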
Run an automated self-review
Create parameters so the model can self-assess the content before further editorial review. You can create a checklist of things to prompt it.
For example:
“Is the advice helpful, original, people-first?” (Perhaps using Google’s list of questions from its helpful content guidance.)
“Is the tone and voice completely aligned with the style guide?”
Have an established editing process
Even the best AI workflow still depends on trained editors and fact-checkers. This human layer of quality assurance protects accuracy, tone, and credibility.
Writers and editors need to continue to upskill in the coming year, and, according to the Microsoft 2025 annual Work Trend Index, AI skilling is the top priority.
Source: 2025 Microsoft Work Trend Index Annual Report
Professional training creates baseline knowledge so your team gets up to speed faster and can confidently handle outputs consistently.
This includes training on how to effectively use LLMs and how to best create and edit AI content.
In addition, training content teams on SEO helps them build best practices into prompts and drafts.
Editorial procedures
Ground your AI-assisted content creation in editorial best practices to ensure the highest quality.
This might include:
Identifying the parts of the content creation workflow that are best suited for LLM assistance.
Conducting an editorial meeting to sign off on topics and outlines.
Drafting the content.
Performing the structural edit for clarity and flow, then copyediting for grammar and punctuation.
Getting sign-off from stakeholders.
AI editorial process
The AI editing checklist
Build a checklist to use during the review process for quality assurance. Here are some ideas to get you started:
Every claim, statistic, quote, or date is accompanied by a citation for fact-checking accuracy.
All facts are traceable to credible, approved sources.
Outdated statistics (more than two years old) are replaced with fresh insights.
Draft meets the style guide’s voice guidelines and tone definitions.
Content adds valuable, expert insights rather than being vague or generic.
For thought leadership, ensure the author’s perspective is woven throughout.
Draft is run through an AI detector, aiming for a conservative score of 5% or less AI-generated content.
Draft aligns with brand values and meets internal publication standards.
Final draft includes explicit disclosure of AI involvement when required (client-facing/regulatory).
Grounding AI content in trust and intent
AI is transforming how we create, but it doesn’t change why we create.
Every policy, workflow, and prompt should ultimately support one mission: to deliver accurate, helpful, and human-centered content that strengthens your brand’s authority and improves your visibility in search.
The conversation around artificial intelligence (AI) has been dominated by “replacement theory” headlines. From front-line service roles to white-collar knowledge work, there’s a growing narrative that human capital is under threat.
Economic anxiety has fueled research and debate, but many of the arguments remain narrow in scope.
Stanford’s Digital Economy Lab found that since generative AI became widespread, early-career workers in the most exposed jobs have seen a 13% decline in employment.
This fear has spread into higher-paid sectors as well, with hedge fund managers and CEOs predicting large-scale restructuring of white-collar roles over the next decade.
However, much of this narrative is steeped in speculation rather than the fundamental, evolving dynamics of skilled work.
Yes, we’ve seen layoffs, hiring slowdowns, and stories of AI automating tasks. But this is happening against the backdrop of high interest rates, shifts in global trade, and post-pandemic over-hiring.
As the global talent thought-leader Josh Bersin argues, claims of mass job destruction are “vastly over-hyped.” Many roles will transform, not vanish.
What this means for SEO
For the SEO discipline, the familiar refrain “SEO is dead” is just as overstated.
Yes, the nature of the SEO specialist is changing. We’ve seen fewer leadership roles, a contraction in content and technical positions, and cautious hiring. But the function itself is far from disappearing.
In fact, SEO job listings remain resilient in 2025 and mid-level roles still comprise nearly 60% of open positions. Rather than declining, the field is being reshaped by new skill demands.
Don’t ask, “Will AI replace me?” Ask instead, “How can I use AI to multiply my impact?”
Think of AI not as the jackhammer replacing the hammer but as the jackhammer amplifying its effect. SEOs who can harness AI through agents, automation, and intelligent systems will deliver faster, more impactful results than ever before.
“AI is a tool. We can make it or teach it to do whatever we want…Life will go on, economies will continue to be driven by emotion, and our businesses will continue to be fueled by human ideas, emotion, grit, and hard work,” Bersin said.
Rewriting the SEO narrative
As an industry, it’s time to change the language we use to describe SEO’s evolution.
Too much of our conversation still revolves around loss. We focus on lost clicks, lost visibility, lost control, and loss of num=100.
That narrative doesn’t serve us anymore.
We should be speaking the language of amplification and revenue generation. SEO has evolved from “optimizing for rankings” to driving measurable business growth through organic discovery, whether that happens through traditional search, AI Overviews, or the emerging layer of Generative Engine Optimization (GEO).
AI isn’t the villain of SEO; it’s the force multiplier.
When harnessed effectively, AI scales insight, accelerates experimentation, and ties our work more directly to outcomes that matter:
Pipeline.
Conversions.
Revenue.
We don’t need to fight the dystopian idea that AI will replace us. We need to prove that AI-empowered SEOs can help businesses grow faster than ever before.
The new language of SEO isn’t about survival, it’s about impact.
The team landscape has already shifted
For years, marketing and SEO teams grew headcount to scale output.
Today, the opposite is true. Hiring freezes, leaner budgets, and uncertainty around the role of SEO in an AI-driven world have forced leaders to rethink team design.
A recent Search Engine Land report noted that remote SEO roles dropped to 34% of listings in early 2025, while content-focused SEO positions declined by 28%. A separate LinkedIn survey found a 37% drop in SEO job postings in Q1 compared to the previous year.
This signals two key shifts:
Specialized roles are disappearing. “SEO writers” and “link builders” are being replaced by versatile strategists who blend technical, analytical, and creative skill sets.
Leadership is demanding higher ROI per role. Headcount is no longer the metric of success – capability is.
What it means for SEO leadership
If your org chart still looks like a pyramid, you’re behind.
The new landscape demands flexibility, speed, and cross-functional integration with analytics, UX, paid media, and content.
It’s time to design teams around capabilities, not titles.
Rethinking SEO talent
The best SEO leaders aren’t hiring specialists, they’re hiring aptitude. Modern SEO organizations value people who can think across disciplines, not just operate within one.
The strongest hires we’re seeing aren’t traditional technical SEOs focused on crawl analysis or schema. They’re problem solvers – marketers who understand how search connects to the broader growth engine and who have experience scaling impact across content, data, and product.
Progressive leaders are also rethinking resourcing. The old model of a technical SEO paired with engineering support is giving way to tech SEOs working alongside AI product managers and, in many cases, vibe coding solutions. This model moves faster, tests bolder, and builds systems that drive real results.
For SEO leaders, rethinking team architecture is critical. The right question isn’t “Who should I hire next?” It’s “What critical capability must we master to stay competitive?”
Once that’s clear, structure your people and your agents around that need. The companies that get this right during the AI transition will be the ones writing the playbook for the next generation of search leadership.
The new human-led, agent-empowered team
The future of SEO teams will be defined by collaboration between humans and agents.
These agents are AI-enabled systems like automated content refreshers, site-health bots, or citation-validation agents that work alongside human experts.
The human role? To define, train, monitor, and QA their output.
Why this matters
Agents handle high-volume, repeatable tasks (e.g., content generation, basic auditing, link-score filtering) so humans can focus on strategy, insight, and business impact.
The cost of building AI agents can range from $20,000 to $150,000, depending on the complexity of the system, integrations, and the specialized work required across data science, engineering, and human QA teams, according to RTS Labs.
A single human manager might oversee 10-20 agents, shifting the traditional pyramid and echoing the “short pyramid” or “rocket ship” structure explored by Tomasz Tunguz.
The future: teams built around agents and empowered humans.
Real-world archetypes
SaaS companies: Develop a bespoke “onboarding agent” that reads product data, builds landing pages, and runs first-pass SEO audits; a human strategist refines the output.
Marketplace brands (e.g., ahead of a seasonal trend): Use an “Audience Discovery Agent” that taps customer and marketplace data, while the human team writes the narrative and guides the vertical direction.
Enterprise content hubs: Deploy “Content Refresh Agents” that identify high-value pages, suggest optimizations, and push drafts that editors review and finalize.
Integration is key
These new teams succeed when they don’t live in silos. The SEO/GEO squad must partner with paid search, analytics, revenue ops, and UX – not just serve them.
Agents create capacity; humans create alignment and amplification.
A call to SEO practitioners
Building the SEO community of the future will require change.
The pace of transformation has never been faster and it’s created a dangerous dependence on third-party “AI tools” as the answer to what is unknown.
But the true AI story doesn’t begin with a subscription. It begins inside your team.
If the only AI in your workflow is someone else’s product, you’re giving up your competitive edge. The future belongs to teams that build, not just buy.
Here’s how to start:
Build your own agent frameworks, designed with human-in-the-loop oversight to ensure accuracy, adaptability, and brand alignment.
Partner with experts who co-create, not just deliver. The most successful collaborations help your team learn how to manage and scale agents themselves.
Evolve your team structure, move beyond the pyramid mentality, and embrace a “rocket ship” model where humans and agents work in tandem to multiply output, insights, and results.
The future of SEO starts with building smarter teams. It’s humans working with agents. It’s capability uplift. And if you lead that charge, you’ll not only adapt to the next generation of search, you’ll be the ones designing it.
Google added Query groups to the Search Console Insights report. The Query groups feature clusters similar search queries together so you can quickly see the main topics your audience searches for.
What Google said. Google wrote, “We are excited to announce Query groups, a powerful Search Console Insights feature that groups similar search queries.”
“Query groups solve this problem by grouping similar queries. Instead of a long, cluttered list of individual queries, you will now see lists of queries representing the main groups that interest your audience. The groups are computed using AI; they may evolve and change over time. They are designed for providing a better high level perspective of your queries and don’t affect ranking,” Google added.
What it looks like. Here is a sample screenshot of this new Query groups report:
You can see that Google is lumping together “search engine optimization, seo optimization, seo website, seo optimierung, search engine optimization (seo), search …” into the “seo” query group in the second line. This shows the site overall is getting 9% fewer clicks on SEO-related queries than it did previously.
Availability. Google said query groups will be rolling out gradually over the coming weeks. It is a new card in the Search Console Insights report. Plus, query groups are available only to properties that have a large volume of queries, as the need to group queries is less relevant for sites with fewer queries.
Why we care. Many SEOs have been grouping queries into clusters like these manually or through their own tools. Now, Google will do it for you, making it easier for novice and beginner SEOs to understand.
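For a sense of what that manual grouping has looked like, here is a rough, keyword-based sketch. The topic lists and queries are illustrative; Google’s feature and commercial tools use AI rather than hand-built rules.

```python
# Toy query grouping: normalize each query and bucket it under the first topic it matches.
TOPICS = {
    "seo": ["seo", "search engine optimization", "optimierung"],
    "ppc": ["ppc", "paid search", "google ads"],
}

queries = [
    "search engine optimization",
    "seo optimization",
    "seo website",
    "seo optimierung",
    "google ads budget tips",
]

groups: dict[str, list[str]] = {topic: [] for topic in TOPICS}
groups["other"] = []

for q in queries:
    q_norm = q.lower()
    for topic, keywords in TOPICS.items():
        if any(kw in q_norm for kw in keywords):
            groups[topic].append(q)
            break
    else:
        groups["other"].append(q)  # no topic matched

for topic, members in groups.items():
    print(f"{topic}: {members}")
```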
Every year, Search Engine Land is delighted to celebrate the best of search marketing by rewarding the agencies, in-house teams, and individuals worldwide for delivering exceptional results.
Today, I’m excited to announce all 18 winners of the 11th annual Search Engine Land Awards.
The 2025 Search Engine Land Awards winners
Best Use Of AI Technology In Search Marketing
15x ROAS with AI: How CAMP Digital Redefined Paid Search for Home Services
ATRA & Jason Stone Injury Lawyers – Leveraging CRM Data to Scale Case Volume
Best Commerce Search Marketing Initiative – PPC
Adwise & Azerty – 126% uplift in profit from paid advertising & 1 percent point net margin business uplift by advanced cross-channel bucketing
Best Local Search Marketing Initiative – PPC
How We Crushed Belron’s Lead Target by 238% With an AI-Powered Local Strategy (Adviso)
Best B2B Search Marketing Initiative – PPC
Blackbird PPC and Customer.io: Advanced Data Integration to Drive 239% Revenue Increase with 12% Greater Lead Efficiency, with MMM Future-Proofing 2025 Growth
Best Integration Of Search Into Omnichannel Marketing
How NBC used search to drive +2,573 accounts in a Full-Funnel Media Push (Adviso)
Best Overall SEO Initiative – Small Business
Digital Hitmen & Elite Tune: The Toyota Shift That Delivered 678% SEO ROI
Best Overall SEO Initiative – Enterprise
825 Million Clicks, Zero Content Edits: How Amsive Engineered MSN’s Technical SEO Turnaround
Best Commerce Search Marketing Initiative – SEO
Scaling Non-Branded SEO for Assouline to Drive +26% Organic Revenue Uplift (Block & Tam)
Best Local Search Marketing Initiative – SEO
Building an Unbeatable Foundation for Success: Using Hyperlocal SEO to Build Exceptional ROI (Digital Hitmen)
Best B2B Search Marketing Initiative – SEO
Page One, Pipeline Won: The B2B SEO Playbook That Turned 320 Visitors into $10.75M in Pipeline (LeadCoverage)
Agency Of The Year – PPC
Driving Growth Where Search Happens: Stella Rising’s Paid Search Transformation
Agency Of The Year – SEO
How Amsive Rescued MSN’s Global Visibility Through Enterprise Technical SEO at Scale
In-House Team Of The Year – SEO
How the American Cancer Society’s Lean SEO Team Drove Enterprise-Wide Consolidation and AI Search Visibility Gains for Cancer.org
Search Marketer Of The Year
Mike King, founder and CEO of iPullRank
Small Agency Of The Year – PPC
ATRA & Jason Stone Injury Lawyers – Leveraging CRM Data to Scale Case Volume
Small Agency Of The Year – SEO
From Zero to Top of the Leaderboard: Bloom Digital Drives Big Growth With Small SEO Budgets
“I’m going to SMX Next!”
Select winners of the 2025 Search Engine Land Awards will be invited to speak live at SMX Next during our two ask-me-anything-style sessions. Bring your burning SEO and PPC questions to ask this award-winning panel of search marketers!
Congrats again to all the winners. And huge thank yous to everyone who entered the 2025 Search Engine Land Awards, the finalists, and our fantastic panel of judges for this year’s awards.
The web’s purpose is shifting. Once a link graph – a network of pages for users and crawlers to navigate – it’s rapidly becoming a queryable knowledge graph.
For technical SEOs, that means the goal has evolved from optimizing for clicks to optimizing for visibility and even direct machine interaction.
Enter NLWeb – Microsoft’s open-source bridge to the agentic web
At the forefront of this evolution is NLWeb (Natural Language Web), an open-source project developed by Microsoft.
NLWeb simplifies the creation of natural language interfaces for any website, allowing publishers to transform existing sites into AI-powered applications where users and intelligent agents can query content conversationally – much like interacting with an AI assistant.
Developers suggest NLWeb could play a role similar to HTML in the emerging agentic web.
Its open-source, standards-based design makes it technology-agnostic, ensuring compatibility across vendors and large language models (LLMs).
This positions NLWeb as a foundational framework for long-term digital visibility.
Schema.org is your knowledge API: Why data quality is the NLWeb foundation
NLWeb proves that structured data isn’t just an SEO best practice for rich results – it’s the foundation of AI readiness.
Its architecture is designed to convert a site’s existing structured data into a semantic, actionable interface for AI systems.
In the age of NLWeb, a website is no longer just a destination. It’s a source of information that AI agents can query programmatically.
The NLWeb data pipeline
The technical requirements confirm that a high-quality schema.org implementation is the primary key to entry.
Data ingestion and format
The NLWeb toolkit begins by crawling the site and extracting the schema markup.
The schema.org JSON-LD format is the preferred and most effective input for the system.
This means the protocol consumes every detail, relationship, and property defined in your schema, from product types to organization entities.
For any data not in JSON-LD, such as RSS feeds, NLWeb is engineered to convert it into schema.org types for effective use.
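As a simplified illustration of that first ingestion step (a sketch of the general technique, not NLWeb’s actual code), the snippet below pulls schema.org JSON-LD blocks out of a page’s HTML:

```python
# pip install requests beautifulsoup4
import json

import requests
from bs4 import BeautifulSoup

def extract_json_ld(url: str) -> list[dict]:
    """Fetch a page and return every JSON-LD block found in its HTML."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    blocks = []
    for tag in soup.find_all("script", type="application/ld+json"):
        try:
            data = json.loads(tag.string or "")
        except json.JSONDecodeError:
            continue  # skip malformed markup rather than failing the crawl
        blocks.extend(data if isinstance(data, list) else [data])
    return blocks

for item in extract_json_ld("https://example.com/"):
    print(item.get("@type"), "->", item.get("name") or item.get("headline"))
```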
Semantic storage
Once collected, this structured data is stored in a vector database. This element is critical because it moves the interaction beyond traditional keyword matching.
Vector databases represent text as mathematical vectors, allowing the AI to search based on semantic similarity and meaning.
For example, the system can understand that a query using the term “structured data” is conceptually the same as content marked up with “schema markup.”
This capacity for conceptual understanding is absolutely essential for enabling authentic conversational functionality.
Every NLWeb instance operates as an MCP (Model Context Protocol) server, an emerging standard for packaging and consistently exchanging data between various AI systems and agents.
MCP is currently the most promising path forward for ensuring interoperability in the highly fragmented AI ecosystem.
The ultimate test of schema quality
Since NLWeb relies entirely on crawling and extracting schema markup, the precision, completeness, and interconnectedness of your site’s content knowledge graph determine success.
The key challenge for SEO teams is addressing technical debt.
Custom, in-house solutions to manage AI ingestion are often high-cost, slow to adopt, and create systems that are difficult to scale or incompatible with future standards like MCP.
NLWeb addresses the protocol’s complexity, but it cannot fix faulty data.
If your structured data is poorly maintained, inaccurate, or missing critical entity relationships, the resulting vector database will store flawed semantic information.
This leads inevitably to suboptimal outputs, potentially resulting in inaccurate conversational responses or “hallucinations” by the AI interface.
Robust, entity-first schema optimization is no longer just a way to win a rich result; it is the fundamental barrier to entry for the agentic web.
By leveraging the structured data you already have, NLWeb allows you to unlock new value without starting from scratch, thereby future-proofing your digital strategy.
NLWeb vs. llms.txt: Protocol for action vs. static guidance
The need for AI crawlers to process web content efficiently has led to multiple proposed standards.
A comparison between NLWeb and the proposed llms.txt file illustrates a clear divergence between dynamic interaction and passive guidance.
The llms.txt file is a proposed static standard designed to improve the efficiency of AI crawlers by:
Providing a curated, prioritized list of a website’s most important content – typically formatted in markdown.
Attempting to solve the legitimate technical problems of complex, JavaScript-loaded websites and the inherent limitations of an LLM’s context window.
In sharp contrast, NLWeb is a dynamic protocol that establishes a conversational API endpoint.
Its purpose is not just to point to content, but to actively receive natural language queries, process the site’s knowledge graph, and return structured JSON responses using schema.org.
NLWeb fundamentally changes the relationship from “AI reads the site” to “AI queries the site.”
How the two compare, attribute by attribute:
Primary goal – NLWeb: enables dynamic, conversational interaction and structured data output. llms.txt: improves crawler efficiency and guides static content ingestion.
Operational model – NLWeb: an API/protocol (an active endpoint). llms.txt: a static text file (passive guidance).
Data format used – NLWeb: schema.org JSON-LD. llms.txt: Markdown.
Adoption status – NLWeb: an open project, with connectors available for major LLMs, including Gemini, OpenAI, and Anthropic. llms.txt: a proposed standard, not adopted by Google, OpenAI, or other major LLMs.
Strategic advantage – NLWeb: unlocks existing schema investment for transactional AI uses, future-proofing content. llms.txt: reduces computational cost for LLM training/crawling.
The market’s preference for dynamic utility is clear. Despite addressing a real technical challenge for crawlers, llms.txt has failed to gain traction so far.
NLWeb’s functional superiority stems from its ability to enable richer, transactional AI interactions.
It allows AI agents to dynamically reason about and execute complex data queries using structured schema output.
The strategic imperative: Mandating a high-quality schema audit
While NLWeb is still an emerging open standard, its value is clear.
It maximizes the utility and discoverability of specialized content that often sits deep in archives or databases.
This value is realized through operational efficiency and stronger brand authority, rather than immediate traffic metrics.
Several organizations are already exploring how NLWeb could let users ask complex questions and receive intelligent answers that synthesize information from multiple resources – something traditional search struggles to deliver.
The ROI comes from reducing user friction and reinforcing the brand as an authoritative, queryable knowledge source.
For website owners and digital marketing professionals, the path forward is undeniable: mandate an entity-first schema audit.
Because NLWeb depends on schema markup, technical SEO teams must prioritize auditing existing JSON-LD for integrity, completeness, and interconnectedness.
Publishers should ensure their schema accurately reflects the relationships among all entities, products, services, locations, and personnel to provide the context necessary for precise semantic querying.
The transition to the agentic web is already underway, and NLWeb offers the most viable open-source path to long-term visibility and utility.
It’s a strategic necessity to ensure your organization can communicate effectively as AI agents and LLMs begin integrating conversational protocols for third-party content interaction.
Nearly 90% of businesses are worried about losing organic visibility as AI transforms how people find information, according to a new survey by Ann Smarty.
Why we care. The shift from search results to AI-generated answers seems to be happening faster than many expected, threatening the foundation of how companies are found online and drive sales. AI is changing the customer journey and forcing an SEO evolution.
By the numbers. Most prefer to keep the “SEO” label – with “SEO for AI” (49%) and “GEO” (41%) emerging as leading terms for this new discipline.
87.8% of businesses said they’re worried about their online findability in the AI era.
85.7% are already investing or plan to invest in AI/LLM optimization.
61.2% plan to increase their SEO budgets due to AI.
Brand over clicks. Three in four businesses (75.5%) said their top priority is brand visibility in AI-generated answers – even when there’s no link back to their site.
Just 14.3% prioritize being cited as a source (which could drive traffic).
A small group said they need both.
Top concerns. “Not being able to get my business found online” ranked as the biggest fear, followed by the total loss of organic search and loss of traffic attribution.
About the survey. Smarty surveyed 300+ in-house marketers and business owners, mostly from medium and enterprise companies, with nearly half representing ecommerce brands.
Google Search Console’s performance report is stuck and has not shown an update in the main report since Sunday, October 19th. Google confirmed the issue and said it will catch up.
What it looks like. As I said on the Search Engine Roundtable, before Google confirmed the issue, the performance reports for all Search Console profiles are stuck on Sunday. Here is a sample chart:
More details. The weird thing is that when you dive into the 24-hour data view, you do get recent data. So it seems the data is being collected and stored, but it just isn’t being rendered in most of the reporting.
In addition, when you click on the by date breakdown under the chart, Google is only showing data as recent as this past Sunday.
Again, I really think the data is not lost and will show up in the main reporting charts soon.
What Google said. Daniel Waisberg from the Google Search Central team who works with Search Console said on X, “We’re catching up.”
Why we care. If you’ve been looking to run reports for clients or stakeholders, you may have to wait a few more days for this report to catch up. This is not a bug affecting just your site – it affects all sites in Google Search Console – and it should be fixed soon.
In the early days of SEO, ranking algorithms were easy to game with simple tactics that became known as “black hat” SEO – white text on a white background, hidden links, keyword stuffing, and paid link farms.
Early algorithms weren’t sophisticated enough to detect these schemes, and sites that used them often ranked higher.
Today, large language models power the next generation of search, and a new wave of black hat techniques is emerging to manipulate rankings and prompt results for advantage.
The AI content boom – and the temptation to cut corners
Up to 21% of U.S. users access AI tools like ChatGPT, Claude, Gemini, Copilot, Perplexity, and DeepSeek more than 10 times per month, according to SparkToro.
Overall adoption has jumped from 8% in 2023 to 38% in 2025.
It’s no surprise that brands are chasing visibility – especially while standards and best practices are still taking shape.
One clear sign of this shift is the surge in AI-generated content. Graphite.io and Axios report that the share of articles written by AI has now surpassed those created by humans.
Two years ago, Sports Illustrated was caught publishing AI-generated articles under fake writer profiles – a shortcut that backfired.
The move damaged the brand’s credibility without driving additional traffic.
Its authoritativeness, one of the pillars of Google’s E-E-A-T (experience, expertise, authoritativeness, and trustworthiness) framework, was compromised.
While Google continues to emphasize E-E-A-T as the North Star for quality, some brands are testing the limits.
With powerful AI tools now able to execute these tactics faster and at scale, a new wave of black hat practices is emerging.
As black hat GEO gains traction, several distinct tactics are emerging – each designed to exploit how AI models interpret and rank content.
Mass AI-generated spam
LLMs are being used to automatically produce thousands of low-quality, keyword-stuffed articles, blog posts, or entire websites – often to build private blog networks (PBNs).
The goal is sheer volume, which artificially boosts link authority and keyword rankings without human oversight or original insight.
Fake E-E-A-T signals
Search engines still prioritize experience, expertise, authoritativeness, and trustworthiness.
Black hat GEO now fabricates these signals using AI to:
Create synthetic author personas with generated headshots and fake credentials.
Mass-produce fake reviews and testimonials.
Generate content that appears comprehensive but lacks genuine, human-validated experience.
LLM cloaking and manipulation
A more advanced form of cloaking, this tactic serves one version of content to AI crawlers – packed with hidden prompts, keywords, or deceptive schema markup – and another to human users.
The goal is to trick the AI into citing or ranking the content more prominently.
Schema misuse for AI Overviews
Structured data helps AI understand context, but black hat users can inject misleading or irrelevant schema to misrepresent the page’s true purpose, forcing it into AI-generated answers or rich snippets for unrelated, high-value searches.
SERP poisoning with misinformation
AI can quickly generate high volumes of misleading or harmful content targeting competitor brands or industry terms.
The aim is to damage reputations, manipulate rankings, and push legitimate content down in search results.
Even Google surfaces YouTube videos that explain how these tactics work. But just because they’re easy to find doesn’t mean they’re worth trying.
The risks of engaging in – or being targeted by – black hat GEO are significant and far-reaching, threatening a brand’s visibility, revenue, and reputation.
Severe search engine penalties
Search engines like Google are deploying increasingly advanced AI-powered detection systems (such as SpamBrain) to identify and penalize these tactics.
De-indexing: The most severe penalty is the complete removal of your website from search results, making you invisible to organic traffic.
Manual actions: Human reviewers can issue manual penalties that lead to a sudden and drastic drop in rankings, requiring months of costly, intensive work to recover.
Algorithmic downgrading: The site’s ranking for targeted keywords can be significantly suppressed, leading to a massive loss of traffic and potential customers.
Reputation and trust damage
Black hat tactics inherently prioritize manipulation over user value, leading to poor user experience, spammy content, and deceptive practices.
Loss of credibility: When users encounter irrelevant, incoherent, or keyword-stuffed content – or find that an AI-cited answer is baseless – it damages the perception of the brand’s expertise and honesty.
Erosion of E-E-A-T: Since AI relies on E-E-A-T signals for authoritative responses, being caught fabricating these signals can permanently erode the brand’s trustworthiness in the eyes of the algorithm and the public.
Malware distribution: In some extreme cases, cybercriminals use black hat SEO to poison search results, redirecting users to sites that install malware or exploit user data. If a brand’s site is compromised and used for such purposes, the damage is catastrophic.
AI changes the game – not the rules
The growth of AI-driven platforms is remarkable – but history tends to repeat itself.
Black hat SEO in the age of LLMs is no different.
While the tools have evolved, the principle remains the same: best practices win.
Google has made that clear, and brands that stay focused on quality and authenticity will continue to rise above the noise.
ChatGPT referral traffic converts worse than Google search, email and affiliate links, trailing on both conversion rate and revenue per session, according to a new analysis of 973 ecommerce sites.
Why we care. AI search platforms are starting to refer meaningful traffic to retailers – but not yet sales. For now, Google (paid and organic) search still wins on conversion and revenue per session.
By the numbers. The dataset covered 12 months (August 2024 to July 2025), 973 ecommerce sites, and $20 billion in combined revenue.
ChatGPT referral traffic was ~0.2% of total sessions – roughly 1/200th the volume of Google organic.
>90% of LLM-originating ecommerce traffic came from ChatGPT (Perplexity, Gemini, Copilot, etc., were negligible).
Affiliate (+86%) and organic search (+13%) conversion rates were higher than ChatGPT's; only paid social converted worse than ChatGPT.
ChatGPT trailed paid and organic search on revenue per session, but beat paid social.
ChatGPT referrals had lower bounce rates than most channels, though organic and paid search still had the lowest. ChatGPT session depth was also generally lower than that of most channels.
Trendline. Conversion rate and revenue per session from ChatGPT improved, while average order value declined.
Model projections suggested continued gains but no parity with organic search within the next year.
Between the lines. The authors suggested that early-stage friction – trust and verification behavior – may push shoppers to confirm purchases elsewhere before buying, shifting last-click credit to traditional channels.
Yes, but. Findings reflect last-click attribution and an emerging channel. If ChatGPT (and other LLMs) reshape customer journeys or make it easier to buy directly, its impact on sales could become more visible in the data.
Bottom line. Despite the hype, the data suggests AI assistants haven't disrupted Google Search – and won't, at least within the next year. However, the trajectory for AI assistants is up and to the right. Now is the time to test, learn, and iterate to be ready when LLM shopping matures.
About the research. The study analyzed 12 months of first-party Google Analytics data from 973 ecommerce websites generating $20 billion in combined revenue. Researchers compared more than 50,000 ChatGPT-driven transactions with 164 million from traditional digital channels, using regression models that accounted for data sparsity, site effects, and device differences to evaluate conversion, order value, and engagement metrics.
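The study's exact model specification isn't spelled out, but as a hedged sketch of the general approach, a channel-level conversion comparison with site and device controls could be set up roughly as follows. The column names, logistic form, and CSV file are assumptions for illustration, not the researchers' actual code.

```python
# Illustrative sketch: compare conversion odds by channel while controlling for
# site and device, using a simple logistic regression. Column names (converted,
# channel, device, site_id) and the model form are assumptions, not the study's
# actual specification.
import pandas as pd
import statsmodels.formula.api as smf

# Assumed layout: one row per session with converted (0/1), channel
# (e.g. "chatgpt", "organic_search", "email"), device, and site_id.
df = pd.read_csv("sessions.csv")  # placeholder file name

model = smf.logit(
    "converted ~ C(channel, Treatment(reference='organic_search'))"
    " + C(device) + C(site_id)",
    data=df,
).fit(disp=False)

# Negative coefficients on the chatgpt channel dummy (relative to organic search)
# would line up with the reported lower conversion rates for ChatGPT referrals.
print(model.params.filter(like="channel"))
```

With nearly a thousand sites, per-site dummy variables get unwieldy in practice; a mixed-effects or regularized model would be a more realistic choice, but the comparison logic is the same.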
Recent studies echo the same pattern. LLM traffic may be rising, but it’s weaker on engagement and conversion.
All of this research points to a consistent takeaway: AI-driven referrals are growing, but still lag traditional search in both scale and purchase intent.
Looking to take the next step in your search marketing career?
Below, you will find the latest SEO, PPC, and digital marketing jobs at brands and agencies. We also include positions from previous weeks that are still open.
GSA Management is seeking an experienced SEO Manager to join our digital marketing team in driving organic traffic growth and boosting our local and organic rankings across multiple locations. This strategic role is pivotal to our digital marketing success as we expand our footprint nationally. Objective: The ideal candidate will combine technical SEO expertise with […]
Omniscient Digital is an organic growth agency that partners with ambitious B2B SaaS companies like SAP, Adobe, Loom, and Hotjar to turn SEO and content into growth engines. We pride ourselves on being lean, agile, and experimental. Our team thrives on R&D and innovation, always exploring the smartest ways to deliver exceptional results. We believe […]
Please Note: Internal Employees, please access the Jobs Hub app on the Workday Dashboard homepage to apply for the position. The University of Massachusetts Global (UMass Global) is a private, nonprofit affiliate of the University of Massachusetts. Accredited by WASC (Western Association of Schools and Colleges), the university offers undergraduate, graduate, credential, and certificate programs […]
About CSP Agency: We’re an established SEO, GEO, Content and Link-Building agency recognized nationally for our thought leadership and 13 years of driving measurable results. Our team is small by design; a group of dedicated SEO professionals who thrive on collaboration, creative problem-solving, and pushing the boundaries of what’s possible in search. We partner with clients […]
Description The Director of Local SEO & LSA will serve as the strategic leader driving Rankings.io’s local visibility across all client accounts — blending deep expertise in Local SEO with advanced knowledge of Google Local Services Ads (LSA) to help law firms dominate local search results and generate high-quality leads. This role will lead the […]
Here at Lower, we believe homeownership is the key to building wealth, and we’re making it easier and more accessible than ever. As a mission-driven fintech, we simplify the home-buying process through cutting-edge technology and a seamless customer experience. With tens of billions in funded home loans and top ratings on Trustpilot (4.8), Google (4.9), […]
At New Media Advisors, we don’t just deliver SEO and content strategies—we empower in-house marketing teams to build and sustain digital growth. As ex-in-house leaders with 125+ years of combined experience, we partner with mid-market and enterprise brands to solve real problems, drive organic performance, and upskill internal teams. We’re not an agency. We’re strategic […]
At NerdWallet, we’re on a mission to bring clarity to all of life’s financial decisions and every great mission needs a team of exceptional Nerds. We’ve built an inclusive, flexible, and candid culture where you’re empowered to grow, take smart risks, and be unapologetically yourself (cape optional). Whether remote or in-office, we support how you […]
Pella is seeking a strategic, data-driven, and curious SEO specialist to lead and scale our organic search strategy. This role will partner with internal and external stakeholders to manage and deliver cutting edge digital experiences. As an SEO Specialist, this position will be responsible for driving the development and execution of comprehensive SEO strategies, enhancing visibility […]
SEO Content Specialist Role Description This is a part-time position, offering up to 20 hours per week of work. While this Position relies on traditional SEO skills to achieve results in the Marketplace, the SEO Specialist will primarily be creating content and working within CMS (WordPress) daily to achieve desired Rankings. The SEO Specialist position […]
Description Moneturn, a dynamic new venture from Kinetic Investments, is an ambitious marketing agency on a mission to help brands achieve long-term success in the digital landscape. We deliver a full suite of services, spanning SEO, PPC, social media management, content creation, and beyond, tailored to drive growth for iGaming-focused brands. We’re a small but […]
Code3 is an integrated marketing agency, powering business growth for digital disruptors and Fortune 500 leaders alike. Our power is at the intersection of Connections, Creative and Commerce – that’s what is in our DNA. By harvesting insights and utilizing audience data, we work with our clients to develop scroll-stopping content and creative that performs […]
Accelerated Digital Media stands as a self-funded, employee-owned digital marketing agency with a focus on performance media management across paid search and paid social channels. Our culture prioritizes our team and values high standards. We welcome fresh perspectives, champion collaboration, and cultivate individual accountability to meet our ambitious growth objectives. Specializing in direct-to-consumer brands, we […]
Description We’re looking for a brilliant Part-Time Paid Social Video Editor on an initial 6-month fixed-term contract to create bold, results-driven social ads that bring our adventures to life. Read more about working at Much Better Adventures The Role Can you craft bold, engaging ad creatives that drive growth across Meta, TikTok, and YouTube? This […]
Job Description Job Description The Paid Search Strategy Director is a standout expert in the field of B2B paid media (search, display, social, retargeting, etc.). They are the lead day-to-day subject matter expert (SME) for their assigned B2B clients, providing strategic recommendations, analysis, and reporting as well as responding to ad-hoc requests. They have a […]
Work on high impact, strategic, user acquisition-focused projects for paid search engine marketing, developing new growth levers, scaling paid search traffic and growing performance marketing revenue and margin at US News.
Be a channel expert for vertical teams as marketing needs scale
You’ve worked inside WordPress and other CMS platforms, conducted full-scale technical audits, and can wield tools like Ahrefs, SEMrush, Moz, Screaming Frog, or similar like a pro.
You can explain technical work in plain terms—whether you’re coaching a team member, presenting to a client, or reporting to leadership.
Own the global strategy for paid social growth campaigns, with accountability for performance across regions and lines of business.
Architect automation and optimization frameworks in collaboration with Ad Tech and Product partners, using tools like Meta APIs, Smartly, and custom-built solutions to scale operations.
Director of Growth, Havas Media Network (Hybrid, New York City Metropolitan Area)
Salary: $135,000 – $145,000
Lead outbound prospecting for Havas Play and Havas Market, generating and qualifying new business opportunities with net-new clients.
Develop tailored outreach strategies that resonate with brand marketers, e-commerce leaders, and cultural partners.
Note: We update this post weekly. So make sure to bookmark this page and check back.