
Today — 3 April 2026 — Tech

Microsoft Outlook frustrates Artemis II astronauts live from space — Microsoft's reputation for shoddy software now stretches to the moon

In space, no one can hear you scream ... at Microsoft Outlook. Microsoft's weird decision to split Outlook into two separate apps created issues for NASA's Artemis II mission to the Moon, live on camera.

I’m hooked on this solo-dev-made dice roller on PC Game Pass: easy to learn and hard to master

If you’re looking for an addictive, gambling-flavored roguelike inspired by games like Balatro, Dice a Million is for you. Roll the dice, rack up as high a score as you can, and answer the mystery phone calls in this solo-developed indie, available now on PC Game Pass.

(PR) Intel Appoints Aparna Bawa as Executive Vice President and Chief Legal & People Officer

3 April 2026 at 00:32
Intel Corporation today announced the appointment of Aparna Bawa as EVP, chief legal & people officer. Bawa will report directly to CEO Lip-Bu Tan and will lead Intel's global legal, ethics, compliance, people, and culture organizations as the company accelerates its transformation and execution agenda.

"The role of legal and people leadership has never been more critical as Intel drives cultural transformation with discipline, speed, and integrity," said Intel CEO Lip-Bu Tan. "Aparna brings a rare combination of operational rigor, business judgment, and people-first leadership. Her experience helping scale global technology companies through periods of significant change will be invaluable as we build a stronger, more agile Intel."

Hardware Hunter – Automated deal alerts for used hardware


Hardware Hunter monitors Reddit, eBay, Slickdeals, and more to surface used computer hardware deals worth buying. It scores each listing for scam signals, vague specs, condition red flags, and price versus market, then delivers concise email or Telegram alerts with a verdict and link. Define what you're hunting—category, specs, sources, and price range—and it checks every two hours so you only see real opportunities.

View startup
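Deal-scoring systems of this kind usually reduce to a weighted red-flag checklist. A purely hypothetical sketch in Python (the signal names, weights, and thresholds below are invented for illustration and are not Hardware Hunter's actual model):

```python
def score_listing(listing: dict, market_price: float) -> float:
    """Return a 0-100 'worth a look' score; higher is better.
    Signal names and weights are invented for illustration."""
    score = 100.0
    if not listing.get("specs"):          # vague or missing specs
        score -= 25
    if listing.get("condition", "").lower() in {"for parts", "as-is"}:
        score -= 20                       # condition red flag
    ratio = listing["price"] / market_price
    if ratio < 0.4:
        score -= 40                       # too cheap: classic scam signal
    elif ratio < 0.8:
        score += 10                       # genuinely below market
    return max(0.0, min(score, 100.0))

deal = {"price": 250, "specs": "RTX 3080 10GB", "condition": "used"}
print(score_listing(deal, market_price=400.0))   # 100.0
scam = {"price": 100, "specs": "", "condition": "used"}
print(score_listing(scam, market_price=400.0))   # 35.0
```

The clamp keeps the verdict on a fixed 0-100 scale so alerts can use a simple threshold (e.g. only notify above 70).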

Skedly – Receive, filter, retry, and replay webhooks reliably


Skedly is the reliability layer between your event sources and your application. It receives, filters, retries, and replays webhooks so you never miss a critical event, with real-time observability into payloads, headers, status codes, latency, and delivery history. HMAC signature verification, workspace isolation, and configurable alerts help you secure and monitor traffic.

Use the open-source CLI to test with real production events on localhost, and embed a customer-facing webhook dashboard in your app to let users manage endpoints, filters, and retries without building infrastructure.

View startup
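HMAC signature verification, mentioned above, is the standard way to authenticate webhook payloads. A minimal sketch using Python's standard library (the secret and payload here are placeholders, not Skedly's actual scheme):

```python
import hashlib
import hmac

def verify_webhook(secret: bytes, payload: bytes, signature_hex: str) -> bool:
    """Recompute the HMAC-SHA256 of the raw payload and compare it
    in constant time against the signature the sender provided."""
    expected = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_hex)

# A provider signs the raw body with a shared secret (values are placeholders):
secret = b"whsec_example"
body = b'{"event": "invoice.paid"}'
sig = hmac.new(secret, body, hashlib.sha256).hexdigest()

print(verify_webhook(secret, body, sig))         # True
print(verify_webhook(secret, b"tampered", sig))  # False
```

`compare_digest` matters here: comparing hex strings with `==` can leak timing information to an attacker probing signatures byte by byte.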

Hackers Exploit CVE-2025-55182 to Breach 766 Next.js Hosts, Steal Credentials

A large-scale credential harvesting operation has been observed exploiting the React2Shell vulnerability as an initial infection vector to steal database credentials, SSH private keys, Amazon Web Services (AWS) secrets, shell command history, Stripe API keys, and GitHub tokens at scale. Cisco Talos has attributed the operation to a threat cluster it tracks as

Yesterday — 2 April 2026 — Tech

Preplo – Extract recipes from YouTube, TikTok, and Instagram videos


Preplo turns cooking videos from YouTube, TikTok, and Instagram into clear, actionable recipes. Paste a link and its AI parses transcripts or descriptions to extract ingredients, quantities, cost estimates, and timed steps you can follow in full-screen cook mode. Generate smart shopping lists, track a weekly cooking streak, and adapt recipes to your needs. Start free with limited extractions or upgrade for unlimited extractions, advanced features, and priority support. Available on iOS, with Android coming soon.

View startup

REAIGENT7 – Build AI listing pages that answer buyers and capture every lead


REAIGENT7 gives real estate agents their own listing platform with AI that answers buyer questions 24/7, captures leads, and routes every inquiry directly to your REAIGENT7 dashboard or your CRM. Build professional property pages in minutes, share anywhere, and keep your brand front and center.

Manage leads in a simple dashboard, move them through stages, and get follow-up reminders. Generate listing descriptions and social posts with one click, track showings and open house sign-ins, and grow your pipeline without portals or referral fees.

View startup

Ubuntu 26.04 LTS Raises Recommended Memory Requirement to 6 GB

2 April 2026 at 21:56
Canonical increased the recommended system memory for the upcoming Ubuntu 26.04 LTS "Resolute Raccoon" to 6 GB of RAM, the first major change since 2018. According to the release notes, 26.04 LTS now lists 6 GB of RAM as the baseline for a comfortable desktop experience, alongside a 2 GHz dual-core CPU and 25 GB of storage, both unchanged from previous generations. This represents a 50% increase over Ubuntu 18.04 LTS, which raised the bar to 4 GB in 2018, and a notable shift from earlier releases that ran on as little as 1 GB. The change is not caused by a heavier core OS; instead, it reflects the reality of modern workloads. The GNOME desktop, now updated to newer revisions, along with current web browsers such as Firefox and everyday apps like LibreOffice, demands more memory in multitasking scenarios.

Importantly, 6 GB is not a hard requirement. Ubuntu 26.04 LTS will still install on systems with less than 6 GB of RAM, but performance may suffer. Early testing shows that the OS remains functional even on 2 GB systems, albeit with significant slowdowns. As before, the 25 GB storage requirement remains mandatory for the desktop edition. For those on lower-end hardware, the Ubuntu ecosystem provides plenty of options: lighter flavors such as the official Lubuntu, distros like Linux Lite, and manual installations from a minimal base all remain viable. Ubuntu Server can also be deployed on systems with around 1-1.5 GB of RAM, depending on the use case. Ubuntu 26.04 LTS, expected to be Canonical's next long-term support release, is currently in development and scheduled for release on April 23.
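The tiers described above (6 GB for a comfortable desktop, usable down to 2 GB with slowdowns, lighter options below that) can be summarized in a few lines. A sketch with thresholds taken from the article; the labels are my own:

```python
def desktop_experience(ram_gb: float) -> str:
    """Rough expectation for the Ubuntu 26.04 LTS desktop at a given
    RAM size, based on the tiers described in the article."""
    if ram_gb >= 6:
        return "comfortable"              # meets the new recommended baseline
    if ram_gb >= 2:
        return "functional but slow"      # installs and runs, with slowdowns
    return "consider a lighter flavor"    # e.g. Lubuntu or a minimal install

print(desktop_experience(8))   # comfortable
print(desktop_experience(4))   # functional but slow
print(desktop_experience(1))   # consider a lighter flavor
```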

Steam Deck 2 Ditches Semi-Custom APU for Off-the-Shelf AMD Silicon, Eyes 2028 Launch

2 April 2026 at 21:45
Valve's next-generation Steam Deck 2 handheld console is reportedly planned for release in 2028, with significant manufacturing changes expected for this sequel to the highly successful handheld gaming device. According to a well-known industry leaker, KeplerL2, posting in the NeoGAF community, Valve is targeting a 2028 refresh for the second-generation Steam Deck. However, the ongoing supply chain shortages of DRAM and NAND Flash could cause disruptions to these plans, potentially leading to delays. Interestingly, this period is when the shortages are expected to start easing, so the Steam Deck 2 could still be released on time, depending on Valve's sourcing capabilities.

One of the most significant procurement shifts for the Steam Deck 2 is Valve's choice of the computing base that will power the handheld. Instead of using a semi-custom AMD APU, Valve is expected to use an off-the-shelf AMD APU that won't require any custom tuning from AMD to meet Valve's needs. This is welcome news, as the latest Steam Machine showed that Valve's reliance on a semi-custom APU solution made the hardware "obsolete" quickly while the rest of the industry advanced. With any semi-custom solution, stockpiling silicon and waiting for DRAM/NAND modules to arrive puts pressure on Valve to ship a product that is significantly underpowered or too expensive. However, with an off-the-shelf solution, Valve could use the best available option at the time of shipping and optimize SteamOS around it.

AMD Details Upcoming Zen 6 PQOS Extensions: Advanced Bandwidth and Privilege Controls

2 April 2026 at 21:15
Imagine you're a web hosting vendor leasing out a specific number of CPU cores on a large core-count processor. You'd want to set QoS limits on shared L3 cache performance for those cores so they don't hamper the performance of other tenants. AMD this week released a technical document detailing the Platform Quality of Service (PQOS) ISA extensions for its next-generation Zen 6 microarchitecture. These ISA enhancements give sysadmins and cloud providers greater control over processor and memory subsystem performance. The latest document outlines three primary additions to the Zen 6 PQOS feature set: Global Bandwidth Enforcement (GLBE), Global Slow Bandwidth Enforcement (GLSBE), and Privilege-Level Zero Association (PLZA). These features are designed to scale performance management across complex multicore environments by allowing software to regulate bandwidth and execution privileges more effectively across expansive groups of logical processors. The development shows that AMD is steering toward a more closely collaborative hardware QoS solution for its multicore processors.

A highlight of the Zen 6 PQOS updates is the implementation of Global Bandwidth Enforcement (GLBE), which allows system software to specify L3 external bandwidth limits for groups of cores that span across multiple traditional QoS Domains. By grouping these into a unified "GLBE Control Domain," AMD enables a competitively shared bandwidth ceiling for specific Classes of Service (CoS). This upgrades older architectures that only provided L3 external bandwidth enforcement on a strictly per-domain granularity. Next up, AMD introduced Global Slow Bandwidth Enforcement (GLSBE), a parallel feature that applies the exact same multi-domain bandwidth limiting principles to system memory explicitly designated as "Slow Memory." Both GLBE and GLSBE provide granular controls via specific model-specific registers.
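Conceptually, GLBE is a single bandwidth ceiling competitively shared by cores that previously sat in separate QoS domains. The toy model below is purely illustrative software; real enforcement happens in hardware and is configured through model-specific registers, and none of these class or method names come from AMD's document:

```python
class GlbeControlDomain:
    """Toy model: several QoS domains share one bandwidth ceiling (GB/s).
    Illustrative only; real GLBE is a hardware mechanism."""

    def __init__(self, ceiling_gbs: float):
        self.ceiling = ceiling_gbs
        self.demand = {}  # domain id -> requested bandwidth in GB/s

    def request(self, domain: int, gbs: float) -> None:
        self.demand[domain] = gbs

    def grant(self) -> dict:
        """Competitively share the ceiling: scale all domains down
        proportionally when total demand exceeds the global limit."""
        total = sum(self.demand.values())
        if total <= self.ceiling:
            return dict(self.demand)
        scale = self.ceiling / total
        return {d: gbs * scale for d, gbs in self.demand.items()}

ctrl = GlbeControlDomain(ceiling_gbs=100.0)
ctrl.request(0, 80.0)   # domain 0 asks for 80 GB/s
ctrl.request(1, 40.0)   # domain 1 asks for 40 GB/s
print(ctrl.grant())     # demand (120) exceeds the ceiling, so both are scaled
```

The point of the model is the shift the article describes: the limit applies across domains as a group, rather than per-domain.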

Windows Security App Gains Secure Boot Certificate Status Ahead of Major Certificate Refresh

2 April 2026 at 21:11
On your Windows PC, the Unified Extensible Firmware Interface (UEFI) uses Secure Boot certificates to ensure that only trusted software initiates the startup sequence. The certificates currently in use were originally issued in 2011 and are set to expire in late June 2026. To address this, Microsoft has been quietly rolling out updated certificates through Windows Update. Starting in April 2026, users can check their device's status via a new indicator in the Windows Security app. Navigate to Device security and then Secure Boot, and a color-coded badge will show whether your device is fully updated, awaiting an update, or requires immediate attention.

The badge system is simple yet significant. A green checkmark indicates that the new certificates are installed and no further action is needed. A yellow caution badge, which will start appearing in May 2026, means the update is either pending or has been blocked by a hardware or firmware limitation. A red stop icon is the most serious state and could appear as early as June 2026, once older certificates start expiring. When this occurs, the device will no longer be able to receive critical boot-level security updates. The same status is reflected in the Windows Security system tray icon, so warnings are visible even when the app is closed.
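The badge logic described above is effectively a three-state decision. A hypothetical sketch (Microsoft's actual implementation lives inside the Windows Security app; this just mirrors the states the article describes):

```python
def secure_boot_badge(new_certs_installed: bool, old_certs_expired: bool) -> str:
    """Map a device's certificate state to the badge states described above."""
    if new_certs_installed:
        return "green"   # new certificates installed, no action needed
    if old_certs_expired:
        return "red"     # boot-level security updates can no longer arrive
    return "yellow"      # update pending, or blocked by hardware/firmware

print(secure_boot_badge(True, False))    # green
print(secure_boot_badge(False, False))   # yellow
print(secure_boot_badge(False, True))    # red
```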

America’s AI chip rules keep changing — and the rest of the world is paying the price

2 April 2026 at 20:58
We interview experts, including Chris McGuire, senior fellow for China and emerging technologies at the Council on Foreign Relations and former senior director at the U.S. National Security Council, on the Trump Administration's shifting stance toward AI accelerator export control rules.

ChatGPT ads favor clarity over creativity, new data shows

2 April 2026 at 20:30

The new ChatGPT ad format is standardizing, according to a new Adthena analysis of 40,000+ daily placements. What once felt experimental is becoming a disciplined, high-intent system for users already deep in decision mode.

The big picture: ChatGPT ads are converging on a short, structured, highly contextual style that favors precision over persuasion and utility over storytelling, marking a shift from creative-led advertising to real-time, intent-driven assistance.

By the numbers. Every word must carry weight and contribute directly to clarity or conversion:

  • The average headline clocks in at just 30 characters and around 5 words.
  • Body copy averages 116 characters and roughly 19 words.
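Those averages make a handy lint check for draft copy. A sketch in Python; the target numbers come from the Adthena figures above, while the ±30% tolerance is my own assumption:

```python
def fits_chatgpt_ad_norms(headline: str, body: str,
                          tolerance: float = 0.3) -> dict:
    """Compare draft copy against the reported averages:
    ~30-char / 5-word headlines, ~116-char / 19-word bodies.
    Each check passes if the draft is within +/- tolerance of the target."""
    targets = {"headline_chars": 30, "headline_words": 5,
               "body_chars": 116, "body_words": 19}
    actual = {"headline_chars": len(headline),
              "headline_words": len(headline.split()),
              "body_chars": len(body),
              "body_words": len(body.split())}
    return {k: abs(actual[k] - t) <= t * tolerance
            for k, t in targets.items()}

# The sample body below is shorter than the norm, so the body checks fail.
print(fits_chatgpt_ad_norms(
    "Acme CRM: Close deals faster",
    "Teams report 27% shorter sales cycles. Start a free 14-day trial."))
```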

What’s working. The dominant pattern is a “Brand: Benefit” headline, separating the name from a specific value. It works because users in conversational environments expect immediate clarity, not intrigue or ambiguity.

  • Almost every ad leads with the brand name. You need easy recall in a setting where users are already evaluating options, not discovering them.

Headlines are compressed, often reading like functional labels rather than slogans. This brevity carries into the body copy, which typically uses two tight sentences: a proof point followed by an offer or nudge, showing you’re not trying to win an argument but to give one compelling reason to act.

Context mirroring is a defining feature. The strongest ads directly reflect the user’s query or situation, signaling real-time tailoring. This marks a new level of AI-native targeting that goes beyond keyword matching into conversational relevance.

Concrete value signals carry outsized weight. Dollar signs and specific numbers — prices, savings, performance — consistently outperform vague claims. Numbers dominate body copy because they feel credible and native in a setting where you’re actively researching and comparing options.

Offers. Low-friction offers — especially “free” trials or demos — are the most common conversion lever, reducing commitment barriers while users are exploring.

Calls to action. These are explicit and action-oriented, favoring direct phrases like “Shop now,” “Compare,” or “Book” while abandoning generic prompts like “Learn more.”

The overall tone. Calm, confident, and measured, with minimal exclamation points or question marks. It aligns more with helpful guidance than ad hype, helping ads blend into the conversational flow rather than disrupt it.

Why we care. ChatGPT ads reach users at high intent, where clarity and relevance matter more than creativity or storytelling. In a conversational environment, ads compete with useful answers, so vague or overly branded messages get ignored while precise, value-driven copy performs better. This shift rewards short, structured messaging and gives early adopters an advantage as the format standardizes.

Between the lines. While ChatGPT ads share DNA with paid search — especially in their focus on intent and relevance — they differ by integrating into dialogue, responding to high-intent users, and delivering messaging that feels assistive rather than interruptive.

The takeaway. Success in ChatGPT advertising depends on precision, relevance, and credibility over creativity, emotional appeal, or brand-led storytelling. The winning strategy: fit in perfectly when a user needs a clear, trustworthy answer.

The analysis. Adthena CMO Alex Fletcher shared the data on LinkedIn.

Steam on Linux Surpasses 5% Market Share in the Latest Survey Update

2 April 2026 at 20:33
As we enter a new month, Steam's Hardware and Software Survey data has been processed, providing a clearer view of the overall gaming market on the Steam platform. The most notable change in this month's survey is the increase in Linux gamers, who have moved from their historically low single-digit market share into mid-single digits. As of March, Linux-based operating systems were running Steam on 5.33% of all polled systems, an impressive 3.10-percentage-point increase over February's data, which had shown a dip in Linux market share from January's 3.5%. Fortunately, the numbers have now been revised upwards, marking a significant improvement for a community that has been steadily making Linux-based gaming more accessible to everyone.

What might not be surprising is that a large portion of those 5.33% of Linux installations run Valve's customized SteamOS operating system. With a 24.48% share, SteamOS grew by 0.65 percentage points last month alone, while other Linux distributions also contributed significantly. Other Windows alternatives like macOS are gaining momentum as well, with Apple seeing a 1.19-point month-over-month increase to 2.35%. Linux now holds more than twice the market share of macOS, and its growth within the Steam install base is striking, more than doubling in just a month. Perhaps these alternative operating systems are now attracting enough attention from big game studios to encourage native ports instead of reliance on translation tools like Wine/Proton.
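A quick sanity check of the arithmetic, treating the reported increases as percentage-point moves (as survey deltas usually are):

```python
linux_march = 5.33                 # Linux share of surveyed systems, March
linux_feb = linux_march - 3.10     # the reported rise, in percentage points
macos_march = 2.35                 # macOS share, March

print(f"Linux February share: {linux_feb:.2f}%")                 # 2.23%
print(f"Linux grew {linux_march / linux_feb:.1f}x in a month")   # 2.4x
print(f"Linux vs macOS: {linux_march / macos_march:.1f}x")       # 2.3x
```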

(PR) Urban Ascend Launches on Steam April 3

2 April 2026 at 20:14
Urban Ascend is a city-building game centered on continuous expansion, system-driven design, and long-term optimization. Players grow a small town into a highly efficient metropolis by placing buildings, managing resources, and refining interconnected systems that evolve over time. The full version launches on Steam on April 3, 2026, following a public demo that introduced its core progression loop. The full release expands on those systems with additional buildings, upgrades, and mechanics designed to deepen strategic decision-making and long-term planning.

Urban Ascend features nearly 100 buildings and hundreds of upgrades that reshape how the city functions. Players manage citizen needs such as happiness, safety, and governance to unlock powerful bonuses, while responding to dynamic incidents that introduce new challenges as the city grows.

(PR) Solidigm Expands Sacramento Development, Fueling Global AI Leadership

2 April 2026 at 19:47
Solidigm, a pioneer in enterprise data storage, today announced it has exceeded initial investment goals for its Greater Sacramento initiatives, including the company's Rancho Cordova headquarters and surrounding research and development (R&D) campus. In September 2022, Solidigm committed to investing $100 million in regional R&D facilities. Approximately three-and-a-half years into the build-out, the company has surpassed this figure and will continue to invest in local talent and technology to help fuel global AI advancements.

In addition to $75 million in local lab investments, Solidigm has introduced close to 100 new NAND tools through the development of a world-class NAND lab and R&D center worth more than $5 million. "We have the most robust data storage product line for AI data centers," said Greg Matson, SVP, Head of Products and Marketing at Solidigm. "Our industry leading SSDs help our customers achieve the highest levels of efficiency, density, and performance in storage for their AI demands. And all of the innovation for us starts right here in Rancho Cordova."

(PR) NVIDIA GeForce NOW Brings 10 Games to the Cloud

2 April 2026 at 19:21
No joke—GFN Thursday is skipping the tricks and heading straight into the games. April kicks off with ten new titles, bringing fresh adventures to GeForce NOW, including the launch of Capcom's highly anticipated PRAGMATA.

A dozen new games are available to stream this week, including Arknights: Endfield, which expands the acclaimed series into a full 3D real-time strategy adventure. On GeForce NOW, every battle flows with precision and every mission looks sharper than ever. So gear up, grab a controller or gaming device of choice, and get ready to stream—another month of great gaming is now underway.

8BitDo Launches Limited Edition Apple II Inspired Retro 68 AP50 Keyboard

2 April 2026 at 18:57
8BitDo has released a limited edition version of its Retro 68 mechanical keyboard to mark Apple's 50th anniversary. Called the AP50, it takes direct visual inspiration from the Apple II color scheme, with the familiar beige and brown of that era of computing. The keyboard uses a 68-key compact layout built around a gasket-mount system for better typing acoustics and a softer key-press feel. Construction is all-aluminium (chassis, plate, and keycaps), and the 323.3 x 138.5 x 46.5 mm body reflects that, with the keyboard weighing in at 2.2 kg. Switches are Kailh BOX Ice Cream Pro Max, and the PCB is hot-swappable if you want to try something else without soldering. RGB backlighting is included, and the keyboard is programmable through 8BitDo Ultimate Software V2 or via fast-mapping directly on the keyboard without any software.

Connectivity covers all three modes: wired USB, 2.4G wireless, and Bluetooth LE. It's compatible with macOS, Windows 10 and above, and Android 9.0 and above. The 6500 mAh battery is rated for up to 300 hours of use with a 9-hour charge time. The package also includes a set of Wireless Dual Super Buttons (160.2 x 75.3 x 32.6 mm, 270 g), a 2.4G adapter, and a USB cable. At $499.99, the 8BitDo AP50 keyboard is clearly aimed at collectors and enthusiasts rather than anyone shopping on a budget.

Cisco Patches 9.8 CVSS IMC and SSM Flaws Allowing Remote System Compromise

Cisco has released updates to address a critical security flaw in the Integrated Management Controller (IMC) that, if successfully exploited, could allow an unauthenticated, remote attacker to bypass authentication and gain access to the system with elevated privileges. The vulnerability, tracked as CVE-2026-20093, carries a CVSS score of 9.8 out of a maximum of 10.0. "This

N+One – Your AI cycling coach that adapts training to daily readiness


N+One gives cyclists an AI coach that designs and adapts training to your goals and daily readiness. Connect Strava, Garmin, Wahoo, or WHOOP to sync activities, sleep, HRV, and heart rate, then view recovery, mood, and FTP estimates on a single dashboard. Chat with your coach anytime for guidance, nutrition tips, and real-time plan adjustments that fit your schedule.

View startup

Video Clipper – Turn YouTube videos into viral Shorts with AI in minutes


Video Clipper helps creators repurpose long-form YouTube content into Shorts. Paste a YouTube URL and let AI transcribe, detect the most engaging moments, and cut ready-to-upload clips. You pay per minute processed with credits that never expire, starting with 20 free credits. Upload results straight to YouTube and scale your clipping workflow without subscriptions.

View startup

Build your marketing ark: A framework for AI, empathy, and design

2 April 2026 at 19:00
How to design AI-powered marketing systems that reduce friction and burnout

There’s a flood coming. A downpour of noise — more content, more channels, more AI-generated everything, moving faster than most teams can keep up with. Somewhere in that volume, your customers are quietly drowning — overwhelmed, underserved, and one bad experience away from choosing someone else.

You’ve probably felt it on your team, too. Another tool. Another sprint. Another quarter of doing more with less. The productivity metrics look fine from the outside. But inside, people are running on empty.

There’s an old story about a man named Noah who, facing catastrophic disruption, didn’t freeze or panic. He didn’t look for shortcuts or try to outswim the storm. He built — with intention, with a clear design, and with people he trusted. When the waters rose, the ark held.

The brands that lead don’t adopt the most technology the fastest. They build with intention β€” designing systems and experiences that protect people.

What follows is the case for building your ark — and a practical framework to do it.

The hidden emotional tax nobody is measuring

Customer-obsessed organizations achieved 49% faster profit growth and 51% better customer retention rates than their peers, according to Forrester. The gap between what customers need emotionally and what brands deliver comes down to design.

The strain isn’t only on the customer side.

  • AI power users report that it makes their overwhelming workload more manageable (92%), boosts creativity (92%), and helps them focus on their most important work (93%), per Microsoft and LinkedIn’s Work Trend Index.
  • Yet, 60% of leaders say their company lacks a concrete AI vision or plan — meaning the very tool that could relieve team burnout is sitting underutilized.

That gap shows up in real ways.

For customers, it creates friction — too many choices, unclear navigation, and messaging that misses where they are. They arrive with a question and leave with more confusion. They don’t feel seen or helped.

For marketing teams, the impact is quieter but just as serious:

  • Decision fatigue disguised as strategy.
  • Tool overload framed as innovation.
  • Burnout that looks like productivity — until it doesn’t.
  • Fragmented workflows that drain energy faster than they produce results.

Brands that recognize these human issues move faster, retain stronger talent, build deeper customer loyalty, and drive better business outcomes. Enter what I call the wellness sweet spot.

Where AI, empathy, and design come together

The wellness sweet spot is the moment where AI, empathy, and human-first design converge — creating conditions where both your customers and your team can think clearly, act confidently, and trust the experience they’re in.

It’s an architectural decision about how your entire marketing ecosystem is designed to make people feel. When its three pillars are genuinely working together, four things become true simultaneously:

  • AI reduces waste and cognitive load in the experience — making things simpler.
  • Emotional friction is intentionally minimized at every touchpoint.
  • Marketing teams operate from a foundation of wellness (and well-being).
  • Systems and workflows support human thriving, not just throughput.

[Image: The convergence of AI capability, empathy-led design, and human-first systems]

When these conditions are in place, something shifts. AI stops feeling like a disruption and starts working as a stabilizing layer — supporting, protecting, and quietly holding the system together. It manages the overwhelm. The ark keeps floating.

Dig deeper: How to avoid decision fatigue in SEO

AI as an invisible wellness layer

Most marketing leaders still think about AI in terms of what it does — automate, generate, optimize, analyze. Those outcomes matter, but they don’t tell the full story. The more consequential question is how AI makes people feel while it’s doing those things.

For customers, AI used well is a guide that:

  • Summarizes complexity without dumbing it down.
  • Narrows choices in ways that feel helpful rather than manipulative.
  • Anticipates what someone needs next and removes ambiguity from decision paths.
  • Saves time — which is, in a very real sense, saving emotional energy.

For teams, thoughtfully deployed AI absorbs the work that depletes people most: the repetitive, the reactive, and the administrative. It creates space for what human brains do best: strategy, creativity, relationship-building, and nuanced judgment.

When you build your marketing systems around it, the output quality goes up because the people producing it aren’t running on fumes.

This is empathy at scale. Not the kind that lives in a tagline, but the kind that’s baked into how your systems are structured and how your content is designed to reach people.

The new emotional metrics: What to measure when you start caring about feelings

This is where things get practical and start to move ahead of the curve. Most marketing dashboards show what happened — click-through rates, conversion rates, and time on page. Those metrics matter, but they don’t explain why someone left or how they felt along the way.

Emotional metrics help fill that gap by focusing on the conditions under which decisions are made. Research in psychology and neuroscience shows that people make better decisions, build stronger brand relationships, and become more loyal when they feel clear, confident, and calm.

Here’s how traditional metrics map to emotional KPIs:

Traditional metric | Emotional KPI | What it measures, reimagined
--- | --- | ---
Time on page | Clarity index | How quickly someone finds what they need — without confusion
Conversion rate | Decision effort score | Cognitive load required to complete an action
Engagement rate | Customer calm markers | Behavioral signals of confidence, not stress (qualified attention)
Team output volume | Wellness throughput | Strategic output produced with reduced burnout
These are upstream indicators that help explain downstream performance. A low clarity index often shows up as stalled conversion rates. A high decision effort score can lead to rising cart abandonment. Declining wellness throughput tends to result in average output from top strategists.

Brands that start tracking these now gain an advantage over those that wait to react.

5 steps to design toward your wellness sweet spot

A caution before the roadmap: more speed and scale applied to a broken system will not fix it. It will amplify everything that’s wrong with it. These five steps are meant to be done before you push harder on AI adoption.

Step 1: Run an empathy audit

Where are customers confused? Hesitating? Leaving? Map these moments using behavioral data combined with qualitative insight — customer interviews, session recordings, support tickets, search data. Focus less on what people clicked and more on where they felt lost.

Step 2: Simplify for cognitive ease

Fewer choices. Plain language. Cleaner navigation. Every step you remove from a decision path is a small act of respect for your customer’s mental energy. This is generous. It’s designing with intelligence.

Step 3: Use AI as a shepherd

Deploy AI to enhance orientation, clarity, and confidence. Don’t push aggressive automation or manufacture a sense of urgency. AI should make customers feel helped, not herded. There’s a difference, and your audience feels it.

Step 4: Rebuild team workflows around energy

Audit where your team’s cognitive energy actually goes each week. Identify the work that is routine, reactive, or repetitive — and build AI into those gaps first. Protect the hours that require human judgment, creativity, and relationship-building. Those are the hours that drive real growth.

Step 5: Measure the feels

Begin tracking emotional outcomes alongside performance metrics. Start simple: add a one-question post-interaction survey.

Review search data for confusion signals. For example, growing volume for “how do I” or “why can’t I” phrases on your own site may indicate your content isn’t answering questions before they’re asked.

Monitor support ticket themes for friction patterns. A perfect measurement system isn’t required to start. The intention to look is.
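The search-data check in Step 5 is easy to automate. A sketch that tallies confusion-phrase prefixes in site-search queries; the first two phrases come from the article, the last two are my own additions:

```python
from collections import Counter

# "how do i" and "why can't i" are from the article;
# "where is" and "what does" are illustrative additions.
CONFUSION_PREFIXES = ("how do i", "why can't i", "where is", "what does")

def confusion_signals(queries: list[str]) -> Counter:
    """Tally site-search queries that start with a confusion phrase."""
    hits = Counter()
    for q in queries:
        q_norm = q.lower().strip()
        for prefix in CONFUSION_PREFIXES:
            if q_norm.startswith(prefix):
                hits[prefix] += 1
                break
    return hits

queries = ["How do I cancel my plan", "pricing", "Why can't I export data",
           "how do i reset my password"]
print(confusion_signals(queries))
```

Run this weekly over your site-search logs and watch the trend line, not the absolute counts.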

Dig deeper: The secret to work-life harmony in SEO: Setting boundaries

The future belongs to emotionally intelligent brands

In a market where nearly every brand claims to be customer-centric and frictionless, the real differentiator comes down to how people feel and whether systems consistently deliver on that promise.

Leading organizations don’t rely on bigger AI budgets. They align technology with clear intent, prioritize well-timed, empathy-led content over volume, treat customer well-being as part of the brand promise, and protect their teams’ energy as rigorously as performance.

Creating value starts with protecting the people who create it. Noah didn’t survive the flood by ignoring it or fearing it. He paid attention, took action, and built with intention β€” something designed to carry what mattered most: his people, his purpose, his peace, and his future. That’s the kind of leadership this moment calls for.

You don’t have to figure this out alone. The tools are here. The framework is yours. The decision is whether to build before the pressure hits or react once it’s already underway.

Why your content doesn’t appear in AI Overviews (even if it ranks in the top 10)

2 April 2026 at 18:00

You’ve done everything right. You have a fast website with comprehensive content, pages ranking in the top 10, and a strong backlink profile. Yet when you search the query you rank for, your site doesn’t appear in Google’s corresponding AI Overview.

This is a retrieval problem, not a ranking issue. And the difference between the two is the most important shift SEOs need to understand right now.

AI Overviews don’t work like traditional organic rankings. Instead of considering which page has the most signals, AI Overviews look for the page that gives the cleanest, most usable answer.

If your content doesn’t meet that standard, your traditional search ranking is irrelevant. Here’s what’s going wrong, and how to fix it so your content appears in more AI Overviews.

The ranking-citation gap is real β€” and growing

The overlap between AI Overview citations and organic rankings grew from 32.3% to 54.5% between May 2024 and September 2025, according to a BrightEdge study.

This trend sounds encouraging. But it also means that even at peak convergence, nearly half of all AI Overview citations come from pages that don’t rank at the top of organic results. Google actively bypasses higher-ranking pages when it finds content that better serves the AI Overview format.

The pattern varies sharply by sector, though. BrightEdge data shows that in ecommerce, the overlap barely changed, remaining essentially flat over the entire 16-month period. And in Your Money or Your Life (YMYL) categories like healthcare, insurance, and education, the overlap between AI Overview citations and organic rankings ranges from 68% to 75%.

Ranking and visibility are no longer the same thing. You can rank second and be invisible. Or, you can rank on the second page and be the first thing a searcher reads.

Dig deeper: 7 hard truths about measuring AI visibility and GEO performance

5 reasons AI Overviews skip your content

1. Your content answers the wrong version of the question

Informational queries — specifically long-tail and conversational searches — typically trigger AI Overviews. They drive 57% of AI Overviews, while commercial queries trigger this AI feature far less frequently, according to Semrush research.

Google’s AI engine looks for content that matches what the user asks, not just the keyword you’ve targeted. So, an AI Overview answering the query “what’s the best way to manage a remote team’s workload?” probably won’t cite a page that ranks for the keyword “project management software” and leads with features and pricing.

2. You’ve buried the answer

If your introduction spends three paragraphs establishing context, warming up the reader, or restating the question before answering it, the retrieval system moves on. It seeks information it can extract cleanly. If that answer isn’t near the top of the page, the system skips that page.

3. Your structure is opaque to AI systems

Traditional SEO content is built around comprehensive long-form content: 3,000-word guides covering every angle of a topic, written for readers who scroll and skim.

AI retrieval systems don’t work the same way. They need to identify discrete, self-contained answers within your content.

That requires clear heading hierarchies, short paragraphs, and content that AI systems can extract. A section under a specific heading should completely answer the question posed in that heading, without requiring the surrounding context to make sense.

Content written as one long, unbroken narrative is harder for AI systems to parse. Even if every word is accurate and authoritative, it may not earn a citation if the structure doesn’t help the retrieval system identify individual answer units.

Dig deeper: AI Overview citations: Why they don’t drive clicks and what to do

4. Your E-E-A-T signals aren’t visible at the content level

Google has been clear that experience, expertise, authoritativeness, and trustworthiness (E-E-A-T) signals are important for content quality in traditional search. They likely matter for AI Overviews, too. But these signals need to appear in the content itself, not just in your domain profile or link graph.

Strong domain authority counts for less than you’d think if the content itself carries no credibility signals:

  • Who wrote it?
  • Where did the data come from?
  • Is there anything here that couldn’t have been written by someone who’d never worked in this field?

A retrieval system evaluating an individual page doesn’t know your domain’s track record. The page must make the case for itself.

Content-level E-E-A-T signals are particularly important in YMYL categories, where AI Overviews are selective about sources because the risk of misinformation is higher.

5. You’re targeting queries that don’t trigger AI Overviews

Before optimizing your content for AI engines, it’s worth checking whether your target queries trigger AI Overviews at all. As of late 2025, AI Overviews appear in 16% of search results, though that figure isn’t evenly distributed across query types.

Transactional queries, navigational searches, branded queries, and highly local searches are far less likely to trigger an AI Overview. If most of your traffic comes from commercial or transactional keywords, the lack of AI Overview citation may not be a content problem. It may simply be that those query types are less likely to generate overviews in the first place.

What the data tells us about the impact of this shift

The stakes are significant. Research by Seer Interactive shows that organic click-through rates (CTRs) for informational queries that displayed AI Overviews dropped 61%, from 1.76% to 0.61%, between June 2024 and September 2025. Paid CTR fell even further, from 19.7% to 6.34%.

But the same research reveals a critical asymmetry: Brands cited in AI Overviews saw 35% higher organic CTR and 91% higher paid CTR than when they weren’t cited. A citation in an AI Overview doesn’t just protect you from a CTR decline. It actively amplifies your visibility.

The Pew Research Center’s study of searches by U.S. adults in March 2025 found that only 8% of users who encountered an AI Overview clicked a traditional search result, compared to 15% who clicked when no overview appeared. And 26% of searches with AI Overviews resulted in no clicks at all.

If AI Overviews appear for your most valuable queries and you aren’t cited, you aren’t just missing out on the overview. You’re losing clicks you previously received from the organic listing underneath it.

How to optimize for retrieval, not just rankings

These trends require you to adjust how you think about content structure and intent. Here’s where to focus:

  • Rewrite your introductions: Your first paragraph should directly and completely answer the primary question of the page. Save context and elaboration for later sections. Write as if the first 100 words of your page represent a standalone answer.
  • Restructure your headings: Each heading should be a question or a complete, specific claim. The following section should fully answer or support that heading without requiring the reader to review previous sections. Think of each section as a self-contained answer unit.
  • Add explicit expertise signals: Include author attribution with credentials, first-person experience language, original data, and links to primary sources and original research. These signals matter at the content level, not just at the domain level.
  • Audit your query triggers: Manually test your target queries in Google to see which ones actually generate AI Overviews. For those that do, study how the cited sources are structured, the length of the cited sections, and the format of the answer. Use that as your editorial brief.
  • Expand your topical coverage: AI Overviews favor sources that demonstrate breadth of knowledge across a topic, not just single-page depth. Focus on answering several related questions well instead of building one exceptional page surrounded by thin content.

Dig deeper: Want to beat AI Overviews? Produce unmistakably human content

How to shift your SEO approach

What AI Overviews represent is something that’s been discussed for years, but few have truly prepared for: the separation of content quality from ranking signals.

For two decades, we used rankings as a proxy for quality. High-ranking content was, by definition, good enough.

But that assumption no longer holds. Ranking in traditional search indicates that your brand has authority and that your page is relevant to the search query. It says nothing about whether your content is structured in a way that AI retrieval systems can use.

Visibility now goes to whoever understands how AI systems identify, extract, and surface answers. A strong backlink profile won’t help you if the answer is buried on page three of a 4,000-word guide.

Ranking in the top 10 is still worth pursuing. But it’s no longer the whole game.

Linux use hits an all-time-high on Steam, passing 5% user share

Linux use amongst PC gamers is growing, and Valve’s Steam Machine isn’t even out yet. Valve has released its March 2026 Steam hardware survey, and it is clear that Linux adoption continues to grow among gamers. Linux use has now reached an all-time high of 5.33% of Steam users. This month, Linux has over twice […]

The post Linux use hits an all-time-high on Steam, passing 5% user share appeared first on OC3D.

ThreatsDay Bulletin: Pre-Auth Chains, Android Rootkits, CloudTrail Evasion & 10 More Stories

The latest ThreatsDay Bulletin is basically a cheat sheet for everything breaking on the internet right now. No corporate fluff or boring lectures here, just a quick and honest look at the messy reality of keeping systems safe this week. Things are moving fast. The list includes researchers chaining small bugs together to create massive backdoors, old software flaws

(PR) IBM Announces Strategic Collaboration with Arm

2 April 2026 at 17:39
IBM (NYSE: IBM) today announced a strategic collaboration with Arm to develop new dual‑architecture hardware that helps enterprises run future AI and data intensive workloads with greater flexibility, reliability, and security.

IBM's leadership in system design, from silicon to software and security, has helped enterprises adopt emerging technologies with the scale and reliability required for mission‑critical workloads. As AI moves deeper into core business operations, IBM continues to invest in hardware platforms such as the Telum II processor and Spyre Accelerator, which are designed to bring AI from experimentation into everyday enterprise use.

ClawCloud – Run a private, 24/7 OpenClaw assistant with zero setup


ClawCloud provides fully managed hosting for OpenClaw, giving you a private, always-on AI assistant with no setup. Each plan includes a dedicated, isolated instance that stays online 24/7 while the team handles updates, security, scaling, and daily backups. Just log in and start using your assistant to automate tasks across email, code, the browser, files, and more, with priority support available on higher tiers.


Costlix – Compare costs, features, and value across products in seconds


Costlix is an AI-powered comparison engine that analyzes pricing, features, and reviews to deliver clear tables and detailed cost breakdowns. It reveals hidden fees, subscription tiers, and long-term value so you can see which option fits your budget. Use it to compare software, electronics, household goods, and travel, with recommendations tailored to your usage.


'Switch to MAX, by any means necessary' β€” Inside Russia’s great internet crackdown

After censoring the internet for years, the Kremlin is now pushing citizens to the state-controlled MAX by further targeting VPN usage. But the disconnect over the Telegram shutdown could be the needed turning point.

Diverse teams start with diverse VCs

2 April 2026 at 17:45
It is the path of least resistance for a growth-stage company to hire from the familiar Silicon Valley pipelines, but if a founder wants a diverse team, that value has to be put into practice from the very first hire.

6 Google Ads mistakes that hurt ecommerce campaigns

2 April 2026 at 17:00

Your paid social operation is on fire. You know how your audience thinks, the creative process is dialed in, and the results get better every year. Leadership greenlights an expansion to Google Ads β€” a new channel and, critically, a new source of revenue.

As it turns out, applying that same strategy really just buys you an express ticket to a very difficult conversation.

Google rewards a different kind of thinking. Intent signals and campaign logic are different, and the mistakes that eat at your budget don’t always make themselves clear. Brands that apply their existing Meta playbook often find themselves looking at shiny dashboards and dull balance sheets.

These six common mistakes tend to do the most damage before anyone realizes what’s happening. They’re what we see most often when ecommerce brands come to us after making the move to Google β€” and they can all be reversed.

Mistake 1: Treating Google like a retention channel

You can definitely use Google Ads to support retention and brand defense. The problem is when that becomes your whole strategy.

We see this regularly with brands new to the platform who launch directly into Performance Max. Early ROAS looks strong, and everyone’s happy. But a few months in, someone asks the right question: Are we actually growing, or paying to capture purchases that were going to happen anyway?

One client we worked with came to us with branded search and retargeting doing the heavy lifting inside PMax – essentially a tax on demand that had already been created elsewhere. Revenue flatlined because, while the ad spend was real, growth was not.

Net-new customer acquisition requires a different setup.

  • Shopping campaigns structured to surface products to people who have never heard of the brand.
  • Search campaigns built around non-branded, high-intent keywords.
  • Layered PMax configurations that limit the system from defaulting to the easiest conversions.

Given Google’s enormous reach into new audiences, treating it purely as a closing channel leaves most of that opportunity untouched.

Dig deeper: Ecommerce PPC: 4 takeaways that shape how campaigns perform


Mistake 2: Not knowing how to get the most out of Google’s core levers

Paid social experience transfers to Google in some ways, but there are four areas where we see the biggest knowledge gaps.

Search intent

Ads on social media are an interrupting moment. Ads in search engines meet people as they’re looking for something you offer. This changes so much about campaign structure, ad copy, and keyword targeting.

Upper-funnel terms and lower-funnel terms require different approaches, bids, and landing pages. Collapsing them into a single campaign structure is one of the fastest ways to dilute intent and waste budget on traffic that was never going to convert.

Data feed optimization

For ecommerce brands running Shopping and retail Performance Max, the product feed is the foundation everything else is built on. Weak titles, missing attributes, and poor categorization limit how often your products show up and who sees them.

Most brands (including Google-native ones) underinvest here because the work is unglamorous. But a well-optimized feed consistently outperforms one that’s neglected after setup.

Keyword research

Paid search is a keyword-driven channel, which makes keyword strategy its own discipline. Understand match types, search volume, commercial intent, and the relationship between what people type and what they actually want. This takes time to develop, but brands that skip this step usually over-restrict their reach or bleed spend on irrelevant traffic.

Landing pages

Sending high-intent but unfamiliar visitors straight to a product page on Google often underperforms. A more engaging landing page format, like an advertorial, puts that traffic in front of context and trust before asking for the sale.

Brands coming from paid social often overlook this because the funnel architecture they’re used to doesn’t require it.

Dig deeper: 7 Google Ads search term filters to cut wasted spend

Mistake 3: Letting operational issues interrupt campaign momentum

Google’s algorithms need consistent data to make the best decisions for your account. But every time a campaign goes dark β€” for a day or a week β€” there’s a risk that the learning resets. What feels like a minor admin issue can mean weeks of degraded performance and wasted ad spend.

Two types of disruption come up more than any other.

  • Payments: Brands switching to invoice billing or changing card details mid-flight will sometimes see campaigns pause without realizing it until the damage is done. A lapsed payment that takes three days to resolve can cost far more than the bill itself once you factor in recovery time.
  • Tracking and feed integrity: A broken pixel means no conversion data and forces Smart Bidding to optimize blind. A feed error in Merchant Center means products disappear from Shopping and Performance Max. Neither of these failures is loud, and they tend to surface slowly as declining performance that gets misattributed.

They are both preventable with automated alerts, weekly feed audits, and a person or AI agent responsible for monitoring account health between reporting cycles. The cost of oversight is low compared to what happens if you only discover issues after the fact.
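The monitoring described above doesn’t need to be sophisticated to be useful. Here is a minimal sketch of the idea; the thresholds, data shapes, and alert wording are all illustrative assumptions, not a real Google Ads or Merchant Center integration:

```python
# Quiet-failure checks: alert when conversions stop entirely (broken pixel)
# or the feed item count drops sharply (Merchant Center feed error).
def health_alerts(daily_conversions, feed_item_counts, drop_ratio=0.5):
    alerts = []
    if len(daily_conversions) >= 2 and daily_conversions[-1] == 0:
        alerts.append("No conversions recorded yesterday: check the pixel.")
    if len(feed_item_counts) >= 2:
        prev, last = feed_item_counts[-2], feed_item_counts[-1]
        if prev > 0 and last < prev * drop_ratio:
            alerts.append("Feed item count dropped sharply: check Merchant Center.")
    return alerts

# Example: conversions went to zero AND the feed lost most of its items.
print(health_alerts([42, 38, 0], [1200, 1190, 400]))
```

In practice you would feed this from the Google Ads and Merchant Center reporting APIs and route the alerts to email or Slack.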

Mistake 4: Building a campaign structure that’s too granular

The instinct among detail-oriented advertisers is to segment everything because it feels like control on the surface.

  • One campaign per product category.
  • One ad group per keyword.
  • Separate budgets for every audience.

But Google’s automation needs data to make good decisions. When you spread your budget across too many campaigns, each one operates on thin resources and even thinner information. Smart Bidding can’t optimize effectively without sufficient conversion volume, so campaigns stuck below that threshold tend to underperform and stay there.

By over-segmenting, you’ve created the appearance of precision while actually limiting the system’s ability to learn.

The same logic applies to budget. Ten campaigns with a modest shared budget will almost always produce worse results than three well-funded ones. Google needs room to test, adjust, and find the traffic worth paying for. Fragmented budgets don’t allow it to do that.

Build a tighter structure with fewer campaigns, clearly defined goals, and enough budget to compete. This gives the algorithm what it needs while keeping the account manageable enough to oversee effectively.

Dig deeper: How to find and fix the root cause of low conversions

Mistake 5: Leaving campaigns on Max Conversion Value with no ROAS targets

Max Conversion Value is a Smart Bidding strategy that tells Google to spend your budget in whatever way generates the highest total conversion amount – no ceiling, no floor, no efficiency guardrail. Left unsupervised, it will find conversions, but won’t care what it costs to get them.

For brands new to Google Ads, this setting can trick you into thinking you’re crushing it. Conversion value shoots up in the right direction, making the account appear healthy. The problem surfaces when you look at what you actually spent to generate that value.

Without a target ROAS, Google has no efficiency constraint and optimizes for volume, not profitability. But the fix is straightforward:

  • Once you have enough conversion data, set a realistic target.
  • A ROAS goal gives the algorithm a constraint, and shifts the objective from spending budget to spending it well.
  • Targets set too aggressively too early can starve campaigns of traffic before they’ve had a chance to learn.
  • Exercise patience, and a willingness to adjust gradually rather than chasing the ideal number from day one.
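The arithmetic behind a ROAS target is worth making explicit. A worked example with illustrative figures (the spend, value, and target below are assumptions for demonstration only):

```python
# ROAS = conversion value / ad spend. A target ROAS tells Smart Bidding
# the minimum return it should aim for, not just maximum total value.
def roas(conversion_value, spend):
    return conversion_value / spend

spend = 10_000
value = 28_000
print(f"ROAS: {roas(value, spend):.0%}")  # ROAS: 280%

target = 2.5  # i.e., a 250% target ROAS
print("Meets target:", roas(value, spend) >= target)  # Meets target: True
```

A campaign can post an impressive-looking total conversion value and still fall below a sensible target once spend is in the denominator.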

Dig deeper: How each Google Ads bid strategy influences campaign success

Mistake 6: Underfunding campaigns and keeping them stuck in learning

When you launch a Google campaign or make a significant change (like doubling the budget), it enters a new learning period. This is the window for gathering data, testing different auctions, and calibrating toward the conversion patterns you’ve defined.

It’s a normal part of how the platform works, and every campaign goes through it.

But the learning period requires a minimum volume of conversions to complete. Google typically needs around 30-50 conversion events in a short window before bidding stabilizes. A campaign that’s underfunded for this milestone will stay in learning indefinitely.
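That budget math can be sketched directly. Using the article’s roughly 30-50 conversion figure; the CPA and time window below are illustrative assumptions:

```python
# Back-of-envelope: how much budget does a campaign need so Smart Bidding
# can collect enough conversions to exit the learning period?
def budget_to_exit_learning(expected_cpa, conversions_needed=50, days=30):
    total = expected_cpa * conversions_needed
    return total, total / days

total, daily = budget_to_exit_learning(expected_cpa=40.0)
print(f"Total: ${total:,.0f}  Daily: ${daily:,.2f}")  # Total: $2,000  Daily: $66.67
```

If the daily figure is more than you are willing to spend, consolidating campaigns (so conversions pool into one learning phase) is usually better than running several underfunded ones.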

It’s a common trap for brands that are cautious when testing Google.

  • You run your first campaign on a small budget.
  • CPAs are inflated, and data is inconclusive, so you don’t invest more or cut it entirely.
  • In reality, the campaign never had what it needed to graduate out of the learning phase.
  • You walk away from net-new revenue before you’ve even scratched the surface.

Funding a new campaign adequately from the start β€” even if it means consolidating into fewer campaigns and chasing fewer goals β€” gives it the best chance of learning fast and delivering accurate results sooner.


Adding Google to the mix is the right call: Here’s what to do next

Diversifying away from a single ad platform is one of the smartest moves an ecommerce brand can make once it’s mature enough to fight on two fronts. It untethers growth from any one platform’s algorithm changes, auction dynamics, seasonality, and terms of service.

Adding Google to Meta also gives you access to a different kind of demand that is actively expressed rather than passively targeted, which is a meaningful advantage worth building on.

These six mistakes are not reasons you should avoid Google, but a preventative guide to help you approach it with realistic expectations and enough patience to let the system learn. Treating it like a direct analog of what you’re already doing on Meta will make you leave before seeing what’s truly possible.

If you’re still in the early stages of making this move, my guide on how to expand from Meta Ads into Google Ads is a practical place to start. If you’ve seen early success and are now looking for the next layer of optimization, find out how to avoid getting sucked into Google’s many automation traps.

Google adds channel performance timeline view to PMax campaigns

2 April 2026 at 16:33

Google launched a channel performance timeline view in Performance Max. It gives you a clearer breakdown of how Search, YouTube, Display, and other channels contribute to campaign results over time.

What’s new. A timeline graph shows channel-level contributions over a selected period, paired with investment and performance filters. You can quickly see which channels are pulling their weight β€” and which aren’t.

[Screenshot callouts: yellow box – Channel Performance Evolution Over Time graph; pink box (right) – All Ads, Ads Using Product Lists, Ads Using Video filters]

Why we care. Performance Max campaigns run across multiple channels at once, making it difficult to see where your budget is most effective. This gives you a timeline view of channel-level contributions β€” so if YouTube is underperforming while Search drives most conversions, you can see it without digging through exports or relying on guesswork. You can spot channel-level trends earlier and adjust your asset strategy or budget accordingly.

The big picture. This view gives you a more actionable way to evaluate PMax performance without relying solely on Google’s automated decisions.

Bottom line. It’s not full transparency, but it’s a meaningful step in the right direction. You get a cleaner way to spot PMax trend anomalies early and adjust accordingly.

First spotted. This update was first spotted by Axel Falck, Head of Search at Le Mage du SEA, who shared it on LinkedIn.

β€œI didn’t expect these swings”: Steam’s March survey reveals PC hardware trends that caught me off guard

Steam just released its March hardware and software survey, and it's clear that the PC gaming market is going through a massive flux as inflated prices force buyers into new (and old) areas.

AMD Radeon RX 9070 and RX 9070 XT Fall Below MSRP in Germany

2 April 2026 at 16:47
AMD's RDNA 4-based Radeon RX 9070 and RX 9070 XT graphics cards have finally reached reasonable pricing, as German retailers report that these GPUs are now selling below MSRP. In Germany, the European MSRP for Radeon RX 9070 cards is €629, including 19% VAT. For its bigger sibling, the Radeon RX 9070 XT, the European MSRP is listed at €689, also including the sales tax. However, according to multiple listings from German online retailers, both cards are trading below their European MSRP pricing, marking the first occurrence since the memory shortage fiasco began, which took a toll on the gaming community. The cheapest Radeon RX 9070 non-XT model is listed at €539.00 in the form of the ASUS Prime Radeon RX 9070 OC SKU, while the cheapest Radeon RX 9070 XT model is listed at €640 for the ASRock Radeon RX 9070 XT Challenger GPU.

Interestingly, this price drop in Germany is not consistent with the markets in the United States, where GPU pricing for the Radeon RX 9070 and RX 9070 XT still ranges around $810-$820 for the non-XT SKU and about $880-$890 for the Radeon RX 9070 XT model. This represents a large premium in the U.S. market, considering that the Radeon RX 9070 and Radeon RX 9070 XT graphics cards have MSRPs of $549 and $599, respectively. Perhaps a fresh supply of GPUs has hit the German market, causing supply to overwhelm demand and significantly pushing prices down. In the U.S., that is not the case, where prices remain high and on an upward trajectory, according to PCPartPicker. In contrast, the German market is experiencing some of the lowest pricing in recent months, finally giving gamers a break.

Build your own AI search visibility tracker for under $100/month

2 April 2026 at 16:00

Tracking your brand’s visibility in AI-powered search is the new frontier of SEO. The tools built to do this are expensive, often starting at $300 to $500 per month and quickly rising from there. For many, that price is a nonstarter, especially when custom testing needs go beyond what off-the-shelf software can handle.

I faced this exact problem. I needed a specific tool, and it didn’t exist at a price I could afford, so I decided to build it myself. I’m not a developer. I spent a weekend talking to an AI agent in plain English, and the result was a working AI search visibility tracker that does exactly what I need.

Below is the guide I wish I’d had when I started: a step-by-step playbook for building your own custom tool, covering the technology, the process, what broke, and how to get it right faster.

The problem: A custom tool for a complex landscape

My goal was to automate an AI engine optimization (AEO) testing protocol. This wasn’t just about checking one or two models. To get a full picture of AI-driven brand visibility, I knew from the start that we had to track five distinct, critical surfaces:

  • ChatGPT (via API): The most well-known conversational AI.
  • Claude (via API): A major competitor with a different response style.
  • Gemini (via API): Google’s direct, developer-facing model.
  • Google AI Mode: Google’s AI search experience, which uses Gemini 3 for advanced reasoning and multimodal understanding.
  • Google AI Overviews: The summary boxes that appear at the very top of the SERP for many queries, which by late 2025 were appearing in nearly 16% of all Google searches.

On top of that, I needed to score the results using a custom 5-point rubric: brand name inclusion, accuracy, correctness of pricing, actionability, and quality of citations. No existing SaaS tool offered this exact combination of surfaces and custom scoring. The only path forward was to build.
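As a sketch, the five-criterion rubric might be scored like this. The pass/fail checks themselves are my assumption; the article only names the criteria, not how each one is evaluated:

```python
# The article's five criteria, one point each, for a score out of 5.
RUBRIC = ["brand_name_included", "accurate", "pricing_correct",
          "actionable", "citations_ok"]

def score_response(checks: dict) -> int:
    """One point per rubric criterion that passed; maximum score is 5."""
    return sum(1 for criterion in RUBRIC if checks.get(criterion, False))

result = {"brand_name_included": True, "accurate": True,
          "pricing_correct": False, "actionable": True, "citations_ok": True}
print(score_response(result))  # 4
```

In the real tool, each boolean would come from checking an AI surface’s response text, either programmatically or with a grading prompt.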

Here are a few screenshots of the internal tool as it stands. You can see some of my frustration in the agent chat window.

[Screenshots: the vibe-coded AI visibility tracking tool’s dashboard, test runs, and analytics views]

The method: Using vibe coding to build the tool

This project was built using vibe coding, a way of turning natural language instructions into a working application with an AI agent. You focus on the goal, the β€œvibe,” and the AI handles the complex code.

This isn’t a fringe concept. With 84% of developers now using AI coding tools and a quarter of Y Combinator’s Winter 2025 startups being built with 95% AI-generated code, this method has become a viable way for non-developers to create powerful internal tools.

Dig deeper: How vibe coding is changing search marketing workflows

Your tech stack: The three tools you’ll need

You can replicate this entire project with just three things, keeping your monthly cost under $100.

Replit Agent

This is a development environment that lives entirely in your web browser. Its AI agent lets you build and deploy applications just by describing what you want. You don’t need to install anything on your computer. The plan I used costs $20/month.

DataForSEO APIs

This was the backbone of the project. Their APIs let you pull data from all the different AI surfaces through a single, unified system.

You can get responses from models like ChatGPT and Claude, and pull the specific results from Google’s AI Mode and AI Overviews. It has pay-as-you-go pricing, so you only pay for what you use.
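As a rough sketch, DataForSEO requests follow a task-array pattern: HTTP Basic auth and a POST body containing a JSON array of task objects. The endpoint path and payload fields below are assumptions to verify against DataForSEO’s own documentation before use:

```python
import base64
import json
import urllib.request

# Assumed endpoint path; check the DataForSEO docs for the surface you need.
ENDPOINT = "https://api.dataforseo.com/v3/serp/google/organic/live/advanced"

def build_request(login, password, keyword):
    """Build a POST request with Basic auth and a one-task JSON array body."""
    payload = json.dumps([{"keyword": keyword,
                           "location_code": 2840,   # United States (assumed code)
                           "language_code": "en"}]).encode()
    token = base64.b64encode(f"{login}:{password}".encode()).decode()
    return urllib.request.Request(
        ENDPOINT, data=payload,
        headers={"Authorization": f"Basic {token}",
                 "Content-Type": "application/json"})

req = build_request("login", "password", "ai search visibility tools")
print(req.full_url)
# response = urllib.request.urlopen(req)  # uncomment with real credentials
```

Handing this documentation page to the coding agent, rather than letting it guess the auth scheme, is exactly the lesson in Step 4 below.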


Direct LLM APIs (optional but recommended)

I also set up direct connections to the APIs for OpenAI (ChatGPT), Anthropic (Claude), and Google (Gemini). This was useful for double-checking results and debugging when something seemed off.

The playbook: A step-by-step guide to building your tool

Building with an AI agent is a partnership. The AI will only do what you ask, so your job is to be a clear and effective guide.

Here’s a repeatable framework that will help you avoid the biggest mistakes.

Step 1: Write a requirements document first

Before you even open Replit, create a simple text document that outlines exactly what you need. This is your blueprint. Include:

  • The core problem you’re solving.
  • Every feature you want (e.g., CSV upload, custom scoring, data export).
  • The data you’ll put in, and the reports you want out.
  • Any APIs you know you’ll need to connect to.

Start your conversation with the AI agent by uploading this document. It will serve as the foundation for the entire build.

Step 2: Ask the AI, β€˜What am I missing?’

This is the most important step. After you provide your requirements, the AI has context. Now, ask it to find the blind spots. Use these exact questions:

  • β€œWhat am I not accounting for in this plan?”
  • β€œWhat technical issues should I know about?”
  • β€œHow should data be stored so my results don’t disappear?”

That last question is critical. I didn’t ask it, and I lost a whole batch of test results because the agent hadn’t built a database to save them.
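Asking for persistent storage can be as simple as having the agent wire results into SQLite. A minimal sketch of what that might look like (the schema and names are illustrative, not the article's actual app):

```python
import sqlite3

def init_db(path="results.db"):
    """Create a durable results table so test runs survive restarts."""
    conn = sqlite3.connect(path)
    conn.execute(
        """CREATE TABLE IF NOT EXISTS results (
               id INTEGER PRIMARY KEY AUTOINCREMENT,
               prompt TEXT NOT NULL,
               surface TEXT NOT NULL,
               response TEXT,
               fetched_at TEXT DEFAULT CURRENT_TIMESTAMP
           )"""
    )
    return conn

def save_result(conn, prompt, surface, response):
    """Insert one row inside a transaction."""
    with conn:
        conn.execute(
            "INSERT INTO results (prompt, surface, response) VALUES (?, ?, ?)",
            (prompt, surface, response),
        )
```

Even a single-file database like this means a crash or redeploy no longer wipes a batch of test results.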

Step 3: Build one feature at a time and test it

Don’t ask the AI to build everything at once. Give it one small task, like β€œbuild a screen where I can upload a CSV file of prompts.” 

Once the agent says it’s done, test that single feature. Does it work? Great. Now move to the next one.Β 

This incremental approach makes it much easier to find and fix problems.

Dig deeper: How to vibe-code an SEO tool without losing control of your LLM

Step 4: Point the agent to the documentation

When it’s time to connect to an API like DataForSEO, don’t assume the AI knows how it works. Find the API documentation page for what you’re trying to do, and give the URL directly to the agent.Β 

A simple instruction like, β€œRead the documentation at this URL to implement the authentication,” will save you hours of frustration. My first attempt at connecting failed because the agent guessed the wrong method.
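For reference, DataForSEO's documentation describes HTTP Basic authentication: a base64-encoded login:password pair in the Authorization header. A sketch of building that header yourself (verify against the current docs before relying on it):

```python
import base64

def basic_auth_header(login, password):
    """Encode 'login:password' as HTTP Basic auth request headers."""
    token = base64.b64encode(f"{login}:{password}".encode()).decode()
    return {
        "Authorization": f"Basic {token}",
        "Content-Type": "application/json",
    }
```

Knowing what the correct header should look like makes it much faster to spot when the agent has guessed a different scheme, such as a bearer token.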

Step 5: Save working versions

Before you ask for a major new feature, save a copy of your project. In Replit, this is called β€œforking.” New features can sometimes break old ones.Β 

I learned this when the agent was working on my results table, and it accidentally broke the CSV upload feature that had been working perfectly. Having a saved version makes it easy to go back and see what changed.

Dig deeper: Inspiring examples of responsible and realistic vibe coding for SEO

What will break: A field guide to common problems

Nearly everything will break at some point. That’s part of the process. Here are the most common issues I ran into, and the lessons I learned, so you can be prepared.

1. API authentication fails. The agent will often try a generic method. Fix: give the agent the exact URL to the API’s authentication documentation.

2. Results disappear. The agent may not build a database by default, storing data in temporary memory instead. Fix: in your first step, ask the agent to include a database for persistent storage.

3. API responses don’t show up. You might see data in your API provider’s dashboard, but it’s missing in your app. This is usually a parsing error. Fix: copy the raw JSON response from your API provider and paste it into the chat. Say, “The app isn’t displaying this data. Find the error in the parsing logic.”

4. Model responses are cut short. An LLM like Claude might suddenly start giving one-word answers. This often means the token limit was accidentally changed. Fix: after any update, run a quick test on all your connected AI surfaces to ensure the basic parameters haven’t changed.

5. API results don’t match the public version. ChatGPT’s public website provides web citations, but the API might not. Fix: realize that APIs often have different default settings. You may need to explicitly tell the agent to enable features like web search for the API call.

6. Citation URLs are unusable. Gemini’s API returned long, encoded redirect links instead of the final source URLs. Fix: inspect the raw data. You may need to ask the agent to build a post-processing step, like a redirect resolver, to clean up the data.

7. Your app isn’t updated. You build a great new feature, but it doesn’t seem to be working in the live app. Fix: understand the difference between your development environment and your production app. You need to explicitly “publish” or “deploy” your changes to make them live.
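For problem 3 above, the usual culprit is code that assumes every level of a nested API response exists. A defensive extractor like this sketch returns an empty list instead of crashing; the tasks -> result -> items nesting mirrors DataForSEO-style responses, but treat the exact keys as an assumption:

```python
def extract_items(raw):
    """Walk a nested tasks -> result -> items response defensively.

    Any missing or null level yields [] rather than an exception, so a
    malformed response shows up as empty output instead of a crash.
    """
    items = []
    for task in raw.get("tasks") or []:
        for result in task.get("result") or []:
            items.extend(result.get("items") or [])
    return items
```

Pasting the raw JSON into the chat alongside a helper like this gives the agent a concrete target: the keys the data actually uses versus the keys the parsing code expected.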

The real costs: Is it worth it?

Building this tool saved me a significant amount of money. Here’s a simple cost comparison against a mid-tier SaaS tool.

  • Software subscription: ~$20/month for the DIY tool (Replit) vs. $500/month for the SaaS alternative.
  • API usage: ~$60/month (variable) for the DIY tool vs. included in the SaaS price.
  • Total monthly cost: ~$80/month DIY vs. $500/month SaaS.

The biggest cost is your time. I spent a weekend and several evenings building the first version. However, I now have an asset that I can modify and reuse for any client without my costs increasing.Β 

The hidden costs are real: there’s no customer support, and you are responsible for maintenance. But for many, the savings and customization are worth it.

Dig deeper: AI agents in SEO: A practical workflow walkthrough

Should you build your own tool?

This approach isn’t for everyone. Here’s a simple guide to help you decide.

Build your own if:

  • You need a custom testing method that no SaaS tool offers.
  • You want a white-labeled tool for your agency.
  • Your budget is tight, but you have the time to invest in the process.

Stick with a SaaS tool if:

  • Your time is more valuable than the monthly subscription fee.
  • You need enterprise-level security and dedicated support.
  • Standard, off-the-shelf features are good enough for your needs.

For many SEOs, the answer is clear. The ability to build a tool that works exactly the way you do, for less than $100 a month, is a game-changer.Β 

The process will be frustrating at times, but you will end up with something that gives you a unique advantage. The era of the practitioner-developer is here. It’s time to start building.

The Resident Evil Classic Trilogy has arrived on Steam

GOG’s new versions of Capcom’s original Resident Evil games are now available on Steam

Capcom has officially released its Resident Evil Classic Collection on Steam, bringing the original PC versions of Resident Evil, Resident Evil 2, and Resident Evil 3: Nemesis to PC’s most popular platform. These classic game re-releases were co-developed with GOG and […]

The post The Resident Evil Classic Trilogy has arrived on Steam appeared first on OC3D.

Researchers Uncover Mining Operation Using ISO Lures to Spread RATs and Crypto Miners

A financially motivated operation codenamed REF1695 has been observed leveraging fake installers to deploy remote access trojans (RATs) and cryptocurrency miners since November 2023. "Beyond cryptomining, the threat actor monetizes infections through CPA (Cost Per Action) fraud, directing victims to content locker pages under the guise of software registration," Elastic

The State of Trusted Open Source Report

In December 2025, we shared the first-ever The State of Trusted Open Source report, featuring insights from our product data and customer base on open source consumption across our catalog of container image projects, versions, images, language libraries, and builds. These insights shed light on what teams pull, deploy, and maintain day to day, alongside the vulnerabilities and

Intel buys back 49% stake in Ireland Fab Joint Venture β€” takes full control over Fab 34

In a mutually beneficial deal, Intel repurchases 49% of Fab 34 from Apollo for $14.2 billion, reducing pressure on its margins, but paying a hefty $3 billion premium to the financial company.

FamilyFeed – Your family's smart assistant with bots and rules


FamilyFeed is a shared organizer where families manage shopping, medicines, travel lists, appointments, todos, and event plans in one place. Smart bots monitor this family data and generate helpful reminders and insights, like medicines running out, tasks needing attention, or upcoming events. The app is designed for the whole family, from kids to grandparents. It includes AI features like image scanning to create items quickly, smart tools such as cloning lists or event plans, and unique layouts like Checkout View for shopping lists and Chat View for appointments.

View startup

WhatsApp Alerts 200 Users After Fake iOS App Installed Spyware; Italian Firm Faces Action

Meta-owned messaging platform WhatsApp said it alerted about 200 users who were tricked into installing a bogus version of its iOS app that was infected with spyware. According to reports from Italian newspaper La Repubblica and news agency ANSA, the vast majority of the targets are located in Italy. It's assessed that the threat actors behind the activity used social engineering

(PR) Gigabyte Goes Dark with the X870E AERO X3D DARK WOOD

2 April 2026 at 13:14
Gigabyte Technology Co. Ltd, a leading manufacturer of motherboards, graphics cards, and hardware solutions, today marks a defining moment in computing aesthetics with the introduction of the X870E AERO X3D DARK WOODβ€”a groundbreaking motherboard that transcends technological achievement to become a true design statement. Building on the acclaimed success of the X870E AERO X3D WOOD, the X870E AERO X3D DARK WOOD carries that legacy boldly forward, deepening the experience into something more immersive and emotionally resonant.

Guided by the philosophy of Technology with Warmth, the X870E AERO X3D DARK WOOD envelops the user in a quiet, grounded mood that feels less like a hardware upgrade and more like coming home. The authentic dark wood texture finish brings natural warmth and character into high-performance computing; the supple leather pull tab adds tactile intimacy; and the understated dark metal tones offer calm and breathing roomβ€”a deliberate counterpoint to the noise of modern life.

(PR) Alphacool Announces New APEX Series CPU+VRM Monoblocks

2 April 2026 at 12:44
Alphacool International GmbH from Braunschweig is a pioneer in PC water-cooling technology. With one of the most comprehensive product portfolios in the industry and over 20 years of experience, Alphacool is once again expanding its portfolio with the long-awaited Apex Monoblocks. The monoblocks combine high cooling performance with an elegant design that is perfectly matched to the respective motherboard. In addition to the CPU, they also cool the voltage regulators (VRMs) and the M.2 NVMe SSD located below the CPU socket. This allows the waterblock to cover all key motherboard components.

As with the Apex 1 CPU cooler, the Apex Monoblocks feature an offset cold plate. This directs the coolant flow straight to the thermal hotspot of the AM5 CPU. Combined with the proven Cross-Slot structure and 3D Jetplate 2.0, the design generates high water pressure for particularly efficient heat dissipation. The Apex Monoblocks were developed for demanding systems that require both high cooling performance and seamless integration into the motherboard design. Their combination of technical precision, targeted coolant flow and clear design language makes them ideally suited to modern custom water-cooling setups.

Pet Video – Turn pet photos into viral videos in seconds


Pet Video is an AI pet video generator that turns your pet photos into shareable short videos in seconds. Upload a cat or dog photo, choose from over 50 trending styles like Woolen Pet or K‑Pop Dance, and instantly download HD results ready for TikTok, Instagram Reels, or YouTube Shorts. It includes dedicated cat and dog generators, effortless sharing, and a gallery of viral effects to inspire your next post.

View startup

Glowlytics – Track skin health with AI to understand acne, sun damage, and inflammation


Glowlytics is an AI-powered skin health tracker that helps you understand acne, sun damage, and inflammation with insights grounded in dermatology research. Built by doctors, it analyzes daily images to surface Skin Signals, track scores over time, and deliver evidence-based, privacy-first guidance to support your routine without offering diagnoses.

View startup

Apple Expands iOS 18.7.7 Update to More Devices to Block DarkSword Exploit

Apple on Wednesday expanded the availability of iOS 18.7.7 and iPadOS 18.7.7 to a broader range of devices to protect users from the risk posed by a recently disclosed exploit kit known as DarkSword. "We enabled the availability of iOS 18.7.7 for more devices on April 1, 2026, so users with Automatic Updates turned on can automatically receive important security

Maxxmod – The browser extension built for YouTube power users


Maxxmod is an upcoming browser extension that upgrades YouTube with 200+ controls that shape how you watch and browse. Configure player behavior, speed, quality, volume, and captions, and use tools like Focus mode, volume booster, screenshots, Picture-in-Picture, and frame-by-frame to fine-tune playback. Clean up search and feeds by hiding Shorts, promos, premieres, and other distractions, all from a centralized admin with live previews.

Sign up at maxxmod.com to get notified at launch.

Maxxmod will offer 150 features on the free plan and 50 more on Pro.

View startup

CleanSmart – Automatically fix duplicates, formats, and missing values


CleanSmart streamlines data cleaning by automatically removing duplicates, standardizing formats, and filling missing values so you can export with confidence. Use SmartMatch for precise merges, SmartFill to predict gaps, AutoFormat to fix capitalization and phone numbers, and LogicGuard to flag outliers and impossible values. The platform encrypts data end-to-end, isolates each customer, and scales to 10,000+ records per minute, letting you upload a file and watch your Clarity Score rise.

View startup

Intel Core Ultra 400HX "Nova Lake" Mobile Processor Core Configurations Surface

2 April 2026 at 10:36
Intel's next-generation mobile processor for gaming notebooks and portable workstations, the Core Ultra Series 4 "Nova Lake-HX," will come in two distinct core configurations, according to Jaykihn, a reliable source for Intel leaks. "Nova Lake-HX" is segmented from the mainstream "Nova Lake-H" with a wider I/O that supports configurations with discrete GPUs. The top-of-the-line "Nova Lake-HX" processor will come with a CPU core configuration of 8P+16E+4LPE: eight "Coyote Cove" P-cores and 16 "Arctic Wolf" E-cores, both of which are upgrades over the current "Cougar Cove" and "Darkmont" core architectures, respectively. The Compute tile features 8P+16E cores sharing an L3 cache, while the chip's 4 low-power island E-cores, also based on "Arctic Wolf," will be located in the SoC tile.

Intel is also planning a performance-segment "Nova Lake-HX" core configuration with 6P+8E+4LPE under the hood. This will likely reuse the 6P+8E Compute tile from the mainstream "Nova Lake-H" processor, paired with the SoC and I/O tiles Intel plans to use for "Nova Lake-HX." Perhaps the most interesting aspect of both these chips is the iGPU: their Graphics tile will be the tiniest variant in the series, with just 2 Xe cores. The iGPU of the "Nova Lake" family is based on the Xe4 "Druid" graphics architecture. These chips feature a basic iGPU because they are expected to come with a full-fat PCI-Express 5.0 PEG interface for discrete GPUs, and ideally you'd want the iGPU to be as small as possible.

Spectry – Track behavior, get AI insights, and A/B test in one platform


Spectry unifies analytics, AI insights, and A/B testing to help teams understand user behavior and ship improvements faster. Install once to capture heatmaps, session replays, funnels, errors, and feedback without developer effort. Every 12 hours, AI surfaces drop-offs, UX issues, and failures with clear recommendations, then lets you launch experiments directly from the insight. Build dashboards, monitor security and performance, and stay GDPR-compliant with EU-hosted data.

View startup

WriteMail.ai – Write professional emails 87% faster with AI


WriteMail.ai is an AI-powered email writer that helps you draft and reply to emails quickly using a web app or a Chrome extension for Gmail. It analyzes your text in real time and suggests clearer wording, tone, and structure while you customize length, style, mood, and emoji. You can create emails in many languages, keep history, and reuse drafts. The Mail Assistant gives instant feedback, and one-click send opens your default client. Plans range from free to pro with higher monthly limits and priority support.

View startup

Finch – Publish forecasts, get paid, and prove your accuracy


Finch is a marketplace for financial forecasts and research where analysts publish probability estimates, attach Markdown research, and build a verified track record. Every forecast is cryptographically timestamped and locked, then scored with Brier metrics when events resolve.

Your cumulative score powers public rankings and a downloadable, verified history that recruiters and allocators can trust. Monetize your work, connect directly with institutions, and let the market price the value of your intelligence.

View startup

Servo – Helps founders figure out how to explain what they do & improve conversion


Servo is for founders who can't explain why their product is different. It asks the focused questions a $15K strategist would ask, interrogates your product and your brain, then gives you the exact words for your site, pitch, posts, and ads. Words that separate you from your competitors and online noise. $79. Money back guarantee. Built on 20 years of strategy behind Amazon Music, Twitch, Red Bull, and X Games. You answer questions about your business. Servo's guided session figures out what makes you different and why anyone should care. You walk away with one-liners, your story, how your brand should sound, what to post, and where to start. So your homepage, pitch, and ads finally land.

View startup

Diven – Pick a topic and get instant facts and share-ready insights without prompts


Diven turns your curiosity into knowledge by delivering surprising facts, clear explanations, and detailed articles on topics you chooseβ€”no prompts required. Open the app, browse suggested subjects, and get concise, engaging content in seconds, complete with tailored images and share-ready summaries. Create an account in 30 seconds and start with free credits. Learn and share in multiple languages, and use Diven to fuel conversations, broaden your worldview, and keep discoveries organized.

View startup

De-fi platform Drift suspends deposits and withdrawals after millions in crypto stolen in hack

2 April 2026 at 02:58
Blockchain trackers put the cryptocurrency heist in the hundreds of millions of dollars, and it is already on track to be the largest crypto theft of 2026 so far.

AnyToURL – Drag and drop files to get a shareable link in seconds


AnyToURL turns any file into a short, shareable link in seconds. Drag and drop to upload, then share an instant URL with browser previews for images, PDFs, and documents. Files are delivered over a global edge network for fast access worldwide. Add password protection, keep files temporary or make them permanent on paid plans, and manage uploads via API or CLI with custom domains and branding, supporting sizes up to 10GB.

View startup

NanoMaker AI – Create images, videos, music, and voice with top AI models in one place


NanoMaker AI is an all-in-one creative platform that lets you generate and edit images (using Nano Banana AI), videos, music, and voice with the world's top AI models under one subscription. Work in a seamless workflow: turn an image into a video, add background music, and export without switching tools. Use prompt-based editing, background removal, lighting control, and style transfer to produce consistent, professional results for marketing, content creation, education, and e-commerce.

View startup

EmbedAI – Add AI inside sites and apps in minutes with no AI skills


Embed AI into your app or site in just 3 lines of code. Normally, building AI into mature apps or websites requires dealing with vector databases, custom integration pipelines, authentication, and brittle LLM calls, which distract core engineering teams from shipping product features. EmbedAI solves this by providing a drop-in component that abstracts away infrastructure, letting you inject AI into your app logic without restructuring your backend. It requires zero backend maintenance or database provisioning, offers seamless UI matching your brand rules, and gives you complete control with your own API keys.

View startup

Lutily – Launch a branded salon booking page clients use in under a minute


Lutily gives salons a branded booking page at yourname.lutily.com that lets clients pick services, choose a staff member, and reserve a real open slot in under a minute. It never shows competitors, charges no commission, and has no per‑staff fees. Every booking is phone‑verified to reduce no‑shows.

Use Lutily to stack appointments to fill gaps, run a smart waitlist that auto‑offers newly opened times, and control working hours by date. Manage a color‑coded calendar, team permissions, client history and notes, and automatic SMS confirmations and reminders, with instant rescheduling and cancellations.

View startup

32GB of Corsair Vengeance DDR5 RAM is 33% off today only β€” This superb memory deal for PC gamers might sell out before midnight

Memory deals are more important than ever with the current RAM pricing surges, and Woot! is currently home to today's best deal. It expires tonight (if not sooner), so don't hold out too long.

(PR) AMD Instinct MI355X GPUs Surpass 1M Tokens/Sec in MLPerf 6.0

1 April 2026 at 22:52
In its MLPerf Inference 6.0 submission, AMD did not simply revisit familiar benchmarks with a faster GPU. It expanded into first-time workloads, crossed the 1-million-tokens-per-second threshold at multinode scale and showed that partners can reproduce the results across a broader ecosystem. That combination matters because our customers no longer evaluate inference platforms on one metric alone. They want competitive single-node performance, efficient scale-out, faster bring-up on new models, reproducible results across partner systems and confidence that the software stack can keep pace. MLPerf Inference 6.0 let us show all of that in one submission.

Just as important, we showed that these results are not isolated. A broad partner ecosystem submitted across four AMD Instinct GPU types that closely reproduced numbers submitted by AMD and the first three-GPU heterogeneous MLPerf submission demonstrated that AMD hardware and AMD ROCm software can orchestrate meaningful inference throughput even across systems in different geographies.

US government hires BlackSky to build next-gen AI surveillance satellites for Earth and beyond

1 April 2026 at 23:48

The US government has selected BlackSky to design and build the next generation of its space surveillance capabilities. The newly announced contract is an indefinite delivery/indefinite quantity (IDIQ) agreement, meaning the company will provide as many satellites and monitoring services as the Air Force Research Laboratory requires for its missions....


Before yesterdayTech

GCS Cheats – GCS Cheats, a powerhouse for next-gen gaming performance


Experience the pinnacle of gaming technology with GCS Cheats, the industry’s leading provider of state-of-the-art gameplay modifications. It features the most intuitive interface and the lightest system footprint in its class, offering a powerful and easy-to-use level of customization. Every tool is designed for maximum stability, providing seamless integration into today’s biggest games.

View startup

CERT-UA Impersonation Campaign Spread AGEWHEEZE Malware to 1 Million Emails

The Computer Emergency Response Team of Ukraine (CERT-UA) has disclosed details of a new phishing campaign in which the cybersecurity agency itself was impersonated to distribute a remote administration tool known as AGEWHEEZE. As part of the attacks, the threat actors, tracked as UAC-0255, sent emails on March 26 and 27, 2026, posing as CERT-UA to distribute a password-protected ZIP archive

Formo – Analytics and attribution for DeFi apps


Formo makes analytics and attribution easy for DeFi apps, so you can focus on growth. Understand who your users are, where they come from, and what they do onchain. Measure what matters and drive growth onchain with the data platform for onchain apps. Get the best of web, product, and onchain analytics on one versatile platform.

View startup

Lifeplanr – See your life as 4,680 weeks to plan, journal, travel, and track money


Lifeplanr visualizes your entire life as 4,680 weeks and lets you plan, journal, map travel, and track finances with a built-in FIRE calculator. You can see life phases at a glance, tag moods, attach photos, and scratch off countries you’ve visited.

Install it as a PWA on any device, switch between 10 themes, and use it offline. Your data stays on your device by default, with optional Pro cloud sync and easy export.

View startup

Google Ads experiments now auto-apply results by default

1 April 2026 at 21:07
Your guide to Google Ads Smart Bidding

Google Ads added an auto-apply setting to experiments. It’s on by default, so winning variants can go live without review.

How it works. You choose directional results (default) or statistical significance at 80%, 85%, or 95% confidence. One safeguard: if your chosen success metric performs significantly worse in the test arm, the change won’t auto-apply.

Why we care. Experiments are one of the most powerful tools in your account. Automating apply can speed testing, but removes a checkpoint where you catch unintended consequences before they hit live campaigns.

The catch. Experiments allow only two success metrics. A third metric you care about β€” one you didn’t or couldn’t select β€” can decline unnoticed. Guardrails protect what you told Google to watch, not everything that matters.

Bottom line. Auto-apply is a reasonable shortcut for simple tests. For anything consequential, keep manual review. Run the experiment, reach significance, then review full data before you apply changes.

First seen. Google Ads specialist Bob Meijer shared this update on LinkedIn.

Bing tests larger sponsored product carousel in shopping results

1 April 2026 at 20:47
Microsoft Ads: How it compares to Google Ads and tips for getting started

Bing appears to be testing an expanded sponsored products section in its shopping results, featuring a double-row carousel that takes up significantly more space than the current format.

The test. The format pairs a large, double-row sponsored carousel with organic cards from individual sites below.

Why we care. If this rolls out broadly, it means more screen space for sponsored products β€” typically leading to higher visibility and more clicks if you run Microsoft Shopping campaigns. The double-row carousel is also more visually competitive, bringing Bing’s shopping ads closer to Google Shopping’s prominence.

The catch. The test appears limited β€” not all users see it. Search industry veteran Mordy Oberstein reported a more compact layout, suggesting Bing is still in early testing.

Bottom line. Bing runs many SERP experiments that never fully launch, so watch this one for now. If you run Microsoft Shopping campaigns, monitor impressions for any lift if the format expands.

First spotted. Sachin Patel shared a screenshot of the test on X.

SEO leads martech replacements, but not for the reason you think

1 April 2026 at 20:00
Martech replacement survey

SEO tools were the most replaced martech application in 2025 β€” but not for the reason you might expect.

According to the 2025 MarTech Replacement Survey, SEO platforms topped the list of replaced tools for the first time, overtaking categories like marketing automation platforms (MAPs), which had led for the past five years.

At first glance, that might suggest instability in SEO. After all, the discipline is being reshaped by LLMs, AI-generated answers, and the rise of zero-click search experiences β€” all of which challenge traditional keyword tracking and ranking-based workflows.

But the data tells a more nuanced story.

SEO tools: most replaced, but stabilizing

Even though SEO tools were the most replaced category in 2025, they were replaced at a slower rate than in prior years.

In other words, they’re now the most commonly replaced β€” but also more stable than before.

That shift suggests a maturing category. Rather than widespread churn, teams appear to be consolidating, upgrading, or refining their SEO stacks as search evolves.

Meanwhile, several other major martech categories saw sharper year-over-year declines in replacements:

  • CRM replacements fell more than 12% from 2024 to 2025, reaching their lowest level in the survey’s history.
  • MAPs, email platforms, and CMS tools also declined compared to 2024.

Why SEO tools are being replaced

So if SEO tools aren’t being swapped out due to instability, what’s driving the changes?

The survey points to three primary factors:

1. AI capabilities

For the first time, the survey asked about AI’s role in replacement decisions β€” and the impact was significant.

  • 37.1% cited AI capabilities as an important factor.
  • 33.9% said they wanted AI capabilities when replacing a tool.

This reflects a broader shift in SEO tooling, with platforms rapidly integrating AI for:

  • Content generation and optimization.
  • SERP analysis and intent modeling.
  • Workflow automation.

In many cases, replacing your SEO tool isn’t about abandoning SEO β€” it’s about upgrading to AI-native capabilities.

2. Cost pressures

Cost has become a major driver of martech replacement decisions, including SEO tools:

  • 43.8% of marketers cited cost reduction as a reason for replacing an application in 2025.
  • That’s up sharply from 23% in 2024 and 22% in 2023.

This suggests growing pressure to optimize and rationalize your SEO tech stack, especially as you evaluate overlapping functionality across tools.

3. Changing needs in a shifting search landscape

As search behavior changes, so do expectations for SEO platforms.

Traditional rank tracking and keyword monitoring are no longer sufficient on their own. Teams are increasingly looking for tools that can:

  • Surface insights across AI-driven SERPs
  • Track visibility beyond clicks
  • Integrate with broader marketing and data systems

That evolution is likely contributing to replacement activity β€” even as overall stability increases.

AI is reviving custom-built SEO tools

One of the more notable trends in the 2025 survey is the resurgence of homegrown solutions, including for SEO workflows.

Replacing commercial martech tools with homegrown applications accounted for:

  • 8.1% of replacements in 2025
  • Up from 3.4% in 2024 and 5% in 2023

This marks a meaningful shift after years of near-total reliance on commercial platforms.

β€œAI-assisted coding is changing the calculus of build vs. buy,” said martech analyst Scott Brinker. β€œIt’s easier and faster to build than ever before. Companies should still buy applications where they have no comparative advantage. But in cases where they can tailor capabilities to differentiate their operations or customer experience, custom-built software is an increasingly attractive option.”

For SEO teams, this could mean more organizations building:

  • Custom data pipelines.
  • Proprietary SERP tracking systems.
  • AI-driven analysis tools tailored to their specific needs.

Other martech categories show even greater stability

While SEO tools led in total replacements, the broader martech landscape is becoming more stable.

Several major categories saw declining replacement rates in 2025, including:

  • CRM platforms (down more than 12% year over year)
  • Marketing automation platforms
  • Email distribution tools
  • Content management systems

This suggests that many organizations are settling into core systems while selectively updating areas β€” like SEO β€” that are changing faster.

Methodology

Invitations to take the 2025 MarTech Replacement Survey were distributed via email, website, and social media in Q4 2025.

A total of 207 marketers responded. Findings are based on the 154 respondents (74%) who said they had replaced a martech application in the previous 12 months.

Download the 2025 MarTech Replacement Survey, no registration required.

PlayStation 6 Handheld Performance Detailed – Better than Xbox Series S

Sony hopes to deliver a better-than-Xbox Series S experience on its next-gen handheld with next-gen upscaling. According to the leaker KeplerL2, Sony’s next-generation PlayStation 6 Handheld should feature a GPU that surpasses the Xbox Series S and Nintendo Switch 2. In fact, the leaker thinks that the system’s GPU will be “a bit ahead of […]

The post PlayStation 6 Handheld Performance Detailed – Better than Xbox Series S appeared first on OC3D.

(PR) EK Water Blocks Intros EK-Quantum VectorΒ³ TUF RTX 5070 Ti 5080 Plexi Water Block

1 April 2026 at 19:12
EK by LM TEK is proud to introduce the EK-Quantum Vector³ TUF RTX 5070 Ti/5080 - Plexi, a high-performance full-cover water block compatible with both the ASUS TUF Gaming GeForce RTX 5080 and the ASUS TUF Gaming GeForce RTX 5070 Ti. Designed to deliver exceptional thermal performance across the GPU core, VRAM, and power stages, the EK-Quantum Vector³ TUF RTX 5070 Ti / 5080 features an optimized open split-flow cooling engine, next-gen pre-cut thermal pads, and a full-coverage black anodized aluminium backplate. It is available now at the EK Shop and local resellers.

Engineered for the ASUS TUF Gaming GeForce RTX 5080 and the ASUS TUF Gaming GeForce RTX 5070 Ti, the EK-Quantum VectorΒ³ delivers high performance liquid-cooling for enthusiasts who demand more. Featuring EK's expanded next-gen cooling engine, pre-cut high-performance thermal pads, and an advanced gasket design, this block ensures your GPU stays cool and efficient - even under the heaviest gaming, rendering, or overclocking loads.

Nvidia App adds 'Auto Shader Compilation' for faster load times in games β€” beta feature automatically recompiles shaders in the background after every driver update

The Nvidia App can now automatically recompile shaders for you in the background after every GPU driver update. This should save gamers several minutes across different titles, especially blockbuster ones, where shader compilation can often delay your session. You still need to compile shaders for the first time after a new install, however.

Why too many micro-conversions hurt PPC performance

1 April 2026 at 19:44
Micro-conversions overload

AI-powered ad bidding systems are highly sophisticated, but conversion tracking hasn’t kept pace. Ad platforms encourage advertisers to track more actions, while many experts argue for tracking only final outcomes.

Both are partly true. Neither is universally correct.

In practice, both over- and under-signaling can hurt PPC performance. Too many loosely defined micro-conversions introduce noise. Bidding shifts toward easy, low-value actions, inflating reported performance while eroding real results. Too few signals leave the system without enough data to learn.

This dynamic is most visible in Performance Max and Search plus PMax setups, where the system optimizes toward whatever signals it’s given β€” regardless of whether they reflect real business value.

Here’s what happens when micro-conversions outnumber real conversions, why bidding systems behave this way, and how to build a conversion framework that aligns signal volume with business impact.

The myth of the β€˜data-hungry’ PPC algorithm

The idea that algorithms need as much data as possible has been repeated so often that it’s become an assumption. Platform documentation, automated recommendations, and many PPC blog posts reinforce the same message: more signals equal better learning.

Bidding systems require a minimum level of signal density to function, but they don’t benefit from indiscriminate micro-conversion signals. More data isn’t always better data.

Adding low-intent or loosely correlated actions often degrades performance by shifting optimization toward behaviors that don’t correlate with revenue.

Machine learning systems don’t evaluate the strategic relevance of a signal. They evaluate frequency, consistency, and predictability.

When an account includes a mix of high- and low-intent micro-conversions β€” purchases, add-to-carts, pageviews, video plays, and soft leads β€” the system doesn’t inherently understand which actions matter most to the business.

Without a clear value hierarchy, the bidding algorithm treats all signals as valid optimization targets. This creates a structural bias toward high-frequency, low-value actions because they’re easier and cheaper to achieve. The result is a bidding pattern that maximizes conversion volume while minimizing business impact.

Why value-based bidding helps, but can’t fix everything

Many practitioners advocate for value-based bidding, where each micro-conversion is assigned a relative financial or hierarchical value. In theory, this helps the system understand which signals matter most. You can also instruct the platform to maximize conversion value, which should push the algorithm toward higher-value purchases or sales-qualified leads (SQLs).

But value-based bidding isn’t a complete solution. When too many micro-conversions are included β€” even with assigned values β€” the system can still become overwhelmed. A high volume of low-intent signals can dilute intent and distort the value hierarchy.

The issue isn’t just a lack of context.

Every signal becomes part of the optimization math. If the model weighs signals by volume rather than business importance, low-intent micro-conversions will dominate. Assigning values helps clarify priorities, but it can’t override signal imbalance. At a certain point, the math wins.

Dig deeper: In Google Ads automation, everything is a signal in 2026

How PPC bidding follows the path of least resistance

In practice, this shows up as a β€œpath of least resistance” problem.

Even with values assigned, bidding algorithms still optimize toward the signals they’re given. When low-intent micro-conversions are included as Primary actions, the system treats them as efficient ways to increase conversion volume. This isn’t an error. It’s expected behavior for a model designed to maximize conversions within a set budget.

When those signals occur more frequently, the system gravitates toward them. A signal that fires hundreds of times a day will exert more influence than a high-value action that fires only a handful of times per week.

This dynamic is especially visible in PMax. The system evaluates signals across channels, audiences, and placements, and pursues the cheapest, most abundant path to conversion. If a contact page visit or key pageview is treated as a Primary signal, PMax may prioritize it over a purchase or SQL because it’s easier to achieve at scale.

That’s why PMax often reports strong conversion volume and low CPA while revenue remains flat or declines. The system is performing as instructed, but the inputs lack a disciplined signal hierarchy. Value-based bidding improves structure, but without restraint in the number and type of signals, it can’t fully prevent the problem.

False performance signals inflate platform metrics

When low-value actions are tracked as Primary conversions, platform-reported performance becomes disconnected from business outcomes. Metrics such as CPA, ROAS, and conversion rate may improve, but those gains are often illusory.

For example:

  • A campaign may show a 40% reduction in CPA because the system is optimizing toward pageviews rather than purchases.
  • ROAS may increase because the system attributes inflated value to actions that don’t correlate with revenue.
  • Conversion volume may spike due to high-frequency micro-conversions.

These patterns create a false sense of success, leading advertisers to scale budgets prematurely and erode contribution margin.

Diluted intent and double-counting

When multiple micro-conversions are tracked as Primary, a single user journey can generate multiple wins for the algorithm.Β 

For example, a user who views a product page, signs up for a newsletter, and adds an item to cart may be counted as three conversions from a single click. If values are assigned to each step, conversion value and ROAS become inflated as well.

This inflates conversion volume, inflates conversion value, and distorts bidding behavior. The system interprets this as a high-value user and begins overbidding on similar traffic, even if the user never completes a purchase.

In many accounts, micro-conversions outnumber real conversions by a ratio of 500 to 1 or more. This imbalance has significant implications for bidding behavior.

When frequency overwhelms value

If an account records 500 pageviews, 200 add-to-carts, 50 lead form starts, and 10 purchases, and all actions are treated as Primary, the system receives 760 signals in total, but only 10 of them reflect revenue: 75 low-intent signals for every one that actually matters.

Without distinct values, the algorithm can’t differentiate between a $0.05 action and a $500 action. It optimizes toward the most frequent signals because they provide the clearest path to increasing conversion volume.

Even when values are assigned, overvaluing micro-conversions teaches the algorithm to pursue easy wins. The result is a maximized conversion value metric that looks strong in the dashboard but isn’t reflected in actual sales.
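A quick back-of-the-envelope sketch makes the imbalance concrete (the counts come from the example above; the per-action dollar values are hypothetical):

```python
# Signal mix from the example above; per-action dollar values are hypothetical.
signals = {
    "pageview":        {"count": 500, "value": 0.05},
    "add_to_cart":     {"count": 200, "value": 5.00},
    "lead_form_start": {"count": 50,  "value": 20.00},
    "purchase":        {"count": 10,  "value": 500.00},
}

total = sum(s["count"] for s in signals.values())  # 760 signals in total
for name, s in signals.items():
    print(f"{name}: {s['count'] / total:.1%} of the signal mix")

# By frequency, purchases are ~1.3% of the mix even though they carry
# most of the value, so a volume-maximizing optimizer drifts toward
# pageviews unless the signal set is pruned.
value_total = sum(s["count"] * s["value"] for s in signals.values())
purchase_value = signals["purchase"]["count"] * signals["purchase"]["value"]
print(f"purchases: {purchase_value / value_total:.0%} of total signal value")
```

Even with these (generous) values attached, purchases are a rounding error by frequency, which is exactly the imbalance the article describes.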

The consequences of signal imbalance

When micro-conversions dominate the signal mix:

  • Bidding shifts toward low-intent traffic because it produces more conversions.
  • Budgets are allocated inefficiently as the system chases cheap signals.
  • Real ROAS declines, even as platform-reported ROAS appears strong.
  • Scaling becomes risky because the system is optimizing toward the wrong outcomes.

That’s why accounts with high micro-conversion volume often show strong platform metrics but weak financial performance.

When micro‑conversions stop helping

Micro-conversions are useful when an account lacks enough real conversion volume to support stable bidding. However, once a campaign consistently reaches 30 to 60 real conversions per month, they no longer provide meaningful benefit.

At that point, the system has enough high-quality data to optimize effectively. Continuing to rely on micro-conversions introduces unnecessary noise and increases the risk of misaligned bidding.

This is the point to transition from tCPA to tROAS and let real revenue guide optimization.

Dig deeper: Why better signals drive paid search performance


How to decide what should be a Primary conversion

Primary actions influence bidding, while Secondary actions provide visibility without affecting optimization. This four-part litmus test helps determine which actions should be treated as Primary.

1. The volume threshold

Micro-conversions should be used only when real conversion volume isn’t sufficient to support stable bidding. As a general guideline:

  • Below 30 real conversions per month: A high-intent micro-conversion may be needed to give the system enough data.
  • 30 to 60 real conversions per month: Begin reducing reliance on micro-conversions.
  • 60 or more real conversions per month: Remove micro-conversions from Primary status and rely on revenue-based optimization.

This threshold ensures micro-conversions serve as a temporary bridge, not a permanent crutch.
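The thresholds above can be sketched as a simple decision helper (the 30/60 cut-offs come from the guideline above; the function name and return strings are illustrative, not a platform API):

```python
def micro_conversion_stance(real_conversions_per_month: int) -> str:
    """Map monthly real-conversion volume to the article's guideline.

    The 30 and 60 thresholds follow the rule of thumb above; the
    wording of the recommendations is illustrative.
    """
    if real_conversions_per_month < 30:
        return "add one high-intent micro-conversion as Primary"
    elif real_conversions_per_month < 60:
        return "begin reducing reliance on micro-conversions"
    else:
        return "demote micro-conversions to Secondary; optimize on revenue"

print(micro_conversion_stance(12))
print(micro_conversion_stance(45))
print(micro_conversion_stance(90))
```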

2. The necessary step test

A Primary action should represent a required step in the conversion journey, such as:

  • Add to cart.
  • Begin checkout.
  • Start lead form.

Actions that aren’t required steps β€” such as contact page visits, whitepaper downloads, or time on site β€” shouldn’t be treated as Primary. These may indicate interest, but they don’t reliably predict revenue.

3. The valuation test

If an action can’t be assigned a realistic financial value, it shouldn’t be used as a Primary conversion. Assigning arbitrary values introduces risk and can distort bidding behavior.

Actions such as time on site or scroll depth fail this test because they don’t consistently correlate with revenue. However, if CRM data shows a reliable statistical correlation with revenue, that can justify including the action.

4. The simplicity test

Even if multiple actions pass the first three tests, only the strongest one or two should be designated as Primary. Including too many Primary actions increases the risk of double-counting and overbidding.

A streamlined Primary set ensures the system focuses on the most meaningful signals.

Use Secondary conversions as a diagnostic tool

Secondary conversions provide visibility into user behavior without influencing bidding. They’re a useful diagnostic tool for understanding funnel performance and evaluating new signals.

Visibility without optimization risk

Tracking actions such as newsletter signups, video views, or soft leads as Secondary lets you monitor engagement without shifting bidding toward low-value behaviors.

This approach preserves data integrity while maintaining control over optimization.

Funnel analysis and bottleneck identification

Secondary conversions reveal where users drop off in the funnel. For example:

  • High Add-to-Cart volume but low purchase volume indicates checkout friction.
  • High MQL volume but low SQL volume suggests targeting or qualification issues.

These insights support more informed optimization decisions.

Safe testing environment

New signals should be tracked as Secondary for several weeks before being considered for Primary status. This allows you to evaluate frequency, correlation with revenue, stability, and predictive value.

Only signals that demonstrate consistent value should be promoted to Primary.

Assign micro-conversion values using a safety discount

When micro-conversions are used, they must be assigned values that reflect their true contribution to revenue. Overvaluing micro-conversions is a common cause of inflated platform performance and misaligned bidding.

Calculating baseline value

The baseline value of a micro-conversion is determined by:

  • Baseline value = Conversion rate to sale x Average order value (AOV) or profit

For example:

  • Ecommerce: If 25% of add-to-carts convert and AOV is $1,600, the baseline value is $400.
  • Lead generation: If 10% of demo requests convert to $5,000 profit, the baseline value is $500.

Applying the 25% safety discount

The baseline value shouldn’t be used directly. Instead, apply a 25% reduction:

  • $400 becomes $300.
  • $500 becomes $375.

This discount helps prevent overbidding by ensuring the system doesn’t overvalue micro-conversions relative to actual revenue.
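The two worked examples above can be reproduced with a short helper (the function name is illustrative; the 25% default matches the safety discount described above):

```python
def discounted_micro_value(conv_rate_to_sale: float,
                           value_per_sale: float,
                           safety_discount: float = 0.25) -> float:
    """Baseline value = conversion rate to sale x AOV (or profit),
    then reduced by the safety discount described above."""
    baseline = conv_rate_to_sale * value_per_sale
    return baseline * (1 - safety_discount)

# Ecommerce example: 25% of add-to-carts convert, $1,600 AOV.
print(discounted_micro_value(0.25, 1600))  # 300.0
# Lead gen example: 10% of demo requests convert to $5,000 profit.
print(discounted_micro_value(0.10, 5000))  # 375.0
```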

Undervaluing is safer than overvaluing

Undervaluing micro-conversions may slightly slow learning, but it doesn’t distort bidding. Overvaluing them can push the system toward low-intent traffic, leading to rapid budget misallocation.

The safety discount provides a buffer that protects contribution margin while still supplying useful data.

Dig deeper: How to make automation work for lead gen PPC

Where PPC experts draw the line on micro-conversions

Practitioners consistently point to the same principle: signal discipline matters more than signal volume.

Julie Friedman Bacchini emphasizes that every conversion action becomes a signal the system optimizes toward. Using more than one Primary action introduces ambiguity β€” β€œit’s suddenly muddier” β€” and skipping values makes it easier for the system to latch onto lower-value signals. Values don’t need to be exact, but they must be relative.

She also notes that micro-conversions can help low-volume campaigns reach data thresholds, but they aren’t a substitute for real Primary conversions. Removing them later can mean β€œstarting over to a large extent on system learning.”

Jordan Brunelle takes a similarly disciplined approach: β€œThere can definitely be too many.” He recommends starting with one strong signal of intent and watching the ratio between micro-conversions and real outcomes. If volume is high but outcomes are low, it often signals a targeting or signal issue.

Across both perspectives:

  • You can have too many micro-conversions.
  • Values help, but they aren’t a cure-all.
  • The system favors the most frequent signals.
  • Micro-conversions are a tool, not a strategy.

Signal discipline is the real competitive advantage

The debate around micro-conversions often focuses on quantity. But the real differentiator isn’t volume; it’s discipline.

Bidding systems optimize toward the signals they’re given. When the signal mix is cluttered, performance drifts. When it’s clear and intentional, the system aligns with real business outcomes.

Micro-conversions should be selectively used and continuously evaluated. Start with a simple audit:

  • Identify all Primary conversions.
  • If more than two or three actions are Primary, the account is likely over-signaled.
  • Apply the litmus test.
  • Remove any Primary actions that fail the volume, necessary step, valuation, or simplicity tests.
  • Move nonessential actions to Secondary.
  • Assign conservative values to remaining micro-conversions.
  • Use the safety discount to avoid overbidding.
  • Monitor performance for 30 days, focusing on revenue, contribution margin, and signal distribution.

Micro-conversions should be a temporary bridge. Once real conversion volume is sufficient, optimization should be guided by revenue. A disciplined signal architecture gives automation what it needs to perform as intended: efficient, predictable, and aligned with real business outcomes.

How to run Google Ads in sensitive categories without remarketing

1 April 2026 at 18:00

If you’re a lawyer, college administrator, or financial services provider, you’ve likely seen the frustrating β€œEligible (Limited)” status in your Google Ads account. It can feel like you’re fighting Google with one hand tied behind your back when your remarketing lists, exact match keywords, and more don’t work as intended.

While it might feel like Google Ads is out to get you when you operate in a so-called β€œsensitive interest category,” there are specific reasons for these rules. More importantly, there are specific ways to succeed despite them.

This article will cover what the personalized advertising policies are, what they mean for your account, and five specific tactics you can use to succeed with Google Ads.

Why does Google have personalized advertising policies?

Google provides detailed explanations in its official policy documentation, but it comes down to two things: legal requirements and ethical standards.

In the United States, for example, the Fair Housing Act and employment laws prevent discrimination based on age, gender, or location. If you’re advertising a job opening or a new apartment complex, Google can’t allow you to exclude people based on those demographics because doing so would be against the law.

Then there’s the ethical side. Imagine you’re running a rehab center. If someone visits your site, Google’s β€œsensitive interest” policy prevents you from following them around the internet with targeted banner ads like, β€œStill struggling with addiction? Come to our clinic.”

That kind of remarketing is intrusive and, frankly, predatory when it targets someone’s health and struggles. To protect the user experience and maintain a sense of privacy, Google limits how personal data can be used in these high-stakes industries.

What can’t you do in a sensitive interest category?

If you fall into one of these categories β€” housing, employment, credit, healthcare, or legal services β€” the biggest impact is usually on your audience targeting.

Here’s what you can’t use:

  • Website or App Remarketing Lists, including the Google-engaged audience: You can’t target people who have previously visited your website or used your app.
  • Customer Match: You can’t upload your own email lists or phone numbers to target existing clients.
  • YouTube Audiences: You can’t target people based on how they’ve interacted with your videos.
  • Custom Segments: You aren’t allowed to build specialized audiences based on specific search terms or types of websites people visit.

For certain categories in certain countries, like housing, credit, and employment in the United States, there’s further β€œdemographic stripping” β€” you can’t target by age, gender, parental status, or ZIP code. Your Smart Bidding strategies won’t use these signals as inputs either.

The good news: What can you do in a sensitive interest category?

It’s easy to focus on what’s gone, but what still works is a much longer list. Even in a restricted industry, you still have access to the core engine of Google Ads. You can still use:

  • Keywords, feeds, and keywordless technology: These rely on intent (queries) rather than identity, so they are perfectly fine in Search, Shopping, and Performance Max.
  • Google’s audiences: Affinities, In-Market, Detailed demographics, and Life Events segments are still fully at your disposal, where eligible, in Demand Gen, Display, Video, Search, and Shopping.
  • Optimized targeting: Google’s AI can still find people likely to convert based on your historical converters, in Demand Gen, Display, and Performance Max.
  • Content Targeting: You can choose to show your ads on specific keywords, topics, and placements in Display and Video campaigns.
  • Conversion tracking: Yes, you can still track conversions and use features like Enhanced Conversions, Offline Conversion Import, and Consent Mode. While your internal legal team may have reservations or restrictions around your website tracking, Google’s Personalized advertising policy doesn’t restrict any conversion tracking.

5 strategies to win in sensitive categories

If you want to move the needle without relying on remarketing, you need to rethink your account structure and messaging. Here are five things you can do right now.

1. The β€œSeparate Domain” strategy

If your business offers a mix of services β€” some sensitive, some not β€” don’t let the sensitive ones β€œpoison” your whole account. Think of a spa that offers haircuts, pedicures, and Botox. Haircuts are fine; Botox is a medical procedure that triggers sensitive category restrictions.

If you put them all on one site, your entire remarketing capability might get shut down. Consider putting the sensitive service on a separate domain and a separate Google Ads account. This lets you use every available tool for your main business while the sensitive portion operates under the necessary restrictions.

2. Choose Demand Gen over Display

If you want to use image or video ads, use Demand Gen instead of the standard Display Network. In my experience, Demand Gen delivers higher-quality audiences and tends to perform better in restricted niches.

3. Lean into Phrase and Broad Match

You might be tempted to stick to Exact Match keywords to keep things tight. However, in sensitive categories, Google may restrict ads on very narrow, specific queries for privacy reasons. If your Exact Match keywords aren’t getting impressions, try Phrase or Broad Match. This gives the algorithm more room to find users searching for the same thing with slightly different phrasing that may be less restricted.

Think of it like fishing: if you can’t use a spear, use a net. You’ll catch some fish you don’t want, but that tradeoff helps you catch the ones you do want more easily.

4. Feed the AI with offline conversion tracking

Most businesses in these categories, such as law firms or banks, don’t make sales on their websites. The website generates a lead, and the sale happens over the phone or in an office.

If you want Google to find better users, you must feed that real-world data back into the system. Use Offline Conversion Tracking (OCT) to show Google which leads became customers. Even if you must navigate HIPAA or other privacy regulations, there are ways to do this safely.

Consult your legal team, but don’t skip this step. It’s the best way to train the algorithm when you can’t use your own audiences and to ensure Smart Bidding works at its full potential.

5. Creative-led targeting

When you can’t tell Google who to target with a list, you have to tell the user who the ad is for through your creative. Your headlines and images should qualify the lead.

Be specific in your copy. For example, instead of β€œNeed a Lawyer?” try β€œDefense Attorney for Small Business.” This attracts your target audience and encourages people who aren’t a fit to scroll past, saving you money and improving your conversion rate.

Running Google Ads in a sensitive category is a challenge, but it’s far from impossible. By shifting your focus from who the person is to what they’re looking for and how you speak to them, you can still drive incredible results.

This article is part of our ongoing Search Engine Land series, Everything you need to know about Google Ads in less than 3 minutes. In each edition, Jyll highlights a different Google Ads feature, and what you need to know to get the best results from it – all in a quick 3-minute read.

20 practical ways to use AI in SEO

1 April 2026 at 17:00
20 practical ways to use AI in SEO

AI has changed how I work after nearly two decades in digital marketing. The shift has been meaningful, freeing up time, reducing the grinding parts of the job, and making some genuinely hard tasks faster.

That doesn’t mean it does the work for you, transforms everything overnight, or saves you 40 hours a week. In real-world SEO, with real clients and real deadlines, it’s a tool that makes parts of the job easier, not something that replaces the work itself.

Here are 20 ways I actually use it. Some are specific to SEO. Some are broader, but relevant to anyone working in the industry. All of them are practical, tested, and honest about their limitations.

Content creation and copywriting

1. Writing first drafts

The single best way to use AI for content is to stop expecting it to produce something publishable and start treating it as a very fast first-draft machine.Β 

  • Feed it your brief, your target keyword, your audience, and your angle. Get a structure back.Β 
  • Then rewrite it in your voice. Add in the expertise that only you know, not a vanilla version of what’s online.

The content AI produces out of the box is average. Your job is to make it good. Reference real-life stories, case studies, and statistics, and showcase your personal viewpoint and expertise.

The time savings are in not starting from a blank page.

2. Generating meta title and description variations

Give Claude or ChatGPT your target keyword, page topic, and character limits. Ask for 10 variations of your meta titles and descriptions. You’ll use one, maybe combine two, but the process takes two minutes instead of 20. For large sites with hundreds of pages, this alone is worth the subscription.

Many tools allow you to upload CSV files, add AI’s suggested ideas, and download them for review. Don’t skip this step. A human eye is where the value sits.

3. Refreshing underperforming content

Paste an existing page or blog post that has dropped in rankings. Ask AI to identify what’s missing, what could be expanded, and what feels outdated.Β 

It won’t always be right, but it gives you a starting point instead of reading the whole thing yourself with fresh eyes you don’t have at 4 p.m. on a Thursday.

Make sure to give context. Long prompts with lots of detail will produce much better results than pasting a page in cold.Β 

4. Generating FAQ sections

Prompt AI to generate the 10 most common questions for your target keyword. Cross-reference with People Also Ask and your own research.Β 

Answer them, and you now have an FAQ section, featured snippet opportunities, and a content gap analysis in about 10 minutes.

5. Writing alt text at scale

Nobody enjoys writing alt text for 200 product images. Describe the image, give it the context of the page it sits on, and include the target keyword. Then ask for alt text that’s descriptive and naturally includes the term where relevant. It’s not glamorous, but it’s necessary and faster.

You can also run a website through Screaming Frog, export it to a CSV file, upload it to your AI of choice, and ask it to write the alt text. This only works well if the file names are descriptive, and again, a human eye is key. This is about increasing speed, rather than handing it over to AI completely.
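As a rough sketch of why descriptive file names matter in that workflow, even a naive non-AI heuristic can produce a reviewable first draft from them (the function name and sample filename are made up; the article's actual workflow hands this step to an AI tool, with a human reviewing every line):

```python
import pathlib

def draft_alt_text(image_path: str) -> str:
    """Turn a descriptive filename into a rough alt-text draft.

    A naive heuristic, not the AI step the article describes; it only
    works when file names are descriptive, as the article notes.
    """
    stem = pathlib.Path(image_path).stem  # e.g. "red-trail-running-shoe"
    return stem.replace("_", " ").replace("-", " ")

print(draft_alt_text("red-trail-running-shoe.jpg"))  # red trail running shoe
```

A generic filename like `IMG_4021.jpg` yields nothing usable, which is exactly why the CSV approach only works with descriptive names.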

Dig deeper: How to use AI for SEO without losing your brand voice

Technical SEO

6. Understanding error messages and log files

Not everyone working in SEO has a developer background. AI is useful for:

  • Translating technical error messages.
  • Explaining what a server log is telling you.
  • Helping you understand why a page is excluded from indexing.Β 

Paste in the output, ask it to explain it in plain English, and then ask what the fix should be. Verify the answer, but it gets you most of the way there.

7. Writing schema markup

Schema is one of those things everyone knows they should be doing more of, and nobody finds especially enjoyable.Β 

Describe the content of your page to your AI of choice, tell it what schema type is relevant (FAQ, Article, LocalBusiness, Product, etc.), and ask it to generate the JSON-LD.Β 

Check it in Google’s Rich Results Test before implementation. This used to take me 20 minutes per page type. Now it takes five.
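As a sketch of what the generated output looks like, here is a minimal FAQPage JSON-LD object built in Python (the question and answer text are placeholders; real output should still go through the Rich Results Test before implementation):

```python
import json

# Minimal FAQPage JSON-LD of the kind described above; the Q&A text
# is placeholder content.
faq_schema = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [{
        "@type": "Question",
        "name": "What is schema markup?",
        "acceptedAnswer": {
            "@type": "Answer",
            "text": "Structured data that helps search engines understand a page.",
        },
    }],
}

# Serialize for embedding in a <script type="application/ld+json"> tag.
print(json.dumps(faq_schema, indent=2))
```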

8. Creating regex for Google Search Console

If you use regex in GSC filters and you’re not a developer, AI is your new best friend. Describe what you’re trying to filter, for example, all URLs containing a specific subfolder, or all queries including a particular term, and ask for the regex string.Β 

It gets it right more often than not, and you can ask it to explain the logic so you actually understand what you’re implementing.
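For illustration, here are two patterns of the kind described, checked in Python (GSC filters use RE2 syntax, but simple patterns like these behave identically under Python's `re` module; the subfolder and query term are made up):

```python
import re

# URLs containing a specific subfolder.
subfolder = re.compile(r"/blog/")
# Queries including a particular term, as a whole word.
query_term = re.compile(r"\bpricing\b")

assert subfolder.search("https://example.com/blog/post-1")
assert not subfolder.search("https://example.com/about")
assert query_term.search("saas pricing comparison")
assert not query_term.search("pricingless")  # \b keeps whole-word matching
print("patterns behave as expected")
```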

9. Analyzing crawl data with prompts

If you export a crawl from Screaming Frog or Sitebulb and you’re not sure what to prioritize, paste the summary data into your AI tool and ask it to help you identify the highest-priority issues based on the site’s goals.

It won’t replace your expertise, but it’s a useful sounding board when you’re staring at a spreadsheet with 47 issues and a client call in an hour.

Dig deeper: 6 tactical ways to responsibly use AI for everyday SEO

Reporting and analysis

10. Writing the narrative around the numbers

This is one of the most underrated uses of AI in SEO work. You have the data. You have the graphs. What takes time is writing the commentary that explains what happened, why, and what comes next.Β 

Feed AI your key metrics and the context of what was happening that month (algorithm updates, campaign launches, seasonality), and ask it to draft the narrative section of your report. Edit it, add your actual insight, but stop writing it from scratch every month.

You can even upload reports from various data sources and ask it to combine and summarize them. This saves me hours every month when I’m putting together reports.

11. Summarizing long reports for clients

Not every client wants to read a 12-page report. Ask AI to summarize your report into a five-bullet executive summary. Give it to clients at the top of the document.Β 

The ones who want details will read on. The ones who don’t will feel informed without asking you to talk them through every chart on the next call.

Ask AI to create the executive summary for someone who doesn’t know anything about SEO, and it’ll give you something simple and easy to understand.

12. Identifying anomalies in data

Paste a table of your keyword rankings or traffic data, and ask AI to flag anything that looks unusual, including significant drops, unexpected gains, or patterns that don’t match the previous period.Β 

It won’t replace proper analysis, but it’s a useful first pass when you’re managing a large amount of information and can’t give every dataset the attention it deserves.

Dig deeper: How to build AI confidence inside your SEO team


Research and competitor analysis

13. Conducting competitor content gap analysis

List your top three competitors and your own site. Ask AI to help you think through what content topics they’re likely covering that you’re not, based on their positioning and audience.

Then, validate that with actual keyword research tools. AI can’t see competitor data directly, but it’s useful for hypothesis generation before you do the manual work.

14. Understanding a new industry quickly

When you take on a client in an industry you don’t know well, you need to get up to speed fast. Ask your AI to give you a primer on the industry:

  • Key terminology.
  • The main players.
  • The buying cycle.
  • How people typically search for solutions in this space.
  • Common pain points.

It saves you an embarrassing amount of time in discovery calls.

15. Identifying search intent mismatches

Paste a list of your target keywords and ask AI to categorize them by search intent: informational, navigational, commercial, and transactional. Then compare that against the page type you’re targeting them with.

You’ll almost certainly find mismatches. This is a task that’s straightforward to describe, but tedious to do manually across hundreds of keywords.
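
If you want a sanity check on the AI’s labels, a crude keyword-cue pass catches the obvious cases. The cue lists below are illustrative, not a real intent taxonomy; ambiguous keywords still need SERP or AI review:

```python
# Illustrative cue lists -- a crude first pass, not a real taxonomy.
INTENT_CUES = {
    "transactional": ("buy", "price", "discount", "order", "cheap"),
    "commercial": ("best", "review", "vs", "top", "compare"),
    "navigational": ("login", "signin", "official", "website"),
}

def classify_intent(keyword: str) -> str:
    """Label a keyword by the first intent whose cue words it contains."""
    tokens = keyword.lower().split()
    for intent, cues in INTENT_CUES.items():
        if any(cue in tokens for cue in cues):
            return intent
    return "informational"  # default bucket when no cue matches

labels = {kw: classify_intent(kw) for kw in [
    "buy running shoes", "best crm for startups",
    "asana login", "how to tie a tie",
]}
```

Disagreements between this pass and the AI’s labels are exactly the keywords worth a manual look.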

Dig deeper: How to use AI response patterns to build better content

Client communication and account management

16. Drafting difficult client emails

Everyone has had to write a difficult email, whether it’s explaining why rankings have dropped, why a deadline was missed, or why the client needs to do something you know they don’t want to do.

These emails take a disproportionate amount of emotional energy to write. Give your AI the situation, the context, and what you need the client to understand or do, and ask for a draft that’s clear, professional, and honest.

Edit it. Send it. Move on.

17. Writing SOPs and process documentation

If you’ve been meaning to document your processes and just haven’t gotten around to it, AI removes the excuse.

Describe a process out loud (or in rough notes), paste it in, and ask for a structured SOP with numbered steps, decision points, and notes.

The first version will need editing, but having a framework to work from is the difference between getting it done and it sitting on the to-do list for another quarter.

18. Preparing for client calls

Before a client call, paste in your recent report data, any issues from the previous month, and what you need to cover.

Ask your AI to help you structure the agenda and anticipate questions the client might ask based on the data. You’ll go into the call more prepared and less likely to be caught off guard.

Productivity and admin

19. Processing your own thinking

This one sounds vague, but it’s one of the ways I use AI most.

When I have a problem I can’t get clear on, a strategy decision I’m going back and forth on, or a piece of work I can’t find the right angle for, I talk it through with Claude (my AI buddy of choice) to clarify my own thinking. It asks questions, reflects things back, and helps me arrive at a point of view faster than I would staring at a blank document.

Ask your AI to be brutally honest with you. Otherwise, it’ll just keep agreeing with you and telling you that you’re truly an expert on every topic.

20. Building prompts you actually reuse

The biggest productivity gain from AI isn’t any individual use. It’s building a library of prompts that work for your specific workflow and reusing them consistently.

Every time you get a good result from an AI tool, save the prompt. Over time, you build a system, rather than starting from scratch every time. This is the thing most people skip, and it’s the thing that compounds.
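
A prompt library can be as simple as a JSON file of named templates with placeholders. Here is one possible sketch (the file name, template text, and placeholders are all hypothetical):

```python
import json
from pathlib import Path

LIBRARY = Path("prompt_library.json")  # hypothetical local store

def save_prompt(name: str, template: str) -> None:
    """Add or update a named prompt template in the JSON library."""
    prompts = json.loads(LIBRARY.read_text()) if LIBRARY.exists() else {}
    prompts[name] = template
    LIBRARY.write_text(json.dumps(prompts, indent=2))

def render_prompt(name: str, **kwargs) -> str:
    """Load a template by name and fill in its placeholders."""
    prompts = json.loads(LIBRARY.read_text())
    return prompts[name].format(**kwargs)

save_prompt(
    "report_narrative",
    "Draft the narrative section of a monthly SEO report for {month}. "
    "Key metrics: {metrics}. Context: {context}. Keep it under 200 words.",
)
prompt = render_prompt("report_narrative", month="March",
                       metrics="organic clicks +12%",
                       context="core update rolled out on the 5th")
```

The storage mechanism matters far less than the habit of saving what works.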

Top tip: In the paid version of many AI tools, you can create projects and have specific instructions for each one. This is invaluable for saving time by not having to include all of this information in every prompt you use.

Dig deeper: Why SEO teams need to ask β€˜should we use AI?’ not just β€˜can we?’

What these use cases don’t replace

None of these tips replace the expertise, judgment, and client relationships that make a good SEO professional.

AI doesn’t know the business the way you do. It doesn’t understand the nuance of an industry, the history of an account, or the particular quirks of a contact you deal with regularly.

AI reduces the time spent on tasks that don’t require that expertise, so you have more of it available for the work that does.

Use AI as a tool. Stay skeptical of the hype. And for the love of good search results, edit everything before it goes anywhere near a client.

Dig deeper: Could AI eventually make SEO obsolete?

Looking for a job? It could be a scam β€” NordVPN uncovers phishing campaign impersonating top brands' recruiters

Cybercriminals are impersonating top brands like Meta, Disney, and Spotify in a highly sophisticated new phishing campaign designed to hijack your Facebook account. Here is everything you need to know to stay safe.

NVIDIA’s new Dynamic MFG feature could make games smoother… or just weirder. Either way, the real GPU battleground is shifting to software.

Early tests of NVIDIA's new DLSS 4.5 features, Dynamic Multi Frame Generation (MFG) and MFG 6X, have shown positive results. It certainly seems like AI-assisted software is the new GPU frontier, and one day that won't be controversial.

Nintendo's crusade against Palworld just got a reality check from the US Government as "summon and fight" patent rejected

1 April 2026 at 17:39
Nintendo’s legal crusade against Palworld just hit a massive roadblock in the United States. Following a rare, high-level review by the U.S. Patent and Trademark Office (USPTO), a patent examiner has issued a "non-final" rejection of all 26 claims in Nintendo’s controversial "summon and fight" patent.

(PR) SEMI Projects Double-Digit Growth in Global 300 mm Fab Equipment Spending for 2026 and 2027

1 April 2026 at 18:06
Worldwide 300 mm fab equipment spending is expected to increase 18% to $133 billion in 2026 and 14% to $151 billion in 2027, SEMI reported today in its latest 300 mm Fab Outlook. This strong growth reflects surging AI chip demand for data centers and edge devices, as well as the growing commitment to semiconductor self-sufficiency across key regions through localized industrial ecosystems and supply chain restructuring. Looking further out, the report projects investment will continue to increase, rising 3% to $155 billion in 2028 and another 11% to $172 billion in 2029.

"AI is resetting the scale of semiconductor manufacturing investment," said Ajit Manocha, President and CEO of SEMI. "With global 300 mm fab equipment spending projected to exceed $150 billion in 2027 for the first time, the industry is making historic, sustained commitments to the advanced capacity and resilient supply chains needed to power the AI era."

(PR) Heroes of Might and Magic: Olden Era Launches Into Early Access on April 30

1 April 2026 at 17:46
Hooded Horse, Unfrozen Studio, and Ubisoft are excited to announce the news that everyone has been waiting for - Heroes of Might and Magic: Olden Era will release on PC via Steam Early Access and the Microsoft Store (via Game Preview) on April 30, 2026. It will also be coming to PC Game Pass day one.

Made for series veterans and newcomers alike, Heroes of Might and Magic: Olden Era is built on the familiar foundations of one of the most critically acclaimed strategy series of all time, introducing new and classic game modes that will let people play solo or with friends however they please. Engage in strategic empire building, epic turn-based tactical battles, and in-depth RPG mechanics, all while exploring a vibrant, never-before-seen land full of secrets and dangers.

(PR) Intel to Repurchase 49% Equity Interest in Ireland Fab Joint Venture

1 April 2026 at 17:28
Intel Corporation (Nasdaq: INTC) and Apollo (NYSE: APO) today announced a definitive agreement for Intel to repurchase the 49% equity interest in the joint venture related to Intel's Fab 34 in Ireland not held by Intel for $14.2 billion. The agreement reflects Intel's continued business momentum underpinned by the growing and essential role CPUs play in the era of AI, a significantly strengthened balance sheet and the strong partnership between Intel and Apollo.

In 2024, Apollo-managed funds and affiliates led an $11.2 billion investment to acquire a 49% equity interest in a joint venture entity related to Fab 34, providing Intel with equity-like capital while preserving balance sheet strength. This transaction provided Intel with significant financial flexibility and enabled the company to unlock and redeploy capital to advance its strategic priorities including accelerating the buildout of Intel 4 and Intel 3, the most advanced processes manufactured in Europe, and of Intel 18A, the most advanced process developed and manufactured in the U.S. today.

Gigabyte X870E Aorus Xtreme AI Top Motherboard Review: Solid flagship, but the X3D version is what you want today

1 April 2026 at 18:25
The Gigabyte X870E Aorus Xtreme AI Top delivers strong performance, premium features, and a slick appearance, but high pricing and newer refresh boards, like the X3D version, make it a tough sell unless you snag a refurb deal.

DiagramDeck – Create and collaborate on hosted draw.io diagrams for your team


DiagramDeck is a cloud-based diagramming platform that hosts and manages draw.io for your team. Import and export .drawio files, edit together in real time with comments and live cursors, and use AWS, GCP, and Azure shape libraries to design cloud architectures, flowcharts, UML, ER, and network diagrams.

It removes self-hosting overhead with managed uptime, backups, and security, and adds team management, SSO, and compliance such as SOC 2 and GDPR. Use it as a modern alternative to Lucidchart and Visio while keeping the draw.io ecosystem.

SupaSailing – The first ERP platform for the entire nautical business in one place


SupaSailing is the first operational ERP platform built for nautical professionals including fleet managers, charter companies, brokers, and marinas. These businesses previously used spreadsheets, disconnected tools, and email threads since no integrated system existed for this industry.

Six modules cover crew and fleet management, charter enquiries, brokerage CRM, berth management, refit projects, and ISM compliance. All modules are connected with no duplicate data, providing full operational visibility across the business.

Focana – Desktop attention anchor for ADHD to help you finish what you started


It's easier than ever to get distracted while working on your computer. A quick email check, a Slack ping, one ChatGPT question, and boom, 30 minutes gone. "What was I supposed to be doing?"

Most focus tools either block apps you need or disappear when you switch tabs or apps. Neither works. You need an anchor, not a blocker. Focana keeps one task and a timer always visible on your screen, delivers gentle visual nudges and check-ins to keep you locked in, captures stray thoughts in the Parking Lot so they don't derail your session, and allows you to leave notes for context to pick up where you left off. All with no accounts, no sync, and no cloudβ€”just a calm companion for busy brains.

Block the Prompt, Not the Work: The End of "Doctor No"

There is a character that keeps appearing in enterprise security departments, and most CISOs know exactly who that is. It doesn’t build. It doesn’t enable. Its entire function is to say "No." No to ChatGPT. No to DeepSeek. No to the file-sharing tool the product team swears by. For years, this looked like security. But in 2026, "Doctor No" is no longer just a management headache…

Casbaneiro Phishing Targets Latin America and Europe Using Dynamic PDF Lures

A multi-pronged phishing campaign is targeting Spanish-speaking users in organizations across Latin America and Europe to deliver Windows banking trojans like Casbaneiro (aka Metamorfo) via another malware called Horabot. The activity has been attributed to a Brazilian cybercrime threat actor tracked as Augmented Marauder and Water Saci. The e-crime group was first documented by Trend Micro in

Microsoft Warns of WhatsApp-Delivered VBS Malware Hijacking Windows via UAC Bypass

Microsoft is calling attention to a new campaign that has leveraged WhatsApp messages to distribute malicious Visual Basic Script (VBS) files. The activity, beginning in late February 2026, leverages these scripts to initiate a multi-stage infection chain for establishing persistence and enabling remote access. It's currently not known what lures the threat actors use to trick users into

New Chrome Zero-Day CVE-2026-5281 Under Active Exploitation β€” Patch Released

Google on Thursday released security updates for its Chrome web browser to address 21 vulnerabilities, including a zero-day flaw that it said has been exploited in the wild. The high-severity vulnerability, CVE-2026-5281 (CVSS score: N/A), concerns a use-after-free bug in Dawn, an open-source and cross-platform implementation of the WebGPU standard. "Use-after-free in Dawn in Google Chrome prior

MSI GPU Safeguard+ is a game-changing tech for 12V-2×6 GPUs

MSI GPU Safeguard+ finally allows me to trust 12V-2×6 GPUs. If I’m honest, I’m not a fan of 12V-2×6 or 12VHPWR. There have been far too many reports of burnt graphics cards or melted power connectors to ignore. If I were to spend many hundreds, or perhaps thousands, on a new graphics card, I want […]

The post MSI GPU Safeguard+ is a game-changing tech for 12V-2×6 GPUs appeared first on OC3D.

(PR) NVIDIA Invests $2 Billion in Marvell and Expanded NVLink Fusion Partnership

1 April 2026 at 16:56
NVIDIA and Marvell Technology, Inc. (NASDAQ: MRVL) today announced a strategic partnership to connect Marvell to the NVIDIA AI factory and AI-RAN ecosystem through NVIDIA NVLink Fusion, offering customers building on NVIDIA architectures greater choice and flexibility in developing next-generation infrastructure. The companies will also collaborate on silicon photonics technology.

In addition, NVIDIA has invested $2 billion in Marvell.

β€˜Google Zero’ misses the real problem: Your next visitor isn’t human

1 April 2026 at 16:00
β€˜Google Zero’ misses the real problem- Your next visitor isn’t human

Barry Adams recently published β€œGoogle Zero is a Lie” in his SEO for Google News newsletter, arguing that the narrative of Google traffic disappearing is false and dangerous.

His data backs it up. Similarweb and Graphite data show only a 2.5% decline in Google traffic to top websites globally. Google still accounts for nearly 20% of all web visits.

The widely cited Chartbeat figure showing a 33% decline? It’s skewed by a handful of large publishers hit by algorithm updates. Publishers who abandon SEO in the face of this panic are making a self-fulfilling prophecy, ceding traffic to competitors who keep optimizing.

He’s right. And he’s looking at the wrong problem.

Humans are still clicking Google results. What has changed is that a growing share of your visitors isn’t human at all.

The tipping point already happened

Automated traffic surpassed human activity for the first time in a decade, per the 2025 Imperva Bad Bot Report. Bots now account for 51% of all web traffic. Not β€œsoon.” Not β€œby 2027.” Now.

That number includes everything from scrapers to brute-force login bots. But the fastest-growing segment is AI crawlers.

AI crawlers now represent 51.69% of all crawler traffic, surpassing traditional search engine crawlers at 34.46%, Cloudflare’s 2025 Year in Review found. AI bot crawling grew more than 15x year over year. Cloudflare observed roughly 50 billion AI crawler requests per day by late 2025.

Akamai’s data tells a similar story: AI bot activity surged 300% over the past year, with OpenAI alone accounting for 42.4% of all AI bot requests.

Chart 1: Bot vs. human web traffic

So while Adams is correct that human Google traffic hasn’t collapsed, something else is happening on the other side of the server logs.

The take-versus-give ratio

Cloudflare published crawl-to-referral ratios for AI bots. Look at these numbers.

Anthropic’s ClaudeBot crawls 23,951 pages for every single referral it sends back to a website. OpenAI’s GPTBot: 1,276 to 1. Training now drives nearly 80% of all AI bot activity, up from 72% the year before.

Chart: Crawl-to-referral ratios by AI bot

Compare that to traditional Googlebot, which has always operated on a crawl-and-send-traffic-back model. Google crawls your site, indexes it, and sends 831x more visitors than AI systems. The deal was simple: let me read your content, and I’ll send you people who want it.

That deal is fraying even on Google’s own turf. Queries where Google shows an AI Overview see 58-61% lower organic click-through rates, according to Ahrefs and Seer Interactive studies covering millions of impressions through late 2025.

Google’s newer AI Mode is worse. Semrush data shows a 93% zero-click rate in those sessions. AI Overviews now trigger on roughly 25-48% of U.S. searches, depending on the dataset, and that number keeps climbing.

And when Google’s AI features do cite sources, they’re increasingly citing themselves. Google.com is the No. 1 cited source in 19 of 20 niches, accounting for 17.42% of all citations, an SE Ranking study of over 1.3 million AI Mode citations found. That tripled from 5.7% in June 2025. Add YouTube and other Google properties, and they make up roughly 20% of all AI Mode sources.

So the old deal is being rewritten even by Google. AI crawlers from other companies skip the pretense entirely: let me read your content so I can answer questions about it without ever sending anyone your way.

The agentic shift

The bot traffic numbers are already here. The next wave is bigger: AI agents acting on behalf of humans.

In 2024, Gartner predicted that traditional search engine traffic would drop 25% by 2026 as AI chatbots and agents handle queries. That prediction is tracking. Its October 2025 strategic predictions go further: 90% of B2B buying will be AI-agent intermediated by 2028, pushing over $15 trillion in B2B spend through AI agent exchanges.

This isn’t theoretical.

  • Salesforce reported that AI agents influenced 20% of all global orders during Cyber Week 2025, driving $67 billion in sales.
  • Retailers with AI agents saw 13% sales growth compared to 2% for those without.
  • Google is building for this with initiatives like the Universal Commerce Protocol for agent-led shopping.

Gartner says 40% of enterprise applications will have task-specific AI agents by the end of 2026, up from less than 5% in 2025. eMarketer projects AI platforms will drive $20.9 billion in retail spending in 2026, nearly 4x 2025 figures.

Chart: Agentic commerce trajectory

Think about what that looks like in practice. An AI agent researches vendors for a procurement team. It doesn’t see your hero banner. It doesn’t notice your trust badges. It reads your structured data, compares your specs to those of three competitors, and builds a shortlist.

That β€œvisit” might show up in your analytics as a bot hit with a zero-second session duration. Or it might not show up at all.

What agentic SEO actually looks like

So what do you optimize for when the visitor is a machine making decisions for a human?

It’s not the same as traditional SEO. And it’s not the same as the AI Overviews optimization most people are focused on right now. AI Overviews are still Google. Still one search engine, still largely the same ranking infrastructure, still (mostly) one answer format.

Agentic SEO is about being useful to software that’s pulling from search APIs, crawling directly, and using LLM reasoning to make recommendations. That software doesn’t care about your page layout. It cares about whether it can extract what it needs.

I think a few things start to matter a lot more.

Structured data becomes load-bearing

Schema markup has always been a β€œnice to have” for rich snippets. When an AI agent compares your product to three competitors, structured data lets it read your specs without having to guess. Think product schema, FAQ schema, and pricing tables in clean HTML. These go from SEO hygiene to core infrastructure.
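
As a minimal sketch of what "machine-readable specs" means in practice, here is a Product JSON-LD object built in Python (the product name, price, and field values are invented; validate real markup against schema.org and Google’s Rich Results Test):

```python
import json

# All values here are invented for illustration -- pull real ones
# from your product database.
product_schema = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Acme CRM Starter",
    "description": "CRM for small teams with QuickBooks integration.",
    "offers": {
        "@type": "Offer",
        "price": "49.00",
        "priceCurrency": "USD",
        "availability": "https://schema.org/InStock",
    },
}

# Serialize for embedding in a <script type="application/ld+json"> tag.
jsonld = json.dumps(product_schema, indent=2)
```

An agent comparing vendors can read the price and availability from this object directly, without parsing your page layout.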

Dig deeper: How schema markup fits into AI search β€” without the hype

Content needs to answer compound questions

AI agents don’t search for β€œbest CRM for small business.” They ask compound questions: β€œWhich CRM under $50/user/month integrates with QuickBooks and has a mobile app with offline capability?” If your content only answers the first version, you’re invisible to the second.

Freshness and accuracy get audited differently

A human might not notice your pricing page is 8 months stale. An AI agent cross-referencing your pricing against competitors will flag the discrepancy. Or worse, use the outdated number in its recommendation and cost you the deal.

Your robots.txt policy is now a business decision

Blocking AI crawlers feels protective, but it means AI agents can’t recommend you. Allowing them means your content trains models that may never send you traffic. There’s no clean answer.

But pretending it’s just a technical setting is a mistake. New IETF standards are emerging to give publishers more granular control, but they’re not widely adopted yet.
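
Whichever way you decide, you can verify the policy actually says what you think it says. This sketch uses Python’s standard-library robots.txt parser against a hypothetical policy that blocks two AI training crawlers and leaves everything else open:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical policy: block AI training crawlers, allow everyone else.
robots_txt = """\
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())
parser.modified()  # mark rules as loaded; can_fetch() stays conservative until then

gpt_allowed = parser.can_fetch("GPTBot", "https://example.com/pricing")
google_allowed = parser.can_fetch("Googlebot", "https://example.com/pricing")
```

Note that robots.txt is only a request; well-behaved crawlers honor it, but nothing technically enforces it.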

Dig deeper: Technical SEO for generative search: Optimizing for AI agents

The measurement gap

Most analytics setups can’t tell the difference between a human visit, a bot crawl, and an AI agent evaluating your site on someone’s behalf. GA4 filters most bot traffic. Server logs show the raw picture, but take work to parse. Even then, figuring out whether an AI agent’s visit led to an actual sale is basically impossible right now.

This is where the β€œGoogle Zero” framing does real damage.

If you’re only measuring organic sessions from Google, you’re blind to a channel that doesn’t show up in that number. Your traffic could look stable while an AI agent steers $50,000 in annual spend to your competitor because their product schema was more complete.

I don’t think we have good measurement for this yet. Nobody does. But ignoring the problem because Google sessions look fine is like checking your print ad response rate in 2005 and deciding the web wasn’t worth paying attention to.

What to do about it

I don’t have a playbook for this. It’s too new. But I can tell you what we’re doing at our agency.

  • Audit your structured data like it’s your storefront: Check that your schema is present, well-formed, and consistent with your content structure and technical health. Make sure product, service, FAQ, and organization markup is complete, accurate, and current. This is table stakes.
  • Answer compound questions: Look at your top landing pages. Do they answer the specific, multi-variable questions an AI agent would ask? Or just the broad keyword query a human would type?
  • Check your server logs: Look for GPTBot, ClaudeBot, PerplexityBot, and other AI user agents. Understand how much of your traffic is already non-human. If you’re on Cloudflare, their bot analytics dashboard makes this easy without parsing raw logs. You’ll probably be surprised either way.
  • Make a conscious robots.txt decision: Understand the trade-offs, and make it a business decision with your leadership team.
  • Start tracking AI citations: Tools like Semrush, Scrunch, DataForSEO, and others can show when AI platforms mention your brand. The data is directional, not precise. But it’s better than nothing.
  • Don’t abandon Google SEO: Adams is right that Google traffic is still massive and still valuable. The agentic web doesn’t replace Google. It adds a new layer. You need both.
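
For the server-log check above, a few lines of Python over a combined-format access log give you a first count. The bot list is illustrative rather than exhaustive, and the sample log lines are fabricated:

```python
import re
from collections import Counter

# Substrings of common AI crawler user agents (illustrative, not exhaustive).
AI_BOTS = ("GPTBot", "ClaudeBot", "PerplexityBot", "Google-Extended", "CCBot")

# Matches the quoted referrer and user-agent fields at the end of a
# combined-format access log line; the user agent is the final field.
UA_FIELD = re.compile(r'"[^"]*" "([^"]*)"$')

def count_ai_hits(log_lines) -> Counter:
    """Tally requests per AI bot by scanning user-agent strings."""
    hits = Counter()
    for line in log_lines:
        m = UA_FIELD.search(line.strip())
        if not m:
            continue
        ua = m.group(1)
        for bot in AI_BOTS:
            if bot in ua:
                hits[bot] += 1
    return hits

# Fabricated sample lines standing in for a real access log.
sample_log = [
    '1.2.3.4 - - [01/Apr/2026:10:00:00 +0000] "GET /pricing HTTP/1.1" 200 512 "-" "Mozilla/5.0; compatible; GPTBot/1.2; +https://openai.com/gptbot"',
    '5.6.7.8 - - [01/Apr/2026:10:00:01 +0000] "GET / HTTP/1.1" 200 1024 "-" "Mozilla/5.0 (Macintosh) Safari/605.1"',
    '9.9.9.9 - - [01/Apr/2026:10:00:02 +0000] "GET /blog HTTP/1.1" 200 2048 "-" "Mozilla/5.0 (compatible; ClaudeBot/1.0; +claudebot@anthropic.com)"',
]
ai_hits = count_ai_hits(sample_log)
```

User-agent strings can be spoofed, so treat these counts as directional; IP verification is the stricter check.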

The real question

The β€œGoogle Zero” argument pits one extreme against another, even as the actual shift is quieter and more important.

The web is becoming a place where the majority of visitors are machines. Some send traffic back. Most don’t. Some of them make purchasing decisions on behalf of humans. That number is growing fast.

The SEOs who do well here won’t be the ones arguing about whether Google traffic moved 2.5%. They’ll be the ones who figured out how to be useful to both human visitors and the AI agents acting on their behalf.

We’ve spent 25 years optimizing for how humans find things. Now we need to figure out how machines find things for humans.

That’s not Google Zero. We don’t have a name for it yet. But it’s already here.

If you want to go deeper on GEO and agentic SEO, I’m teaching an SMX Master Class on Generative Engine Optimization on April 14. It covers structured data implementation, AI visibility measurement, content optimization for AI systems, and the practical side of everything in this article.

3 Reasons Attackers Are Using Your Trusted Tools Against You (And Why You Don’t See It Coming)

For years, cybersecurity has followed a familiar model: block malware, stop the attack. Now, attackers are moving on to what’s next. Threat actors now use malware less frequently in favor of what’s already inside your environment, including abusing trusted tools, native binaries, and legitimate admin utilities to move laterally, escalate privileges, and persist without raising alarms. Most

Nvidia adds β€œAuto Shader Compilation Beta” to the Nvidia App

Nvidia aims to tackle shader stutter with its Auto Shader Compilation Beta. Nvidia is taking action against shader compilation stutter. With its new Auto Shader Compilation (ASC) feature, Nvidia is giving gamers the option to rebuild game shaders outside of runtime to deliver a smoother gaming experience. When your PC is idling, it can be […]

The post Nvidia adds β€œAuto Shader Compilation Beta” to the Nvidia App appeared first on OC3D.

Raspberry Pi Announces More Price Hikes, 3 GB Raspberry Pi 4 SKU

1 April 2026 at 14:36
One could be forgiven for thinking it's an April Fools' joke, but alas, Raspberry Pi has announced yet another price hike due to the increasing costs of DRAM. Its CEO, Eben Upton, announced in a blog post that the company has seen a seven-fold increase in the cost of LPDDR4 DRAM, which is used in both the Raspberry Pi 4 and 5. All 4 GB and up SKUs of the aforementioned products will see a price hike that ranges from US$25 to US$100. Other products will also see an increase in price, and you can find all the price bumps in the table below.

At the same time, the company is launching a new 3 GB SKU of the Raspberry Pi 4, which will launch at US$83.75. The new SKU is available today from all authorised resellers globally. The price increases mean that the 16 GB SKU of the Raspberry Pi 5 now comes in at US$305, which is more than what a lot of mini PCs set you back six months ago. The 16 GB Raspberry Pi 500+ keyboard computer comes in at a whopping US$410, suggesting that some products are unlikely to sell, as they've simply become uncompetitive. The only good news today is that the 1 and 2 GB SKUs for the Raspberry Pi 4 and 5 won't see any price hikes this time around, alongside the 4 GB SKU of the Raspberry Pi 400. Raspberry Pi is promising to lower its pricing as soon as the cost for DRAM goes down at some point in the future.

AI can clone open-source software in minutes, and that's a problem

1 April 2026 at 15:03

Two software researchers recently demonstrated how modern AI tools can reproduce entire open-source projects, creating proprietary versions that appear both functional and legally distinct. The partly satirical demonstration shows how quickly artificial intelligence can blur long-standing boundaries between coding innovation, copyright law, and the open-source principles that underpin much of the...

Ascenda – Track clarity, energy, and mood for better mental performance


Gen AI gives us productivity superpowers, but the risk is mental fatigue. Ascenda helps you track how your mind is performing day to day. With a quick daily check-in, it shows patterns in clarity, energy, mood, recovery, focus, and decision load so you can protect your best work.

Built with input from a psychologist, neuroscientist, and engineer-founder with lived experience, Ascenda is a Whoop-like layer for the mind: signals, patterns, and early awareness before stress leads to poor decisions, lost focus, or burnout.

How to reduce cost-per-hire with LinkedIn recruitment campaigns

1 April 2026 at 15:00
LinkedIn is one of the most powerful platforms for recruiting top-tier talent. It’s also one of the easiest places to waste budget if campaigns aren’t structured correctly.

Many recruitment campaigns fail because they prioritize visibility over intent. More impressions don’t equal better hires. Broad targeting and generic messaging often lead to an influx of unqualified applicants, driving up cost-per-hire and slowing down hiring timelines.

The most effective LinkedIn recruitment strategies focus on one thing: attracting and converting high-intent candidates while filtering out poor-fit applicants before they ever click. Let’s break down exactly how to do that.

Shift your strategy: Optimize for intent vs. reach

The biggest mistake advertisers make on LinkedIn is targeting based solely on job titles, industries, and years of experience.

While this may generate volume, it rarely produces efficiency. Instead, high-performing campaigns are built around intent-based targeting β€” reaching candidates who are qualified and more likely to consider a new opportunity.

This requires a layered approach:

  • Core fit: Job titles, skills, and certifications.
  • Behavioral signals: Open-to-work status, group memberships, and engagement with industry content.
  • Career friction indicators: Burnout-prone roles, companies experiencing layoffs, and limited growth environments.

By combining these layers, you move beyond β€œwho they are” and begin targeting why they might be ready to make a change β€” which is where real performance gains happen.

Use ad creative to pre-qualify candidates

Your ad creative isn’t just there to attract attention. It should actively filter your audience. One of the most effective ways to control cost-per-hire is to discourage unqualified candidates from clicking in the first place.

Strong recruitment ads follow a structured approach:

  • Call out a specific pain point or identity: β€œBurned out from long shifts in healthcare?”
  • Clearly define who the role is for: β€œThis role is designed for licensed RNs with 3+ years of experience.”
  • Highlight meaningful value: Think flexibility, compensation, career growth, or mission.
  • Set expectations upfront: β€œNot an entry-level position” or β€œRequires managing enterprise accounts.”

This combination of attraction and exclusion ensures that the candidates who do click on your ads are far more likely to convert.

Dig deeper: LinkedIn Ads on a budget: How one playbook drove sub-$10 CPL

Structure campaigns by candidate intent level

Rather than running a single campaign, high-performing LinkedIn strategies segment audiences based on intent.

High-intent (bottom funnel)

These are active job seekers who offer the highest conversion opportunity. Follow this structure:

  • Target: Open-to-work users, recent job seekers, retargeting audiences.
  • Messaging: Direct response (β€œApply now”).
  • Outcome: Highest conversion rates and lowest cost-per-hire.

Warm passive talent (mid funnel)

These candidates aren’t actively applying but are open to change.

  • Target: Skills, competitor companies, niche groups.
  • Messaging: Career upgrades, better lifestyle, growth opportunities.
  • Outcome: Scalable pipeline of qualified candidates.

Cold passive talent (top funnel)

These are long-term prospects you target to start building your pipeline, with the intent of moving them to the middle and eventually the bottom of the funnel.

  • Target: Broader audiences and lookalikes.
  • Messaging: Employer brand, culture, β€œday in the life.”
  • Outcome: Reduces future acquisition costs over time.

Control costs through smarter bidding and optimization

LinkedIn’s ad platform can quickly become expensive without proper controls. Start with manual CPC bidding to maintain control, then test automated delivery once performance data is established.

More importantly, optimize for the right metrics. Focus on qualified applications instead of clicks. Track downstream actions, such as interview and hire rates.

Be prepared to make fast decisions. Ads with high click-through rates but low application rates often indicate poor alignment. Ads that generate many applications but few interviews signal weak pre-qualification.

Efficiency comes from eliminating wasted spend early rather than late: cutting underperforming ads quickly conserves budget, reduces audience overlap, and keeps you from paying to reach the wrong targets.
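The two failure modes above can be made concrete with a small diagnostic. This is a minimal sketch; the thresholds (1% CTR, 5% application rate, 10% interview rate) are illustrative assumptions, not LinkedIn benchmarks, so calibrate them against your own account data.

```python
def diagnose_ad(spend: float, impressions: int, clicks: int,
                applications: int, interviews: int) -> dict:
    """Compute downstream cost metrics for one recruitment ad and flag
    the two common failure modes: poor alignment and weak pre-qualification."""
    ctr = clicks / impressions if impressions else 0.0
    app_rate = applications / clicks if clicks else 0.0
    interview_rate = interviews / applications if applications else 0.0
    report = {
        "ctr": round(ctr, 4),
        "cost_per_application": round(spend / applications, 2) if applications else None,
        "cost_per_interview": round(spend / interviews, 2) if interviews else None,
    }
    if ctr > 0.01 and app_rate < 0.05:
        # Lots of clicks, few applications: the ad attracts the wrong people.
        report["flag"] = "poor alignment"
    elif applications >= 20 and interview_rate < 0.10:
        # Lots of applications, few interviews: the ad fails to pre-qualify.
        report["flag"] = "weak pre-qualification"
    else:
        report["flag"] = "ok"
    return report
```

Running this per ad, per week, turns "make fast decisions" into a concrete kill/keep rule.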

Dig deeper: LinkedIn Ads retargeting: How to reach prospects at every funnel stage

Improve conversion rates with a two-step application process

A common but costly mistake is sending candidates directly to long, complex application forms. Instead, use a two-step funnel:

  • Pre-qualification landing page.
    • Role overview and expectations.
    • Compensation transparency.
    • Clear β€œwho this is (and isn’t) for.”
  • Application.
    • Short form or LinkedIn Easy Apply.

This approach sets expectations, filters candidates, and significantly improves application quality β€” often reducing cost-per-hire by 30-50%.
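To see where a 30-50% reduction can come from, here is a back-of-the-envelope model. The spend, volumes, and rates below are purely illustrative assumptions, not client data.

```python
def cost_per_hire(spend: float, applications: int,
                  qualified_share: float, hire_rate: float) -> float:
    """Cost per hire, given the share of applications that are qualified
    and the hire rate among qualified applicants."""
    hires = applications * qualified_share * hire_rate
    return spend / hires

# One-step funnel: high raw volume, mostly unqualified applicants.
one_step = cost_per_hire(5000, 200, qualified_share=0.15, hire_rate=0.10)
# Two-step funnel: fewer applications, but far better qualified.
two_step = cost_per_hire(5000, 120, qualified_share=0.40, hire_rate=0.10)

reduction = 1 - two_step / one_step   # 0.375, i.e. 37.5% lower cost-per-hire
```

Under these made-up numbers, trading volume for quality cuts cost-per-hire by 37.5%, squarely inside the 30-50% band; the exact figure depends entirely on your funnel rates.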

Use retargeting to capture missed opportunities

Not every qualified candidate applies on the first interaction. Retargeting allows you to re-engage high-intent users who have already shown interest.

Build audiences from:

  • Career page visitors.
  • Job post viewers.
  • Video viewers (50%+ engagement).

Then serve follow-up messaging such as:

  • β€œStill considering a move?”
  • β€œLast chance to apply”
  • Employee testimonials or success stories.

Retargeting campaigns are often the most cost-efficient part of your entire strategy.

Advanced strategies to increase ROI

Once the fundamentals are in place, there are several advanced tactics that can further improve performance:

  • Competitor targeting: Target employees at competing companies and position your opportunity as a clear upgrade β€” whether through compensation, flexibility, or culture.
  • Skill-based campaign segmentation: Instead of grouping all candidates together, build campaigns around specific skills or certifications. This reduces competition in the ad auction and often lowers cost-per-click.
  • Selective use of Message Ads: Message ads can be effective for senior or hard-to-fill roles β€” but only when targeting is highly refined. Otherwise, they can quickly become cost-prohibitive.

Here’s an example of a successful LinkedIn InMail message that recently drove an applicant pool that was over 70% high-intent for an HVAC sales client:

Message body:

Hi [First Name],

This might be a stretch β€” but your background in HVAC sales caught my attention.

We’re hiring experienced sales reps who are tired of unpredictable commissions and weekend-heavy schedules.

This role is built for reps who:

  • Have 3+ years in HVAC or home services sales
  • Are comfortable running in-home consultations
  • Want a more stable, high-earning structure

What’s different:

  • No weekend appointments
  • Pre-qualified, inbound leads (no cold knocking)
  • Six-figure earning potential with consistency

That said, this isn’t a fit for entry-level reps or those new to sales.

If you’d be open to a quick 10-minute conversation to see if it’s worth exploring, I’m happy to share more.

If not, no worries at all β€” appreciate you taking a look.

β€” [Name]

Stating the need for “experienced sales reps” upfront immediately establishes relevance, increasing response rates while reducing irrelevant replies.

Focusing on what matters to potential candidates, such as no weekend appointments and compensation structure, speaks to the audience’s needs versus the company’s.

Closing the conversation with the reminder that this isn’t an entry-level position weeds out wasted conversations and reduces cost-per-hire.

Dig deeper: LinkedIn Message Ads: Everything you need to know

Intent beats reach in LinkedIn recruitment

The most effective LinkedIn recruitment campaigns don’t rely on bigger budgets or broader reach; they rely on better strategy.

When you focus on intent-based targeting, pre-qualification within ad creative, funnel segmentation, and conversion optimization, you create a system that attracts the right candidates while minimizing wasted spend.

Ultimately, reducing cost-per-hire is about reaching the right people, at the right time, with the right message.

Microsoft 365 will soon have helpers that take actions for you β€” here’s what that means

Microsoft’s latest hire is bringing OpenClaw and personal AI agents to Microsoft 365, but it also raises questions about the company's commitment to reducing AI integrations across its tech stack.

Microsoft Issues Emergency Fix for Windows 11 Update Installation Errors

1 April 2026 at 13:08
Late last week, Microsoft released its KB5079391 non-security feature update for Windows 11, which was officially pulled due to widespread installation errors. Today, the company is issuing the out-of-band KB5086672 update to address this problem, as Microsoft has identified the source of the issue and the update can now be safely applied. This latest out-of-band KB5086672 update includes the KB5079473 package released on March 10, KB5085516 released on March 21, and the previously pulled KB5079391 released on March 26. Microsoft has combined all of these into the new KB5086672 package, which addresses the issues that appeared and introduces a variety of new features. Finally, the old installation error message, "Some update files are missing or have problems. We'll try to download the update again later. Error code: (0x80073712)," has been resolved for good.

Microsoft notes that this out-of-band update is available through Windows Update for devices running Windows 11 that have already installed KB5079473 or a later update. It is also available for manual download from the Microsoft Update Catalog. Currently, there are no known issues with this update; if any arise, Microsoft will flag them on its support documentation site. Notably, KB5086672 is one of Microsoft's first steps toward resolving the issues users have experienced with Windows 11 updates, and hopefully just the beginning of the overhaul the company has promised. Future non-security feature updates could also focus on other quality-of-life improvements, and installation errors should become less common.

FlowCastle – Build powerful chatbots with a no-code/low-code platform


FlowCastle is a visual platform for building AI-powered chatbots without code. Use a drag-and-drop editor to design flows, then extend with TypeScript actions, HTTP requests, and integrations like Google Sheets. Launch on Telegram and reuse logic across brands with white-labeling. Accept payments, manage catalogs, and track orders inside the bot. Hand off to humans with live chat, run smart broadcasts, and monitor funnels with goal-based analytics. An AI copilot helps generate flows, write copy, and optimize automations.

View startup

HankRing – Find and rate dishes you crave, not venues, and track your finds


HankRing helps you find the best versions of the specific dishes and drinks you crave. Choose your Hanks, see verified, likely, and potential spots on a map, and rate the dishβ€”never the venueβ€”to build consensus for the community. Browse Top 50 and trending categories, add missing spots, and keep a private journal of every rating and verification. With thousands of curated places preloaded, you can discover great food from day one and plan where to hunt next.

View startup

Narratex – Write fiction with an AI mentor that remembers your whole story


Narratex is a writing workspace for fiction authors that unifies your Story Blueprint, a full editor, and an AI collaborator that remembers context across sessions. It keeps track of your characters, plot threads, settings, and themes so you never need to re-explain your magic system or paste cast lists again.

Start by importing your existing work and building your blueprint, then write with an assistant that's already read everything you've created, keeping you consistent and focused chapter to chapter.

View startup

(PR) ASUS Announces UGen300 USB AI Accelerator

1 April 2026 at 12:48
ASUS today announced the UGen300 USB AI Accelerator, the first AI USB device from ASUS, bringing inference performance directly to any device. An M.2 version is also available. This slim AI accelerator measures 105 x 50 x 18 mm and features the Hailo-10H AI processor, which delivers 40 TOPS of dedicated AI performance to support generative models such as LLMs and VLMs. UGen300 includes 8 GB of dedicated LPDDR4 memory and connects to host devices via a USB-C interface, consuming just 2.5 watts of power under typical workloads. The convenient plug-and-play design ensures cross-platform compatibility with Windows, Linux, and Android. UGen300 also supports major AI frameworks like TensorFlow, PyTorch, and ONNX right out of the box.

"By integrating the Hailo-10H into a ubiquitous USB device, ASUS brings the full power of AI and generative AI to everyone" said Max Glover, Chief Revenue Officer of Hailo. "We're excited to see how our developer community will use this plug-and-play accelerator to push the boundaries of on-device AI. This is exactly how Hailo envisions the future of AI: accessible, affordable, and designed for anyone to build with."

(PR) AI Compute Demand Drives 44% YoY Growth for Top 10 Global Fabless IC Firms in 2025

1 April 2026 at 12:41
Continued investment in AI infrastructure by major CSPs, including purchases of GPUs and deployment of in-house ASICs, has driven strong growth among AI-related chip designers, according to TrendForce's latest findings. In 2025, the total revenue of the top 10 fabless IC design houses exceeded US$359.4 billion, up 44% YoY. NVIDIA maintained its leading position, while Broadcom moved up to second place due to increased involvement in AI, overtaking Qualcomm, which continues to depend more heavily on consumer electronics.

Industry leader NVIDIA delivered another year of record revenue, supported by its strong AI chip portfolio and computing ecosystem. The company's fourth quarter revenue from data centers accounted for as much as 90% of its total. Full-year revenue rose 65% YoY to $205.7 billionβ€”the fastest growth among the top playersβ€”with its share of total top-ten revenue increasing further to 57%.
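As a quick sanity check on the figures above (this is arithmetic on the quoted numbers only, not additional TrendForce data):

```python
# All figures in USD billions, as quoted in the report above.
total_top10 = 359.4        # 2025 top-10 fabless revenue
nvidia_2025 = 205.7        # NVIDIA full-year 2025 revenue

# NVIDIA's share of top-10 revenue: ~0.572, matching the quoted "57%".
nvidia_share = nvidia_2025 / total_top10

# Revenue implied for 2024 by the quoted 65% YoY growth: ~124.7.
nvidia_2024_implied = nvidia_2025 / 1.65
```

The quoted share and growth figures are internally consistent with each other.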

Google Attributes Axios npm Supply Chain Attack to North Korean Group UNC1069

Google has formally attributed the supply chain compromise of the popular Axios npm package to a financially motivated North Korean threat activity cluster tracked as UNC1069. "We have attributed the attack to a suspected North Korean threat actor we track as UNC1069," John Hultquist, chief analyst at Google Threat Intelligence Group (GTIG), told The Hacker News in a statement. "North Korean

Claude Code Source Leaked via npm Packaging Error, Anthropic Confirms

Anthropic on Tuesday confirmed that internal code for its popular artificial intelligence (AI) coding assistant, Claude Code, had been inadvertently released due to a human error. "No sensitive customer data or credentials were involved or exposed," an Anthropic spokesperson said in a statement shared with CNBC News. "This was a release packaging issue caused by human error, not a security

Pine AI – Cut bills, cancel subs, and win refunds with an AI that makes calls


Pine doesn’t just draft or organize β€” it emails, calls, researches, plans, follows up, and persists until the job is done. For companies, Pine acts as an execution arm across CEOs, operations, finance, sales, marketing, and executive assistants β€” closing open loops, renegotiating contracts, chasing invoices, coordinating vendors, and unblocking stalled deals. For individuals who value time more than money, Pine handles life’s friction β€” negotiating bills, canceling subscriptions, filing claims, and waiting on hold. Pine turns decisions into outcomes β€” autonomously, persistently, and without expanding headcount.

View startup

Claras – Get instant transcripts and chat with any YouTube video using AI


Claras lets you get instant transcripts and chat with any YouTube video using AI. It analyzes full videos to answer questions, generate summaries, and build a clickable table of contents so you can jump to key moments with confidence. You can highlight insights, save notes, and export to TXT or PDF. Use transcripts to power ChatGPT, Claude, or custom agents, and collaborate with teammates.

View startup

PrimeClaws – Host your OpenClaw AI agent 24/7 with no server management


OpenClaw Hosting is a managed cloud platform for running OpenClaw, the open-source autonomous AI agent, 24/7 without dealing with servers or Docker. It supports any OpenAI-compatible model, including Claude, GPT, Gemini, and local models via Ollama, and includes free access to Kimi K2.5. Connect your agent to Telegram, WhatsApp, Slack, Discord, Signal, or iMessage, and keep data private with isolated containers and local-first storage. The platform handles updates, monitoring, and scaling so your agent stays online and productive.

View startup

Iran’s internet shutdown proves we need to go beyond Starlink and VPNs β€” this tech could be the solution

Direct-to-Cell (D2C) has the technical power to be the solution to today's internet shutdowns. Digital rights groups, Access Now and WITNESS, are now calling on software developers and lawmakers to ensure this happens.

Grails – Audit, benchmark, and source strategic domains backed by data


Grails provides domain intelligence to help VCs, founders, and operators evaluate company domains, discover naming opportunities, and connect with owners. Use domain health audits, industry and funding-stage benchmarks, valuations, risk scoring, and curated lists to spot gaps and acquisition targets. Post a domain request and get responses from owners, or browse available strategic names and work with verified brokers to move fast and avoid costly mistakes.

View startup

WhatNext – Get a complete AI-planned night out in seconds


WhatNext is an AI-powered planner that instantly builds complete itineraries for date nights, friend hangouts, day trips, and weekend adventures using real places, venues, and live events near you. Enter your location and vibe, and it assembles dinner, activities, dessert, and drinks with Google Maps links. Customize budget and preferences, regenerate alternatives, save favorites, and use it across 50+ US cities. Free to start.

View startup

Accomplish It – Capture, organize, and showcase your career accomplishments


Accomplish It helps you capture, organize, and showcase your career accomplishments. Connect work sources like GitHub and Jira or reply to periodic prompts, and its AI records, categorizes, and turns results into resume-ready statements. Build a living resume to share a timeline, export polished resumes and career artifacts, and benchmark progress by role to stay ready for reviews and new opportunities.

View startup

Multi-GPU Returns – Nvidia unveils β€œAI SLI” to power DLSS 5

Return of the King – multi-GPU PC gaming is ready for a comeback. During the company's DLSS 5 reveal, Nvidia teased something massive: when demoing its next-generation DLSS features, Nvidia was running multi-GPU systems. While Nvidia confirmed that DLSS 5 will be usable on single-GPU systems later this year, this demo highlighted something bigger: the […]

The post Multi-GPU Returns – Nvidia unveils β€œAI SLI” to power DLSS 5 appeared first on OC3D.

Ubisoft in Legal Hot Water over The Crew Shutdown

1 April 2026 at 03:51
If you've been following the Stop Killing Games movement, you'll know that Ubisoft shutting down The Crew, a fairly modern video game by most standards, having launched in 2014, has ruffled some feathers. Now, as reported by Reuters, Ubisoft has been taken to court by French consumer action group UFC-Que Choisir, which argues that the contractual practices Ubisoft engages in when it sells games may be abusive and deny consumers their rights.

Ubisoft, as is the case with many gaming companies, argues that players buy limited licenses to play the games they pay for, not an actual product, and that the license can be revoked at any time. With lawsuits like this one, UFC-Que Choisir intends to put an end to these "harmful practices," remove the relevant clauses from sales contracts, and make Ubisoft recognize the harm done to the collective interests of consumers.

Pragmata Goes Gold, Capcom Readies for April 17 Launch

1 April 2026 at 03:33
After previously announcing an April 24 launch date, Capcom moved the launch of Pragmata forward to April 17 and subsequently celebrated hitting 2 million demo downloads and game wishlists. Now, in another stroke of positive news, Capcom has announced that Pragmata has gone gold, meaning that the game is in a stable, functional state and ready for launch on PlayStation 5, Nintendo Switch 2, Xbox Series X|S, and Windows.

Pragmata was originally announced in 2020, with a release date initially slated for 2022. However, the game went through multiple iterations during that time and ended up being pushed back to 2026. Pragmata is Capcom's first new franchise since the launch of Dragon's Dogma in 2012, and it attempts an interesting combination of third-person shooter combat and hacking mechanics, alongside sci-fi, narrative- and exploration-driven core gameplay.

Disco Elysium Dev Announces Launch Date for Zero Parades: For Dead Spies Alongside New Trailer

1 April 2026 at 03:13
ZA/UM, the indie game studio famous for the avant-garde Disco Elysium, has officially announced that its next game, Zero Parades: For Dead Spies, will launch on May 21, 2026 on Steam, the Epic Games Store, and GOG, with a PS5 release planned for later in 2026. The announcement was made alongside the release of an appropriately eerie release date trailer.

Zero Parades: For Dead Spies is a story-rich indie spy thriller RPG that follows a renowned spy, Hershel Wilk, on one last mission. According to the game's Steam Store page and the published imagery, it will have a customizable skill tree, a strong narrative in which choices matter, and a decent bit of tactical gameplay, all wrapped in a surrealist aesthetic.

New 'The Lord of the Rings' RPG in Development at Crystal Dynamics

1 April 2026 at 02:52
There were rumors of a new The Lord of the Rings game in development late in 2025, but not much else was known about it other than that it had a sizeable budget of around $100 million and was to compete with Hogwarts Legacy when it came to game design and mechanics. Now, Insider Gaming has reported that the new The Lord of the Rings game is being developed by Crystal Dynamics, not Warhorse Studios, although the report claims that there may be another LOTR game in development at Warhorse.

The game said to be in development at Crystal Dynamics is a third-person action RPG reportedly funded by the Abu Dhabi Investment Office, and it has already been in development for a while. Neither Crystal Dynamics nor Embracer Group has confirmed the game, but it may be welcome news to The Lord of the Rings fans who were looking forward to the Lord of the Rings MMO that Amazon recently cancelled.

dubltap.io – 8 AI apps each built to do one thing well, AI that actually does stuff


dubltap.io is an ecosystem of 8 single-purpose AI web apps. Each one solves one problem well. Market Maven offers competitive intelligence. Bad Mutha Forker transforms recipes. CLIFF NOTEZ analyzes documents. There are 5 more tools for sales, design, music, side hustles, and cognitive enhancement. All are free to try.

View startup

Painkiller Ideas – Discover and validate real startup ideas with AI-driven research


Painkiller Ideas helps founders find ideas worth building and validate them fast. It scrapes Reddit, Hacker News, GitHub, and Product Hunt for real complaints, then uses AI to score pain intensity, market size, and competition. Submit any concept to get market sizing, competitor analysis, ideal customer profiles, pricing strategy, and a prioritized validation roadmap. Access playbooks, prompts, landing page wireframes, and brand assets, and join a community of builders to source problems and compare notes.

View startup

Lenovo Yoga Mini i Surfaces with Intel Panther Lake and 32 GB RAM

1 April 2026 at 00:24
Before its global rollout, Lenovo first launched the Yoga Mini i mini PC in China, a few months after its introduction at CES 2026. The Yoga Mini i is built around the Intel Panther Lake platform, with configurations listed up to a Core Ultra X7 385H processor at a 45 W TDP. Graphics are handled by integrated Intel Arc B-series GPUs, with the top configuration reaching Arc B390. The system also includes an NPU rated at up to 50 TOPS, aligning it with Microsoft Copilot+ PC requirements. Memory goes up to 32 GB of LPDDR5x, paired with up to 2 TB of PCIe Gen 4 storage. Despite its compact footprint, measuring 130 x 130 x 48.5 mm and weighing around 600 g, the mini PC offers a relatively complete I/O setup, including multiple USB-C ports with Thunderbolt 4 and DisplayPort support, HDMI 2.1, USB-A, 2.5 GbE, Wi-Fi 7, and Bluetooth 6. The system integrates basic audio hardware with a 2 W built-in speaker and dual microphones. Security features include a fingerprint reader built into the power button, Human Presence Detection, and Walk Away Lock. Power comes from a 100 W adapter.

At this moment, Lenovo is only offering a lower-tier configuration in China equipped with an Intel Core Ultra 5 325 processor with 16 GB of RAM and a 512 GB SSD. This model is priced at CNY 5,499 (around $800), indicating that earlier references to a $699 starting price will likely apply to similar entry-level SKUs rather than higher-end Core Ultra X7 variants. Lenovo still lists the Yoga Mini i as "coming soon" in other regions, with a broader rollout expected later this year, presumably even before this year's Computex, which is held on June 2-5.

Lyn Career – Helps developers track applications, prep for interviews, and review CVs


Lyn Career is a career intelligence platform that turns your job search into a strategic plan. It lets you track every application in one dashboard and extract job details from URLs, screenshots, or PDFs. You get match scores, skill gap insights, and rejection pattern analysis. It offers CV intelligence with actionable rewrites, role-specific interview prep, offer comparisons, and smart follow-up reminders with ghost detection. A built-in kanban, calendar sync, and contact CRM help manage pipelines and relationships clearly.

View startup

Valyris – Stress-test your crowdfunding or investor pitch in minutes


Valyris helps founders find and fix weak points in a campaign or investor pitch before high-stakes reviews. It tests narrative clarity, proof strength, internal consistency, timing and exposure, ask/raise logic, and delivery credibility to reveal blind spots and rank priorities. Start with a free 8-question check, then upgrade to an Audit or Deep Audit for a fast PDF diagnosis with key fragilities, likely objections and responses, contradiction mapping, evidence scoring, and a concrete fix plan. It's designed for Kickstarter, Indiegogo, Seedrs, Crowdcube, Y Combinator, Techstars, and direct investor outreach.

View startup

Intel "Wildcat Lake" Core 300 Series Specifications Surface

31 March 2026 at 23:34
Intel's "Wildcat Lake" processors, part of the Core 300 series non-Ultra family, have been leaked by a reputable source Jaykihn0 on X, revealing the entire lineup across various configurations and SKUs. The lineup includes six SKUs across the Core 3, Core 5, and Core 7 tiers, all designed to operate within a 15 to 35 W TDP range. Each model features a hybrid core configuration, pairing two "Cougar Cove" P-cores with four low-power efficiency cores, completely omitting the traditional "Darkmont" E-cores. Boost clocks range from 4.3 GHz on the entry-level Core 3 304 up to 4.8 GHz on the Core 7 360. All six SKUs share 6 MB of L3 cache, a single NPU tile, and integrated Xe3 graphics. The leak suggests that Intel is bringing architecture closely related to the Core Ultra 300 "Panther Lake" mobile platform into the embedded and industrial space, or perhaps into low-cost laptop configurations that don't require the power of "Panther Lake," appealing to buyers seeking budget-friendly options.

The 2P+0E+4LPE core layout is a deliberate trade-off, prioritizing efficiency over raw multithreaded performance, which suits the thermal constraints common in edge and IoT deployments. NPU performance figures range between 15 and 17 TOPS across the lineup. While this won't power the largest LLMs, it may be more than sufficient for on-device inference in industrial or automation settings. The Core 3 304 deserves special mention: it reduces to a single P-core and one Xe graphics unit, creating a clear cost-optimized option at the bottom of the lineup. SIPP certification, important for buyers needing stable, long-lifecycle platform support, is available on the Core 7 360 and Core 5 330 but not consistently across the lineup. Notably, there is no vPro support on any SKU, clearly distinguishing "Wildcat Lake" from Intel's enterprise mobile portfolio.

NVIDIA Launches Auto Shader Compilation for Faster Game Loading and Less Stuttering

31 March 2026 at 22:37
The NVIDIA App update today introduced some interesting features, such as DLSS 4.5 dynamic multi-frame generation and a 6x mode. Additionally, the app now includes a new beta version of NVIDIA Auto Shader Compilation (ASC). This feature takes DirectX 12 shaders from games and quietly compiles them while the system is idle or not running any graphically intensive tasks. Typically, when you start a game, you have to wait for all assets to load and shaders to compile before you can begin playing. However, with ASC, NVIDIA aims to shorten this process by pre-compiling shaders to reduce loading times and, interestingly, decrease in-game stuttering, which can occur when shaders don't load properly. NVIDIA states that this feature is opt-in within the NVIDIA App and can be enabled by navigating to the Graphics Tab > Global Settings > Shader Cache. Once in the menu, users can access a range of settings, including the option to turn on Auto Shader Compilation.

Since ASC uses a separate folder, users will need to allocate sufficient disk space to store the shaders that ASC will access. In the NVIDIA App, gamers can choose the "Compile Now" option to pre-compile all game shaders immediately by clicking on three dots, or they can wait for the system to do it automatically when it becomes idle. As compiling shaders requires some computing power, there are settings to control system utilization, with the default set to medium. The NVIDIA App will also display the date of the last compilation. Interestingly, ASC will perform its functions once a game is downloaded and after a new driver update is installed for optimal performance. NVIDIA requires GeForce Game Ready Driver 595.97 WHQL or newer for ASC to work, and more optimizations are expected as the beta testing concludes in the coming weeks.