AnyToURL turns any file into a short, shareable link in seconds. Drag and drop to upload, then share an instant URL with browser previews for images, PDFs, and documents. Files are delivered over a global edge network for fast access worldwide. Add password protection, keep files temporary or make them permanent on paid plans, and manage uploads via API or CLI with custom domains and branding, supporting sizes up to 10GB.
NanoMaker AI is an all-in-one creative platform that lets you generate and edit images, videos, music, and voice using Nano banana AI and the world's top AI models under one subscription. Work in a seamless workflow: turn an image into a video, add background music, and export without switching tools. Use prompt-based editing, background removal, lighting control, and style transfer to produce consistent, professional results for marketing, content creation, education, and e-commerce.
People are going gaga over Apple's Lil Finder Guy, who arrived just in time for Apple's 50th Anniversary, but there's more to him than just a cute face.
Delve faces new allegations that it violated the open source license of its customer, Sim.ai, by taking the customer's tool and passing it off as its own.
Sonder profiles are completely unstructured, encouraging users to build something that looks like a mood board or a digital collage. Think MySpace rather than LinkedIn.
The Alienware Aurora Gaming Desktop, a top-of-the-line PC built for 1440p gaming, is currently enjoying a 23% discount as part of Dell's Spring Sales event.
Dell's Spring Sale is underway, and it's brought a 22% discount to one of our favorite mid-range gaming laptops with an RTX 5060 GPU and 32GB of RAM, the Alienware 16X Aurora.
The Early Access Program invites researchers to design and propose quantum experiments that push the boundaries of what current hardware can achieve. It is a selective program (the processor will not be publicly available), and Google is setting firm deadlines for participation. Research teams have until May 15,...
Embed AI into your app or site in just 3 lines of code. Normally, building AI into mature apps or websites requires dealing with vector databases, custom integration pipelines, authentication, and brittle LLM calls, which distract core engineering teams from shipping product features. EmbedAI solves this by providing a drop-in component that abstracts away infrastructure, letting you inject AI into your app logic without restructuring your backend. It requires zero backend maintenance or database provisioning, offers seamless UI matching your brand rules, and gives you complete control with your own API keys.
Lutily gives salons a branded booking page at yourname.lutily.com that lets clients pick services, choose a staff member, and reserve a real open slot in under a minute. It never shows competitors, charges no commission, and has no per-staff fees. Every booking is phone-verified to reduce no-shows.
Use Lutily to stack appointments to fill gaps, run a smart waitlist that auto-offers newly opened times, and control working hours by date. Manage a color-coded calendar, team permissions, client history and notes, and automatic SMS confirmations and reminders, with instant rescheduling and cancellations.
Researchers examined millions of webpages and found thousands of exposed API credentials, revealing persistent security gaps across cloud services and development environments.
Samsung's new 2026 TVs have been released, which means you can score massive discounts on existing models, and I've rounded up the 12 best deals from $249.99.
HP repackages Humane AI's failed technology into HP IQ, combining on-device intelligence, spatial awareness, and IT control for workplace collaboration.
Memory deals are more important than ever with the current RAM pricing surges, and Woot! is currently home to today's best deal. It expires tonight (if not sooner), so don't hold out too long.
Minecraft's latest Bedrock Edition preview shows off sulfur caves, springs, and new blocks, while Java Edition players can try the Herdcraft April Fools snapshot that reimagines inventory entirely.
Testing Speechify's new Windows app proves voice typing is faster than a keyboard, though AI quirks and a $29 monthly fee remain significant hurdles.
In its MLPerf Inference 6.0 submission, AMD did not simply revisit familiar benchmarks with a faster GPU. It expanded into first-time workloads, crossed the 1-million-tokens-per-second threshold at multinode scale and showed that partners can reproduce the results across a broader ecosystem. That combination matters because our customers no longer evaluate inference platforms on one metric alone. They want competitive single-node performance, efficient scale-out, faster bring-up on new models, reproducible results across partner systems and confidence that the software stack can keep pace. MLPerf Inference 6.0 let us show all of that in one submission.
Just as important, we showed that these results are not isolated. A broad partner ecosystem submitted across four AMD Instinct GPU types that closely reproduced numbers submitted by AMD and the first three-GPU heterogeneous MLPerf submission demonstrated that AMD hardware and AMD ROCm software can orchestrate meaningful inference throughput even across systems in different geographies.
The US government has selected BlackSky to design and build the next generation of its space surveillance capabilities. The newly announced contract is an indefinite delivery/indefinite quantity (IDIQ) agreement, meaning the company will provide as many satellites and monitoring services as the Air Force Research Laboratory requires for its missions....
Experience the pinnacle of gaming technology with GCS Cheats, the industry's leading provider of state-of-the-art gameplay modifications. It features the most intuitive interface and the lightest system footprint in its class, offering a powerful and easy-to-use level of customization. Every tool is designed for maximum stability, providing seamless integration into today's biggest games.
The Computer Emergency Response Team of Ukraine (CERT-UA) has disclosed details of a new phishing campaign in which the cybersecurity agency itself was impersonated to distribute a remote administration tool known as AGEWHEEZE.
As part of the attacks, the threat actors, tracked as UAC-0255, sent emails on March 26 and 27, 2026, posing as CERT-UA to distribute a password-protected ZIP archive.
The Shark FlexBreeze HydroGo is the best fan around and it's currently at a record low price. Here's why you should buy it now ahead of the summer heat.
Arriving now for Windows 11 Insiders is a buff to Task Manager that provides far more information about your PC's NPU usage and performance. Here's how it works and why it makes sense in 2026.
The tests, conducted by NJ Tech, used identical hardware: an AMD Ryzen 5 5600X paired with a Radeon RX 6700 XT, alongside 16GB of DDR4 memory, a 2TB NVMe SSD, a Corsair RM1000x power supply, and a Gigabyte X570 Aorus Elite motherboard. On the software side, Windows 11 ran AMD's...
Early reviews of the AirPods Max 2 highlight improved sound quality, excellent noise cancellation, and deeper Apple ecosystem integration, powered by the new H2 chip and added smart features.
Formo makes analytics and attribution easy for DeFi apps, so you can focus on growth. Understand who your users are, where they come from, and what they do onchain. Measure what matters and drive growth onchain with the data platform for onchain apps. Get the best of web, product, and onchain analytics on one versatile platform.
Lifeplanr visualizes your entire life as 4,680 weeks and lets you plan, journal, map travel, and track finances with a built-in FIRE calculator. You can see life phases at a glance, tag moods, attach photos, and scratch off countries you've visited.
Install it as a PWA on any device, switch between 10 themes, and use it offline. Your data stays on your device by default, with optional Pro cloud sync and easy export.
Google Ads quietly added an auto-apply setting to its experiments feature, and it's turned on by default, meaning winning experiment variants can be automatically pushed live without manual review.
How it works. Advertisers can choose between two modes: directional results (the default) or statistical significance at 80%, 85%, or 95% confidence levels. There is one built-in safeguard: if a chosen success metric performs significantly worse in the test arm, the change won't be automatically applied.
Why we care. Experiments are one of the most powerful tools in a Google Ads account. Automating the apply step could speed up testing cycles, but it also removes a critical checkpoint where advertisers catch unintended consequences before they affect live campaigns.
The catch. Experiments only allow two success metrics. That means a third metric you care about, one you didn't or couldn't select, could quietly be declining in the background, and the auto-apply setting would never catch it. The guardrails protect what you told Google to watch, not everything that matters.
The bottom line. The auto-apply feature is a reasonable shortcut for straightforward tests, but for anything consequential, manual review is still worth the extra step. Run the experiment, let it reach significance, then dig into the full data before pulling the trigger yourself.
First seen. This update was spotted by Google Ads specialist Bob Meijer, who shared it on LinkedIn.
Bing appears to be testing a significantly expanded sponsored products section in its shopping search results, featuring a double-rowed carousel that takes up considerably more real estate than its current format.
What was spotted: The test was flagged by digital marketer Sachin Patel, who noticed the expanded layout while searching for cushions on Bing. The format pairs a large double-rowed sponsored carousel with organic cards from individual websites beneath it.
Why we care. If this format rolls out broadly, it means significantly more screen space dedicated to sponsored products, which typically translates to higher visibility and more clicks for retailers running Microsoft Shopping campaigns. The double-rowed carousel format is also a more visually competitive layout, putting Bing's shopping ads closer in prominence to what Google Shopping already offers.
The catch: The test appears to be limited; not all users are seeing it. Search industry veteran Mordy Oberstein checked his own results and got a noticeably more compact layout, suggesting Bing is still in early experimentation mode.
The bottom line: Bing quietly runs a lot of SERP experiments that never make it to full rollout, so this one is worth watching but not banking on. Retailers running Microsoft Shopping campaigns should keep an eye out for any uptick in impressions if the format expands.
First spotted. This test was spotted by Sachin Patel, who shared a screenshot of it on X.
SEO tools were the most replaced martech application in 2025, but not for the reason you might expect.
According to the 2025 MarTech Replacement Survey, SEO platforms topped the list of replaced tools for the first time, overtaking categories like marketing automation platforms (MAPs), which had led for the past five years.
At first glance, that might suggest instability in SEO. After all, the discipline is being reshaped by LLMs, AI-generated answers, and the rise of zero-click search experiences, all of which challenge traditional keyword tracking and ranking-based workflows.
But the data tells a more nuanced story.
SEO tools: most replaced, but stabilizing
Even though SEO tools were the most replaced category in 2025, they were replaced at a slower rate than in prior years.
In other words, they're now the most commonly replaced, but also more stable than before.
That shift suggests a maturing category. Rather than widespread churn, marketers appear to be consolidating, upgrading, or refining their SEO stacks as search evolves.
Meanwhile, several other major martech categories saw sharper year-over-year declines in replacements:
CRM replacements fell more than 12% from 2024 to 2025, reaching their lowest level in the surveyβs history.
MAPs, email platforms, and CMS tools also declined compared to 2024.
Why SEO tools are being replaced
So if SEO tools aren't being swapped out due to instability, what's driving the changes?
The survey points to three primary factors:
1. AI capabilities
For the first time, the survey asked about AI's role in replacement decisions, and the impact was significant.
37.1% cited AI capabilities as an important factor.
33.9% said they wanted AI capabilities when replacing a tool.
This reflects a broader shift in SEO tooling, with platforms rapidly integrating AI for:
Content generation and optimization.
SERP analysis and intent modeling.
Workflow automation.
In many cases, replacing your SEO tool isn't about abandoning SEO; it's about upgrading to AI-native capabilities.
2. Cost pressures
Cost has become a major driver of martech replacement decisions, including SEO tools:
43.8% of marketers cited cost reduction as a reason for replacing an application in 2025.
That's up sharply from 23% in 2024 and 22% in 2023.
This suggests growing pressure to optimize and rationalize your SEO tech stack, especially as you evaluate overlapping functionality across tools.
3. Changing needs in a shifting search landscape
As search behavior changes, so do expectations for SEO platforms.
Traditional rank tracking and keyword monitoring are no longer sufficient on their own. Teams are increasingly looking for tools that can:
Surface insights across AI-driven SERPs
Track visibility beyond clicks
Integrate with broader marketing and data systems
That evolution is likely contributing to replacement activity, even as overall stability increases.
AI is reviving custom-built SEO tools
One of the more notable trends in the 2025 survey is the resurgence of homegrown solutions, including for SEO workflows.
Replacing commercial martech tools with homegrown applications accounted for:
8.1% of replacements in 2025
Up from 3.4% in 2024 and 5% in 2023
This marks a meaningful shift after years of near-total reliance on commercial platforms.
"AI-assisted coding is changing the calculus of build vs. buy," said martech analyst Scott Brinker. "It's easier and faster to build than ever before. Companies should still buy applications where they have no comparative advantage. But in cases where they can tailor capabilities to differentiate their operations or customer experience, custom-built software is an increasingly attractive option."
For SEO teams, this could mean more organizations building:
Custom data pipelines.
Proprietary SERP tracking systems.
AI-driven analysis tools tailored to their specific needs.
Other martech categories show even greater stability
While SEO tools led in total replacements, the broader martech landscape is becoming more stable.
Several major categories saw declining replacement rates in 2025, including:
CRM platforms (down more than 12% year over year)
Marketing automation platforms
Email distribution tools
Content management systems
This suggests that many organizations are settling into core systems while selectively updating areas, like SEO, that are changing faster.
Methodology
Invitations to take the 2025 MarTech Replacement Survey were distributed via email, website, and social media in Q4 2025.
A total of 207 marketers responded. Findings are based on the 154 respondents (60%) who said they had replaced a martech application in the previous 12 months.
Hop on over to Amazon right now and you can score some great tech deals in its Easter sale. I've picked out the 22 best offers with prices starting at £9.98.
It's Apple's 50th anniversary today, and while its employees are being treated to a Paul McCartney concert, you can treat yourself to today's best Apple deals from $15.99.
The Meta-owned company said it identified around 200 users who were tricked into installing a fake version of WhatsApp that was actually Italian-made spyware.
Sony hopes to deliver a better-than-Xbox Series S experience on its next-gen handheld with next-gen upscaling. According to the leaker KeplerL2, Sony's next-generation PlayStation 6 Handheld should feature a GPU that surpasses the Xbox Series S and Nintendo Switch 2. In fact, the leaker thinks that the system's GPU will be "a bit ahead of […]
EK by LM TEK is proud to introduce the EK-Quantum Vector³ TUF RTX 5070 Ti/5080 - Plexi, a high-performance full-cover water block compatible with both the ASUS TUF Gaming GeForce RTX 5080 and the ASUS TUF Gaming GeForce RTX 5070 Ti. Designed to deliver exceptional thermal performance across the GPU core, VRAM, and power stages, the EK-Quantum Vector³ TUF RTX 5070 Ti/5080 features an optimized open split-flow cooling engine, next-gen pre-cut thermal pads, and a full-coverage black anodized aluminium backplate. It is now available at the EK Shop and local resellers.
Engineered for the ASUS TUF Gaming GeForce RTX 5080 and the ASUS TUF Gaming GeForce RTX 5070 Ti, the EK-Quantum Vector³ delivers high-performance liquid cooling for enthusiasts who demand more. Featuring EK's expanded next-gen cooling engine, pre-cut high-performance thermal pads, and an advanced gasket design, this block ensures your GPU stays cool and efficient, even under the heaviest gaming, rendering, or overclocking loads.
Google officially announced its new anti-ransomware protections in September 2025, and the company is now making these tools available to Workspace customers using Google Drive for Desktop. The security features leverage a specially trained AI model, which has been further developed and refined over the past few months.
A diving accident at age 16 left Buckwalter paralyzed from the chest down. In 2024, he enrolled in a Caltech brain-computer interface study and underwent a craniotomy to have six Blackrock Neurotech chips implanted in his motor cortex.
Local chip makers take 41% of the Chinese AI semiconductor market, while Nvidia's market share falls to 55%, down from the company's claimed high of 95%.
The Nvidia App can now automatically recompile shaders for you in the background after every GPU driver update. This should save gamers several minutes across different titles, especially blockbuster ones, where shader compilation can often delay your session. You still need to compile shaders for the first time after a new install, however.
AI-powered ad bidding systems are highly sophisticated, but conversion tracking hasn't kept pace. Ad platforms encourage advertisers to track more actions, while many experts argue for tracking only final outcomes.
Both are partly true. Neither is universally correct.
In practice, both over- and under-signaling can hurt PPC performance. Too many loosely defined micro-conversions introduce noise. Bidding shifts toward easy, low-value actions, inflating reported performance while eroding real results. Too few signals leave the system without enough data to learn.
This dynamic is most visible in Performance Max and Search plus PMax setups, where the system optimizes toward whatever signals it's given, regardless of whether they reflect real business value.
Here's what happens when micro-conversions outnumber real conversions, why bidding systems behave this way, and how to build a conversion framework that aligns signal volume with business impact.
The myth of the "data-hungry" PPC algorithm
The idea that algorithms need as much data as possible has been repeated so often that it's become an assumption. Platform documentation, automated recommendations, and many PPC blog posts reinforce the same message: more signals equal better learning.
Bidding systems require a minimum level of signal density to function, but they don't benefit from indiscriminate micro-conversion signals. More data isn't always better data.
Adding low-intent or loosely correlated actions often degrades performance by shifting optimization toward behaviors that don't correlate with revenue.
Machine learning systems don't evaluate the strategic relevance of a signal. They evaluate frequency, consistency, and predictability.
When an account includes a mix of high- and low-intent micro-conversions (purchases, add-to-carts, pageviews, video plays, and soft leads), the system doesn't inherently understand which actions matter most to the business.
Without a clear value hierarchy, the bidding algorithm treats all signals as valid optimization targets. This creates a structural bias toward high-frequency, low-value actions because they're easier and cheaper to achieve. The result is a bidding pattern that maximizes conversion volume while minimizing business impact.
Why value-based bidding helps, but can't fix everything
Many practitioners advocate for value-based bidding, where each micro-conversion is assigned a relative financial or hierarchical value. In theory, this helps the system understand which signals matter most. You can also instruct the platform to maximize conversion value, which should push the algorithm toward higher-value purchases or sales-qualified leads (SQLs).
But value-based bidding isn't a complete solution. When too many micro-conversions are included, even with assigned values, the system can still become overwhelmed. A high volume of low-intent signals can dilute intent and distort the value hierarchy.
The issue isn't just a lack of context.
Every signal becomes part of the optimization math. If the model weighs signals by volume rather than business importance, low-intent micro-conversions will dominate. Assigning values helps clarify priorities, but it can't override signal imbalance. At a certain point, the math wins.
How PPC bidding follows the path of least resistance
In practice, this shows up as a "path of least resistance" problem.
Even with values assigned, bidding algorithms still optimize toward the signals they're given. When low-intent micro-conversions are included as Primary actions, the system treats them as efficient ways to increase conversion volume. This isn't an error. It's expected behavior for a model designed to maximize conversions within a set budget.
When those signals occur more frequently, the system gravitates toward them. A signal that fires hundreds of times a day will exert more influence than a high-value action that fires only a handful of times per week.
This dynamic is especially visible in PMax. The system evaluates signals across channels, audiences, and placements, and pursues the cheapest, most abundant path to conversion. If a contact page visit or key pageview is treated as a Primary signal, PMax may prioritize it over a purchase or SQL because it's easier to achieve at scale.
That's why PMax often reports strong conversion volume and low CPA while revenue remains flat or declines. The system is performing as instructed, but the inputs lack a disciplined signal hierarchy. Value-based bidding improves structure, but without restraint in the number and type of signals, it can't fully prevent the problem.
When low-value actions are tracked as Primary conversions, platform-reported performance becomes disconnected from business outcomes. Metrics such as CPA, ROAS, and conversion rate may improve, but those gains are often illusory.
For example:
A campaign may show a 40% reduction in CPA because the system is optimizing toward pageviews rather than purchases.
ROAS may increase because the system attributes inflated value to actions that donβt correlate with revenue.
Conversion volume may spike due to high-frequency micro-conversions.
These patterns create a false sense of success, leading advertisers to scale budgets prematurely and erode contribution margin.
Diluted intent and double-counting
When multiple micro-conversions are tracked as Primary, a single user journey can generate multiple wins for the algorithm.
For example, a user who views a product page, signs up for a newsletter, and adds an item to cart may be counted as three conversions from a single click. If values are assigned to each step, conversion value and ROAS become inflated as well.
This inflates conversion volume, inflates conversion value, and distorts bidding behavior. The system interprets this as a high-value user and begins overbidding on similar traffic, even if the user never completes a purchase.
In many accounts, micro-conversions outnumber real conversions by a ratio of 500 to 1 or more. This imbalance has significant implications for bidding behavior.
When frequency overwhelms value
If an account records 500 pageviews, 200 add-to-carts, 50 lead form starts, 10 purchases, and all actions are treated as Primary, the system receives 760 signals for every 10 that actually matter.
Without distinct values, the algorithm can't differentiate between a $0.05 action and a $500 action. It optimizes toward the most frequent signals because they provide the clearest path to increasing conversion volume.
Even when values are assigned, overvaluing micro-conversions teaches the algorithm to pursue easy wins. The result is a maximized conversion value metric that looks strong in the dashboard but isn't reflected in actual sales.
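The imbalance is easy to quantify. Here is a minimal sketch using the hypothetical signal counts from the example above (these are illustrative numbers, not real account data):

```python
# Hypothetical monthly signal counts from the example above (not real data).
signals = {
    "pageviews": 500,
    "add_to_carts": 200,
    "lead_form_starts": 50,
    "purchases": 10,
}

total = sum(signals.values())   # 760 signals in total
real = signals["purchases"]     # only purchases reflect revenue
real_share = real / total       # about 1.3%

print(f"{real} of {total} signals are real conversions ({real_share:.1%})")
```

With every action treated as Primary and no values assigned, the algorithm sees 75 low-intent signals for every purchase, which is exactly the frequency bias described above.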
The consequences of signal imbalance
When micro-conversions dominate the signal mix:
Bidding shifts toward low-intent traffic because it produces more conversions.
Budgets are allocated inefficiently as the system chases cheap signals.
Real ROAS declines, even as platform-reported ROAS appears strong.
Scaling becomes risky because the system is optimizing toward the wrong outcomes.
That's why accounts with high micro-conversion volume often show strong platform metrics but weak financial performance.
When micro-conversions stop helping
Micro-conversions are useful when an account lacks enough real conversion volume to support stable bidding. However, once a campaign consistently reaches 30 to 60 real conversions per month, they no longer provide meaningful benefit.
At that point, the system has enough high-quality data to optimize effectively. Continuing to rely on micro-conversions introduces unnecessary noise and increases the risk of misaligned bidding.
This is the point to transition from tCPA to tROAS and let real revenue guide optimization.
Primary actions influence bidding, while Secondary actions provide visibility without affecting optimization. This four-part litmus test helps determine which actions should be treated as Primary.
1. The volume threshold
Micro-conversions should be used only when real conversion volume isn't sufficient to support stable bidding. As a general guideline:
Below 30 real conversions per month: A high-intent micro-conversion may be needed to give the system enough data.
30 to 60 real conversions per month: Begin reducing reliance on micro-conversions.
60 or more real conversions per month: Remove micro-conversions from Primary status and rely on revenue-based optimization.
This threshold ensures micro-conversions serve as a temporary bridge, not a permanent crutch.
2. The necessary step test
A Primary action should represent a required step in the conversion journey, such as:
Add to cart.
Begin checkout.
Start lead form.
Actions that aren't required steps, such as contact page visits, whitepaper downloads, or time on site, shouldn't be treated as Primary. These may indicate interest, but they don't reliably predict revenue.
3. The valuation test
If an action can't be assigned a realistic financial value, it shouldn't be used as a Primary conversion. Assigning arbitrary values introduces risk and can distort bidding behavior.
Actions such as time on site or scroll depth fail this test because they don't consistently correlate with revenue. However, if CRM data shows a reliable statistical correlation with revenue, that can justify including the action.
4. The simplicity test
Even if multiple actions pass the first three tests, only the strongest one or two should be designated as Primary. Including too many Primary actions increases the risk of double-counting and overbidding.
A streamlined Primary set ensures the system focuses on the most meaningful signals.
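The four tests above can be summarized as a simple decision rule. This is a sketch under the article's thresholds; the `is_required_step` and `est_value` fields are hypothetical stand-ins for judgments you would make from your own funnel and CRM data:

```python
def classify_primary(real_conversions_per_month, candidate_actions):
    """Return the candidate actions that qualify for Primary status.

    candidate_actions: list of dicts with keys
      name, is_required_step (bool), est_value (float or None).
    """
    # 1. Volume threshold: with 60+ real conversions/month, rely on
    #    revenue-based optimization and keep no micro-conversions Primary.
    if real_conversions_per_month >= 60:
        return []

    qualified = [
        a for a in candidate_actions
        if a["is_required_step"]         # 2. Necessary step test.
        and a["est_value"] is not None   # 3. Valuation test.
    ]

    # 4. Simplicity test: keep only the strongest one or two actions,
    #    ranked here by estimated value.
    qualified.sort(key=lambda a: a["est_value"], reverse=True)
    return qualified[:2]

actions = [
    {"name": "add_to_cart", "is_required_step": True, "est_value": 300.0},
    {"name": "begin_checkout", "is_required_step": True, "est_value": 450.0},
    {"name": "contact_page_visit", "is_required_step": False, "est_value": None},
    {"name": "time_on_site", "is_required_step": True, "est_value": None},
]

primary = classify_primary(real_conversions_per_month=20, candidate_actions=actions)
print([a["name"] for a in primary])  # ['begin_checkout', 'add_to_cart']
```

Run against a candidate list, the rule keeps at most the two highest-value required-step actions, and drops micro-conversions from Primary entirely once real volume passes 60 per month.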
Use Secondary conversions as a diagnostic tool
Secondary conversions provide visibility into user behavior without influencing bidding. They're a useful diagnostic tool for understanding funnel performance and evaluating new signals.
Visibility without optimization risk
Tracking actions such as newsletter signups, video views, or soft leads as Secondary lets you monitor engagement without shifting bidding toward low-value behaviors.
This approach preserves data integrity while maintaining control over optimization.
Funnel analysis and bottleneck identification
Secondary conversions reveal where users drop off in the funnel. For example:
High Add-to-Cart volume but low purchase volume indicates checkout friction.
High MQL volume but low SQL volume suggests targeting or qualification issues.
These insights support more informed optimization decisions.
Safe testing environment
New signals should be tracked as Secondary for several weeks before being considered for Primary status. This allows you to evaluate frequency, correlation with revenue, stability, and predictive value.
Only signals that demonstrate consistent value should be promoted to Primary.
Assign micro-conversion values using a safety discount
When micro-conversions are used, they must be assigned values that reflect their true contribution to revenue. Overvaluing micro-conversions is a common cause of inflated platform performance and misaligned bidding.
Calculating baseline value
The baseline value of a micro-conversion is determined by:
Baseline value = Conversion rate to sale x Average order value (AOV) or profit
For example:
Ecommerce: If 25% of add-to-carts convert and AOV is $1,600, the baseline value is $400.
Lead generation: If 10% of demo requests convert to $5,000 profit, the baseline value is $500.
Applying the 25% safety discount
The baseline value shouldn't be used directly. Instead, apply a 25% reduction:
$400 becomes $300.
$500 becomes $375.
This discount helps prevent overbidding by ensuring the system doesn't overvalue micro-conversions relative to actual revenue.
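The baseline formula and the 25% safety discount reduce to a one-line calculation. A minimal sketch using the article's two worked examples:

```python
def micro_conversion_value(rate_to_sale, value_per_sale, safety_discount=0.25):
    """Baseline value = conversion rate to sale x AOV (or profit),
    reduced by the safety discount to avoid overbidding."""
    baseline = rate_to_sale * value_per_sale
    return round(baseline * (1 - safety_discount), 2)  # rounded to cents

# Ecommerce: 25% of add-to-carts convert at a $1,600 AOV.
print(micro_conversion_value(0.25, 1600))  # $400 baseline -> 300.0

# Lead gen: 10% of demo requests convert to $5,000 profit.
print(micro_conversion_value(0.10, 5000))  # $500 baseline -> 375.0
```

Setting `safety_discount=0` recovers the raw baseline if you want to compare the two figures side by side.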
Undervaluing is safer than overvaluing
Undervaluing micro-conversions may slightly slow learning, but it doesn't distort bidding. Overvaluing them can push the system toward low-intent traffic, leading to rapid budget misallocation.
The safety discount provides a buffer that protects contribution margin while still supplying useful data.
Where PPC experts draw the line on micro-conversions
Practitioners consistently point to the same principle: signal discipline matters more than signal volume.
Julie Friedman Bacchini emphasizes that every conversion action becomes a signal the system optimizes toward. Using more than one Primary action introduces ambiguity ("it's suddenly muddier"), and skipping values makes it easier for the system to latch onto lower-value signals. Values don't need to be exact, but they must be relative.
She also notes that micro-conversions can help low-volume campaigns reach data thresholds, but they aren't a substitute for real Primary conversions. Removing them later can mean "starting over to a large extent on system learning."
Jordan Brunelle takes a similarly disciplined approach: "There can definitely be too many." He recommends starting with one strong signal of intent and watching the ratio between micro-conversions and real outcomes. If volume is high but outcomes are low, it often signals a targeting or signal issue.
Signal discipline is the real competitive advantage
The debate around micro-conversions often focuses on quantity. But the real differentiator isn't volume; it's discipline.
Bidding systems optimize toward the signals they're given. When the signal mix is cluttered, performance drifts. When it's clear and intentional, the system aligns with real business outcomes.
Micro-conversions should be selectively used and continuously evaluated. Start with a simple audit:
Identify all Primary conversions.
If more than two or three actions are Primary, the account is likely over-signaled.
Apply the litmus test.
Remove any Primary actions that fail the volume, necessary step, valuation, or simplicity tests.
Move nonessential actions to Secondary.
Assign conservative values to remaining micro-conversions.
Use the safety discount to avoid overbidding.
Monitor performance for 30 days, focusing on revenue, contribution margin, and signal distribution.
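The first steps of this audit lend themselves to a quick script. In this sketch, the action names, Primary flags, event counts, and the 10x volume ratio are all invented for illustration; your real export will look different:

```python
# Hypothetical export of conversion actions: (name, is_primary, monthly_events)
actions = [
    ("purchase",          True,   40),
    ("demo_request",      True,   25),
    ("newsletter_signup", True,  900),
    ("pdf_download",      True, 1200),
    ("video_view",        False, 5000),
]

primaries = [name for name, is_primary, _ in actions if is_primary]

# More than two or three Primary actions suggests an over-signaled account.
if len(primaries) > 3:
    print(f"Likely over-signaled: {len(primaries)} Primary actions")

# A micro-conversion that dwarfs real outcomes is a candidate for Secondary.
counts = {name: events for name, _, events in actions}
for name in primaries:
    if name != "purchase" and counts[name] > 10 * counts["purchase"]:
        print(f"{name}: {counts[name]} events vs {counts['purchase']} purchases")
```

Here both newsletter signups and PDF downloads overwhelm real purchases by more than 10 to 1, flagging them for demotion to Secondary.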
Micro-conversions should be a temporary bridge. Once real conversion volume is sufficient, optimization should be guided by revenue. A disciplined signal architecture gives automation what it needs to perform as intended: efficient, predictable, and aligned with real business outcomes.
If you're a lawyer, college administrator, or financial services provider, you've likely seen the frustrating "Eligible (Limited)" status in your Google Ads account. It can feel like you're fighting Google with one hand tied behind your back when your remarketing lists, exact match keywords, and more don't work as intended.
While it might feel like Google Ads is out to get you when you operate in a so-called "sensitive interest category," there are specific reasons for these rules. More importantly, there are specific ways to succeed despite them.
This article will cover what the personalized advertising policies are, what they mean for your account, and five specific tactics you can use to succeed with Google Ads.
Why does Google have personalized advertising policies?
Google provides detailed explanations in its official policy documentation, but it comes down to two things: legal requirements and ethical standards.
In the United States, for example, the Fair Housing Act and employment laws prevent discrimination based on age, gender, or location. If you're advertising a job opening or a new apartment complex, Google can't allow you to exclude people based on those demographics because doing so would be against the law.
Then there's the ethical side. Imagine you're running a rehab center. If someone visits your site, Google's "sensitive interest" policy prevents you from following them around the internet with targeted banner ads like, "Still struggling with addiction? Come to our clinic."
That kind of remarketing is intrusive and, frankly, predatory when it targets someone's health and struggles. To protect the user experience and maintain a sense of privacy, Google limits how personal data can be used in these high-stakes industries.
What can't you do in a sensitive interest category?
If you fall into one of these categories (housing, employment, credit, healthcare, or legal services), the biggest impact is usually on your audience targeting.
Here's what you can't use:
Website or App Remarketing Lists, including the Google-engaged audience: You can't target people who have previously visited your website or used your app.
Customer Match: You can't upload your own email lists or phone numbers to target existing clients.
YouTube Audiences: You can't target people based on how they've interacted with your videos.
Custom Segments: You aren't allowed to build specialized audiences based on specific search terms or types of websites people visit.
For certain categories in certain countries, like housing, credit, and employment in the United States, there's further "demographic stripping": you can't target by age, gender, parental status, or ZIP code. Your Smart Bidding strategies won't use these signals as inputs either.
The good news: What can you do in a sensitive interest category?
It's easy to focus on what's gone, but what still works is a much longer list. Even in a restricted industry, you still have access to the core engine of Google Ads. You can still use:
Keywords, feeds, and keywordless technology: These rely on intent (queries) rather than identity, so they are perfectly fine in Search, Shopping, and Performance Max.
Google's audiences: Affinity, In-Market, Detailed Demographics, and Life Events segments are still fully at your disposal, where eligible, in Demand Gen, Display, Video, Search, and Shopping.
Optimized targeting: Google's AI can still find people likely to convert based on your historical converters, in Demand Gen, Display, and Performance Max.
Content targeting: You can choose to show your ads on specific keywords, topics, and placements in Display and Video campaigns.
Conversion tracking: Yes, you can still track conversions and use features like Enhanced Conversions, Offline Conversion Import, and Consent Mode. While your internal legal team may have reservations or restrictions around your website tracking, Google's personalized advertising policy doesn't restrict any conversion tracking.
5 strategies to win in sensitive categories
If you want to move the needle without relying on remarketing, you need to rethink your account structure and messaging. Here are five things you can do right now.
1. The "Separate Domain" strategy
If your business offers a mix of services, some sensitive and some not, don't let the sensitive ones "poison" your whole account. Think of a spa that offers haircuts, pedicures, and Botox. Haircuts are fine; Botox is a medical procedure that triggers sensitive category restrictions.
If you put them all on one site, your entire remarketing capability might get shut down. Consider putting the sensitive service on a separate domain and a separate Google Ads account. This lets you use every available tool for your main business while the sensitive portion operates under the necessary restrictions.
2. Choose Demand Gen over Display
If you want to use image or video ads, use Demand Gen instead of the standard Display Network. In my experience, Demand Gen delivers higher-quality audiences and tends to perform better in restricted niches.
3. Lean into Phrase and Broad Match
You might be tempted to stick to Exact Match keywords to keep things tight. However, in sensitive categories, Google may restrict ads on very narrow, specific queries for privacy reasons. If your Exact Match keywords aren't getting impressions, try Phrase or Broad Match. This gives the algorithm more room to find users searching for the same thing with slightly different phrasing that may be less restricted.
Think of it like fishing: if you can't use a spear, use a net. You'll catch some fish you don't want, but that tradeoff helps you catch the ones you do want more easily.
4. Feed the AI with offline conversion tracking
Most businesses in these categories, such as law firms or banks, don't make sales on their websites. The website generates a lead, and the sale happens over the phone or in an office.
If you want Google to find better users, you must feed that real-world data back into the system. Use Offline Conversion Tracking (OCT) to show Google which leads became customers. Even if you must navigate HIPAA or other privacy regulations, there are ways to do this safely.
Consult your legal team, but donβt skip this step. Itβs the best way to train the algorithm when you canβt use your own audiences and to ensure Smart Bidding works at its full potential.
5. Creative-led targeting
When you can't tell Google who to target with a list, you have to tell the user who the ad is for through your creative. Your headlines and images should qualify the lead.
Be specific in your copy. For example, instead of "Need a Lawyer?" try "Defense Attorney for Small Business." This attracts your target audience and encourages people who aren't a fit to scroll past, saving you money and improving your conversion rate.
Running Google Ads in a sensitive category is a challenge, but it's far from impossible. By shifting your focus from who the person is to what they're looking for and how you speak to them, you can still drive incredible results.
This article is part of our ongoing Search Engine Land series, Everything you need to know about Google Ads in less than 3 minutes. In each edition, Jyll highlights a different Google Ads feature and what you need to know to get the best results from it, all in a quick 3-minute read.
AI has changed how I work after nearly two decades in digital marketing. The shift has been meaningful, freeing up time, reducing the grinding parts of the job, and making some genuinely hard tasks faster.
That doesn't mean it does the work for you, transforms everything overnight, or saves you 40 hours a week. In real-world SEO, with real clients and real deadlines, it's a tool that makes parts of the job easier, not something that replaces the work itself.
Here are 20 ways I actually use it. Some are specific to SEO. Some are broader, but relevant to anyone working in the industry. All of them are practical, tested, and honest about their limitations.
Content creation and copywriting
1. Writing first drafts
The single best way to use AI for content is to stop expecting it to produce something publishable and start treating it as a very fast first-draft machine.
Feed it your brief, your target keyword, your audience, and your angle. Get a structure back.
Then rewrite it in your voice. Add in the expertise that only you know, not a vanilla version of whatβs online.
The content AI produces out of the box is average. Your job is to make it good. Reference real-life stories, case studies, and statistics, and showcase your personal viewpoint and expertise.
The time savings are in not starting from a blank page.
2. Generating meta title and description variations
Give Claude or ChatGPT your target keyword, page topic, and character limits. Ask for 10 variations of your meta title and descriptions. You'll use one, maybe combine two, but the process takes two minutes instead of 20. For large sites with hundreds of pages, this alone is worth the subscription.
Many tools allow you to upload CSV files, add AI's suggested ideas, and download them for review. Don't skip this step. A human eye is where the value sits.
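For bulk work, the CSV round trip can be scripted. This sketch assumes hypothetical `url`, `keyword`, and `topic` columns and an invented prompt template with a 60-character limit; adjust all of them to your own export and style guide:

```python
import csv
import io

# Hypothetical page export; in practice this comes from your CMS or a crawler.
data = """url,keyword,topic
/pricing,crm pricing,CRM pricing plans
/features,crm features,CRM feature overview
"""

rows = list(csv.DictReader(io.StringIO(data)))

# Build one prompt per page to paste (or send) to your AI tool of choice.
prompts = [
    f"Write 10 meta title variations (max 60 characters) for a page about "
    f"{row['topic']}, targeting the keyword '{row['keyword']}'."
    for row in rows
]

print(prompts[0])
```

The responses still go back into a spreadsheet for human review, exactly as described above.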
3. Refreshing underperforming content
Paste an existing page or blog post that has dropped in rankings. Ask AI to identify what's missing, what could be expanded, and what feels outdated.
It won't always be right, but it gives you a starting point instead of reading the whole thing yourself with fresh eyes you don't have at 4 p.m. on a Thursday.
Make sure to give context. Long prompts with lots of detail will produce much better results than pasting a page in cold.
4. Generating FAQ sections
Prompt AI to generate the 10 most common questions for your target keyword. Cross-reference with People Also Ask and your own research.
Answer them, and you now have an FAQ section, featured snippet opportunities, and a content gap analysis in about 10 minutes.
5. Writing alt text at scale
Nobody enjoys writing alt text for 200 product images. Describe the image, give it the context of the page it sits on, and include the target keyword. Then ask for alt text that's descriptive and naturally includes the term where relevant. It's not glamorous, but it's necessary and faster.
You can also run a website through Screaming Frog, export it to a CSV file, upload it to your AI of choice, and ask it to write the alt text. This only works well if the file names are descriptive, and again, a human eye is key. This is about increasing speed, rather than handing it over to AI completely.
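A rough sketch of the CSV approach in Python. The export columns, example URLs, and the "descriptive filename" heuristic are all assumptions, and every draft still needs that human eye:

```python
import csv
import io
import re
from pathlib import PurePosixPath

# Hypothetical Screaming Frog image export: one row per image URL.
export = """Address,Alt Text
https://example.com/img/red-leather-office-chair.jpg,
https://example.com/img/IMG_0042.jpg,
"""

def draft_alt_text(url: str) -> str:
    """Turn a descriptive file name into draft alt text for human review."""
    stem = PurePosixPath(url).stem
    words = re.sub(r"[-_]+", " ", stem).strip()
    # Non-descriptive names (e.g. IMG_0042) need a human, not a guess.
    if re.fullmatch(r"(img|dsc|image)?\s*\d+", words, re.IGNORECASE):
        return "NEEDS MANUAL REVIEW"
    return words.capitalize()

for row in csv.DictReader(io.StringIO(export)):
    print(row["Address"], "->", draft_alt_text(row["Address"]))
```

The point is triage: descriptive file names become usable drafts in seconds, and camera-default names get routed to a person instead of an invented description.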
6. Troubleshooting technical issues
Not everyone working in SEO has a developer background. AI is useful for:
Translating technical error messages.
Explaining what a server log is telling you.
Helping you understand why a page is excluded from indexing.
Paste in the output, ask it to explain it in plain English, and then ask what the fix should be. Verify the answer, but it gets you most of the way there.
7. Writing schema markup
Schema is one of those things everyone knows they should be doing more of, and nobody finds especially enjoyable.
Describe the content of your page to your AI of choice, tell it what schema type is relevant (FAQ, Article, LocalBusiness, Product, etc.), and ask it to generate the JSON-LD.
Check it in Google's Rich Results Test before implementation. This used to take me 20 minutes per page type. Now it takes five.
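For reference, here is the shape a minimal FAQPage JSON-LD object should have, built in Python with placeholder questions and answers; whatever your AI tool generates, it should look like this before you run it through the Rich Results Test:

```python
import json

# Placeholder Q&A pairs; replace with the real questions from your page.
faq = [
    ("What is schema markup?",
     "Structured data that helps search engines understand page content."),
    ("Do I need a developer to add it?",
     "No; JSON-LD can be pasted into the page head inside a script tag."),
]

schema = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": question,
            "acceptedAnswer": {"@type": "Answer", "text": answer},
        }
        for question, answer in faq
    ],
}

print(json.dumps(schema, indent=2))
```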
8. Creating regex for Google Search Console
If you use regex in GSC filters and you're not a developer, AI is your new best friend. Describe what you're trying to filter, for example, all URLs containing a specific subfolder, or all queries including a particular term, and ask for the regex string.
It gets it right more often than not, and you can ask it to explain the logic so you actually understand what you're implementing.
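Two common GSC filter patterns, tested here with Python's `re` module. GSC filters use RE2 syntax, so stick to basic constructs like these (no lookaheads or backreferences); the example URLs and terms are invented:

```python
import re

# Page filter: all URLs inside the /blog/ subfolder.
subfolder = re.compile(r"^https?://[^/]+/blog/")

urls = [
    "https://example.com/blog/seo-checklist",
    "https://example.com/pricing",
]
print([u for u in urls if subfolder.match(u)])

# Query filter: all queries mentioning "pricing" or "cost".
query_filter = re.compile(r"pricing|cost")
print(bool(query_filter.search("crm pricing comparison")))
```

Testing the pattern locally like this, before pasting it into GSC, is exactly the "ask it to explain the logic" step in script form.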
9. Analyzing crawl data with prompts
If you export a crawl from Screaming Frog or Sitebulb and you're not sure what to prioritize, paste the summary data into your AI tool and ask it to help you identify the highest-priority issues based on the site's goals.
It won't replace your expertise, but it's a useful sounding board when you're staring at a spreadsheet with 47 issues and a client call in an hour.
10. Drafting report narratives
This is one of the most underrated uses of AI in SEO work. You have the data. You have the graphs. What takes time is writing the commentary that explains what happened, why, and what comes next.
Feed AI your key metrics and the context of what was happening that month (algorithm updates, campaign launches, seasonality), and ask it to draft the narrative section of your report. Edit it, add your actual insight, but stop writing it from scratch every month.
You can even upload reports from various data sources and ask it to combine and summarize them. This saves me hours every month when I'm putting together reports.
11. Summarizing long reports for clients
Not every client wants to read a 12-page report. Ask AI to summarize your report into a five-bullet executive summary. Give it to clients at the top of the document.
The ones who want details will read on. The ones who don't will feel informed without asking you to talk them through every chart on the next call.
Ask AI to create the executive summary for someone who doesn't know anything about SEO, and it'll give you something simple and easy to understand.
12. Identifying anomalies in data
Paste a table of your keyword rankings or traffic data, and ask AI to flag anything that looks unusual, including significant drops, unexpected gains, or patterns that don't match the previous period.
It won't replace proper analysis, but it's a useful first pass when you're managing a large amount of information and can't give every dataset the attention it deserves.
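You can also do a crude first pass yourself before involving AI at all. This sketch flags period-over-period swings beyond a threshold; the keywords, click counts, and the 40% cutoff are all invented:

```python
# (keyword, previous_period_clicks, current_period_clicks) - numbers invented
data = [
    ("crm pricing", 480, 450),
    ("crm reviews", 300, 120),
    ("free crm",     90, 260),
]

THRESHOLD = 0.40  # flag moves beyond +/-40%; the cutoff is arbitrary

for keyword, prev, curr in data:
    change = (curr - prev) / prev
    if abs(change) >= THRESHOLD:
        label = "drop" if change < 0 else "gain"
        print(f"{keyword}: {change:+.0%} {label}")
```

Here "crm reviews" and "free crm" get flagged while the small dip on "crm pricing" is ignored, which is the same triage the AI prompt is doing for you at scale.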
13. Competitor content gap analysis
List your top three competitors and your own site. Ask AI to help you think through what content topics they're likely covering that you're not, based on their positioning and audience.
Then, validate that with actual keyword research tools. AI can't see competitor data directly, but it's useful for hypothesis generation before you do the manual work.
14. Understanding a new industry quickly
When you take on a client in an industry you don't know well, you need to get up to speed fast. Ask your AI to give you a primer on the industry:
Key terminology.
The main players.
The buying cycle.
How people typically search for solutions in this space.
What the common pain points are.
It saves you an embarrassing amount of time in discovery calls.
15. Identifying search intent mismatches
Paste a list of your target keywords and ask AI to categorize them by search intent: informational, navigational, commercial, and transactional. Then compare that against the page type you're targeting them with.
You'll almost certainly find mismatches. This is a task that's straightforward to describe, but tedious to do manually across hundreds of keywords.
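For a rough pre-sort before (or alongside) the AI pass, a simple rule-based tagger works; the cue words below are illustrative only, and a human should review anything ambiguous:

```python
# Crude intent heuristics; checked in order, first match wins.
INTENT_CUES = {
    "transactional": ("buy", "pricing", "discount", "order"),
    "commercial": ("best", "review", "vs", "comparison", "top"),
    "navigational": ("login", "dashboard", "signup"),
}

def classify(keyword: str) -> str:
    words = keyword.lower().split()
    for intent, cues in INTENT_CUES.items():
        if any(cue in words for cue in cues):
            return intent
    return "informational"  # default when no cue matches

for kw in ["best crm for small business", "hubspot login", "what is a crm"]:
    print(kw, "->", classify(kw))
```

Join the intent column back to your page-type list and the mismatches (e.g. a transactional keyword pointed at a blog post) fall out immediately.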
16. Writing difficult client emails
Everyone has had to write a difficult email, whether it's explaining why rankings have dropped, why a deadline was missed, or why they need to do something you know they don't want to do.
These emails take a disproportionate amount of emotional energy to write. Give your AI the situation, the context, and what you need the client to understand or do, and ask for a draft that's clear, professional, and honest.
Edit it. Send it. Move on.
17. Writing SOPs and process documentation
If you've been meaning to document your processes and just haven't gotten around to it, AI removes the excuse.
Describe a process out loud (or in rough notes), paste it in, and ask for a structured SOP with numbered steps, decision points, and notes.
The first version will need editing, but having a framework to work from is the difference between getting it done and it sitting on the to-do list for another quarter.
18. Preparing for client calls
Before a client call, paste in your recent report data, any issues from the previous month, and what you need to cover.
Ask your AI to help you structure the agenda and anticipate questions the client might ask based on the data. You'll go into the call more prepared and less likely to be caught off guard.
Productivity and admin
19. Processing your own thinking
This one sounds vague, but it's one of the ways I use AI most.
When I have a problem I can't get clear on, a strategy decision I'm going back and forth on, or a piece of work I can't find the right angle for, I talk it through with Claude (my AI buddy of choice) to clarify my own thinking. It asks questions, reflects things back, and helps me arrive at a point of view faster than I would staring at a blank document.
Ask your AI to be brutally honest with you. Otherwise, it'll just keep agreeing with you and telling you that you're truly an expert on every topic.
20. Building prompts you actually reuse
The biggest productivity gain from AI isn't any individual use. It's building a library of prompts that work for your specific workflow and reusing them consistently.
Every time you get a good result from an AI tool, save the prompt. Over time, you build a system, rather than starting from scratch every time. This is the thing most people skip, and it's the thing that compounds.
Top tip: In the paid version of many AI tools, you can create projects and have specific instructions for each one. This is invaluable for saving time by not having to include all of this information in every prompt you use.
None of these tips replace the expertise, judgment, and client relationships that make a good SEO professional.
AI doesn't know the business the way you do. It doesn't understand the nuance of an industry, the history of an account, or the particular quirks of a contact you deal with regularly.
AI reduces the time spent on tasks that donβt require that expertise, so you have more of it available for the work that does.
Use AI as a tool. Stay skeptical of the hype. And for the love of good search results, edit everything before it goes anywhere near a client.
OpenClaw Skills are modular extensions that tell your AI agent what tools to use and how. Here's how they work, what they're good for, and the risks to consider before you install.
Cybercriminals are impersonating top brands like Meta, Disney, and Spotify in a highly sophisticated new phishing campaign designed to hijack your Facebook account. Here is everything you need to know to stay safe.
The American toy-making giant noted that it was continuing to "implement measures to secure its business operations," suggesting that the hackers may still be in the company's systems.
Early tests of NVIDIA's new DLSS 4.5 features, Dynamic Multi Frame Generation (MFG) and MFG 6X, have shown positive results. It certainly seems like AI-assisted software is the new GPU frontier, and one day that won't be controversial.
Nintendo's legal crusade against Palworld just hit a massive roadblock in the United States. Following a rare, high-level review by the U.S. Patent and Trademark Office (USPTO), a patent examiner has issued a "non-final" rejection of all 26 claims in Nintendo's controversial "summon and fight" patent.
Worldwide 300 mm fab equipment spending is expected to increase 18% to $133 billion in 2026 and 14% to $151 billion in 2027, SEMI reported today in its latest 300 mm Fab Outlook. This strong growth reflects surging AI chip demand for data centers and edge devices, as well as the growing commitment to semiconductor self-sufficiency across key regions through localized industrial ecosystems and supply chain restructuring. Looking further out, the report projects investment will continue to climb, rising 3% to $155 billion in 2028 and another 11% to $172 billion in 2029.
"AI is resetting the scale of semiconductor manufacturing investment," said Ajit Manocha, President and CEO of SEMI. "With global 300 mm fab equipment spending projected to exceed $150 billion in 2027 for the first time, the industry is making historic, sustained commitments to the advanced capacity and resilient supply chains needed to power the AI era."
Hooded Horse, Unfrozen Studio, and Ubisoft are excited to announce the news that everyone has been waiting for - Heroes of Might and Magic: Olden Era will release on PC via Steam Early Access and the Microsoft Store (via Game Preview) on April 30, 2026. It will also be coming to PC Game Pass day one.
Made for series veterans and newcomers alike, Heroes of Might and Magic: Olden Era is built on the familiar foundations of one of the most critically acclaimed strategy series of all time, introducing new and classic game modes that will let people play solo or with friends however they please. Engage in strategic empire building, epic turn-based tactical battles, and in-depth RPG mechanics, all while exploring a vibrant, never-before-seen land full of secrets and dangers.
Intel Corporation (Nasdaq: INTC) and Apollo (NYSE: APO) today announced a definitive agreement for Intel to repurchase the 49% equity interest in the joint venture related to Intel's Fab 34 in Ireland not held by Intel for $14.2 billion. The agreement reflects Intel's continued business momentum underpinned by the growing and essential role CPUs play in the era of AI, a significantly strengthened balance sheet and the strong partnership between Intel and Apollo.
In 2024, Apollo-managed funds and affiliates led an $11.2 billion investment to acquire a 49% equity interest in a joint venture entity related to Fab 34, providing Intel with equity-like capital while preserving balance sheet strength. This transaction provided Intel with significant financial flexibility and enabled the company to unlock and redeploy capital to advance its strategic priorities including accelerating the buildout of Intel 4 and Intel 3, the most advanced processes manufactured in Europe, and of Intel 18A, the most advanced process developed and manufactured in the U.S. today.
The exposure traces back to version 2.1.88 of the @anthropic-ai/claude-code package on npm, which was published with a 59.8MB JavaScript source map intended only for internal debugging.
Crimson Desert, which was one of our Most Anticipated Games of 2026, arrived to review scores of between 60 and 70 in most cases. It also held a disappointing Steam user rating of Mostly Positive. The failure to meet expectations resulted in developer Pearl Abyss' shares falling almost 29%.
The company declined to comment on the total scope of the layoffs, though some estimates suggest they could affect as many as 20,000 to 30,000 workers. Oracle employed about 162,000 people worldwide as of the end of May.
The Gigabyte X870E Aorus Xtreme AI Top delivers strong performance, premium features, and a slick appearance, but high pricing and newer refresh boards, like the X3D version, make it a tough sell unless you snag a refurb deal.
DiagramDeck is a cloud-based diagramming platform that hosts and manages draw.io for your team. Import and export .drawio files, edit together in real time with comments and live cursors, and use AWS, GCP, and Azure shape libraries to design cloud architectures, flowcharts, UML, ER, and network diagrams.
It removes self-hosting overhead with managed uptime, backups, and security, and adds team management, SSO, and compliance such as SOC 2 and GDPR. Use it as a modern alternative to Lucidchart and Visio while keeping the draw.io ecosystem.
SupaSailing is the first operational ERP platform built for nautical professionals including fleet managers, charter companies, brokers, and marinas. These businesses previously used spreadsheets, disconnected tools, and email threads since no integrated system existed for this industry.
Six modules cover crew and fleet management, charter enquiries, brokerage CRM, berth management, refit projects, and ISM compliance. All modules are connected with no duplicate data, providing full operational visibility across the business.
It's easier than ever to get distracted while working on your computer. A quick email check, a Slack ping, one ChatGPT question, and boom, 30 minutes gone. "What was I supposed to be doing?"
Most focus tools either block apps you need or disappear when you switch tabs or apps. Neither works. You need an anchor, not a blocker. Focana keeps one task and a timer always visible on your screen, delivers gentle visual nudges and check-ins to keep you locked in, captures stray thoughts in the Parking Lot so they don't derail your session, and allows you to leave notes for context to pick up where you left off. All with no accounts, no sync, and no cloud. Just a calm companion for busy brains.
Get ready for YouTube Brandcast 2026 at Lincoln Center. Host Trevor Noah joins CEO Neal Mohan and a powerhouse lineup of creators to demonstrate why YouTube is the futur…
There is a character that keeps appearing in enterprise security departments, and most CISOs know exactly who that is. It doesn't build. It doesn't enable. Its entire function is to say "No."
No to ChatGPT.
No to DeepSeek.
No to the file-sharing tool the product team swears by.
For years, this looked like security. But in 2026, "Doctor No" is no longer just a management headache…
A multi-pronged phishing campaign is targeting Spanish-speaking users in organizations across Latin America and Europe to deliver Windows banking trojans like Casbaneiro (aka Metamorfo) via another malware called Horabot.
The activity has been attributed to a Brazilian cybercrime threat actor tracked as Augmented Marauder and Water Saci. The e-crime group was first documented by Trend Micro in
Microsoft is calling attention to a new campaign that has leveraged WhatsApp messages to distribute malicious Visual Basic Script (VBS) files.
The activity, beginning in late February 2026, leverages these scripts to initiate a multi-stage infection chain for establishing persistence and enabling remote access. It's currently not known what lures the threat actors use to trick users into
Google on Thursday released security updates for its Chrome web browser to address 21 vulnerabilities, including a zero-day flaw that it said has been exploited in the wild.
The high-severity vulnerability, CVE-2026-5281 (CVSS score: N/A), concerns a use-after-free bug in Dawn, an open-source and cross-platform implementation of the WebGPU standard.
"Use-after-free in Dawn in Google Chrome prior
Use this code to save $115 on a G.SKILL Flare X5 Series DDR5-6000 32GB kit right now; at under $380, that's a huge deal in today's volatile memory market.
MSI GPU Safeguard+ finally allows me to trust 12V-2×6 GPUs. If I'm honest, I'm not a fan of 12V-2×6 or 12VHPWR. There have been far too many reports of burnt graphics cards or melted power connectors to ignore. If I were to spend many hundreds, or perhaps thousands, on a new graphics card, I want […]
The Raspberry Pi has always been a poster child for cheap computing, but thanks to a third price rise in recent months, it's no longer the default best choice.
Microsoft has unveiled Xbox PC Remote Tools, a new suite designed to streamline deployment, testing, and debugging for Windows game developers across remote devices.
NVIDIA and Marvell Technology, Inc. (NASDAQ: MRVL) today announced a strategic partnership to connect Marvell to the NVIDIA AI factory and AI-RAN ecosystem through NVIDIA NVLink Fusion offering customers building on NVIDIA architectures greater choice and flexibility in developing next-generation infrastructure. The companies will also collaborate on silicon photonics technology.
In addition, NVIDIA has invested $2 billion in Marvell.
Starting Tuesday, Gmail account holders in the US can change the usernames that appear on their email addresses without losing the address itself. The change affects accounts used to log into mail, photos, Google Drive, and other services.
In recent years, Denuvo has managed to fight widespread PC piracy thanks to its hard-to-crack anti-tamper technology. However, Denuvo's aura of invincibility has recently melted away like snow in the sun. A new virtualization-based method is apparently good enough to crack even the latest triple-A game releases, although the cracks...
Barry Adams recently published "Google Zero is a Lie" in his SEO for Google News newsletter, arguing that the narrative of Google traffic disappearing is false and dangerous.
His data backs it up. Similarweb and Graphite data show only a 2.5% decline in Google traffic to top websites globally. Google still accounts for nearly 20% of all web visits.
The widely cited Chartbeat figure showing a 33% decline? It's skewed by a handful of large publishers hit by algorithm updates. Publishers who abandon SEO in the face of this panic are making a self-fulfilling prophecy, ceding traffic to competitors who keep optimizing.
He's right. And he's looking at the wrong problem.
Humans are still clicking Google results. What has changed is that a growing share of your visitors isn't human at all.
Bot traffic includes everything from scrapers to brute-force login bots. But the fastest-growing segment is AI crawlers.
AI crawlers now represent 51.69% of all crawler traffic, surpassing traditional search engine crawlers at 34.46%, Cloudflareβs 2025 Year in Review found. AI bot crawling grew more than 15x year over year. Cloudflare observed roughly 50 billion AI crawler requests per day by late 2025.
Akamai's data tells a similar story: AI bot activity surged 300% over the past year, with OpenAI alone accounting for 42.4% of all AI bot requests.
So while Adams is correct that human Google traffic hasnβt collapsed, something else is happening on the other side of the server logs.
Anthropic's ClaudeBot crawls 23,951 pages for every single referral it sends back to a website. OpenAI's GPTBot: 1,276 to 1. Training now drives nearly 80% of all AI bot activity, up from 72% the year before.
Compare that to traditional Googlebot, which has always operated on a crawl-and-send-traffic-back model. Google crawls your site, indexes it, and sends 831x more visitors than AI systems. The deal was simple: let me read your content, and Iβll send you people who want it.
Google's newer AI Mode is worse. Semrush data shows a 93% zero-click rate in those sessions. AI Overviews now trigger on roughly 25-48% of U.S. searches, depending on the dataset, and that number keeps climbing.
And when Google's AI features do cite sources, they're increasingly citing themselves. Google.com is the No. 1 cited source in 19 of 20 niches, accounting for 17.42% of all citations, an SE Ranking study of over 1.3 million AI Mode citations found. That tripled from 5.7% in June 2025. Add YouTube and other Google properties, and they make up roughly 20% of all AI Mode sources.
So the old deal is being rewritten even by Google. AI crawlers from other companies skip the pretense entirely: let me read your content so I can answer questions about it without ever sending anyone your way.
The agentic shift
The bot traffic numbers are already here. The next wave is bigger: AI agents acting on behalf of humans.
In 2024, Gartner predicted that traditional search engine traffic would drop 25% by 2026 as AI chatbots and agents handle queries. That prediction is tracking. Its October 2025 strategic predictions go further: 90% of B2B buying will be AI-agent intermediated by 2028, pushing over $15 trillion in B2B spend through AI agent exchanges.
This isn't theoretical.
Salesforce reported that AI agents influenced 20% of all global orders during Cyber Week 2025, driving $67 billion in sales.
Retailers with AI agents saw 13% sales growth compared to 2% for those without.
Gartner says 40% of enterprise applications will have task-specific AI agents by the end of 2026, up from less than 5% in 2025. eMarketer projects AI platforms will drive $20.9 billion in retail spending in 2026, nearly 4x 2025 figures.
Think about what that looks like in practice. An AI agent researches vendors for a procurement team. It doesn't see your hero banner. It doesn't notice your trust badges. It reads your structured data, compares your specs to those of three competitors, and builds a shortlist.
That "visit" might show up in your analytics as a bot hit with a zero-second session duration. Or it might not show up at all.
So what do you optimize for when the visitor is a machine making decisions for a human?
It's not the same as traditional SEO. And it's not the same as the AI Overviews optimization most people are focused on right now. AI Overviews are still Google. Still one search engine, still largely the same ranking infrastructure, still (mostly) one answer format.
Agentic SEO is about being useful to software that's pulling from search APIs, crawling directly, and using LLM reasoning to make recommendations. That software doesn't care about your page layout. It cares about whether it can extract what it needs.
I think a few things start to matter a lot more.
Structured data becomes load-bearing
Schema markup has always been a "nice to have" for rich snippets. When an AI agent compares your product to three competitors, structured data lets it read your specs without having to guess. Think product schema, FAQ schema, and pricing tables in clean HTML. These go from SEO hygiene to core infrastructure.
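To make this concrete, the kind of markup an agent can parse without guessing is JSON-LD using schema.org types. A minimal sketch follows; every product name, price, and value here is illustrative, not taken from any real page:

```python
import json

# Hypothetical schema.org Product markup -- all names and values below are
# illustrative placeholders, not real product data.
product = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Acme CRM Starter",
    "description": "CRM for small teams with QuickBooks integration.",
    "offers": {
        "@type": "Offer",
        "price": "49.00",
        "priceCurrency": "USD",
        "availability": "https://schema.org/InStock",
    },
}

# Serialize to the JSON-LD <script> block a crawler reads from the page <head>.
snippet = (
    '<script type="application/ld+json">\n'
    + json.dumps(product, indent=2)
    + "\n</script>"
)
print(snippet)
```

An agent comparing vendors can pull price and availability straight from a block like this instead of scraping your layout, which is the whole point of treating markup as infrastructure.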
AI agents don't search for "best CRM for small business." They ask compound questions: "Which CRM under $50/user/month integrates with QuickBooks and has a mobile app with offline capability?" If your content only answers the first version, you're invisible to the second.
Freshness and accuracy get audited differently
A human might not notice your pricing page is 8 months stale. An AI agent cross-referencing your pricing against competitors will flag the discrepancy. Or worse, use the outdated number in its recommendation and cost you the deal.
Blocking AI crawlers feels protective, but it means AI agents can't recommend you. Allowing them means your content trains models that may never send you traffic. There's no clean answer.
But pretending it's just a technical setting is a mistake. New IETF standards are emerging to give publishers more granular control, but they're not widely adopted yet.
Most analytics setups can't tell the difference between a human visit, a bot crawl, and an AI agent evaluating your site on someone's behalf. GA4 filters most bot traffic. Server logs show the raw picture, but take work to parse. Even then, figuring out whether an AI agent's visit led to an actual sale is basically impossible right now.
This is where the "Google Zero" framing does real damage.
If you're only measuring organic sessions from Google, you're blind to a channel that doesn't show up in that number. Your traffic could look stable while an AI agent steers $50,000 in annual spend to your competitor because their product schema was more complete.
I don't think we have good measurement for this yet. Nobody does. But ignoring the problem because Google sessions look fine is like checking your print ad response rate in 2005 and deciding the web wasn't worth paying attention to.
I don't have a playbook for this. It's too new. But I can tell you what we're doing at our agency.
Audit your structured data like it's your storefront: Check that your site's schema is present and well-formed, covering structured data, content structure, and technical health. Make sure product, service, FAQ, and organization markup is complete, accurate, and current. This is table stakes.
Answer compound questions: Look at your top landing pages. Do they answer the specific, multi-variable questions an AI agent would ask? Or just the broad keyword query a human would type?
Check your server logs: Look for GPTBot, ClaudeBot, PerplexityBot, and other AI user agents. Understand how much of your traffic is already non-human. If you're on Cloudflare, their bot analytics dashboard makes this easy without parsing raw logs. You'll probably be surprised either way.
Make a conscious robots.txt decision: Understand the trade-offs, and make it a business decision with your leadership team.
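For context, a blanket opt-out amounts to a handful of robots.txt records, one per crawler token. A sketch follows; the user-agent tokens are the ones the vendors publish, but confirm them before shipping, and remember robots.txt is a request, not an enforcement mechanism:

```text
# Disallow AI training/answer crawlers site-wide (tokens per vendor docs)
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: PerplexityBot
Disallow: /

# Traditional search crawling stays open
User-agent: Googlebot
Allow: /
```

Whether to ship something like this is exactly the leadership decision described above: it trades AI-agent visibility for control over how your content is used.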
Start tracking AI citations: Tools like Semrush, Scrunch, DataForSEO, and others can show when AI platforms mention your brand. The data is directional, not precise. But it's better than nothing.
Don't abandon Google SEO: Adams is right that Google traffic is still massive and still valuable. The agentic web doesn't replace Google. It adds a new layer. You need both.
The real question
The "Google Zero" argument pits one extreme against another, even as the actual shift is quieter and more important.
The web is becoming a place where the majority of visitors are machines. Some send traffic back. Most don't. Some of them make purchasing decisions on behalf of humans. That number is growing fast.
The SEOs who do well here won't be the ones arguing about whether Google traffic moved 2.5%. They'll be the ones who figured out how to be useful to both human visitors and the AI agents acting on their behalf.
We've spent 25 years optimizing for how humans find things. Now we need to figure out how machines find things for humans.
That's not Google Zero. We don't have a name for it yet. But it's already here.
If you want to go deeper on GEO and agentic SEO, I'm teaching an SMX Master Class on Generative Engine Optimization on April 14. It covers structured data implementation, AI visibility measurement, content optimization for AI systems, and the practical side of everything in this article.
For years, cybersecurity has followed a familiar model: block malware, stop the attack. Now, attackers are moving on to what's next.
Threat actors now use malware less frequently in favor of what's already inside your environment, including abusing trusted tools, native binaries, and legitimate admin utilities to move laterally, escalate privileges, and persist without raising alarms. Most
Nvidia aims to tackle shader stutter with its Auto Shader Compilation Beta. Nvidia is taking action against shader compilation stutter: with its new Auto Shader Compilation (ASC) feature, Nvidia is giving gamers the option to rebuild game shaders outside of runtime to deliver a smoother gaming experience. When your PC is idling, it can be […]
Fallout Season 2 has reached 83 million viewers within its first 13 weeks, making it one of Prime Video's biggest returning shows and highlighting strong demand for the franchise.
One could be forgiven for thinking it's an April Fools' joke, but alas, Raspberry Pi has announced yet another price hike due to the increasing costs of DRAM. Its CEO, Eben Upton, announced in a blog post that the company has seen a seven-fold increase in the cost of LPDDR4 DRAM, which is used in both the Raspberry Pi 4 and 5. All 4 GB and up SKUs of the aforementioned products will see a price hike that ranges from US$25 to US$100. Other products will also see an increase in price, and you can find all the price bumps in the table below.
At the same time, the company is launching a new 3 GB SKU of the Raspberry Pi 4, which will launch at US$83.75. The new SKU is available today from all authorised resellers globally. The price increases mean that the 16 GB SKU of the Raspberry Pi 5 now comes in at US$305, which is more than what a lot of mini PCs cost six months ago. The 16 GB Raspberry Pi 500+ keyboard computer comes in at a whopping US$410, suggesting that some products are unlikely to sell, as they've simply become uncompetitive. The only good news today is that the 1 and 2 GB SKUs of the Raspberry Pi 4 and 5 won't see any price hikes this time around, alongside the 4 GB SKU of the Raspberry Pi 400. Raspberry Pi is promising to lower its pricing as soon as the cost of DRAM goes down at some point in the future.
Days after Iran warned that offices and infrastructure belonging to US companies involved in military technology in the Middle East would be targeted, the IRGC updated its threat on Telegram.
Two software researchers recently demonstrated how modern AI tools can reproduce entire open-source projects, creating proprietary versions that appear both functional and legally distinct. The partly satirical demonstration shows how quickly artificial intelligence can blur long-standing boundaries between coding innovation, copyright law, and the open-source principles that underpin much of the…
Oracle has reportedly cut over 10,000 positions, according to reports from various employees. The move comes after the company spent billions of dollars on AI infrastructure, with some saying that it will be in the red until 2030 after all this spending.
Jonathan Spalletta is charged with computer fraud and money laundering for stealing more than $53.3 million worth of cryptocurrency from Uranium Finance, resulting in its shutdown.
Gen AI gives us productivity superpowers, but the risk is mental fatigue. Ascenda helps you track how your mind is performing day to day. With a quick daily check-in, it shows patterns in clarity, energy, mood, recovery, focus, and decision load so you can protect your best work.
Built with input from a psychologist, neuroscientist, and engineer-founder with lived experience, Ascenda is a Whoop-like layer for the mind: signals, patterns, and early awareness before stress leads to poor decisions, lost focus, or burnout.
LinkedIn is one of the most powerful platforms for recruiting top-tier talent. It's also one of the easiest places to waste budget if campaigns aren't structured correctly.
Many recruitment campaigns fail because they prioritize visibility over intent. More impressions don't equal better hires. Broad targeting and generic messaging often lead to an influx of unqualified applicants, driving up cost-per-hire and slowing down hiring timelines.
The most effective LinkedIn recruitment strategies focus on one thing: attracting and converting high-intent candidates while filtering out poor-fit applicants before they ever click. Let's break down exactly how to do that.
Shift your strategy: Optimize for intent vs. reach
The biggest mistake advertisers make on LinkedIn is targeting based solely on job titles, industries, and years of experience.
While this may generate volume, it rarely produces efficiency. Instead, high-performing campaigns are built around intent-based targeting — reaching candidates who are qualified and more likely to consider a new opportunity.
This requires a layered approach:
Core fit: Job titles, skills, and certifications.
Behavioral signals: Open-to-work status, group memberships, and engagement with industry content.
Career friction indicators: Burnout-prone roles, companies experiencing layoffs, and limited growth environments.
By combining these layers, you move beyond "who they are" and begin targeting why they might be ready to make a change — which is where real performance gains happen.
Your ad creative isn't just there to attract attention. It should actively filter your audience. One of the most effective ways to control cost-per-hire is to discourage unqualified candidates from clicking in the first place.
Strong recruitment ads follow a structured approach:
Call out a specific pain point or identity: "Burned out from long shifts in healthcare?"
Clearly define who the role is for: "This role is designed for licensed RNs with 3+ years of experience."
Highlight meaningful value: Think flexibility, compensation, career growth, or mission.
Set expectations upfront: "Not an entry-level position" or "Requires managing enterprise accounts."
This combination of attraction and exclusion ensures that the candidates who do click on your ads are far more likely to convert.
Messaging: Career upgrades, better lifestyle, growth opportunities.
Outcome: Scalable pipeline of qualified candidates.
Cold passive talent (top funnel)
These are long-term prospects for building your pipeline, with the intent of moving them to the middle of the funnel and eventually the bottom.
Target: Broader audiences and lookalikes.
Messaging: Employer brand, culture, "day in the life."
Outcome: Reduces future acquisition costs over time.
Control costs through smarter bidding and optimization
LinkedIn's ad platform can quickly become expensive without proper controls. Start with manual CPC bidding to maintain control, then test automated delivery once performance data is established.
More importantly, optimize for the right metrics. Focus on qualified applications instead of clicks. Track downstream actions, such as interview and hire rates.
Be prepared to make fast decisions. Ads with high click-through rates but low application rates often indicate poor alignment. Ads that generate many applications but few interviews signal weak pre-qualification.
Efficiency comes from eliminating wasted spend early rather than late: it conserves ad budget, minimizes overlapping audiences, and avoids hitting the wrong targets.
Improve conversion rates with a two-step application process
A common but costly mistake is sending candidates directly to long, complex application forms. Instead, use a two-step funnel:
1. Pre-qualification landing page:
Role overview and expectations.
Compensation transparency.
Clear "who this is (and isn't) for."
2. Application:
Short form or LinkedIn Easy Apply.
This approach sets expectations, filters candidates, and significantly improves application quality — often reducing cost-per-hire by 30-50%.
Use retargeting to capture missed opportunities
Not every qualified candidate applies on the first interaction. Retargeting allows you to re-engage high-intent users who have already shown interest.
Build audiences from:
Career page visitors.
Job post viewers.
Video viewers (50%+ engagement).
Then serve follow-up messaging such as:
"Still considering a move?"
"Last chance to apply"
Employee testimonials or success stories.
Retargeting campaigns are often the most cost-efficient part of your entire strategy.
Advanced strategies to increase ROI
Once the fundamentals are in place, there are several advanced tactics that can further improve performance:
Competitor targeting: Target employees at competing companies and position your opportunity as a clear upgrade — whether through compensation, flexibility, or culture.
Skill-based campaign segmentation: Instead of grouping all candidates together, build campaigns around specific skills or certifications. This reduces competition in the ad auction and often lowers cost-per-click.
Selective use of Message Ads: Message ads can be effective for senior or hard-to-fill roles — but only when targeting is highly refined. Otherwise, they can quickly become cost-prohibitive.
Here's an example of a successful LinkedIn InMail message that recently drove over 70% high-intent applications for an HVAC sales client:
Message body:
Hi [First Name],
This might be a stretch — but your background in HVAC sales caught my attention.
We're hiring experienced sales reps who are tired of unpredictable commissions and weekend-heavy schedules.
This role is built for reps who:
Have 3+ years in HVAC or home services sales
Are comfortable running in-home consultations
Want a more stable, high-earning structure
What's different:
No weekend appointments
Pre-qualified, inbound leads (no cold knocking)
Six-figure earning potential with consistency
That said, this isn't a fit for entry-level reps or those new to sales.
If you'd be open to a quick 10-minute conversation to see if it's worth exploring, I'm happy to share more.
If not, no worries at all — appreciate you taking a look.
— [Name]
Stating upfront the need for "experienced sales reps" immediately establishes relevance and increases response rates while reducing irrelevant replies.
Focusing on what matters to potential candidates, such as no weekend appointments and compensation structure, speaks to the audience's needs versus the company's.
Closing the conversation with the reminder that this isn't an entry-level position weeds out wasted conversations and reduces cost-per-hire.
The most effective LinkedIn recruitment campaigns rely on better strategy.
When you focus on intent-based targeting, pre-qualification within ad creative, funnel segmentation, and conversion optimization, you create a system that attracts the right candidates while minimizing wasted spend.
Ultimately, reducing cost-per-hire is about reaching the right people, at the right time, with the right message.
Jo Nesbo's Detective Hole has taken Netflix by storm this week — but there's a bigger issue with the new Netflix crime drama than just being 'relentlessly grim'.
OpenClaw can browse the web, run shell commands, and send emails on your behalf, but it comes with documented security risks that every user should understand before deploying it.
From multi-agent dev pipelines to smart home controllers and overnight trading bots, here are 10 of the most interesting OpenClaw community builds right now.
Microsoft's latest hire is bringing OpenClaw and personal AI agents to Microsoft 365, but it also raises questions about the company's commitment to reducing AI integrations across its tech stack.
Late last week, Microsoft released its KB5079391 non-security feature update for Windows 11, which was officially pulled due to widespread installation errors. Today, the company is issuing the out-of-band KB5086672 update to address this problem, as Microsoft has identified the source of the issue and the update can now be safely applied. This latest out-of-band KB5086672 update includes the KB5079473 package released on March 10, KB5085516 released on March 21, and the previously pulled KB5079391 released on March 26. Microsoft has combined all of these into the new KB5086672 package, which addresses the issues that appeared and introduces a variety of new features. Finally, the old installation error message, "Some update files are missing or have problems. We'll try to download the update again later. Error code: (0x80073712)," has been resolved for good.
Microsoft notes that this out-of-band update is available through Windows Update for devices running Windows 11 that have already installed KB5079473 or a later update. It is also available for manual download from the Microsoft Update Catalog. Currently, there are no known issues with this update, and if any arise, Microsoft will highlight them on their support documents website. Interestingly, KB5086672 is one of the first steps by Microsoft toward resolving the issues users have experienced with Windows 11 updates, and hopefully just the beginning of the overhaul that Microsoft has promised. Future non-security feature updates could also focus on other quality-of-life improvements, and installation errors should become less common.
Napster made digital music feel limitless for the first time, then vanished in lawsuits, rebrands, and sales. Its name faded, but its ideas still shape how the world listens.
New research from Google suggests that future quantum computers will develop quickly enough to pose a risk to elliptic-curve cryptography, used in cryptocurrencies like Bitcoin, as soon as 2029, and its researchers say action should be taken now to prepare.
FlowCastle is a visual platform for building AI-powered chatbots without code. Use a drag-and-drop editor to design flows, then extend with TypeScript actions, HTTP requests, and integrations like Google Sheets. Launch on Telegram and reuse logic across brands with white-labeling. Accept payments, manage catalogs, and track orders inside the bot. Hand off to humans with live chat, run smart broadcasts, and monitor funnels with goal-based analytics. An AI copilot helps generate flows, write copy, and optimize automations.
HankRing helps you find the best versions of the specific dishes and drinks you crave. Choose your Hanks, see verified, likely, and potential spots on a map, and rate the dishβnever the venueβto build consensus for the community. Browse Top 50 and trending categories, add missing spots, and keep a private journal of every rating and verification. With thousands of curated places preloaded, you can discover great food from day one and plan where to hunt next.
Narratex is a writing workspace for fiction authors that unifies your Story Blueprint, a full editor, and an AI collaborator that remembers context across sessions. It keeps track of your characters, plot threads, settings, and themes so you never need to re-explain your magic system or paste cast lists again.
Start by importing your existing work and building your blueprint, then write with an assistant that's already read everything you've created, keeping you consistent and focused chapter to chapter.
Sometimes, that long-rumored Apple product stays a rumor and never graces the real world. Here are our favorite near-mythical gadgets from the company's 50-year history.
ASUS today announced the UGen300 USB AI Accelerator — the first AI USB device from ASUS, bringing inference performance directly to any device. An M.2 version is also available. This slim AI accelerator measures 105 x 50 x 18 mm and features the Hailo-10H AI processor, which delivers 40 AI TOPS of dedicated power to run models such as LLMs, VLMs, and more. UGen300 includes 8 GB of LPDDR4 dedicated memory and connects to other devices via a USB-C interface, consuming just 2.5 watts of power under typical workloads. The convenient plug-and-play design ensures cross-platform compatibility with Windows, Linux, and Android. UGen300 also supports major AI frameworks like TensorFlow, PyTorch, and ONNX — right out of the box.
"By integrating the Hailo-10H into a ubiquitous USB device, ASUS brings the full power of AI and generative AI to everyone" said Max Glover, Chief Revenue Officer of Hailo. "We're excited to see how our developer community will use this plug-and-play accelerator to push the boundaries of on-device AI. This is exactly how Hailo envisions the future of AI: accessible, affordable, and designed for anyone to build with."
Continued investment in AI infrastructure by major CSPs, including purchases of GPUs and deployment of in-house ASICs, has driven strong growth among AI-related chip designers, according to TrendForce's latest findings. In 2025, the total revenue of the top 10 fabless IC design houses exceeded US$359.4 billion, up 44% YoY. NVIDIA maintained its leading position, while Broadcom moved up to second place due to increased involvement in AI, overtaking Qualcomm, which continues to depend more heavily on consumer electronics.
Industry leader NVIDIA delivered another year of record revenue, supported by its strong AI chip portfolio and computing ecosystem. The company's fourth quarter revenue from data centers accounted for as much as 90% of its total. Full-year revenue rose 65% YoY to $205.7 billion — the fastest growth among the top players — with its share of total top-ten revenue increasing further to 57%.
Google has formally attributed the supply chain compromise of the popular Axios npm package to a financially motivated North Korean threat activity cluster tracked as UNC1069.
"We have attributed the attack to a suspected North Korean threat actor we track as UNC1069," John Hultquist, chief analyst at Google Threat Intelligence Group (GTIG), told The Hacker News in a statement.
"North Korean
Anthropic on Tuesday confirmed that internal code for its popular artificial intelligence (AI) coding assistant, Claude Code, had been inadvertently released due to a human error.
"No sensitive customer data or credentials were involved or exposed," an Anthropic spokesperson said in a statement shared with CNBC News. "This was a release packaging issue caused by human error, not a security
All the ways to watch 2026 Dwars door Vlaanderen live streams online and from anywhere, as the World Tour peloton once more hits the unforgiving and unpredictable roads of Flanders.
Pine doesn't just draft or organize — it emails, calls, researches, plans, follows up, and persists until the job is done. For companies, Pine acts as an execution arm across CEOs, operations, finance, sales, marketing, and executive assistants — closing open loops, renegotiating contracts, chasing invoices, coordinating vendors, and unblocking stalled deals. For individuals who value time more than money, Pine handles life's friction — negotiating bills, canceling subscriptions, filing claims, and waiting on hold. Pine turns decisions into outcomes — autonomously, persistently, and without expanding headcount.
Claras lets you get instant transcripts and chat with any YouTube video using AI. It analyzes full videos to answer questions, generate summaries, and build a clickable table of contents so you can jump to key moments with confidence. You can highlight insights, save notes, and export to TXT or PDF. Use transcripts to power ChatGPT, Claude, or custom agents, and collaborate with teammates.
OpenClaw Hosting is a managed cloud platform for running OpenClaw, the open-source autonomous AI agent, 24/7 without dealing with servers or Docker. It supports any OpenAI-compatible model, including Claude, GPT, Gemini, and local models via Ollama, and includes free access to Kimi K2.5. Connect your agent to Telegram, WhatsApp, Slack, Discord, Signal, or iMessage, and keep data private with isolated containers and local-first storage. The platform handles updates, monitoring, and scaling so your agent stays online and productive.
From carefully crafted marketing lines to terms the company would probably rather forget, these phrases capture the impact, influence and missteps of Apple over the last 50 years.
Direct-to-Cell (D2C) has the technical power to be the solution to today's internet shutdowns. Digital rights groups, Access Now and WITNESS, are now calling on software developers and lawmakers to ensure this happens.
The all-stock transaction, which AMD described as a "once-in-a-generation opportunity to unify x86 innovation," would combine the two companies under a single umbrella just a few years after such an outcome would have sounded ridiculous.
Grails provides domain intelligence to help VCs, founders, and operators evaluate company domains, discover naming opportunities, and connect with owners. Use domain health audits, industry and funding-stage benchmarks, valuations, risk scoring, and curated lists to spot gaps and acquisition targets. Post a domain request and get responses from owners, or browse available strategic names and work with verified brokers to move fast and avoid costly mistakes.
WhatNext is an AI-powered planner that instantly builds complete itineraries for date nights, friend hangouts, day trips, and weekend adventures using real places, venues, and live events near you. Enter your location and vibe, and it assembles dinner, activities, dessert, and drinks with Google Maps links.
Customize budget and preferences, regenerate alternatives, save favorites, and use it across 50+ US cities — free to start.
Accomplish It helps you capture, organize, and showcase your career accomplishments. Connect work sources like GitHub and Jira or reply to periodic prompts, and its AI records, categorizes, and turns results into resume-ready statements. Build a living resume to share a timeline, export polished resumes and career artifacts, and benchmark progress by role to stay ready for reviews and new opportunities.
Instagram had been using the proprietary ratings scales without permission and will now include a disclaimer saying it "didn't work with the MPA."
A new survey from the app and research platform Suzy found that Snapchatters also use their content to influence travel decisions in their social circles.
The latest artificial intelligence-enhanced designs support all-day wear, allowing users to capture and record images and video in virtually any situation.
With ice levels at an all-time low, sealife photographer Justin Hofman ventured into Antarctica's south earlier than ever and had an encounter with a rare Ross seal.
Return of the King — Multi-GPU PC gaming is ready for a comeback. During the company's DLSS 5 reveal, Nvidia teased something massive. When demoing its next-generation DLSS features, Nvidia was running multi-GPU systems. While Nvidia confirmed that DLSS 5 will be usable on single-GPU systems later this year, this demo highlighted something bigger: the […]
If you've been following the Stop Killing Games movement, you'll know that Ubisoft shutting down The Crew, a fairly modern video game by most standards, having launched in 2014, has ruffled some feathers. Now, as reported by Reuters, Ubisoft has been taken to court by the French consumer action group UFC-Que Choisir, which argues that the contractual practices Ubisoft engages in when it sells games may be abusive and deny consumers their rights.
Ubisoft, as is the case with many gaming companies, argues that players buy limited licenses to play the games they pay for—not an actual product—and that the license can be revoked at any time. With lawsuits like the one brought against Ubisoft, UFC-Que Choisir intends to put an end to these "harmful practices," remove the relevant clauses from sales contracts, and make Ubisoft recognize the harm done to the collective interests of consumers.
Pragmata was originally announced in 2020, with the release date originally slated for 2022. However, the game went through multiple iterations during that time and ended up being pushed back to 2026. Pragmata was Capcom's first new franchise since the launch of Dragon's Dogma in 2012, and it seems to be attempting to implement an interesting combination of third-person shooter combat and hacking mechanics, alongside sci-fi, narrative- and exploration-driven core gameplay.
ZA/UM, the indie game studio famous for the avant-garde Disco Elysium, has officially announced that its next game, Zero Parades: For Dead Spies, will launch on May 21, 2026 on Steam, the Epic Games Store, and GOG, with a PS5 release planned for later in 2026. The announcement was made alongside the release of an appropriately eerie release date trailer.
Zero Parades: For Dead Spies is a story-rich indie spy thriller RPG that follows a renowned spy, Hershel Wilk, on one last mission. According to the game's Steam Store page and the published imagery, it will have a customizable skill tree, a strong narrative in which choices matter, and a decent bit of tactical gameplay, all wrapped in a surrealist aesthetic.
Agents can generate outdated Gemini API code because their training data has a cutoff date. We built two complementary tools to fix this. The Gemini API Docs MCP (https:/…
There were rumors of a new The Lord of the Rings game in development late in 2025, but not much else was known about it other than that it had a sizeable budget of around $100 million and was to compete with Hogwarts Legacy when it came to game design and mechanics. Now, Insider Gaming has reported that the new The Lord of the Rings game is being developed by Crystal Dynamics, not Warhorse Studios, although the report claims that there may be another LOTR game in development at Warhorse.
The game said to be in development at Crystal Dynamics is a third-person action RPG that was claimed to be funded by the Abu Dhabi Investment Office, and it has already been in development for a while. Neither Crystal Dynamics nor Embracer Group have confirmed that the game is in development, but it may be welcome news to The Lord of the Rings fans who were looking forward to the Lord of the Rings MMO that Amazon recently cancelled.
One key addition is support for rendering inline graphics such as Sixel images, allowing advanced command-line tools like the Windows Package Manager (WinGet) to display app icons and other visuals directly in the console.
Sources newsletter author Alex Heath recently said he "knows for a fact" that some senior Disney executives are waiting for the right moment to buy Fortnite and Unreal Engine developer Epic Games. However, others within the company disagree.
dubltap.io is an ecosystem of 8 single-purpose AI web apps. Each one solves one problem well. Market Maven offers competitive intelligence. Bad Mutha Forker transforms recipes. CLIFF NOTEZ analyzes documents. There are 5 more tools for sales, design, music, side hustles, and cognitive enhancement. All are free to try.
Painkiller Ideas helps founders find ideas worth building and validate them fast. It scrapes Reddit, Hacker News, GitHub, and Product Hunt for real complaints, then uses AI to score pain intensity, market size, and competition. Submit any concept to get market sizing, competitor analysis, ideal customer profiles, pricing strategy, and a prioritized validation roadmap. Access playbooks, prompts, landing page wireframes, and brand assets, and join a community of builders to source problems and compare notes.
Delta Air Lines is bringing faster, satellite-powered Wi-Fi to its planes via Amazon's Leo network — but with the rollout not starting until 2028, United Airlines remains the better bet for high-speed connectivity right now.
Storage prices keep rising, and it's getting ever more difficult to find affordable SSDs for PCs and laptops. That changes with this 1TB Kingston NV3 SSD, nearly 50% off for one day only.
Amazon is hosting a special 25% discount for the vaunted WD_Black SN770M SSD in honor of World Backup Day, allowing portable gamers to upgrade their handheld console's storage size for less.
Before its global rollout, Lenovo first launched the Yoga Mini i mini PC in China, a few months after its introduction at CES 2026. The Yoga Mini i is built around the Intel Panther Lake platform, with configurations listed up to a Core Ultra X7 385H processor at a 45 W TDP. Graphics are handled by integrated Intel Arc B-series GPUs, with the top configuration reaching Arc B390. The system also includes an NPU rated at up to 50 TOPS, aligning it with Microsoft Copilot+ PC requirements. Memory goes up to 32 GB of LPDDR5x, paired with up to 2 TB of PCIe Gen 4 storage. Despite its compact footprint, measuring 130 x 130 x 48.5 mm and weighing around 600 g, the mini PC offers a relatively complete I/O setup. This includes multiple USB-C ports with Thunderbolt 4 and DisplayPort support, HDMI 2.1, USB-A, and 2.5 GbE, Wi-Fi 7 and Bluetooth 6. The system integrates basic audio hardware with a 2 W built-in speaker and dual microphones. Security features include a fingerprint reader built into the power button, Human Presence Detection, and Walk Away Lock. Power comes from a 100 W adapter.
At this moment, Lenovo is only offering a lower-tier configuration in China equipped with an Intel Core Ultra 5 325 processor with 16 GB of RAM and a 512 GB SSD. This model is priced at CNY 5,499 (around $800), indicating that earlier references to a $699 starting price will likely apply to similar entry-level SKUs rather than higher-end Core Ultra X7 variants. Lenovo still lists the Yoga Mini i as "coming soon" in other regions, with a broader rollout expected later this year, possibly before this year's Computex, which runs June 2-5.
Lyn Career is a career intelligence platform that turns your job search into a strategic plan. It lets you track every application in one dashboard and extract job details from URLs, screenshots, or PDFs. You get match scores, skill gap insights, and rejection pattern analysis. It offers CV intelligence with actionable rewrites, role-specific interview prep, offer comparisons, and smart follow-up reminders with ghost detection. A built-in kanban, calendar sync, and contact CRM help manage pipelines and relationships clearly.
Valyris helps founders find and fix weak points in a campaign or investor pitch before high-stakes reviews. It tests narrative clarity, proof strength, internal consistency, timing and exposure, ask/raise logic, and delivery credibility to reveal blind spots and rank priorities.
Start with a free 8-question check, then upgrade to an Audit or Deep Audit for a fast PDF diagnosis with key fragilities, likely objections and responses, contradiction mapping, evidence scoring, and a concrete fix plan. It's designed for Kickstarter, Indiegogo, Seedrs, Crowdcube, Y Combinator, Techstars, and direct investor outreach.
Amazon's Big Spring Sale ends today, so I've rounded up my 13 favorite TV deals that I'd buy with my own money, including record-low prices on 4K, QLED, and OLED TVs.
Less than a year after launching, with checks from some of the biggest names in Silicon Valley, crowdsourced AI model feedback startup Yupp is closing its business, the company said Tuesday.
Intel's "Wildcat Lake" processors, part of the Core 300 series non-Ultra family, have been leaked by a reputable source Jaykihn0 on X, revealing the entire lineup across various configurations and SKUs. The lineup includes six SKUs across the Core 3, Core 5, and Core 7 tiers, all designed to operate within a 15 to 35 W TDP range. Each model features a hybrid core configuration, pairing two "Cougar Cove" P-cores with four low-power efficiency cores, completely omitting the traditional "Darkmont" E-cores. Boost clocks range from 4.3 GHz on the entry-level Core 3 304 up to 4.8 GHz on the Core 7 360. All six SKUs share 6 MB of L3 cache, a single NPU tile, and integrated Xe3 graphics. The leak suggests that Intel is bringing architecture closely related to the Core Ultra 300 "Panther Lake" mobile platform into the embedded and industrial space, or perhaps into low-cost laptop configurations that don't require the power of "Panther Lake," appealing to buyers seeking budget-friendly options.
The 2P+0E+4LPE core layout is a deliberate trade-off, prioritizing efficiency over raw multithreaded performance, which suits the thermal constraints common in edge and IoT deployments. NPU performance figures range between 15 and 17 TOPS across the lineup. While this won't power the largest LLMs, it may be more than sufficient for on-device inference in industrial or automation settings. The Core 3 304 deserves special mention: it reduces to a single P-core and one Xe graphics unit, creating a clear cost-optimized option at the bottom of the lineup. SIPP certification, important for buyers needing stable, long-lifecycle platform support, is available on the Core 7 360 and Core 5 330 but not consistently across the lineup. Notably, there is no vPro support on any SKU, clearly distinguishing "Wildcat Lake" from Intel's enterprise mobile portfolio.
The NVIDIA App update today introduced some interesting features, such as DLSS 4.5 dynamic multi-frame generation and a 6x mode. Additionally, the app now includes a new beta version of NVIDIA Auto Shader Compilation (ASC). This feature takes DirectX 12 shaders from games and quietly compiles them while the system is idle or not running any graphically intensive tasks. Typically, when you start a game, you have to wait for all assets to load and shaders to compile before you can begin playing. However, with ASC, NVIDIA aims to shorten this process by pre-compiling shaders to reduce loading times and, interestingly, decrease in-game stuttering, which can occur when shaders don't load properly. NVIDIA states that this feature is opt-in within the NVIDIA App and can be enabled by navigating to the Graphics Tab > Global Settings > Shader Cache. Once in the menu, users can access a range of settings, including the option to turn on Auto Shader Compilation.
Since ASC uses a separate folder, users will need to allocate sufficient disk space to store the shaders that ASC will access. In the NVIDIA App, gamers can choose the "Compile Now" option to pre-compile all game shaders immediately by clicking on three dots, or they can wait for the system to do it automatically when it becomes idle. As compiling shaders requires some computing power, there are settings to control system utilization, with the default set to medium. The NVIDIA App will also display the date of the last compilation. Interestingly, ASC will perform its functions once a game is downloaded and after a new driver update is installed for optimal performance. NVIDIA requires GeForce Game Ready Driver 595.97 WHQL or newer for ASC to work, and more optimizations are expected as the beta testing concludes in the coming weeks.
According to VideoCardz and other sources, detailed specifications for Rubin-based gaming GPUs simply don't exist in finalized form yet, despite a flood of confident claims circulating across social media and YouTube last weekend.
Nvidia DLSS 4.5 with dynamic frame generation is now available for RTX 50 GPUs using the Nvidia App (enable beta updates). The feature adjusts frame-gen in real time to balance performance and image quality. The update also adds MFG modes of up to 6x, along with beta automatic shader compilation to reduce in-game stutter.
The arrival of Nvidia's DLSS 4.5 Dynamic Multi Frame Generation mode and its extended 5X and 6X multipliers promises more control and higher generated frame rates for GeForce RTX 50-series graphics cards. We went hands-on to see just how far AI can stretch one input frame.
The Card Shop Store is a marketplace for buying, selling, and vaulting sports, TCG, and entertainment trading cards. It supports direct sales and auctions, offers storefronts for sellers, and features CardShares for fractional physical ownership. You can browse graded and raw cards, track conditions and prices, and manage secure transactions. Use the web or mobile apps to list inventory, join breaks and auctions, and keep high-value cards safe in vault storage.
Tonimus automates social media growth for creators by generating, posting, and engaging in your brand voice while reporting revenue and personalized insights. Instead of guessing, creators know which platform earns money, how authentic their audience is, and how they stack up across their genre, all based on real data. Tonimus not only tells you how many followers you have but also what they're worth and what to do next. It shows creators exactly which content drives revenue and automates creating more of it.
AnveVoice brings voice-first conversations to your website so visitors can speak naturally and get things done. It listens, understands intent, and acts on the page by scrolling, navigating, filling forms, and booking meetings while remembering preferences across sessions.
Embed a single script to add it to Shopify, WordPress, Webflow, Wix, Squarespace, React, or any site. A dashboard tracks sessions, conversions, and usage in real time so you can monitor performance and scale with transparent, token-based pricing.
Google on Monday said it's officially rolling out Android developer verification to all developers to combat the problem of bad actors distributing harmful apps while "hiding behind anonymity."
The development comes ahead of a planned verification mandate that goes into effect in Brazil, Indonesia, Singapore, and Thailand this September, before it expands globally next year.
As part of this
YouTube used its NewFront presentation to unveil a significant upgrade to its Creator Partnerships platform, adding Gemini-powered creator matching, stronger measurement tools, and new ways to run creator content as paid ads.
Why we care. Influencer marketing has become a core part of many brands' strategies, but it has two big friction points: finding the right creators at scale and proving ROI. This update tackles both.
Gemini-powered matching cuts through the noise of three million creators, while the ability to run creator content as paid Shorts and in-stream ads makes performance measurable like any standard campaign, backed by a reported 30% conversion lift.
How it works. The updated platform uses Gemini to recommend creators from a pool of more than three million YouTube Partner Program members, filtered by campaign goals. Advertisers get more control over who they work with and better visibility into how those partnerships perform.
The big new feature. A revamped Creator Partnerships boost lets brands run creator-made content directly as Shorts and in-stream ads — formats YouTube says deliver an average 30% lift in conversions.
The big picture. The announcement builds on BrandConnect, YouTube's existing creator monetization infrastructure, showing that the platform is doubling down on the creator economy as a growth lever for advertisers — not just a content strategy.
Reddit ranks as the most-cited domain in AI-generated answers, followed by YouTube and LinkedIn, based on a new analysis of 30 million sources by Peec AI, an AI search analytics tool.
The findings. Reddit was the most-cited source across ChatGPT, Google AI Mode, Gemini, Perplexity, and AI Overviews. YouTube, LinkedIn, Wikipedia, and Forbes also ranked in the top five. Review platforms like Yelp and G2 appeared often in recommendation queries.
The research showed which domains models rely on:
ChatGPT favored Wikipedia, Reddit, and editorial sites like Forbes.
Google leaned toward platforms like Facebook and Yelp.
Perplexity emphasized Reddit, LinkedIn, and G2 for B2B queries.
Why we care. To win in AI search, you need authority beyond your site. Brands that appear consistently across trusted third-party platforms are more likely to be cited.
Why these sources? AI systems prioritize perceived authority plus authentic user input:
Reddit leads because it captures real user discussions.
YouTube dominates video citations via transcripts and descriptions.
Wikipedia serves as both a live source and a training dataset.
About the data. The analysis covered 30 million sources across ChatGPT, Google AI Mode, Gemini, Perplexity, and AI Overviews, measuring domains directly cited in answers to isolate what shapes responses.
You can now order from Uber Eats and Grubhub using Alexa+, an experience Amazon says will be similar to chatting with a waiter at a restaurant or placing an order at a drive-thru.
Sony Corporation ("Sony") and TCL Electronics Holdings Limited (together with its subsidiaries, "TCL") today announced that Sony and TCL have entered into legally binding definitive agreements for a strategic partnership in the home entertainment field. This follows the memorandum of understanding announced on January 20, 2026, pursuant to which both parties have been conducting discussions.
Under this partnership, Sony will establish a wholly owned subsidiary (the "Preparatory Company") to assume its home entertainment business, and TCL will subscribe to a portion of the Preparatory Company's shares, forming a joint venture (the "New Company") with TCL holding 51% and Sony holding 49% of the shares. The New Company will succeed to Sony's home entertainment business, which includes product development and design, manufacturing, sales and logistics, and customer service for products such as Consumer TVs (BRAVIA), B2B Flat Panel Displays (B2B BRAVIA), B2B LED Displays, projectors, and home audio equipment such as home theater systems and audio components. The New Company is expected to operate this integrated business globally.
According to the official KB5079391 change log, the rollout was paused because users are seeing an "error 0x80073712" message during installation. Microsoft explains that this error occurs when files are missing or corrupted, preventing the update from completing successfully.
Think of Leo as Amazon's version of SpaceX Starlink. The division was established in 2019 and is the third-largest satellite system in orbit, according to the company. Even so, Leo feels a bit behind the competition in terms of the sheer size of its constellation.
"Quickly spin up Copilot coding agents from anywhere on your macOS or Windows machine with Raycast," the note said, accompanied by a lightning bolt emoji and a link to install the app.
Nvidia announced today that it has invested $2 billion in Marvell Technology and entered a partnership connecting Marvell to Nvidia's AI factory and AI-RAN ecosystem through NVLink Fusion.
Iran's Islamic Revolutionary Guard Corps has issued a direct strike threat to a slew of U.S. tech companies including GPU giant Nvidia, Microsoft, Apple, Google, Meta, IBM, Cisco, and Tesla.
A high-severity security flaw in the TrueConf client video conferencing software has been exploited in the wild as a zero-day as part of a campaign targeting government entities in Southeast Asia dubbed TrueChaos.
The vulnerability in question is CVE-2026-3502 (CVSS score: 7.8), a lack of integrity checks when fetching application update code, allowing an attacker to distribute a tampered update.
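This class of flaw is typically mitigated by verifying the update payload against a trusted, pinned digest (or a code-signing signature) before running it. Below is a minimal sketch of a digest check, assuming the vendor publishes a known-good SHA-256 out of band; the function name and workflow are illustrative assumptions, not TrueConf's actual updater.

```python
import hashlib
import hmac

def verify_update(payload: bytes, expected_sha256: str) -> bool:
    """Reject an update unless its SHA-256 matches the pinned, trusted digest."""
    actual = hashlib.sha256(payload).hexdigest()
    # compare_digest avoids timing side channels when comparing digests.
    return hmac.compare_digest(actual, expected_sha256)

# A tampered payload fails the check and must not be installed.
good = b"official update bytes"
pinned = hashlib.sha256(good).hexdigest()
print(verify_update(good, pinned))        # expected: True
print(verify_update(b"tampered", pinned)) # expected: False
```

A real updater would pair this with signed metadata so the expected digest itself can't be swapped, but even a pinned hash defeats the simple tampered-update path described above.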
A newly published, unverified report claims Google's Gemini AI is instructed to mirror user tone and validate emotions while grounding its responses in fact and reality.
Why we care. If accurate, AI-generated search responses may vary based on how a query is phrased — not just the information available.
What's new. The report centers on the inherent tension in the system-level instructions guiding how Gemini responds. The report, published by Elie Berreby, head of SEO and AI search at Adorama, suggested that Gemini is instructed to:
Match the user's tone, energy, and intent.
Validate emotions before responding.
Deliver answers aligned with the user's perspective.
What it means. The "overly supportive mandate frequently overrides the factual grounding," Berreby wrote. So instead of acting as a neutral aggregator, AI answers may:
Reinforce negative framing ("Why is X bad?").
Reinforce positive framing ("Why is X great?").
If public perception is negative, AI may amplify it. As the report suggests:
AI reflects existing sentiment signals.
It doesn't "balance" them the way blue links often do.
Query framing. The emotional framing of a query affects:
Which sources get cited.
How summaries are written.
The overall tone of the answer.
Google's AI Overviews already show tone shifts, often aligning with query intent beyond keywords. This report offers a possible explanation.
Unverified. Google hasn't confirmed the leak. As Berreby noted in his report: "I've decided to share only a fraction of the leaked internal system information with the general public. I'm not sharing any sensitive data. This isn't a zero-day exploit. This is a tiny leak."
Google is giving retailers more firepower to promote loyalty program benefits directly within product listings — expanding the program internationally and into its newest AI-powered shopping experiences.
What's new. Merchants can now highlight member pricing and exclusive shipping options directly on listings. Loyalty annotations have also expanded to local inventory ads and regional Shopping ads — making it easier to promote in-store or geography-specific perks.
Why we care. The more you can personalize an offer for a shopper, the better. Embedding member perks into the moment of purchase discovery — rather than requiring a separate loyalty app or webpage — makes programs more visible and more likely to drive sign-ups.
By the numbers. According to Google, some retailers have reported up to a 20% lift in click-through rates when showing tailored offers to existing loyalty members.
The big picture. Loyalty benefits will now appear on Google's AI-first surfaces, including AI Mode and Gemini, putting member offers in front of shoppers at an entirely new layer of the search experience.
Where it's available. The expansion covers 14 countries — Australia, Brazil, Canada, France, Germany, India, Italy, Japan, Mexico, Netherlands, South Korea, Spain, the UK, and the US.
How to get started. Merchants activate the loyalty add-on in Merchant Center, configure member tiers, and set up pricing and shipping attributes. Connecting Customer Match lists in Google Ads is required to display strikethrough pricing and shipping perks to known members.
Don't miss. US merchants can apply to join a pilot that uses Customer Match as a relationship data source for free listings — potentially expanding loyalty reach without additional ad spend.
Googlebot. Google doesn't have one singular crawler; it has many crawlers for many purposes. So referencing Googlebot as a single crawler might not be accurate anymore. Google has documented many of its crawlers and user agents here.
Limits. Recently, Google spoke about its crawling limits. Now, Gary Illyes dug into it more. He said:
Googlebot currently fetches up to 2MB for any individual URL (excluding PDFs).
This means it crawls only the first 2MB of a resource, including the HTTP header.
For PDF files, the limit is 64MB.
Image and video crawlers typically have a wide range of threshold values, and it largely depends on the product that they're fetching for.
For any other crawlers that donβt specify a limit, the default is 15MB regardless of content type.
Then what happens when Google crawls?
Partial fetching: If your HTML file is larger than 2MB, Googlebot doesn't reject the page. Instead, it stops the fetch exactly at the 2MB cutoff. Note that the limit includes HTTP request headers.
Processing the cutoff: That downloaded portion (the first 2MB of bytes) is passed along to our indexing systems and the Web Rendering Service (WRS) as if it were the complete file.
The unseen bytes: Any bytes that exist after that 2MB threshold are entirely ignored. They aren't fetched, they aren't rendered, and they aren't indexed.
Bringing in resources: Every referenced resource in the HTML (excluding media, fonts, and a few exotic files) will be fetched by WRS with Googlebot like the parent HTML. They have their own, separate, per-URL byte counter and don't count towards the size of the parent page.
How Google renders these bytes. When the crawler accesses these bytes, it passes them over to WRS, the Web Rendering Service. "The WRS processes JavaScript and executes client-side code similar to a modern browser to understand the final visual and textual state of the page. Rendering pulls in and executes JavaScript and CSS files, and processes XHR requests to better understand the page's textual content and structure (it doesn't request images or videos). For each requested resource, the 2MB limit also applies," Google explained.
Best practices. Google listed these best practices:
Keep your HTML lean: Move heavy CSS and JavaScript to external files. While the initial HTML document is capped at 2MB, external scripts and stylesheets are fetched separately (subject to their own limits).
Order matters: Place your most critical elements — like meta tags, <title> elements, <link> elements, canonicals, and essential structured data — higher up in the HTML document. This ensures they are unlikely to be found below the cutoff.
Monitor your server logs: Keep an eye on your server response times. If your server is struggling to serve bytes, our fetchers will automatically back off to avoid overloading your infrastructure, which will drop your crawl frequency.
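The cutoff behavior above is easy to sanity-check offline. The sketch below (an illustrative check based on the 2MB figure Google cites, not an official Google tool; the marker strings are simplistic assumptions) truncates an HTML document the way a capped fetch would and reports whether critical head elements survive the cut:

```python
# Googlebot reportedly caps HTML fetches at 2MB (including HTTP headers);
# bytes past the cutoff are never rendered or indexed.
GOOGLEBOT_HTML_LIMIT = 2 * 1024 * 1024

def critical_tags_within_limit(html: bytes, header_bytes: int = 0) -> dict:
    """Report which critical markers appear inside the truncated fetch window."""
    window = html[: max(0, GOOGLEBOT_HTML_LIMIT - header_bytes)]
    markers = {
        "title": b"<title",
        "canonical": b'rel="canonical"',
        "meta_description": b'name="description"',
    }
    return {name: pattern in window for name, pattern in markers.items()}

# A page whose <title> sits after ~3.75MB of preamble falls past the cutoff.
bloated = b"<html><head>" + b"<!-- filler -->" * 250_000 + b"<title>Late</title></head></html>"
lean = b"<html><head><title>Early</title></head></html>"
print(critical_tags_within_limit(bloated))  # title lands beyond the 2MB window
print(critical_tags_within_limit(lean))
```

This is exactly why the ordering advice holds: a title tag or canonical placed late in a multi-megabyte document simply never reaches the indexer.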
Podcast. Google also had a podcast on the topic; here it is:
Alec Newman, the voice behind Crimson Desert's protagonist, Kliff, has revealed how he had to keep pushing Pearl Abyss for clarity on the game's story and his character during the many years of development.
Amazon is ending its Big Spring Sale in style with a very limited time deal on the five-star Sony WH-1000XM5 Headphones, bringing them down to a record-low price.
Xbox is celebrating its 25th anniversary with a Fanta collaboration with themed bottles for a collection of games, as well as new in-game content for those same titles.
All the ways to watch Bosnia-Herzegovina vs Italy live streams online — including for FREE — in the decisive World Cup 2026 playoff qualifier in Zenica.
After a few days with AirPods Max 2, Apple's new over-ear headphones feel familiar, but they're noticeably smarter and more adaptive in everyday use.
Five years after its debut, Apple's AirPods Max 2 arrives with the same iconic design but a completely rebuilt interior — and the engineers behind it say the H2 chip's headroom means the best may be yet to come.
The fitness tracking startup just closed a $575 million Series G with Cristiano Ronaldo and LeBron James among its investors. The obvious question looming over a round of this size at this valuation: Is an IPO coming?
Microsoft has officially pulled the plug on the legacy Remote Desktop client, forcing users into the "Windows App" as the March 27 support deadline passes.
Newegg is offering a 3% promo code for $25 Xbox Gift Cards. With this voucher, you can put $25 toward Xbox games or Xbox Game Pass subscription fees while paying 3% less for the credit.
The SDSQXH9-1T00-GZ6MA model of the SanDisk 1TB Extreme microSD UHS-I Card is one of the best ways to upgrade your Xbox Ally/Xbox Ally X's storage size, and it's now on sale for a 41% discount thanks to World Backup Day.
Xbox continues refining the Ally X experience with a new app update, adding a display widget, improved controls, and small but meaningful quality-of-life improvements across the board.
Xbox has officially unveiled its 25th anniversary partnership with Fanta, offering themed in-game rewards across Halo, Diablo IV, Call of Duty, World of Warcraft, and Forza Horizon 6, alongside prize giveaways and live event experiences.
Razer, the leading global lifestyle brand for gamers, today unveiled the Razer Pro Type Ergo, an ergonomic wireless keyboard designed to make long hours at the desk feel more natural and less fatiguing, while helping users get more done with less effort.
Pro Type Ergo is Razer's answer to a productivity market that has barely moved: a split-ergonomic keyboard that feels familiar from the first keystroke, cuts strain over time, and builds powerful workflow tools directly into the layout. For professionals who live on their keyboard, it is built to support comfortable, focused work all day, every day.
Eidos Montreal announced earlier this week in a LinkedIn post that it was laying off 124 of its employees and that its studio director, David Anfossi, was leaving. The studio explained that the layoffs are a result of necessary cost-cutting measures and evolving project needs. The layoffs affect both production and support teams, and the studio says they are necessary cuts to allow it to concentrate its efforts where it can be most effective. Now, new reporting suggests that the layoffs may have been partially caused by a game whose budget had ballooned and caused financial strain at the studio.
Following the layoffs, Insider Gaming reports that Eidos, which had previously worked on Deus Ex: Mankind Divided and Marvel's Guardians of the Galaxy, has also cancelled Wildlands, an in-development open-world action-adventure game that the studio had been working on for seven years. The publication reports that Wildlands had had a somewhat troubled development cycle prior to its cancellation, with the team having gone through four different game engines and burned through hundreds of millions of dollars in budget. These reports are further backed up by Jason Schreier's comments on Reddit. Further, according to Insider Gaming's sources, the game was in the debugging phase and nearing completion before Embracer, Eidos's parent company, shut it down.
The Intel Binary Optimization Tool (BOT) has been launched alongside the "Arrow Lake Refresh" series of processors, which includes the Core Ultra 5 250K Plus and Core Ultra 7 270K Plus models. While the tool is beneficial for gamers looking to extract a few extra frames from their setups, it may be a nightmare for makers of benchmarking tools like Geekbench developer Primate Labs. Recent Primate Labs testing found that BOT changes the way .exe applications run, and the company concluded that Geekbench will now flag BOT-enhanced runs. However, in deeper testing, Primate Labs discovered that Intel's BOT may deliver significant boosts in some applications like Object Remover and HDR, increasing performance by up to 30%. This is thanks to the deep vectorization that the BOT performs behind the scenes to optimize performance.
For example, Primate Labs used Intel's own Software Development Emulator (SDE) to measure how many instructions were executed and which types of instructions the program executed. Without BOT, Geekbench 6 required a total of 1.26 trillion instructions to finish, while a BOT-enhanced run completed with 1.08 trillion instructions, an impressive 14% reduction. However, when examining the execution by type, we see that BOT makes heavy use of vector instructions like SSE2 and AVX2. The number of scalar instructions needed to execute a program fell from 220 billion to 84.6 billion, while the number of vector instructions increased from 1.25 billion to 18.3 billion, a 13.7x increase. This means that Intel BOT finds a way to turn inefficient scalar code into vectorized instructions that are processed much more efficiently inside Intel CPUs. These techniques indicate a far more complex behind-the-scenes process than was originally believed. The Geekbench v6.7 update will include a flag for BOT, allowing future Geekbench results to be easily distinguished as BOT-enhanced or not.
Blaze Entertainment is proud to announce the Evercade Nexus, the newest retro gaming handheld console from Evercade. Evercade continues to champion physical cartridges as the medium to relive the classic gaming experience, bringing more top-quality names to the ever-growing ecosystem with Rare's Banjo-Kazooie and Banjo-Tooie included. The latest iteration of the Evercade gaming experience draws on the feedback of Evercade fans and the demands of gaming in the current age, while also keeping the simplicity and ease of use that Evercade provides, and the continuing commitment to the nostalgia and experience of using and collecting physical cartridge media.
The Evercade Nexus is built to play with an ultra-bright 5.89" screen, the biggest ever screen on an Evercade, with a peak brightness of over 500 nits. The new, larger design allows for dual analogue sticks, giving Evercade players the full experience of 64 and 32-bit games, and helping recreate the feel of arcade-style gaming in your hands. All of this in a new larger form factor that is still light and comfortable to use and travel with, and a sleek new look with a black color scheme and a customizable RGB light-up logo.
Since the launch of the Steam Deck and the subsequent competing Windows gaming handhelds, Microsoft has been working on improving its UI for gaming consoles, culminating with the recent adoption of the Windows Full Screen Experience, which was later renamed to Xbox Mode. The latest update to Microsoft's gaming experience, however, comes by way of the Xbox app and its overlay, as spotted by ROGAllyLife. These new features will be available to everyone using compatible hardware and the Xbox app, although they are still in the preview version of the app, so they may only reach mainline status in a few weeks. The biggest update is a new display widget that was added to the Xbox Game Bar overlay, which adds controls like display refresh rate, resolution, projection mode, and Auto Super Resolution, allowing users to test different display configurations without leaving their games.
Users can now also change notification placement in the Xbox app, allowing them to see notifications without completely disrupting the gaming experience. The Xbox app now allows for eight notification placement options — three positions along each screen edge — and this can also be customized from the Game Bar overlay instead of necessitating a potentially game-breaking app switch. These updates are just the most recent in Microsoft's efforts to make handheld gaming more feasible on Windows, but it remains to be seen how Microsoft will change the regular Windows 11 experience after its recent promise to address quality and usability complaints.
AMD has quietly renamed its Anti-Lag 2 technology "FSR Latency Reduction 2.0," folding it into the FSR package. The move aligns with AMD's recent trend of consolidating related technologies under the FSR brand: the Radeon marketing team had already shortened FidelityFX Super Resolution to simply "FSR" (the official product page notes that FSR stands for "formerly AMD FidelityFX Super Resolution"), but Anti-Lag 2 remained an exception, bundled with other AMD technologies. AMD has not formally announced the rebranding. The change predates the official launch of FSR "Redstone" in late December last year, and every new announcement now uses the standard FSR language, suggesting the renaming may be part of a broader update to Anti-Lag 2.
Since Anti-Lag 2 is aimed at gamers, it is now included in the FSR package as "FSR Latency Reduction 2.0." With FSR Redstone, AMD has already grouped four technologies under the FSR "Redstone" name: FSR Upscaling, FSR Frame Generation, FSR Ray Regeneration, and FSR Radiance Caching. If the renaming becomes more than a label update, FSR Latency Reduction 2.0 could become the fifth component of the FSR "Redstone" suite. Technologies like AMD Anti-Lag 2 are specifically designed to reduce latency by improving CPU and GPU coordination. Even without frame generation, Anti-Lag 2 can lower latency in a game, but it may be especially useful when synthetic frames are involved, helping to keep latency at a level where any added delay is far less noticeable.
PDFsam Basic is a free, open-source tool for splitting, merging, and organizing PDFs. Version 6.0 adds three compression modes, better support for PDF 2.0 and UTF-8 text, stronger handling for malformed files, and more quality-of-life improvements.
Federal authorities recently charged Yih-Shyan Liaw, along with a company employee and an outside contractor, with smuggling roughly $2.5 billion worth of such servers to China. The case not only jolted investors (Supermicro's stock lost a third of its value the following day) but also reignited debate in...
We take a look at Noctua's current lineup of releases through Q2 2026, in addition to what products we might expect from them in the future, including the impressive Thermosiphon cooler.
YoloLiv's YoloCam S3 is a small, sturdy 4K/30 fps webcam that delivers excellent video, once you spend some time fiddling with the software. It's got a large sensor, a wide 82-degree field of view, and lightning-fast autofocus, but you'll need to plug it into a USB 3.0 port for it to function.
Manuscript is two things. For publishing houses, it's a tool that streamlines the entire editorial process and makes it 10 times more efficient. It uses AI ethically, handling the tedious parts of editing while keeping the artful, human side of publishing exactly where it belongs: with humans.
For authors, Manuscript is a full workspace that gives you a complete toolbox but leaves the writing entirely to you. Think of it as a Scrivener alternative built for the 21st centuryβone that will never write for you.
TapHum is a presence app that lets you tell someone you're thinking of them with a single tap. No messages, no emojis, no pressure to reply. Just tap their circle and they instantly feel it through a gentle vibration and a warm glow on their phone. It removes the need for words while keeping connections warm and effortless.
Each person in your circle gets their own glowing orb you can personalize with a custom color and nickname. Build daily streaks by tapping each other, see your shared timeline grow over time, and connect through QR codes or invite links. TapHum is for the people you don't need words with, like partners, parents, and best friends who just need to know you're there.
SEO hiring is shifting toward senior, strategy-led roles as AI reshapes search and expands the scope of the job. A new Semrush analysis of 3,900 listings shows companies now prioritize leadership, experimentation, and cross-channel visibility over pure technical execution.
Why we care. SEO hiring, career paths, and required skills are changing. Entry roles focus on execution, while most demand sits at the leadership level: owning strategy across search, AI assistants, and paid channels, with clear revenue impact.
What changed. Senior roles dominated, accounting for 59% of listings. Mid-level roles, such as specialists (15%) and managers (10%), trailed far behind.
Companies are shifting budget toward strategy as AI tools absorb more execution work.
The skills shift. In-demand capabilities extend beyond traditional SEO into coordination, testing, and decision-making:
Project management appeared in more than 30% of listings.
Communication led non-senior roles at 39.4%.
Experimentation appeared in 23.9% of senior roles compared with 14% of other roles.
Technical SEO appeared in about 6% of listings.
Tools and channels. The SEO tech stack now spans analytics, paid media, and data.
Google Analytics appeared in up to 47.7% of listings.
Google Ads appeared in 29% of listings.
SQL demand grew at the senior level.
AI tools like ChatGPT were increasingly listed.
AI expectations: AI literacy is moving from optional to expected:
31% of senior roles mentioned AI.
Nearly 10% referenced LLM familiarity.
Concepts like AI search and AEO appeared more often.
Pay and positioning: SEO is increasingly treated as a business function.
The median salary for senior roles reached $130,000, compared to $71,630 for others. Some listings were much higher.
Degree preferences skewed toward business and marketing.
Remote work is now standard. More than 40% of listings offered remote options, with little difference by seniority.
About the data: Semrush analyzed 3,900 U.S.-based SEO job listings from Indeed as of Nov. 25. Roles were deduplicated, segmented by seniority, and analyzed using semantic keyword extraction.
Technical SEO extends beyond indexing to how content is discovered and used, especially as AI systems generate answers instead of listing pages.
For generative engine optimization (GEO), the underlying tools and frameworks remain largely the same, but how you implement them determines whether your content gets surfaced or overlooked.
That means focusing on how AI agents access your site, how content is structured for extraction, and how reliably it can be interpreted and reused in generated responses.
Agentic access control: Managing the bot frontier
From a technical standpoint, robots.txt is a tool you already use in your SEO arsenal. To grant specific bots their own access rights, you need to name the right crawlers in the file.
For example, you may want a training model like GPTBot to have access to your /public/ folder, but not your /private/ folder, and would need to do something like this:
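A minimal sketch of that kind of rule set, assuming the crawler honors per-path directives (the /public/ and /private/ folder names are the hypothetical ones from above):

```
# Hypothetical robots.txt: let GPTBot into /public/ but not /private/
User-agent: GPTBot
Allow: /public/
Disallow: /private/

# All other bots keep their usual access
User-agent: *
Allow: /
```

The same pattern extends to any of the bots listed below: one `User-agent` group per crawler, with `Allow`/`Disallow` rules scoped to that agent.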
You'll also need to decide between model training and real-time search and citations. You might consider disallowing GPTBot and allowing OAI-SearchBot.
Within your robots.txt, you also need to consider Perplexity and Claude standards, which are tied to these bots:
Claude
ClaudeBot (Training)
Claude-User (Retrieval/Search)
Claude-SearchBot
Perplexity
PerplexityBot (Crawler)
Perplexity-User (Searcher)
Adding to your agentic access is another new protocol, llms.txt, a markdown-based standard that provides a structured way for AI agents to access and understand your content.
While it's not integrated into every agent's algorithm or design, it's a protocol worth paying attention to. For example, Perplexity publishes its own llms.txt that you can follow. You'll come across two flavors of llms.txt:
llms.txt: A concise map of links.
llms-full.txt: An aggregate of text content, so agents don't have to crawl your entire site.
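As an illustration, here is a minimal llms.txt following the proposed convention (an H1 name, a blockquote summary, then sections of annotated links); the site and URLs are invented:

```
# Example Co

> Example Co makes widgets. The links below are the pages most useful to AI agents.

## Docs

- [Getting started](https://example.com/docs/start): Installation and setup
- [API reference](https://example.com/docs/api): Endpoints and parameters

## Optional

- [Blog](https://example.com/blog): Product announcements
```

The llms-full.txt variant would inline the actual page text under each section instead of linking out.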
Even if Google and other AI tools aren't reading llms.txt yet, it's worth adopting for future use. John Mueller has publicly weighed in on it as well.
Extractability: Making content "fragment-ready"
GEO focuses more on chunks of information, or fragments, to provide precise answers. Bloat undermines extractability, which means AI retrieval has issues with:
JavaScript execution.
Keyword-optimized content rather than entity-optimized content.
Weak content structures that fail to provide clear, concise answers.
You want your core content visible to users, bots, and agents. Achieving this goal is easier when you use semantic HTML, such as:
<article>
<section>
<aside>
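A quick sketch of what that separation can look like in practice. The question and answer are invented; the pattern (core facts in a <section>, boilerplate in an <aside>) is the point:

```html
<article>
  <h1>How long do widgets last?</h1>
  <section>
    <!-- The core, fragment-ready answer: one clear, extractable claim -->
    <p>Most widgets last five to seven years with routine maintenance.</p>
  </section>
  <aside>
    <!-- Related boilerplate, kept outside the core fragment -->
    <p>See also: our widget maintenance guide.</p>
  </aside>
</article>
```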
The goal? Separate core facts from boilerplate content so your site shows up in answer blocks. Keep your context window lean so AI agents can read your pages without truncation. Creating content fragments will feed both search engines and agentic bots.
Structured data: The knowledge graph connective tissue
Schema.org has been a go-to for rich snippets, but it's also evolving into a way to connect your entities online. What do I mean by this? In 2026, you can (and should) consider making these schemas a priority:
Organization and sameAs: A way to link your site to verified entities about you, such as Wikipedia, LinkedIn, or Crunchbase.
FAQPage and HowTo: Sections of low-hanging fruit in your content, such as your FAQs or how-to content.
significantLink: A property that tells agents, "Hey, this is an authoritative pillar of information."
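For instance, an Organization block with sameAs links might look like this; the entity name and URLs are placeholders, so swap in your own verified profiles:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Example Co",
  "url": "https://www.example.com",
  "sameAs": [
    "https://en.wikipedia.org/wiki/Example_Co",
    "https://www.linkedin.com/company/example-co",
    "https://www.crunchbase.com/organization/example-co"
  ]
}
</script>
```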
Connecting information and data for agents makes it easier for your site or business to be presented on these platforms. Once you have the basics down, you can then focus on performance and freshness.
AI is constantly scouring the internet to maintain a fresh dataset. If the information goes stale, the platform becomes less valuable to users, which is why retrieval-augmented generation (RAG) must become a focal point for you.
RAG allows AI models, like ChatGPT, to inject external context into a response through a prompt at runtime. You want your site to be part of an AI's live search, which means following the recommendations from the previous sections. Additionally, focus on factors such as page speed, server response time, and errors.
In addition to RAG, add "last updated" signals for your content. <time datetime=""> is one way to achieve this, along with schema headers, which are critical components for:
News queries.
Technical queries.
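One way that freshness signal can be marked up, pairing a visible <time> element with an illustrative dateModified in the page's Article schema (the date and headline are placeholders):

```html
<p>Last updated: <time datetime="2026-01-15">January 15, 2026</time></p>

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Example article",
  "dateModified": "2026-01-15"
}
</script>
```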
You can now start measuring your success through audits to see how your efforts are translating into real results for your clients.
You have everything in place and ready to go, but without audits, there's no way to benchmark your success. A few audit areas to focus on are:
Citation share: Rankings still exist, but it's time to focus on mentions as well. You can do this manually, but for larger sites you'll want to use tools like Semrush.
Log file analysis: Are agents hitting your site? If so, which agents are where? You can do this through log analysis and even use AI to help parse all of the data for you.
The zero-click referral: Custom tracking parameters can help you identify traffic origins and "read more" links, but they only paint part of the picture. You also need to be aware that agents may append your parameters, which can impact your true referral figures.
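As a starting point for that log file analysis, here is a small Python sketch that tallies hits by AI user agent and path. The agent list and the sample log lines are illustrative, and real logs will need the parsing adjusted to your server's format:

```python
import re
from collections import Counter

# Illustrative user-agent substrings for common AI crawlers and agents
AI_AGENTS = ["GPTBot", "OAI-SearchBot", "ClaudeBot", "Claude-User",
             "PerplexityBot", "Perplexity-User"]

def count_ai_hits(log_lines):
    """Tally requests per (agent, path) from access-log lines."""
    hits = Counter()
    for line in log_lines:
        for agent in AI_AGENTS:
            if agent in line:
                # Pull the request path from a combined-log-format line, if present
                m = re.search(r'"(?:GET|POST) (\S+)', line)
                path = m.group(1) if m else "?"
                hits[(agent, path)] += 1
    return hits

# Two made-up log lines in combined log format
sample = [
    '1.2.3.4 - - [01/Mar/2026] "GET /docs/api HTTP/1.1" 200 512 "-" "Mozilla/5.0 GPTBot/1.0"',
    '5.6.7.8 - - [01/Mar/2026] "GET /blog HTTP/1.1" 200 1024 "-" "PerplexityBot/1.0"',
]
print(count_ai_hits(sample))
```

From there, the same counter can be bucketed by day to see whether agent traffic is trending up on the pages you care about.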
Measuring success shows you the validity of your efforts and ensures you have KPIs you can share with clients or management.
Scaling GEO into 2027
Preparing your GEO strategy for 2027 requires changes in how you approach technical SEO, but it still builds on your current efforts. You'll want to automate as much as you can, especially in a world with millions of custom GPTs.
Manual optimization? Ditch it for something that scales without requiring endless man-hours.
Technical SEO was long the core of ranking a site and ensuring you provided search bots and crawlers with an asset that was easy to crawl and index.
Now? It's shifting.
Your site must become the de facto source of truth for the world's models, and this is only possible by using the tools at your disposal.
Start with your robots.txt and work your way up to structure, fragmented data, and extractability. Audit your success over time and keep tweaking your efforts until you see positive results. Then, scale with automation.
CareCloud, a major provider of medical records storage, said hackers accessed one of its repositories of patient data earlier in March. It provides technology for more than 45,000 providers covering millions of patients.
The merger is a sign that the fitness industry is continuing to move toward consolidation to compete at a larger scale. Recent moves include MyFitnessPal acquiring Cal AI, an AI calorie counting app, and Strava buying two apps: cycling app The Breakaway and running app Runna.
Runway is launching a $10 million fund and startup program to back companies building with its AI video models, as it pushes toward interactive, real-time "video intelligence" applications.
Google's Gary Illyes published a blog post explaining how Googlebot works as one client of a centralized crawling platform, with new byte-level details.
With the newest Nvidia App beta, Nvidia has officially released DLSS 4.5 Dynamic Frame Generation and 6x Frame Generation. These options are available as DLSS Overrides on the Nvidia App and should become available to all RTX 50 series GPU […]
We continue to improve our Kymera desktop line, not only in hardware but also in the way you can explore and configure each system. We are presenting new product pages for our lines, along with the return of one of the most requested formats by the community.
Kymera Cristal: power you can see
The Slimbook Kymera Cristal features a design that doesn't hide its power; it showcases it. Thanks to its tempered glass front and side panels, every component of the system becomes part of the design, proudly displaying the hardware with precision. This is a solution aimed at those seeking a system that combines extreme performance with striking aesthetics. From RGB-lit configurations to more understated setups, the Kymera Cristal lets you create an environment that reflects your style.
We recently heard from industry insiders that a $699 PlayStation 6 may still be theoretically possible, even with current hardware market conditions driving steep prices for components like memory and storage. Shortly after that report, though, industry analyst Matt Piscatella (via GamesRadar+) predicted that both the PlayStation 6 and the Xbox Helix consoles may cost as much as $999. He largely blames demand from the AI industry and inflated hardware prices for the increase, but noted that there isn't much certainty in the current hardware market, whether you're considering launch dates or pricing. Dr. Serkan Toto, CEO of consultancy firm Kantan Games, noted that, with the recent price increases to the PS5 line-up, Sony may have "baked in potential future fluctuations...instead of raising prices more frequently and over a longer period of time."
Toto goes on to say that "I think $999 at least for one variant of the PS6 is not impossible," potentially alluding to a PS6 Pro, if current industry prices are anything to go by, but Joost van Dreunen, a video games professor at NYU, argues that "we're quickly moving towards a world in which a $1,000 console will be the norm, and console gaming will be a luxury expenditure." Van Dreunen goes on to predict that we may see the next-gen consoles start at a 50% higher price than the current generation, which would mean a $600 starting price for the base PS6 digital edition and somewhere in the region of $750 for the disc drive model. On the Microsoft side of things, this would put the base model Xbox Helix somewhere around $450, while the "Series X" version would be around $750. Sony is also slated to release a standalone handheld game console that has been commonly referred to as the PlayStation Portable, but there is no indication of pricing on that.
Sony suspended orders for almost its entire lineup of SD and CFexpress memory cards. The company is citing the global semiconductor shortage that has made it impossible to meet demand. The move, announced by Sony Japan and spotted by PetaPixel, effectively pauses shipments to both partners and direct customers starting March 27. The suspension covers nearly the company's entire lineup, including CFexpress Type A and Type B cards, as well as higher-end SD offerings such as TOUGH-branded models. Lower-tier SD cards are also affected, suggesting the shortage isn't limited to premium components. Sony says supply is unlikely to meet demand "for the foreseeable future," and has stopped accepting new orders from distributors and through its own store.
A few exceptions remain. The 960 GB CFexpress Type B card is still in production, alongside some entry-level SF-UZ series SD cards, though the latter are already largely phased out in certain regions. More specifically, on the CFexpress side, all Type A capacities are affected (240 GB, 480 GB, 960 GB, and 1920 GB), along with the 240 GB and 480 GB Type B cards. On the SD side, the entire TOUGH lineup (64 GB, 128 GB, 256 GB), standard V60 cards across all capacities, and even budget V30 64 GB and 128 GB options are suspended. Existing inventory is still moving through the supply chain, so cards will remain available at retail for now, but restocking will stop once that supply runs out. Sony hasn't provided a timeline for resuming production, stating it will monitor component availability before making a decision.
NVIDIA has finally launched its long-teased Dynamic Multi Frame Generation (MFG) and Multi Frame Generation 6x mode today through a new NVIDIA app beta update. This marks the full public release of NVIDIA's DLSS 4.5 technology suite, which enables the GPU to generate up to five additional frames following each traditionally rendered frame using generative AI. Using the new MFG 6x mode results in a 6x performance uplift, meaning a game that traditionally runs at 60 FPS can now reach 360 FPS. Users will need to enable "beta and experimental features" in the NVIDIA app's Settings menu, and the GeForce Game Ready Driver 595.79 WHQL or newer is required to access all features. This will give a limited set of games (for now) a massive performance uplift, which includes ARC Raiders Flashpoint, Marvel Rivals Season 7, 007 First Light, CONTROL Resonant, and Tides of Annihilation. More games will get the official support as NVIDIA is working with game studios.
However, for setups where a monitor is maxed out at 240 Hz or 144 Hz, as many gaming panels are, using 6x MFG would be overkill. This is where Dynamic MFG comes into play. The technology determines which MFG multiplier is needed based on the display's refresh rate capability and the input framerate from the upscaler. NVIDIA calls this the "automatic transmission" for MFG, drawing a parallel to modern vehicle automatic transmission systems that switch gears based on demand. In graphically intensive scenarios, the multiplier can scale up to 4x, 5x, or 6x, while lighter scenes like settings menus or static sequences may only require a 2x multiplier to hit the target frame rate.
Instagram is testing a new subscription to the photo-and-video-sharing service called Instagram Plus. Pilot schemes have started in Mexico, Japan, and the Philippines, and it'll likely make its way to the US eventually.
Vivian, a university student in Hebei province who requested a pseudonym to speak freely to Rest of World, said her Rokid glasses help her pass difficult subjects. "Any subject that I may fail at," she said. The glasses can read text from her exam paper and project answers directly onto...
Samsung released a $210 Book Cover Keyboard Slim for the Galaxy Tab S11 Ultra when the tablet launched. It arrived alongside a more expensive option, the Pro Keyboard, in Korea earlier this month, and it's now available in the US.
Cybersecurity researchers have disclosed a security "blind spot" in Google Cloud's Vertex AI platform that could allow artificial intelligence (AI) agents to be weaponized by an attacker to gain unauthorized access to sensitive data and compromise an organization's cloud environment.
According to Palo Alto Networks Unit 42, the issue relates to how the Vertex AI permission model can be misused
In 1998, submitting a website to search engines was manual, methodical, and genuinely tedious. I remember 17 of them: AltaVista, Yahoo Directory, Excite, Infoseek, Lycos, WebCrawler, HotBot, Northern Light, Ask Jeeves, DMOZ, Snap, LookSmart, GoTo.com, AllTheWeb, Inktomi, iWon, and About.com.
Each had its own form, process, and wait time, and its own quiet judgment about whether your URL was worth including. We submitted manually, 18,000 pages in all. Yawn.
Google was barely a year old when we were doing this. But they were already building the thing that would make submission irrelevant.
PageRank meant Google followed links, and a site that other sites linked to would be found whether it submitted or not. The other 17 engines waited to be told about content. Google went looking, and within a few years, they got so good at finding content that manual submission became the exception rather than the norm.
You published, you waited, the bots arrived. For 20 years, that was the deal, and SEO optimized for a crawler that would show up sooner or later.
The irony is that we're now shifting back. Not because Google got worse at finding things, but because the game has expanded in ways that pull alone can't cover, and the revenue flowing through assistive and agentic channels doesn't wait for a bot.
Pull isn't the only entry mode
The pull model (bot discovers, selects, and fetches) remains the dominant entry mode for the web index. What's changed is that pull is now one of five entry modes into the AI engine pipeline (the 10-gate sequence through which content passes before any AI system can recommend it), not the only one.
The pipeline has expanded: new modes have been added alongside the existing model rather than replacing it, and the single entry mode that was the norm for 20 years has grown to five.
What follows is my taxonomy of those five modes, with an explanation of the advantages each one gives you at the two gates that determine whether content can compete: indexing and annotation.
The five entry modes differ by gates skipped, signal preserved, and revenue reached
Mode 1: Pull model
Traditional crawl-based discovery where all 10 pipeline gates apply and the bot decides everything. You start at gate zero and have no structural advantage by the time your content gets to annotation (which is where that content starts to contribute to your AI assistive agent/engine strategy). You're entirely dependent on the bot's schedule and the quality of what it finds when it arrives.
Mode 2: Push discovery
The brand proactively notifies the system that content exists or has changed, through IndexNow or manual submission.
Fabrice Canel built IndexNow at Bing for exactly this purpose: "IndexNow is all about knowing 'now.'" It skips discovery, improves the chances of selection, and gets you straight to crawl. The content still needs to be crawled, rendered, and indexed, because IndexNow is a hint, not a guarantee.
You win speed and priority queue position, which means your content is eligible for recommendation days or weeks earlier than a competitor who waited for the bot. In fast-moving categories, that window is the difference between being in the answer and being absent from it.
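A minimal Python sketch of an IndexNow submission. The host, key, and URLs are placeholders, and per the protocol the same key must also be hosted as a text file at your site root:

```python
import json

# Shared IndexNow endpoint; participating engines sync submissions between them
INDEXNOW_ENDPOINT = "https://api.indexnow.org/indexnow"

def build_payload(host, key, urls):
    """Assemble the IndexNow JSON body: your host, your key, and the changed URLs."""
    return {"host": host, "key": key, "urlList": urls}

payload = build_payload(
    "www.example.com",   # placeholder site
    "a1b2c3d4e5f6",      # placeholder key; host the same value at /a1b2c3d4e5f6.txt
    ["https://www.example.com/new-product", "https://www.example.com/updated-guide"],
)
body = json.dumps(payload)  # the body you would POST as application/json

# Actual submission (not executed here), e.g. with urllib.request:
# req = urllib.request.Request(INDEXNOW_ENDPOINT, data=body.encode(),
#                              headers={"Content-Type": "application/json; charset=utf-8"})
# urllib.request.urlopen(req)  # HTTP 200/202 means the hint was accepted
```

Wiring this into your publish hook means the ping goes out the moment content changes, which is the whole point of the push mode.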
Note: WebMCP helps with Modes 1 and 2 by making crawling, rendering, and indexing more reliable, retaining signal and confidence that would otherwise be lost through those three gates.
Because confidence is multiplicative across the pipeline, a higher passage rate at crawling, rendering, and indexing means your content arrives at annotation with significantly more surviving signal than a standard crawl delivers. The structural advantage compounds from there.
Mode 3: Push data
Structured data goes directly into the system's index, bypassing the entire bot phase. Google Merchant Center pushes product data with GTINs, prices, availability, and structured attributes. OpenAI's Product Feed Specification powers ChatGPT Shopping and supports 15-minute refresh cycles.
Discovery, selection, crawling, and rendering don't exist for this content, and the "translation" at the indexing phase is seamless: it arrives at indexing already in machine-readable format, four gates skipped and one improved. That means the annotation advantage is significant.
This is where the money is for product-led businesses: crawled content arrives as unstructured prose the system has to interpret, while feed content arrives pre-labeled with explicit machine-readable entity type, category, and attributes. By structuring the data and injecting it directly into indexing, you're solving a huge chunk of the classification problem at annotation, which, as you'll see in the next article, is the single most important step in the 10-gate sequence.
As the confidence pipeline shows, each gate passed at higher confidence compounds multiplicatively, so this is where you can get the "3x surviving-signal advantage" I outline in "The five infrastructure gates behind crawl, render, and index."
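To make the contrast concrete, here is what a single pre-labeled feed item can look like. The attribute names loosely follow Merchant Center conventions, but treat them as illustrative; check the exact field names in the feed specification you are targeting:

```json
{
  "id": "SKU-1042",
  "title": "Trail Running Shoe, Men's, Blue",
  "gtin": "00012345678905",
  "brand": "Example Co",
  "price": "89.99 USD",
  "availability": "in_stock",
  "link": "https://www.example.com/products/sku-1042",
  "category": "Apparel & Accessories > Shoes"
}
```

Every field a crawler would otherwise have to infer from prose arrives here as an explicit attribute, which is exactly the classification work the annotation gate no longer has to do.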
Mode 4: Push via MCP
Model Context Protocol (MCP), a standard that lets AI agents query a brand's live data during response generation, allows agents to retrieve data from brand systems on demand.
In February 2026, four infrastructure companies shipped agent commerce systems simultaneously. Stripe, Coinbase, Cloudflare, and OpenAI collectively wired a real-time transactional layer into the agent pipeline, live with Etsy and 1 million Shopify merchants.
Agentic commerce is key. MCP skips the entire DSCRI pipeline and then operates at three levels, each entering the pipeline at a different gate:
As a data source at recruitment.
As a grounding source at grounding.
As an action capability at won, where the transaction completes without a human in the loop.
The revenue consequences are already real: brands without MCP-ready data are losing transactions to those with it, because the agent can't access their inventory, pricing, or availability in real time when it needs to make a decision. This is where you see multi-hundred-percent gains in surviving signal.
MCP is already simultaneously push and pull, depending on context.
There's a dimension to Mode 4 that most people don't think about much: the agent querying your MCP connection isn't always a Big Tech recommendation system. It's increasingly the customer's own AI, acting as their purchasing agent, evaluating your inventory and pricing in real time, with their credit card behind the query, completing the transaction without them opening a browser.
When your customer's agent (let's say OpenClaw-driven) comes knocking, agent-readable is the entry requirement. Agent-writable, the capacity for an agent to act, not just retrieve, is where you'll make the conversion. The brands without writable infrastructure will lose transactions to competitors whose systems answered the query and handled the action.
Mode 5: Ambient
This is structurally different from the other four. Where Modes 1 through 4 change how content enters the pipeline, ambient research changes what triggers execution of the final gates.
The AI proactively pushes a recommendation into the user's workflow without any query: Gemini suggesting a consultant in Google Sheets, a meeting summary in Microsoft Teams surfacing an expert, and autocomplete recommending your brand.
Ambient is the reward for reaching recruitment with accumulated confidence high enough that the system fires the execution gates on the user's behalf, without being asked. You can't optimize for ambient directly. You earn it, and the brands that earn it capture the 95% of the market that isn't actively searching.
Several people have told me my obsession with ambient is misplaced, theoretical, and not a real thing in 2026. I've experienced it myself already, but the clearest demonstration came at an Entrepreneurs' Organization event where I was co-presenting with a French Microsoft AI specialist.
On Teams, he demonstrated an unprompted push recommendation: a provider identified as the best solution to a problem his team had been discussing in the meeting. Nobody explicitly asked. Copilot listened, understood the problem, evaluated the options, and push-recommended a supplier right after the meeting. Ambient isn't theoretical. It's running on Teams, Gmail, and other tools we all use daily, right now.
Five entry modes, each with a different starting point, and they all converge at annotation. Annotation is the key to the entire pipeline. Every algorithm in the algorithmic trinity (LLM + knowledge graph + search) recruits not from the content itself but from the annotations on your chunked content, and nothing reaches a user without being recruited.
Why is that important? Because accurate, complete, and confident annotation drives recruitment, and recruitment is competitive regardless of how content entered. A product feed arriving at indexing with zero lost signal competes at recruitment with a huge advantage over every crawled page, every other feed, and every MCP-connected competitor that entered by a different door.
You control more of this competition than most practitioners assume. Skipping gates gives you a structural advantage in surviving signal, but it doesn't exempt you from the competition itself.
That distinction matters here because annotation sits at the boundary. It's the last absolute gate: the system classifies your content based on your signals, independently of what any competitor has done. Nobody else's data changes how your entity is annotated. That makes annotation the last moment in the pipeline where you have the field entirely to yourself.
From recruitment onward, everything is relative. The field opens, every brand that passed annotation enters the same competitive pool, and the advantage you carried through the absolute phase becomes your starting position in a winner-takes-all race. Get annotation right, and you have a significant head start. Get it wrong, and no matter how much work you do to improve recruitment, grounding, or display, it will not catch up, because the misclassification and loss of confidence compound through every gate downstream.
Nobody in the industry was talking about this in 2020. I started making the point then, after a conversation on the record with Canel, and it still isn't getting the attention it deserves.
Annotation is your last chance before competition arrives.
Search is one of three ways users encounter brands, and it's the least valuable
The research modes on the userβs side have expanded, too. The SEO industry has traditionally focused on just one: implicit, when the user types a query. There was always one more: explicit brand queries, and now we have a third. Each research mode is defined by who initiates and what the user already knows.
This is the lowest-confidence mode of the three, because the user has already signaled very explicit intent: youβre only reaching people who already know your name. Bottom of the funnel, decision. Algorithmic confidence is important here to remove hedging (βthey say on their website,β βthey claim to beβ¦β) and replace it with absolute enthusiasm (βworld leader inβ¦,β βrenowned forβ¦β).
Implicit research removes the explicit query. The AI introduces the brand as a recommendation (or advocates for you) within a broader answer, and the user discovers the brand because the system considers it relevant to the conversation, staking its own credibility on the inclusion. Top- and mid-funnel, awareness and consideration. Algorithmic confidence is vital here to beat the competition and get onto the list when a user asks βbest X in Y marketβ or be cited when a user asks βexplain topic X.β
Ambient research requires the highest confidence of all. The system pushes the brand into the user's workflow with no query and no explicit request: the algorithm makes a unilateral decision that this user, in this context, at this moment, needs to see your brand. That requires very significant levels of algorithmic confidence.
The format is small: a sentence, a credential, a contextual mention. The audience reached is the largest: people not yet in-market, not yet actively looking, who encounter your brand because the AI decided they should. And the kicker is that your brand gets the sale before the competition even starts.
For me, this is the structural insight that inverts how most brands prioritize, and where the real money is hiding. Most brands optimize for implicit research, where competition is highest, the target you need to hit is widest, and the work is hardest.
Most SEOs underestimate explicit research (where profitability is highest) and completely ignore ambient research, which reaches the 95% of people who aren't yet looking and requires the deepest entity foundation to trigger. I call this the confidence inversion, first documented in May 2025: the smallest format requires the highest investment, and it reaches the most valuable audience.
The entity home website is the single source that feeds every mode
In 2019, AI engineers spent 80% to 90% of their time collecting, cleaning, and labeling data, and the remaining 10% to 20% on the work they actually wanted to do. They wryly called themselves data janitors. Today, Gartner estimates 60% of enterprises are still effectively stuck in the 2019 model, manually scrubbing data, while the companies that got organized early compound their advantage.
The same split is happening with brand content and entity management, for the same reason. Every push mode described in this article draws on data: product attributes for merchant feeds, structured entity data for MCP connections, and corroborated identity claims for ambient triggering.
If that data lives in scattered, inconsistent, contradictory sources, every push attempt is expensive to implement, structurally weak on arrival, and liable to contradict the previous one. Inconsistency is the annotation killer: the system encounters two different versions of who you are from two different push moments, and confidence drops accordingly.
The framing gap, where your proof exists but the algorithm can't connect it to a coherent entity model, is a direct consequence of disorganized data, and it costs you in recommendation frequency every day it persists.
The entity home website (the full site structured as an education hub for algorithms, bots, and humans alike, built around entity pillar pages that declare specific identity facets) becomes the single source that feeds every mode simultaneously.
Pull, push discovery, push data, MCP, and ambient all draw from the same clean, consistent, non-contradictory data. You build the structure once, maintain it in one place, and you're ready for push and pull modes today, and any to come that don't yet exist.
AI handles 80%, humans protect the other 20%
That foundation is only as strong as the corrections made to it. How this works in practice depends on where you're starting from. For enterprises, the website typically mirrors an internal data structure that already exists:
Product catalogs.
CRM records.
Service definitions.
Organizational hierarchies.
The website becomes the public representation of structured data that lives inside the business, and the primary challenge is integration and maintenance.
For smaller businesses and personal brands, the direction often runs the other way: building the entity home website well is what forces you to figure out how your business is actually structured, what you genuinely offer, who you serve, and how everything connects. The website imposes discipline.
We're doing exactly this: centralizing everything as the structured data representation of the entire brand (personal or corporate). Getting the foundation right (who we are, what we offer, who we serve) is generally the heaviest lift. Building N-E-E-A-T-T credibility on top of that foundation is now comparatively straightforward, and every new push mode draws from the same organized source.
Here's where using AI fits into this work. It can handle roughly 80% of the organization: extracting structure from existing content, proposing taxonomies, drafting entity descriptions, mapping relationships, and flagging gaps. What it does poorly, and what humans need to correct, are the three failure modes that propagate silently through every downstream gate:
Factual errors, where something is simply wrong.
Inaccuracies, where something is approximately right but imprecise enough to mislead.
Confusions, where two different concepts are conflated, or an entity is ambiguous between interpretations.
Confusion is the sneakiest because it looks like data, passes automated quality checks, enters the pipeline with apparent confidence, and then causes annotation to misclassify in ways that compound through every gate downstream.
Alongside the errors sit the missed opportunities, which are equally costly and considerably less obvious:
Lost N-E-E-A-T-T credibility opportunities, where the systems underestimate or undervalue the entity because credibility signals exist but aren't structured, corroborated, or framed in a way the algorithmic trinity can read. The authority exists, but the machine doesn't understand it.
Annotation misclassification, where the entity is indexed coherently but placed in the wrong category, meaning it competes for the wrong queries entirely and never appears in the contexts where it should win. Correctly classified competitors take the recommendation: your brand is present in the pipeline, but absent from the competition that matters to your business.
Untriggered deliverability, where understandability is solid and credibility has crossed the trust threshold, but topical authority signals haven't accumulated densely enough to push the entity across the deliverability threshold for proactive recommendation. The machine knows who you are and trusts you. It just doesn't advocate for you yet.
The human doing the correction and optimization work is the competitive advantage: the trick is finding where the errors and the opportunities actually hide, fixing the first, and acting on the second.
The errors are surreptitious. The opportunities are non-obvious. Finding both is the work that compounds.
Organize once, feed every mode that exists and every mode to come
The push layer is expanding. The brands that organize their data now (not perfectly, but consistently, and with a system for maintaining it) are building the infrastructure from which every current and future entry mode draws.
The brands still publishing and waiting for the bot (Mode 1) are optimizing for the least advantageous mode in a five-mode landscape, and that disadvantage gap widens with every passing cycle.
This is the seventh piece in my AI authority series.
OpenAI now allows users of ChatGPT to share their device location so that ChatGPT can know more precisely where the user is and serve better answers and results based on that location.
The feature is called location sharing. OpenAI wrote: "Sharing your device location is completely optional and off until you choose to enable it. You can update device location sharing in Settings > Data Controls at any time."
What it does. If ChatGPT knows your location, it can return better local results. OpenAI wrote:
"Precise location means ChatGPT can use your device's specific location, such as an exact address, to provide more tailored results."
"For example, if you ask 'what are the best coffee shops near me?', ChatGPT can use your precise location to provide more relevant nearby results. On mobile devices, you can choose to toggle off precise location separately while keeping approximate device location sharing on for additional control."
Privacy. OpenAI said "ChatGPT deletes precise location data after it's used to provide a more relevant response." Here is how ChatGPT uses that information:
"If ChatGPT's response includes information related to your specific location, such as the names of nearby restaurants or maps, that information becomes part of your conversation like any other response and will remain in your chat history unless you delete the conversation."
Does it work. Well, maybe not as well as you'd expect. Here is an example from Glenn Gabe:
I shared about the "Near Me ChatGPT Update" the other day and just let ChatGPT use my device location. This is supposed to enhance results for local queries. I just asked for the "best steakhouses near me" and several of the restaurants are ~45 minutes away. Both restaurants… pic.twitter.com/gRkMeuzMQt
Why we care. Making ChatGPT's local results better is a big deal for local search and local SEO. Knowing the user's location, and better yet their precise location, can produce more relevant local results.
Hopefully this will result in ChatGPT responding with more useful local results for users.
Google Business Profile (GBP) may be getting shoved down the SERPs by ads and AI Overviews more than ever, but it's still a top source of inbound leads for local businesses, and one of the fastest ways to improve rankings with simple fixes.
Here's a five-step audit to find and fix the gaps most businesses miss.
1. Evaluate Google review velocity and recency
It's a common misconception that the business with the most Google reviews wins in Google Maps rankings. While a high review count provides social proof, Google's algorithm has more of a "what have you done for me lately?" attitude.
The number of reviews you get each month, and how recently your last review was posted, often outweigh the total count for the all-important map pack positions. We call these metrics review velocity and review recency.
Think about it like this: If you have 500 reviews but haven't received a new one since 2024, a competitor with 100 fresh reviews from the last month will likely blow past you.
So, how do you measure your review velocity and recency? Analyze competitors to see how top-ranking businesses perform on those metrics.
Follow these steps:
Run a geo-grid ranking scan: Identify which competitors are outranking you for your top keywords.
Analyze the last 30 days: Note how many reviews they received this month, and when their most recent one was posted.
Benchmark your data: Create a simple table comparing your monthly count and recency.
Recommended tools: Places Scout, Local Falcon, or Whitespark for automated grid scans and review data.
You don't just need more reviews. You need to match or exceed the consistency of top-ranking listings.
You can automate this with Places Scout API data. That's what our agency does, tracking it consistently to keep clients ahead of competitors. Automated charts make it easier to see how you stack up.
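If you want to script the benchmark yourself, the velocity and recency math is simple. This is an illustrative sketch with made-up review dates, not real Places Scout output; the listing names and numbers are hypothetical.

```python
from datetime import date

# Hypothetical review posting dates per listing, e.g. exported from a grid-scan tool.
listings = {
    "Your Business": [date(2025, 11, 2), date(2025, 9, 14), date(2025, 6, 1)],
    "Competitor A": [date(2026, 1, 28), date(2026, 1, 20), date(2026, 1, 9), date(2026, 1, 3)],
    "Competitor B": [date(2026, 1, 30), date(2026, 1, 12)],
}

def velocity_and_recency(reviews, today, window_days=30):
    """Return (reviews posted in the last `window_days`, days since the newest review)."""
    velocity = sum(1 for d in reviews if (today - d).days <= window_days)
    recency = min((today - d).days for d in reviews)
    return velocity, recency

today = date(2026, 2, 1)
for name, reviews in listings.items():
    v, r = velocity_and_recency(reviews, today)
    print(f"{name}: {v} reviews in the last 30 days, newest posted {r} days ago")
```

A table of these two numbers per competitor is exactly the benchmark described above: the listing with 500 stale reviews scores a velocity of 0, which is what the algorithm appears to penalize.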
2. Audit the keywords in your business name
Including keywords in your business name is one of the most powerful local ranking signals. Sometimes a profile will rank in the map pack based solely on its name, beating out businesses with better reviews and higher recency.
Google's algorithm hasn't fully filtered out this type of keyword targeting, so it remains an opportunity. Take this business: only 21 reviews, yet it ranks first in the map pack for an extremely competitive term, thanks to the keywords in its business name.
You can't simply keyword-stuff your name, though. Google can verify your legal name and take action to remove keywords from your profile, or worse, require reverification or suspend it. Your best option is a legal DBA (doing business as) certificate, also known as a trade name or fictitious name certificate in some areas.
For example, if your legal name is "Smith & Sons," you're missing out. Registering a DBA as "Smith & Sons HVAC Repair" allows you to update your GBP name while technically adhering to Google's guidelines.
Competitor analysis: Are your competitors outranking you simply because their name contains the keyword? If yes, you need to take action to match those tactics.
Make it legal: Check your local Secretary of State website. Filing a DBA is an effective SEO tactic for moving from Position 4+ into the map pack for certain keywords.
Update business website: Update your website with the new name. Google uses website content to verify business details and may update your GBP accordingly. Make sure it only finds the new name, not outdated versions.
3. Confirm your primary and secondary categories
Choosing the wrong primary category for your GBP is a leading reason businesses fail to rank. If you're a personal injury lawyer but your primary category is set to "trial attorney," you're fighting an uphill battle to rank for highly competitive "personal injury lawyer" searches.
How to pick the best primary category:
Competitor analysis: Use Chrome extensions like Pleper or GMB Everywhere to see exactly which primary categories the top-ranking businesses are using.
Max out secondary categories: You have 10 total slots. Fill all of them with relevant subcategories.
Check off all relevant services: Under each category, Google lists specific services. Select the ones relevant to your business.
4. Optimize your GBP landing page
Many businesses link their GBP to their homepage and stop there. For multi-location businesses, this is a mistake. You should link to a dedicated local landing page, optimized for your top keywords, that mentions the city your GBP address is in.
Linking your GBP to a hyper-local city page (e.g., /tampa-plumbing/ instead of the homepage) reinforces "entity alignment." When the information on your GBP matches a unique, highly relevant page on your site, Google's confidence in your location increases, often leading to a jump in the local pack. Make sure your GBP landing page is optimized with all your services and links to dedicated service pages to boost your listing for service-specific searches.
Watch out for the diversity update. Sometimes a business ranks well in the map pack, but its website is nowhere to be found in organic results. This is often due to Google's diversity update.
If you suspect you're being filtered out organically, try linking your GBP to a different localized interior page. This is often a quick fix that helps your site reappear in organic search. Here's an example of a client I recently helped beat the diversity update with a simple GBP landing page swap.
5. Map your location and ranking radius
Your business's physical location within the city, and its proximity to the city center, are extremely strong ranking signals. They're not something you can easily manipulate, because it's not always easy to move your office, store, or warehouse. However, you need to know your "ranking radius" and how much room there is to improve rankings for certain keywords within it.
Identify the ranking ceiling in your market. I use Local Falcon's Share of Local Voice (SoLV) metric to do this. If your top competitors only have a 53% SoLV, as in this example, it's unlikely you'll be able to get more than that either.
This shows when you've "maxed out" a keyword and need to target new keywords or open a new location outside that radius. It can also show there's room to improve, and that you need to increase your SoLV score.
Keep in mind that certain keywords are harder to improve based on where your business is physically located. If your map pin sits anywhere outside the Google-defined border of your city, you will struggle to rank for explicit terms like "Plumber Tampa FL," and for searches within the city borders in general. Always do this analysis on a keyword-by-keyword basis.
Tip: In the current local search landscape, expanding your physical footprint and verifying more GBPs is the most reliable way to grow visibility. Max out your current GBPs first, then look for your next location.
This is a strong starting point, but it's just the beginning. From review strategy and category selection to city borders and the diversity update, every detail counts.
Between overreaching ads and ever-expanding AI Overviews, staying proactive with your GBP strategy is the only way to keep your leads flowing from the map pack. Build your GBP foundation, max out your current locations, and strategize new locations to keep your business in the top spot across your service area.
Users will be able to change their username only once every 12 months. Plus, they won't be able to delete their new email address for that period of time.
Windows 11's in-box apps and system interfaces are currently a mishmash of native and web-based interfaces, but it sounds like that might soon be changing.
Toshiba Electronic Devices & Storage Corporation ("Toshiba") has announced the M12 Series of 3.5-inch nearline hard disk drives (HDDs) for hyperscale and cloud service providers operating large-scale data centers. The new series uses Shingled Magnetic Recording (SMR) technology to deliver storage capacities ranging from 30 to 34 TB. Sample shipments have begun, and Toshiba also plans to begin sample shipments of M12 drives that use Conventional Magnetic Recording (CMR) to deliver capacities of up to 28 TB in the third quarter of 2026.
Today is World Backup Day, the annual international initiative to remind companies and individuals of the importance of backing up and protecting their data. That need is now greater than ever, as the constant expansion of digital services and video content distribution, the widespread adoption of cloud services, and, most recently, the increasing use of data-hungry AI and data science, are driving forward immense growth in the volumes of data generated and stored worldwide.
NVIDIA is reportedly experiencing manufacturing issues with its next-generation "Rubin Ultra" GPU design, one of the company's most ambitious chip development projects, due to the limitations of modern packaging technology. The world's largest company is already shipping customer samples of the standard "Rubin" GPUs, with mass shipments set to begin this summer. However, the current roadmap for the upgraded "Rubin Ultra" design may be encountering technological limitations, as NVIDIA's design goals are too ambitious for TSMC's packaging capabilities. Reportedly, NVIDIA plans to double the regular "Rubin" two-die package with 8 HBM4 modules into a new "Rubin Ultra" package that will include four silicon dies and 16 HBM4E modules in a single package. This configuration is scheduled for 2027, but the sheer volume of silicon may be too much for TSMC's packaging, according to Global Semi Research.
In a typical CoWoS package, TSMC usually combines multiple smaller dies and multiple HBM memory modules into a unified package that supports the entire AI build-out. However, with the ambitious "Rubin Ultra" design, NVIDIA planned to use CoWoS-L, which was expected to handle the design and concept that "Rubin Ultra" was based on. It is rumored, however, that in a 2+2 die package (meaning four dies, as in this architecture) TSMC is encountering warping issues. The package, which includes a substrate, is bending in multiple directions, causing the compute dies of "Rubin Ultra" to not make complete contact with the underlying substrate. This instability means that TSMC has to explore alternatives within its packaging portfolio. One of these alternatives is a panelized approach called CoPoS, which stands for Chip-on-Panel-on-Substrate.
QNAP Systems, Inc., a leading computing, networking, and storage solution innovator, today announced the launch of the QSW-M7230-2X4F24T, a new L3 Lite managed 100 GbE switch designed for enterprise network upgrades, high-performance storage environments, large-scale media production, virtualization, and AI-driven workloads. The new switch enables organizations to build a scalable 100 GbE core network while maintaining cost efficiency and protecting existing infrastructure investments.
As data-intensive applications continue to accelerate, from AI computing and virtualization to collaborative media workflows, enterprises are increasingly challenged to evolve beyond 10GbE networks without incurring disruptive, large-scale replacements. The QSW-M7230-2X4F24T addresses this transition by providing a flexible, multi-speed architecture that allows enterprises to introduce higher-speed connectivity where it matters most, while expanding the core network over time.
Since the announcement of their collaboration at Computex 2025, Noctua, a leading quiet PC cooling brand, and Asetek, a pioneer in all-in-one (AIO) liquid cooling, have continued to advance their flagship AIO liquid coolers. The products have now successfully completed the Production Validation Test (PVT) phase, confirming performance and manufacturing readiness ahead of the planned Q2 2026 launch.
The Asetek Emma (G8) V2 pump operates at a nominal speed of approximately 3,600 RPM (±300 RPM). Through collaboration with Noctua, several key performance aspects have been enhanced. Firstly, a triple-layer noise-reduction pump cover reduces both airborne noise and structure-borne vibrations. Secondly, a dedicated mode switch allows users to choose between three different pump speed profiles to fine-tune performance-to-noise characteristics.
Advantech (TWSE: 2395), a global leader in IoT intelligent systems and embedded platforms, today announced the expansion of its SQRAM DDR5 7200 MT/s industrial memory module series. Designed to meet the escalating data demands of Edge AI, the new modules offer a 12.5% performance increase over previous generations and a groundbreaking 64 GB per-module capacity, setting a new benchmark for stability and scalability in outdoor deployments.
12.5% Faster, Up to 64 GB per Module
The DDR5 7200 MT/s delivers a 12.5% performance increase compared to the previous DDR5 6400 generation. In addition to higher bandwidth, each module supports up to 64 GB capacity using 32 Gb IC technology. This enables AI PCs and high-end workstations to scale system memory up to 256 GB, fully addressing the growing demands of data-intensive Edge AI and computing applications.
While the upgrade is technical in nature, its implications are broad. If widely adopted, it could allow an Android user and an iPhone owner to move a conversation from text to video within their native messaging apps, with no third-party links or app downloads required. GSMA describes this as a...
Since Apple introduced new MacBook Pro models featuring the M5 Pro and M5 Max processors earlier this month, YouTuber Andrew Tsai has analyzed their performance in dozens of high-end PC games. Although MacBooks are not traditionally seen as gaming devices, Tsai's results suggest that the latest models can handle numerous...
An attacker compromised the npm account of a lead Axios maintainer on March 30, and used it to publish two malicious versions of the widely used JavaScript HTTP client library.
The Greatest Expedition is a live reality adventure show that showcases the real world up close while traveling across the globe by motorcycle. Two riders, one female and one male, travel each continent in a month on a motorcycle provided by the company, then move on to another continent after completing their journey. There's prize money for each successful continent trip, and the couple that completes all continents wins a grand prize. The entire journey is recorded live, with interviews of interesting people they meet shared daily and a weekly episode of each couple's journey.
Plot Party turns your ideas into visual storyboards and videos in minutes. Its AI agent selects the right models and keeps characters, styles, locations, and assets consistent across scenes. Build and tweak shots on a canvas, then polish with a native editor for trimming and subtitles. Create single stories, expand into a series, and publish worlds to engage your audience.
Chinese-speaking users are the target of an active campaign that uses typosquatted domains impersonating trusted software brands to deliver a previously undocumented remote access trojan named AtlasCross RAT.
"The operation covers VPN clients, encrypted messengers, video conferencing tools, cryptocurrency trackers, and e-commerce applications, with eleven confirmed delivery domains impersonating
The cybersecurity landscape is accelerating at an unprecedented rate. What is emerging is not simply a rise in the number of vulnerabilities or tools, but a dramatic increase in speed. Speed of attack, speed of exploitation, and speed of change across modern environments.
This is the defining challenge of the new era of digital warfare: the weaponization of Artificial Intelligence. Threat actors
AI search engines like ChatGPT, Google AI Mode, and Perplexity are changing how consumers discover and purchase products online. If your product pages aren't optimized for these AI assistants, you could be missing out on a growing source of traffic and revenue.
The challenge? AI assistants don't evaluate product pages in the same way traditional search engines do. They need to fully understand your products so they can confidently recommend them to different users with different needs.
To help you assess how well your product pages are optimized for AI search, here's a simple scorecard covering the six most important factors.
1. Product specifications
Does the product page clearly display the product's attributes and specifications?
AI assistants need clearly stated specifications to better understand your products and match them to customer needs. If a shopper asks an AI assistant for "an airline-friendly crate for a 115-pound dog," the AI must be able to see the maximum weight limit of a product before it will recommend it. Without clear specifications, some products won't get recommended, even if they're actually a perfect match.
Amazon does this really well, and it's likely one of the many keys to its strong performance in AI search. Just look at all the helpful specifications clearly laid out on its product pages.
Action item: Go through your product pages and make certain all applicable specifications are clearly displayed. Don't bury them in the main product description or other marketing copy. Clearly lay them out in a structured table or bulleted list.
2. Unique benefits
Are the product's unique benefits clearly described?
AI needs to understand both what makes your product stand out and why your products should be recommended over the competition. If a product page reads like every other industry website, AI assistants have no compelling reason to recommend the listed products.
Think about it from the AI's perspective: If a user asks "what's the best L-shaped sofa," the AI will look for products with clear differentiators (hidden storage, machine-washable, modular parts, durability, etc.). The characteristics that make your product stand out should be explicitly stated on the page.
Here's a great example from Home Reserve. Their product pages have a section called "Key Features" that lists the unique selling points that separate them from the competition.
Action item: Make sure your product pages clearly state what makes them better and why it matters to the customer. Keep your key features specific. Generic selling points like "high-quality craftsmanship" or "premium materials" are too vague and don't give AI assistants enough information to establish a clear differentiation.
3. Use cases and audience
Are the product's intended use cases and audience clear?
AI assistants don't match products to keywords; they match products to people and their unique needs. When a user asks ChatGPT, "what's the best desk for a small apartment," the AI looks for products intended for compact spaces, small rooms, or apartment living.
If a product page only describes the deskβs dimensions without connecting them to a particular use case, AI assistants may not recommend the product when users ask about those scenarios.
Any given product could have a multitude of use cases and audiences. A standing desk could be ideal for remote workers, people with back pain, gamers, or small business owners outfitting a home office. If a product page only speaks to one of these audiences, it might not get recommended to the others in AI search.
Action item: For each product, include the top three to five specific use cases or audience segments on the page. Go beyond demographics and think about situations, pain points, and goals.
4. FAQ content
Does the product page include an FAQ section answering common questions about the product?
AI assistants always try to connect products with the right buyer. When a user asks a question like, "what's the best waterproof sealant for a flat roof," the AI looks for information on product pages demonstrating they're a good fit for the particular use case.
This is what makes FAQ content so valuable. A well-structured FAQ section can give AI assistants additional confidence that the product is a good fit for the user and worthy of a mention. The more specific and detailed your FAQ answers are, the more prompts your product can match within AI search.
For example, Liquid Rubber sells mulch glue and waterproof sealants. They do a great job of providing a clear list of frequently asked questions on their product pages.
This type of FAQ content can help their products get recommended more often when users ask ChatGPT specific questions:
What's the best VOC mulch glue?
Can I get mulch glue that will last up to 12 months?
Is there a mulch glue that delivers within one week?
Action item: Review your customer support inquiries, product reviews, competitor pages, and relevant Reddit threads to identify the most common customer questions. Then add these questions directly to your product pages with clear and concise answers.
5. Ratings and reviews
Does the product page display customer ratings and review counts?
AI assistants will recommend highly rated products with strong reputations. A product with 500+ reviews and a 4.8-star rating is a much safer recommendation than a product with zero reviews or a low rating.
Just ask ChatGPT for product recommendations, and you'll see the product ratings front and center. Take, for example, the prompt, "What's the best medium roast caramel flavored coffee?"
It's clear that ChatGPT relies heavily on product reviews and only recommends products with a high rating. When you click on any of these products, you'll see that product ratings and the number of reviews are clearly displayed on the product page.
Note: Your product's rating in ChatGPT may differ from what's on your product page. This is because ChatGPT calculates an aggregate rating across multiple merchants (e.g., Walmart, Target, etc.), rather than only pulling from your product page.
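To illustrate why the numbers can diverge, here is one simple way an aggregate could be computed: a review-count-weighted average across merchants. The merchant names and figures are made up, and the weighting scheme is an assumption for illustration only; OpenAI has not published how ChatGPT actually combines ratings.

```python
# Hypothetical (rating, review_count) pairs for the same product across merchants.
merchant_ratings = {
    "your-store.com": (4.8, 120),
    "walmart.com": (4.5, 900),
    "target.com": (4.6, 300),
}

def aggregate_rating(ratings):
    """Review-count-weighted average rating, plus the combined review count."""
    total_reviews = sum(n for _, n in ratings.values())
    weighted_sum = sum(r * n for r, n in ratings.values())
    return round(weighted_sum / total_reviews, 2), total_reviews

rating, count = aggregate_rating(merchant_ratings)
print(f"Aggregate: {rating} stars across {count} reviews")
# prints: Aggregate: 4.55 stars across 1320 reviews
```

Note how the larger merchants' volumes pull the aggregate well below the 4.8 shown on the hypothetical brand site, which is consistent with the discrepancy described above.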
But having a strong rating isn't enough; you need a lot of reviews as well. I recently reviewed 1,000 ecommerce-focused prompts and found that the median number of reviews was 156. So, if you want to increase your chances of getting recommended by ChatGPT (and other AI assistants), aim for at least 150+ product reviews.
Action item: Make sure your product pages clearly display customer ratings, review counts, and (ideally) some actual reviews. Third-party review platforms like Yotpo, Judge.me, and Shopper Approved can solicit product reviews from customers for you.
6. Structured data
Does the product page include structured data for price, availability, reviews, and other key attributes?
It's easier for AI search engines to understand information presented in a clear structure (e.g., tables, lists). But there's nothing more structured than the JSON-LD format for structured data (also known as schema markup).
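For reference, a minimal Product schema in JSON-LD looks like the sketch below, generated here with Python so the markup can stay in sync with catalog data. The product values are placeholders, and the properties shown are a small subset of what schema.org's Product type supports.

```python
import json

# Hypothetical catalog record; swap in your real product data.
product = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Airline-Approved Dog Crate XL",
    "description": "Travel crate for dogs up to 120 lb.",
    "sku": "CRATE-XL-01",
    "aggregateRating": {
        "@type": "AggregateRating",
        "ratingValue": "4.8",
        "reviewCount": "312",
    },
    "offers": {
        "@type": "Offer",
        "price": "249.99",
        "priceCurrency": "USD",
        "availability": "https://schema.org/InStock",
    },
}

# Embed the output in the page head inside:
# <script type="application/ld+json"> ... </script>
print(json.dumps(product, indent=2))
```

Generating the block from the same record that renders the visible page helps avoid the inconsistency problem: the price and availability in the markup can never drift from what shoppers (and crawlers) see on the page.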
There's a common claim in AI SEO that structured data is some kind of magic bullet for AI visibility. The reality is more nuanced.
Structured data experiment
An interesting experiment conducted by SEO consultant Dan Taylor tested the impact of structured data on AI search. He included a physical address for a made-up company in the JSON-LD structured data, but didn't include it anywhere in the page content itself. Then, when he asked ChatGPT for the address, it still pulled it from the structured data.
This experiment shows that AI assistants are indeed crawling structured data. But theyβre not necessarily parsing it the same way a traditional search engine would. Instead, theyβre simply treating it as another source of text on the page.
If the content in your schema is relevant to a userβs prompt, AI assistants will pick it up. But it doesnβt matter whether the schema is valid or completely made up.
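To make the JSON-LD in question concrete, here is a minimal sketch that assembles a schema.org Product object covering the attributes discussed above (price, availability, aggregate rating). The product name, price, and rating values are hypothetical placeholders, not taken from the article:

```python
import json

def build_product_jsonld(name, price, currency, rating, review_count, in_stock=True):
    """Assemble a schema.org Product object as a JSON-LD dict.

    Field names follow the schema.org vocabulary; all concrete
    values passed in are illustrative placeholders.
    """
    return {
        "@context": "https://schema.org",
        "@type": "Product",
        "name": name,
        "offers": {
            "@type": "Offer",
            "price": str(price),
            "priceCurrency": currency,
            # schema.org availability is expressed as a URL enum
            "availability": "https://schema.org/InStock"
            if in_stock else "https://schema.org/OutOfStock",
        },
        "aggregateRating": {
            "@type": "AggregateRating",
            "ratingValue": str(rating),
            "reviewCount": str(review_count),
        },
    }

# Hypothetical product; embed the output in the page as
# <script type="application/ld+json">…</script>
markup = build_product_jsonld(
    name="Caramel Medium Roast Coffee",
    price=14.99, currency="USD",
    rating=4.7, review_count=212,
)
print(json.dumps(markup, indent=2))
```

Since AI assistants appear to read this block as plain text, the practical takeaway is simply to make sure the values in it match (and don't contradict) the visible page content.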
Where structured data helps most
So, if AI assistants treat structured data like any other text, is it still worth adding to your product pages? The short answer is "yes."
Presenting key product information in a clear, well-structured format helps AI assistants understand your product pages. But the real advantage shows up in the product cards embedded within AI responses.
Structured data feeds Google's Knowledge Graph of products, which can directly influence product recommendations across Google AI Overviews, AI Mode, and even ChatGPT.
With the rise of agentic commerce, product data will only become more important as AI agents rely on it to compare, evaluate, and even purchase products on behalf of users.
Here's a quick overview you can use to audit your product pages:
Once you've scored your highest-priority pages, any gaps become the priority on your AI product optimization roadmap. Tackle the "No" items first, since those represent the biggest missed opportunities, then work on upgrading the "Partial" scores.
This type of product optimization is still a blind spot for many ecommerce brands, which means every factor you improve is a chance to get recommended where they don't. The sooner you close these gaps, the harder it becomes for competitors to catch up.
Proton VPN is expanding its global footprint by launching new servers in five underserved countries across South America and the Caribbean, making it easier to bypass censorship and stay secure online.
Microsoft retired its thinnest design just as the silicon caught up. With the MacBook Neo disrupting the market, it's time to bring back the fanless Surface Pro X.
A Β£5 Xbox 360 devkit has revealed a pre-release GTA IV build from 2007, uncovering cut content including ferries, a zombies mode, and early gameplay changes.
Intel Core 200 "Bartlett Lake" is probably the most interesting processor gamers can't buy. Built on the Intel 7 node and designed for Socket LGA1700, "Bartlett Lake" is a non-Hybrid, pure P-core chip on a monolithic die, with 12 "Raptor Cove" P-cores and no E-core clusters. The 12-core/24-thread chip was launched earlier this month as an edge AI PC processor exclusive to the commercial and industrial PC OEM markets. It is not drop-in compatible with consumer Intel Z790 chipset motherboards, or at least that was the plan.
A motherboard UEFI firmware mod by "kryptonfly" got a consumer ASUS Z790-AYW OC motherboard to POST with an Intel Core 9 273QPE "Bartlett Lake" processor. The modder used Claude AI to mod the UEFI firmware of the board without tripping safeguards that prevent the motherboard from booting with modded firmware. The 273QPE is a 12-core/24-thread pure P-core processor with 2 MB of L2 cache per core, and 36 MB of shared L3 cache. Its uncore components and iGPU are carried over from "Raptor Lake-S." The 273QPE has a base frequency of 3.30 GHz, an all-core boost frequency of 5.30 GHz, and a single-core TVB frequency of 5.90 GHz. The chip has 125 W processor base power, and 250 W maximum turbo power. You can watch kryptonfly's firmware mod video from the source link below.
ASUS today announced ASUS ExpertBook P5 G1, a powerful and versatile business laptop with 14-inch and 16-inch display options, designed to support the productivity needs of modern professionals. Combining dependable performance from up to an Intel Core Ultra 7 processor with a sleek, lightweight design, ASUS ExpertBook P5 G1 is engineered to deliver a reliable computing experience in offices, in hybrid work environments, and for professionals on the move.
ASUS ExpertBook P5 G1, with its choice of 14-inch or 16-inch form factors, provides a flexible workspace in a highly portable design. The lightweight chassis starts at just 1.29 kg, making it easy to carry between meetings, offices or travel destinations. A 70Wh battery supports extended productivity throughout the workday, while the durable design meets MIL-STD-810H US military-grade standards, ensuring reliability in everyday business environments.
Quinnipiac University's survey, which questioned 1,397 adults, found that while Americans are increasing their use of AI, their views on artificial intelligence are becoming more negative.
"Some of those fixed dramatic horizontal lines do become worse because people are literally spending hours on their phone and looking down," Dr. Melanie Palm, a cosmetic dermatologist in Solana Beach, California, told The Wall Street Journal.
Get this MSI 27-inch 4K QD-OLED monitor for just $399.99, an all-time low price. Enjoy vibrant colors, true blacks, a 240Hz refresh rate, and ultra-fast response time, perfect for gaming and productivity alike.
Research firm Omdia says that the PC market grew by 3% in the last quarter of 2025 due to holiday sales and the Windows 10 EOL. Unfortunately, 2026 is expected to be a rough year, with a projected 13% contraction in sales due to chip shortages.
You can grab 32GB of fast DDR5-6400 Corsair Pro Overclocking RAM for just $309.99 right now. That's a massive $140 discount on its (current) list price, making it the cheapest 32GB DDR5 kit you can buy right now in a market left in ruins by the AI boom.
An enthusiast has managed to get Intel's non-consumer Bartlett Lake CPU running on a regular Z790 motherboard, thanks to AI. Claude modded the original BIOS of the board to detect a Core 9 273QPE and boot with it, but the setup hasn't gotten past the POST screen yet, and the modder is currently facing various error codes.
It's time to celebrate World Backup Day 2026 with these huge promotional deals on storage, from discounted hard drives to SSD cloners, flash drives, and more.
SprintDrip helps startups and small teams plan sprints, manage work, and stay aligned without the usual agile overhead. Set up fast, run async standups and retros, and replace status meetings with quick updates and real-time collaboration. Its AI copilot, Xia, turns updates and project data into summaries, insights, and actionable roadmaps, so you see what's working and ship faster. Track progress and performance without micromanaging, with a simple workflow built for modern teams.
Bondary is an AI dating copilot that helps you see who someone really is before things get serious. Unlike general AI, Bondary creates profiles and tracks your dating life over time, remembering what you said weeks ago, connecting dots across conversations, and surfacing what you might be choosing to overlook.
Avatar and Titanic director James Cameron has produced a brand-new NatGeo documentary β and if you're looking for a double bill binge, he's got the perfect recommendation.
RPCS3 now allows game resolution changes without game restarts. RPCS3 is the best place to play many PlayStation 3 classics. Why? The simple answer is that many PS3 games are playable there with higher resolutions and framerates than their original PS3 versions. That means many PS3 games now look better and run smoother on a […]
World Backup Day is here to remind everyone to keep their precious files safely backed up, and one of the best ways to do that for Xbox is through various Storage Expansion cards that are now on sale.
Microsoftβs Copilot Cowork is now available through the Frontier program, giving users access to Anthropic's Claude Cowork AI model for smarter task management.
Console Host is gaining several features that have been available on Windows Terminal for a while, including better graphics support, scrolling performance improvements, and more.
Intel earlier this month debuted the Core Ultra 7 270K Plus and Core Ultra 5 250K Plus desktop processors at launch prices of $299 and $199, respectively. At the time, the company hadn't launched "KF" variants of the two chips, which lack integrated graphics and are priced around $15 less than their regular "K" counterparts. It turns out that Intel is planning to launch the Core Ultra 5 250KF Plus, while there's no sign of a "Core Ultra 7 270KF Plus." The 250KF Plus is almost identical to the 250K Plus, except it comes with the iGPU disabled, something you don't need if you plan on using a discrete graphics card.
As with most "KF" SKUs from the past, the Core Ultra 5 250KF Plus will be priced around $15 less than the regular Core Ultra 5 250K Plus. Intel's own 1,000-unit tray pricing for the chip ranges between $174 and $184. Given how tight memory pricing is, and given that you'll need an aftermarket cooler, the $15 saving might come in handy. Then again, integrated graphics are nice to have if your graphics card is bricked by a burnt power connector and you need something to light up your screen for troubleshooting or during an RMA. The Core Ultra 5 250KF Plus is based on the "Arrow Lake" microarchitecture and packs a 6P+12E core configuration, with 3 MB of L2 cache per P-core, 4 MB of shared L2 cache for each of the three E-core clusters, and 30 MB of L3 cache shared among the six P-cores and three E-core clusters. The Core Ultra 5 250KF Plus should start selling from April 3, 2026.
Zero Latency VR, the undisputed leader in immersive entertainment and the mastermind behind the world's largest true location-based free-roam VR network, has announced a new collaboration with CD PROJEKT RED to bring the award-winning universe of Cyberpunk 2077 into its warehouse-scale VR format.
Cyberpunk 2077 is an open-world, action-adventure role-playing game set in Night City, a dark future megalopolis obsessed with power, glamour, and body modification. Players take on the role of a cyber-enhanced mercenary named V, who faces the most powerful forces in the city in a fight for glory and survival. Created by the studio behind The Witcher series of games, Cyberpunk 2077 has reached a global audience since its launch in 2020, earning acclaim for its storytelling, gameplay, and the immersive nature of its open world.
The CZ.NIC Association, the Czech national domain administrator, presents Turris Omnia NG Wired - a rack-mountable model offering 10 Gbps connectivity and the Turris OS operating system based on OpenWrt/Linux. It builds on the security principles of the Turris project and features a quiet, passive-cooling design. The device is intended for businesses, institutions, and demanding users seeking a powerful and sustainable network foundation while supporting European technologies, open source, and digital sovereignty.
Designed for rack installation: 10G/2.5G connectivity in a compact package
Turris Omnia NG Wired is built for racks and spaces like server rooms and network cabinets. Wi-Fi can be provided by separate access points, while the router stays in the backroom.
The Behind The Scenes Trailer offers an in-depth look into the creation of Masters of Albion. It features personal and detailed interviews with Peter Molyneux, Mark Healey and Russ Shaw, as they reflect on their history as collaborators and the creative processes behind MoA, all supported by brand new in-game capture.
Created entirely in-house, the documentary showcases previously unseen areas of Albion's world, behind-the-scenes footage of key development moments, and candid stories from the team's past. Alongside this, viewers can expect new gameplay insights, a closer look at the game's evolving systems, and a tone that reflects both the humour and ambition of the studio… including, at one point, a rogue chicken.
MSI announced that its next-generation AI platform, MSI EdgeXpert, has officially become an NVIDIA-Certified System. This validation ensures the hardware has undergone rigorous testing by NVIDIA engineers for performance, functionality, scalability, and security. Most importantly, it brings MSI EdgeXpert into the supported ecosystem of NVIDIA AI Enterprise (NVAIE), strengthening its capability to support enterprise-grade generative AI, AI agents, and high-performance edge AI workloads.
Establishing Hardware Trust Standards through Rigorous Testing
NVIDIA-Certified Systems establish a broad hardware trust standard. MSI EdgeXpert has passed extensive evaluations, including deep learning training with TensorFlow and PyTorch, high-throughput inference with TensorRT and Triton, and system-level security testing. While the certification covers a wide range of hardware reliability, its support for NVIDIA AI Enterprise is one of the most significant values, helping organizations move efficiently from proof of concept to real-world deployment.
NVIDIA's latest GeForce NOW update has introduced 90 FPS streaming to various VR headsets, and Apple users are in for a treat. For the Apple Vision Pro headset, GeForce NOW will deliver 90 FPS at 4K resolution, offering a noticeable improvement for anyone using NVIDIA's game streaming service with their Vision Pro headset. The Apple Vision Pro features two displays, one for each eye, capable of running at a resolution of 3,660 × 3,200 and up to 120 Hz. It's great news that NVIDIA has updated its GeForce NOW service to officially support 4K resolution at 90 FPS. While it's unclear how many gamers use the Apple Vision Pro as their gaming display, NVIDIA's decision to add support suggests the number is significant. Available as part of the Ultimate package, members can stream at 90 FPS on other VR headsets as well, but at lower resolutions.
Additionally, everyone can stream at 1080p and 90 FPS, while 1440p is reserved for Pico and Meta Quest. Currently, only the Apple Vision Pro can handle 4K and 90 FPS output from GeForce NOW. Although not many games can run at 4K resolution and 90 FPS on their own, NVIDIA's DLSS technology can boost the frame rate and deliver impressive visuals, ensuring a smooth 4K mode at 90 FPS. Finally, NVIDIA has also scheduled the rollout of H.265 video decoding support for browsers, which will greatly enhance streaming efficiency and visual quality from NVIDIA's virtual gaming server.
Take your files on the go with this classic-looking SSD enclosure from Hagibis in celebration of World Backup Day. Just $27.99 nets you this USB-C 3.2 Gen 2 M.2 NVMe SSD enclosure, shaped like an old floppy disk.
Bitcoin miners are pivoting to AI infrastructure amidst the conflict in Iran. The BTC network has seen its first quarterly hashrate drop in nearly six years.
Roasted helps you get interviews by analyzing your resume, fixing issues, and showing exactly what to improve. It offers an AI resume builder, voice-to-resume, ATS-friendly templates, PDF export, public sharing, and detailed feedback. You can create tailored CVs and cover letters, match jobs, and apply with one click. Job Autopilot searches, customizes, and applies on your behalf while you track progress.
Verve Intelligence delivers objective startup idea validation in about 30 minutes. Use it to size markets, map competitors, define target segments, assess risks, and receive a "what would work" persona, MVP, and technical scope. It also provides guides on interpreting signals that match historical patterns.
It runs 14 parallel research streams, including adversarial agents that stress-test assumptions, then compiles a 50+ page investor-grade report with a GO, PIVOT, or NO-GO verdict, cited sources, and transparent scoring. Access AI debates, rationale, a personalized industry glossary, and more.
Noctua's upcoming CPU liquid cooler has passed its Production Validation Test and is ready for its Q2 launch. Noctua and Asetek have confirmed that their upcoming all-in-one (AIO) CPU liquid cooler is ready for its Q2 2026 launch. The CPU cooler has passed Product Validation Testing, meeting the cooling requirements of both companies, and is […]
Acer today introduced the FA300, a mid-range M.2 NVMe Gen 5 SSD. The drive brings PCIe Gen 5 speeds to a wider audience, and is based on a DRAMless controller. The company doesn't specify the controller type. Popular DRAMless Gen 5 controllers include Phison E31T and Silicon Motion SM2504XT. The FA300 comes in 1 TB and 2 TB capacity variants, which differ in performance. Both variants offer up to 11 GB/s of sequential reads, but while the 1 TB variant offers up to 9.7 GB/s sequential writes, the 2 TB variant goes a bit further, posting up to 10 GB/s sequential writes.
In terms of random access performance, the 1 TB Acer FA300 offers up to 1.4 million IOPS 4K random reads, with up to 1.6 million IOPS 4K random writes, while the 2 TB variant offers up to 1.7 million IOPS for both 4K random reads and writes. The company does not specify the 3D NAND flash type used. The 1 TB model is rated for 750 TBW (TB written) write endurance, while the 2 TB model offers 1,500 TBW. Both models are backed by 5-year warranties. Acer did not specify pricing, because it tends to be dynamic in the current market environment, but expect the FA300 to be among the more affordable Gen 5 SSDs.