collaborAItr lets you run multiple AI models in parallel to plan, research, and execute tasks with a single prompt. View side-by-side responses, compare perspectives, and click Continue as your AI team learns from results and refines the plan. Connect to leading models like ChatGPT, Claude, Gemini, Grok, and 40+ others. Start free with no credit card, keep your data private, and use flexible tools to consolidate, fact check, and summarize responses.
Seller Stacked is a directory and newsletter created by a real store operator that reviews AI tools for e-commerce sellers and offers free calculators. It shares honest recommendations with no sponsored placements and focuses on actionable results. The site rates tools on ease of use, value, and workflow fit, and publishes weekly guides, comparisons, and real-world tips to help you choose and apply the right tools.
WavKong introduces a router using Digital Pre-Distortion to improve Wi-Fi consistency, questioning whether industry focus on peak speeds addresses real-world performance.
An Avride autonomous vehicle near Austin hit a duck, drawing backlash from the community. "It didn't slow down or hesitate at all, just steamrolled right through," according to a witness.
AMD confirms the MSRP of its Ryzen 9 9950X3D2 Dual Edition processor. AMD's David McAfee has unveiled the official MSRP for the Ryzen 9 9950X3D2 Dual Edition CPU, setting it at $899.99 in the US. This makes the Ryzen 9 9950X3D2 Dual Edition AMD's most expensive AM5 CPU to date. This price is $200 above […]
The Lenovo Yoga 7 is a laptop from 2024 built with advanced AI-empowered hardware for powering through office work and creative productivity tasks that still holds up in 2026, and it's now on sale at 27% off.
Xbox is finally giving Achievements the kind of upgrade fans have been asking for for years: new features, modern systems, and even a dedicated internal team focused entirely on community feedback.
Snapdragon X2 is Qualcomm's second-generation Windows PC platform, introduced in September 2025, with devices expected in the first half of 2026. The family includes the Snapdragon X2 Elite and the higher-tier X2 Elite Extreme, which targets more demanding workloads.
According to two sources cited by the New York Post, Ghost Murmur can detect heartbeats from miles away. One source described it as being able to hear a voice in a large stadium, "except the stadium is a thousand square miles of desert." The sources also claimed that the tool...
Noir Prompt is a prompt manager for people who use AI generation tools like Midjourney, DALL-E, ChatGPT, Runway, and more. Save your prompts, tag them, organize by type, and find them instantly. No more digging through notes apps or Discord threads wondering what you typed weeks ago.
Every edit is saved automatically so you can roll back to any previous version. Build reusable templates with variable placeholders and swap out subject, style, or mood on the fly. Free to start, it works for image, video, and text prompts, all in one place.
Cybersecurity researchers have flagged a new variant of malware called Chaos that's capable of hitting misconfigured cloud deployments, marking an expansion of the botnet's targeting infrastructure.
"Chaos malware is increasingly targeting misconfigured cloud deployments, expanding beyond its traditional focus on routers and edge devices," Darktrace said in a new report.
New Rowhammer attacks on Nvidia GPUs enable full system compromise by manipulating memory, exposing risks in shared environments despite limited real-world use.
Amazon told Kindle owners this week that it's ending support for all e-readers released in 2012 or earlier, making them virtually unable to load any new content.
Security researchers exposed a spying campaign by a hack-for-hire group that used Android spyware and phishing to steal iCloud credentials and hack victimsβ devices.
Reference to something called SteamGPT has appeared in a datamined Steam update, which could mean that Valve is bringing in AI to help speed up support requests and to buff anti-cheat. Here's what you need to know.
Valve seems to be incorporating more AI integration into its Steam gaming platform to assist gamers with various aspects of their experience. In the latest development of this AI integration, Valve is reportedly developing the SteamGPT AI system to assist with customer support queries, while also integrating many additional features into the system. As you know, Valve's Steam platform receives thousands of support questions daily, ranging from refunds and platform issues to payment processing problems and many other inquiries. Valve's support staff is often overwhelmed, especially during major sales events. If Valve creates a customized AI system for chatting, support, and other infrastructure tasks, the company could alleviate a significant portion of the daily issues.
Additionally, recent source code leaks mention some connection to Valve's Trust systems, which enhance matchmaking quality by grouping players with similar levels of trustworthiness in games like Counter-Strike 2. This is an algorithmic process where an AI system could improve grading, as AI can naturally solve these tasks by grouping players more effectively than any custom algorithm. Furthermore, it could also detect cheating patterns performed by players and activate the anti-cheat system. However, while an AI system can assist with customer support queries, it may still make errors, necessitating human oversight to ensure the validity of support resolutions.
It seems as though Epic Games is working on a way to increase player time and interest in Fortnite, following a massive round of layoffs at Epic Games that was blamed on a downturn in spending in the battle royale shooter. In the announcement of the layoffs, Epic Games CEO Tim Sweeney said that the studio would be looking to deliver "Fortnite magic" in upcoming updates, and shaking up the game with a handful of new game modes to see what sticks (potentially diversifying the player base in the process) may be one way of doing that. Frequent leaker @Loolo_WRLD on X data-mined a recent Fortnite update and discovered references to at least seven new game modes coming to Epic Games's flagship game.
The references, which are codenames, include "WickedSmoke," a social-first game mode similar to the previously available Delulu game mode that relied on proximity chat and on-the-fly co-operation or rivalries; UnableRoman, a team deathmatch game mode; Rivalry, a Lego mode with mechs; CurioBox, another Lego game mode; MatchMist, a new map or mode for the fast-paced Reload game mode; BabyCorgi, a new game mode seemingly related to the tactical FPS, Fortnite Ballistic; and Bulldog and Husky, which are related to a Disney mode. It's unclear when these new game modes are slated to launch, if at all, but it stands to reason that if they're already in the Fortnite update logs, they should be making their way to the mainline game at some point soon.
NuPhy has completed its Air V3 lineup by adding two new low-profile mechanical keyboards, the Air 65 V3 in a 65% layout and the Air 100 V3 as a full-size option joining the 75% model that launched last year. Both keyboards use low-profile Gateron x NuPhy MX-style switches with 3.5 mm total travel, available in 42 gf light linear (Blush nano), 45 gf linear (Red nano), and 50 gf tactile (Brown nano) variants. Construction is aluminium on top and PC (Polycarbonate) on the bottom, with a PC plate and gasket mount for a softer typing feel. Keycaps are double-shot PBT, and both keyboards are hot-swappable with N-key rollover and north-facing RGB with 20 backlight modes. Connectivity covers all three modes, 2.4 G and wired USB-C both at 1000 Hz polling, and Bluetooth 5.0 at 125 Hz.
The NuPhy Air 65 V3 measures 318.9 x 109 x 13.2 mm and weighs 635 g, with plate-mounted stabilizers and a 2500 mAh battery rated for 15 to 67 hours with lighting on and up to 761 hours with no lighting. The Air 100 V3 has a 380.6 x 133.6 x 13.2 mm footprint at 950 g and features a 4000 mAh battery good for 24 to 73 hours lit up or up to 1000 hours with lights off. Both offer three typing angles at 4°, 8°, and 10°, and come in Nova White and Nebula Dark color variants. A few weeks after announcing the Air 75 V3, NuPhy faced some criticism over its decision to ship the keyboard with only a tall rotary knob by default. The company has since revised the bundle, with pre-orders now including both a high-profile knob and a low-profile variant with a translucent ring, along with the knob base. All components are color-matched to the selected keyboard finish. The Air 65 V3 is priced at $129.95 and the Air 100 V3 at $149.95.
Lenovo has announced the ThinkPad X13 Gen 7 in Japan, featuring updated processors while keeping the same external design. It's the smallest and lightest ThinkPad in the lineup, starting at 936 g with a 13.3-inch screen. For this refresh of the X13 Gen 6 from last year, Lenovo is offering Intel Core Ultra 300 (Panther Lake) and AMD Ryzen AI PRO 400 series processors, keeping the same dual-platform approach of the Gen 6. The Intel side gets the bigger upgrade here. Last year's model was limited to Arrow Lake and missed out on Lunar Lake entirely, so using Panther Lake processors should bring a notable efficiency step forward. AMD's Ryzen AI 400 is also a generational move up from what the Gen 6 offered. Besides the new chips, not much has changed. The port layout, keyboard, and overall chassis design carry over from the Gen 6 unchanged. Battery options are 41 Wh or 54.7 Wh, and both 4G and 5G connectivity will be available.
Intel models are expected in mid-May, with the AMD variant following in late May. No pricing has been announced. Worth noting, this is the standard clamshell X13, not the ThinkPad X13 Detachable 2-in-1 that Lenovo showcased at MWC. Lenovo did announce at MWC that the ThinkPad X13 Detachable would start at €1,949. Meanwhile, the X13 Gen 6 starts at $1,469 with a Core Ultra 5 225U, providing some context for where the X13 Gen 7 may land.
The list of devices stretches all the way back to the original 2007 Kindle and includes the Kindle 2, Kindle DX, Kindle DX Graphite, Kindle Keyboard, Kindle 4, Kindle Touch, Kindle 5, and the first-generation Kindle Paperwhite.
We take a deep-dive into the past, present, and future of the ubiquitous PCIe standard, and look ahead at the challenges that await manufacturers when integrating PCIe 6.0 and beyond into real-world hardware.
Celavii helps brands and agencies find creators, manage outreach, and run campaigns from one place. It maps creator networks as a graph, showing audience overlap, bridge creators, and where your budget reaches new people.
Instead of clicking through filters, you ask questions in plain English and AI agents handle discovery, CRM, campaign tracking, and video generation. It works from your dashboard, WhatsApp, Slack, or Discord and starts at $49/month with no annual contracts. It was built because other tools required $2,000+/month and a yearly commitment just to search a database.
BayPoint AI is an all-in-one platform for eBay sellers who want to run their business smarter. The flagship Preflight app analyzes qualitative aspects of your listings β title strength, description quality, photo guidance, and keyword relevance β and gives you AI-generated improvements before you publish. Supporting apps cover shipment tracking, buyer feedback management, sales analytics, and marketplace intelligence, all working together from a single dashboard.
Our AI assistant, Riley, is available in every app. Riley analyzes your actual listing and sales data and answers eBay strategy questions in real time. No more guessing: just a clear picture of what to fix and why.
Cybersecurity researchers have lifted the curtain on a stealthy botnet that's designed for distributed denial-of-service (DDoS) attacks.
Called Masjesu, the botnet has been advertised via Telegram as a DDoS-for-hire service since it first surfaced in 2023. It's capable of targeting a wide range of IoT devices, such as routers and gateways, spanning multiple architectures.
Cloudflare outlines its vision for EmDash as a modern CMS designed to improve security, support AI-native workflows, and modernize how websites are built and managed.
If you shelved your inbound strategy this past year, you can shelve your Inbound conference mugs and swag with it.
HubSpot renamed its annual Inbound conference in Boston this September to Unbound. A note on the event site explains the thinking:
"This evolution is our response to that reality. INBOUND is becoming UNBOUND because growth no longer fits within a single framework or function. Today, it covers marketing, sales, service, and operations across the full customer journey in an AI-driven environment. UNBOUND reflects that expanded reality and the mindset required to lead through it."
Inbound is outbound. HubSpot pioneered inbound marketing, which uses content and search rankings to attract visitors, then convert them on-site.
Recent Google core updates appeared to hurt the HubSpot blog, possibly because its content drifted from core topics like CRM, sales, and marketing into broader business areas like interview tips.
Inbound strategy has declined as search shifts from platforms like Google to LLMs like ChatGPT, which drive fewer clicks to websites.
From inbound to loop marketing. In 2025, HubSpot introduced its Loop marketing strategy to replace inbound. Loop focuses on educating consumers in an AI-driven world.
The conference rebrand acknowledges that no single framework works for you in today's marketing landscape.
In Project Glasswing, announced Tuesday, the company is giving a select group of major tech and financial firms access to Claude Mythos Preview, a frontier model that has already uncovered thousands of previously unknown software vulnerabilities. Anthropic says the model is too dangerous to release to the general public.
JerryRigEverything has taken apart a working unit of the never-released LG Rollable on camera. The basic premise with these devices was that instead of folding open like a Galaxy Z Fold, the Rollable uses a small motor to slide an extra screen out of the side. With a simple swipe...
Lego has unveiled 8 new sets within the FIFA World Cup 26 Editions theme, and not only can you score a Lionel Messi Minifigure, but you can even build the soccer star out of bricks.
Australia was the first country to issue a ban in late 2025, aiming to reduce the pressures and risks that young users may face on social media, including cyberbullying, social media addiction, and exposure to predators.
Astropadβs Workbench lets users remotely monitor and control AI agents on Mac Minis from iPhone or iPad, with low-latency streaming and mobile access.
The LAPD said the breach affected "a digital storage system" belonging to the City Attorney's Office. The World Leaks extortion gang was reported to be behind the attack.
The maker of the popular open source file encryption software VeraCrypt said Microsoft locked his online account, which may prevent device owners from booting up their computers.
AI bot activity surged 300% in 2025, with media and publishing among the most targeted sectors, according to a new Akamai report.
Why we care. AI bots are reshaping how content is discovered and consumed, shifting users from search clicks to instant answers in chat interfaces. Publishers are seeing fewer visits from organic search and often don't get attribution in AI-generated answers. It's also eroding ad and subscription models.
The threat is real. Publishers now face two threats:
Training bots that ingest content for models.
Fetcher bots that extract real-time content for immediate answers. These pose the bigger risk because they capture value as itβs created.
The impact. Pageviews are declining, costs are rising (because scraping bots increase infrastructure costs by consuming server and CDN resources without generating revenue), and brand visibility is weakening.
AI chatbot referrals drive ~96% less traffic than traditional search
Users click cited sources in AI answers only ~1% of the time
What publishers are doing. Publishers are adopting nuanced controls (rather than blanket blocking AI bots), such as:
Monitoring and classifying bot traffic.
Selectively blocking or slowing malicious scrapers (e.g., tarpitting).
Allowing approved bots tied to licensing or partnerships.
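The tiered controls above can be sketched as a simple request filter. This is a minimal illustration under assumed names: the bot lists, user-agent strings, and delay are hypothetical, and real deployments verify crawlers via stronger signals (reverse DNS, signed requests) rather than user-agent strings alone.

```python
import time

# Hypothetical policy lists; a real bot-management system would
# populate these from verified classification, not self-reported UAs.
LICENSED_BOTS = {"PartnerBot"}    # approved via licensing or partnerships
KNOWN_SCRAPERS = {"BadScraper"}   # classified as malicious

def handle_crawler(user_agent: str) -> str:
    """Classify a bot request and pick a response: allow, tarpit, or block."""
    if user_agent in LICENSED_BOTS:
        return "allow"            # approved bot: serve content normally
    if user_agent in KNOWN_SCRAPERS:
        time.sleep(0.01)          # tarpit: deliberately slow the scraper
        return "tarpit"
    return "block"                # default for unclassified bots
```

In practice the classification step is the hard part; the lists here stand in for whatever bot-management signal a publisher trusts.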
What they're saying. According to Akamai's report:
"These bots are not just a security nuisance, they represent a profound business challenge that threatens the sustainability of quality journalism in an age dominated by zero-click searches and AI-generated content."
"The publishing industry today faces an existential crisis ... Many readers and visitors still value trustworthy reporting and original content. Yet, instead of clicking through search results, users now turn to AI-driven platforms like ChatGPT and Gemini for instant answers and summaries."
What's next? A "pay-per-crawl" model is emerging. Tools like identity verification (Know Your Agent) and platforms like TollBit aim to authenticate bots and charge for access in real time.
The goal is to turn scraping into a measurable, monetizable transaction instead of uncontrolled extraction.
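The "measurable, monetizable transaction" idea can be sketched as a simple metered gate. The header name, token store, and per-crawl price below are illustrative assumptions, not TollBit's or any vendor's actual API; a real system would also authenticate the bot's identity before charging it.

```python
# Hypothetical pay-per-crawl gate: each request either spends credit
# or gets HTTP 402 (Payment Required).
PAID_TOKENS = {"token-abc": 100}  # token -> remaining credit, in cents
PRICE_PER_CRAWL = 1               # cents charged per request

def gate_request(headers: dict) -> int:
    """Return an HTTP status: 200 if the crawl is funded, 402 otherwise."""
    token = headers.get("X-Crawler-Token")
    credit = PAID_TOKENS.get(token, 0)
    if credit < PRICE_PER_CRAWL:
        return 402                # Payment Required: crawl not paid for
    PAID_TOKENS[token] = credit - PRICE_PER_CRAWL  # meter the transaction
    return 200
```

The key design point is that every crawl becomes an accountable, billable event rather than uncontrolled extraction.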
About the data. The report analyzed Akamai bot management data from July to December 2025, covering application-layer traffic across websites, apps, and APIs.
Google may be making local search ads more interactive, potentially changing how advertisers showcase multiple locations and capture nearby demand.
What's happening. Google Ads appears to be testing a new format that displays multiple business locations in a swipeable carousel within search ads, allowing users to browse options directly in the ad unit.
How it works. Instead of listing locations separately, the new format groups them into a horizontal carousel with business details like ratings and proximity, enabling users to swipe through locations without leaving the search results page.
Zoom in. Early comparisons show a shift from static, stacked location assets to a more dynamic experience, where multiple listings are consolidated into a single, scrollable unit.
Why we care. Advertisers with multiple locations could gain more visibility within a single ad, while users get a quicker way to compare nearby options.
Between the lines. This format could increase engagement with location-based ads, but may also intensify competition within the carousel itself as businesses vie for attention.
What to watch. Whether the feature rolls out more broadly and how it impacts click-through rates and local ad performance.
First spotted. Anthony Higman, founder of Adsquire, first spotted this ad type and shared it on LinkedIn.
Google is consolidating its advertising and measurement resources into a single destination, aiming to make it easier for developers and technical marketers to build, automate and scale campaigns.
What's happening. Google has introduced a new Advertising and Measurement Developers Hub, a centralized site designed to help users access tools, documentation and support across its ad ecosystem.
The hub brings together resources for products like the Google Ads API, Google Analytics and publisher tools such as AdMob and Google Ad Manager, all organized into categories including advertising, tagging and measurement.
How it works. The site offers a streamlined homepage with quick access to documentation, blog updates and community channels, along with dedicated sections to explore products, connect with support and engage with Googleβs developer relations team.
Why we care. Google is making it easier to access and implement advanced tools that power automation, tracking and campaign optimization. This can help teams work more efficiently, especially those relying on APIs, tagging and data integrations. As advertising becomes more technical and AI-driven, having a centralized hub lowers the barrier to building more sophisticated, scalable setups.
The big picture. As advertising becomes more automated and API-driven, Google is investing in infrastructure that supports developers and technical users who manage complex integrations across platforms.
Zoom in. New features include a "meet the team" section, a centralized support page linking to Discord and GitHub resources, and a media hub featuring content like Ads DevCast.
What to watch. Whether this hub becomes the primary entry point for developers working across Google's ad products, and how it evolves with new AI and measurement tools.
Bottom line. Google is simplifying access to its ad tech ecosystem, betting that better developer support will drive more innovation and adoption.
Most agencies present prospective clients with an account audit as part of their sales process. The purpose is twofold:
To provide immediate value (usually without strings attached).
To demonstrate that they know their stuff.
But how often do brand marketers turn the tables and audit their agencies in their RFP?
I'm the head of performance marketing at a marketing agency, so I'm clearly writing from a biased perspective. However, over my decade-plus in the industry, I've seen too many brands settle for "good enough" because they didn't know which questions would reveal the cracks in a potential partner's strategy and approach.
If I were a brand looking for a true growth partner, here are the specific questions I'd ask to separate the top performers from the rest.
1. What are your key services, and what percentage of your clients utilize each?
A lot of agencies claim to be "full service," but rarely are they "full excellence." I'd be looking for where an agency truly spends its time versus where they're just trying to upsell me.
It's less about the channels in question (although if, say, LinkedIn is a key growth driver for your brand, they'd better demonstrate proficiency there), and more about how their strengths align with your needs.
If an agency claims expertise in SEO, creative strategy, and paid media, but 90% of its client base only uses it for paid search, that's a red flag. You want a partner whose core competencies align with your primary needs.
If you need high-volume creative testing, you want an agency where 80%+ of clients use its creative production frameworks, not one that treats creative as an add-on service.
2. How are you approaching AI-driven account optimization and platform automation?
I miss the days when knowledge of the manual controls at your disposal could set you apart as a high-performing marketer. But those days have been gone for a while.
In 2026, there's a real danger of over-optimization with the controls we have left. This can reset algorithmic learnings and prevent them from fine-tuning in service of your goals. Agency teams that strike this balance most certainly have a healthier approach than those who either blindly trust algorithms or can't help tinkering excessively.
One control you can and must be diligent about using is first-party data for enhanced conversions and offline conversion tracking. Part of the job of a great marketer is training the algorithms on which leads and which conversions to target, and first-party data is a huge lever to pull in that regard.
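As an illustration of the first-party-data plumbing involved: enhanced conversions rely on normalized, SHA-256-hashed identifiers such as email addresses. A minimal sketch follows; the function name is mine, and you should consult your ad platform's documentation for the exact normalization rules it expects.

```python
import hashlib

def hash_email_for_upload(email: str) -> str:
    """Normalize an email address (trim whitespace, lowercase), then
    SHA-256 hash it before including it in a conversion upload."""
    normalized = email.strip().lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()
```

Normalizing before hashing matters: " User@Example.com " and "user@example.com" must produce the same hash, or the platform can't match them to the same person.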
3. What is your reporting process and what KPIs do you focus on for the majority of your clients?
Donβt just ask for a sample report. Anyone can make a PDF look pretty. You need to understand their philosophy on data.
You're looking for an agency that's willing to move upstream. If the majority of their clients are measuring success on clicks, traffic, or even MQLs, run the other way.
A performance-driven agency should be obsessed with revenue, ROAS, and pipeline velocity. Ask them how they handle attribution. If they rely solely on in-platform metrics, which often over-claim credit, they aren't looking at the full picture.
4. What's the average industry tenure of the team on my account?
This is actually a pretty common question and has been for years. Too many marketers know the pain of integrating rotating sets of agency teams because the agency can't hold onto top employees, and you should be evaluating the answer from this perspective.
There's another factor to consider. Generally speaking, the more experienced a marketing team is, the more effectively it uses AI tools.
Whereas junior marketers might be more avid proponents of AI and quicker to adopt its functionality, they're also far more likely to use it for things like creative ideation and strategy. Both are areas where high-quality human thought is a true differentiator.
For this answer specifically, remember that you have some great research tools like Glassdoor that you can and should access. Employee tenure is one thing, but a Glassdoor profile with a bunch of red flags is an indicator that the agency might struggle to keep the talent it really wants to retain.
5. How is your team using AI on client accounts?
Again, you're looking for a balance here. Agency teams that don't use AI at all are almost certainly burning resources on manual tasks, but agency teams that overuse it to replace perspective, critical thinking, and creativity are commoditizing their own client service.
Two follow-up questions to ask:
What is your governance structure for AI use?
Whatβs your process for QAing AI output?
You're looking for firm answers and redundant layers for each of these questions; at the very least, someone relatively senior should approve any output before it goes live.
6. When you take over an account, what are the first things you do to save budget without affecting growth?
This is the ultimate litmus test for technical proficiency. A great performance marketer knows where the ad platforms hide the waste buttons. If I were a brand marketer, I'd want to hear about:
What inputs are driving wasted spend (audiences, networks, keywords, etc.).
A plan to prioritize budget around whatβs driving business outcomes.
If an agency can't rattle off these specific checks, they're likely missing the "low-hanging fruit" of budget efficiency. Fixing some of these takes seconds, but missing them costs thousands.
What separates a true growth partner from the rest
Remember: when you're choosing an agency partner, it's the job of each agency to sound as good as they possibly can, but what an agency considers to be a great answer might not be a great fit for your brand.
By focusing on utilization rates of services, strategic application of AI, and approaches to budget efficiency, you'll find a partner capable of driving actual performance, not just spending your budget.
Overclockers UK unveils pricing for AMD's Ryzen 9 9950X3D2 Dual Edition CPU. Overclockers UK has unveiled the pricing of AMD's Ryzen 9 9950X3D2 Dual Edition CPU, which AMD unveiled last month. This CPU is AMD's new AM5 flagship, offering 16 cores and 192 MB of total L3 cache. This is the first consumer-grade X3D CPU […]
Dell is quietly turning its "Premium" laptops back into XPS machines to fix a naming mess, and you can save nearly $900 on the hardware formerly known as the Dell 14 Premium.
Intel's aging Raptor Lake desktop CPUs are "not going anywhere," according to an Intel exec, as RAM prices soar and PC gamers search for DDR4 alternatives. Are these chips worth buying in 2026? Let's explore.
SambaNova today announced the next phase of its collaboration with Intel: a heterogeneous hardware solution that combines GPUs for prefill, Intel Xeon 6 processors as both host and "action" CPUs, and SambaNova RDUs for decode to deliver premium inference for the most demanding Agentic AI applications. The design will be made available in H2 2026 to enterprises, cloud providers, and sovereign AI programs that want to run coding agents and other agentic workloads at scale.
"Agentic AI is moving into production, and the winning pattern we're seeing is GPUs to start the job, Intel Xeon 6 to run it, and SambaNova RDUs to finish it fast," said Rodrigo Liang, CEO and co-founder of SambaNova Systems. "Together with Intel, we're giving customers a blueprint they can deploy in existing air-cooled data centers, with broad x86 coverage for the coding agents and tools they already use today."
Edgecore Networks, a leader in innovative network solutions for enterprises, data centers, and telecommunication service providers, is thrilled to announce the launch of the EAP115, a Wi-Fi 7 dual-band wall-plate access point designed to deliver reliable, high-performance, and cost-effective connectivity for in-room deployments.
Built for environments such as multi-dwelling units (MDUs), hospitality, dormitories, and shared office spaces, the EAP115 is engineered to meet the growing demand for scalable, per-room connectivity while enabling seamless integration of IoT services.
The Moto Pad is an 11-inch 2.5K tablet with a 90Hz display, powered by MediaTek's Dimensity 6300 5G chip and paired with 8GB of RAM and 128GB of storage, along with a microSD slot that supports cards up to 2TB.
The hunt for Satoshi Nakamoto has circled back to a likely candidate, Adam Back, thanks to a New York Times article that draws striking parallels between the two. Back denies being Satoshi, saying it's all just a coincidence and confirmation bias on behalf of the reporter. The 40-page-long investigation goes over decades of evidence to try to prove otherwise.
ogsm.io helps small businesses create a clear, one-page OGSM strategy in about 10 minutes. It guides you through a brief conversation and turns your answers into objectives, goals, strategies, and measures you can download as a PDF or share online, so you can act with focus and track what matters.
ChoreChomp is an AI-powered chore coach for families. Parents assign custom chores with reference photos, kids snap a photo when they're done, and the AI checks the work and gives age-appropriate feedback. Parents approve final scores, award points, and set reward goals to keep motivation high. The app also has a homework helper that guides with Socratic hints without giving answers. It protects privacy with no child accounts, strict guardrails, person detection, and short-lived photos, and one subscription covers unlimited kids.
Maskerade.ai lets you deploy unlimited, highly accurate AI personas to browse the web and navigate your site. Get deep, actionable insights into the thoughts and feelings of your hardest-to-reach audiences.
Analytics tools tell you what happened on your site, but not why. Customer research tells you what a group of people thinks and why, but it's slow, labor intensive, and costly. With Maskerade, you can combine the best of both worlds.
The Russian threat actor known as APT28 (aka Forest Blizzard and Pawn Storm) has been linked to a fresh spear-phishing campaign targeting Ukraine and its allies to deploy a previously undocumented malware suite codenamed PRISMEX.
"PRISMEX combines advanced steganography, component object model (COM) hijacking, and legitimate cloud service abuse for command-and-control," Trend Micro said.
If you're searching for a new laptop, then you should check out these six top laptop offers I've found right now, including devices from Acer, Dell, Apple, and more from $159.
Matei Zaharia has won the top honor from the Association for Computing Machinery. Now he's working on AI for research and says AGI is simply misunderstood.
Save up to $500 on your TechCrunch Disrupt 2026 pass until April 10, 11:59 p.m. PT. Secure your spot at the center of the tech ecosystem. Register today.
Google is laying the groundwork for "agentic commerce," where users can complete purchases directly inside AI-driven search experiences.
Whatβs happening. Google has published a new onboarding guide for its Universal Commerce Protocol (UCP) in Merchant Center, outlining how merchants can integrate with the system and enable checkout directly from product listings in AI Mode and Gemini.
The big picture. As AI search evolves from discovery to transaction, Google is pushing to keep users within its ecosystem by embedding shopping and checkout into conversational experiences.
How it works. Merchants must first complete a technical integration, then submit an interest form and wait for approval before gaining access to onboarding tools in Google Merchant Center, including a sandbox environment to test integration, identity linking and checkout APIs.
Why we care. Google is moving search closer to transaction, meaning users may complete purchases directly inside AI experiences instead of visiting your website. This shifts where conversions happen and could change how performance is measured, attributed and optimized. Early adopters of the Universal Commerce Protocol may gain a competitive advantage as shopping becomes more integrated into tools like Gemini.
Zoom in. The protocol acts as an open standard for connecting product data, user identity and payment flows, enabling seamless purchases without redirecting users to external sites.
What to watch. The rollout is gradual and currently limited to the U.S., with a dedicated UCP integration tab expected to appear in Merchant Center accounts over the coming months.
Bottom line. If widely adopted, the Universal Commerce Protocol could redefine how online shopping works, turning search into a full-funnel, AI-powered checkout experience.
Meta Platforms is making it easier for advertisers to implement tracking, reducing technical friction for teams running campaigns across platforms.
What's happening. Meta released an official Pixel template inside Google Tag Manager, replacing the need for third-party or community-built workarounds.
How it works. The new template allows advertisers to reuse their existing GA4 dataLayer, meaning events already configured for Google Analytics 4 can be leveraged without rebuilding tracking from scratch. It also automatically maps enhanced e-commerce events such as purchases, add-to-cart actions, content views and checkout initiations, eliminating the need for duplicate tagging.
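The remapping the template performs can be pictured with a small sketch. The GA4-to-Meta pairs below are the platforms' standard public event names; the helper function itself is illustrative and is not the template's actual code.

```python
# Standard GA4 ecommerce event names mapped to Meta Pixel standard
# events -- the pairs are the documented public names; the helper is
# a sketch of the idea, not the GTM template's implementation.
GA4_TO_META = {
    "purchase": "Purchase",
    "add_to_cart": "AddToCart",
    "view_item": "ViewContent",
    "begin_checkout": "InitiateCheckout",
}

def map_ga4_event(ga4_event_name):
    """Return the Meta Pixel event name for a GA4 dataLayer event,
    or None when there is no standard equivalent."""
    return GA4_TO_META.get(ga4_event_name)
```

Because the mapping reuses the events already pushed to the GA4 dataLayer, there is no duplicate tagging: one dataLayer push feeds both destinations.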
Why we care. This reduces implementation time, lowers the risk of tracking errors and ensures consistency across platforms, especially for advertisers managing both Google and Meta campaigns.
What to watch. Whether this leads to broader adoption of Meta Pixel tracking among advertisers who previously avoided complex setups, and whether similar cross-platform integrations follow.
Bottom line. Meta is removing one of the biggest headaches in ad tracking, making it faster and easier to get reliable data across platforms.
First seen. This update was spotted by paid media expert Thomas Eccel, who shared it on LinkedIn.
Ask most ecommerce brands who owns their product feed, and the answer is almost always the same: the paid media team.
Maybe a feed management tool sits under PPC. Maybe the shopping team built the feed years ago, and nobody's touched the titles since. Either way, SEO rarely has a seat at the table and is often forgotten as part of the broader feed management strategy.
Whether you're worried about AI search or traditional clicks, you're missing out on opportunities by excluding SEO from your feed management strategy.
AI shopping results are grounded in Google Shopping data
Up to 83% of ChatGPT carousel products match Google Shopping's organic results, according to a recent Peec AI study analyzing more than 43,000 listings. And 60% of those matches came from Shopping positions 1-10.
Data shows how ChatGPT's product carousel matches Google Shopping's organic results, with Google dominating over Bing.
On Google's side, the Shopping Graph now contains more than 50 billion product listings and feeds directly into AI Overviews, AI Mode, and Gemini. AI Overviews appear in roughly 14% of shopping queries, up from about 2% in late 2024. Like many other things we've discovered about AI search, the generative results are informed by the traditional SERP.
SEO needs to be the strategic quarterback for brand authority. This is a highly valuable opportunity to work cross-channel toward a common goal of improving visibility across search surfaces, and it requires SEOs, commerce, and paid media teams to get in the same room.
Typically, brands run a single product feed optimized for Google paid shopping campaigns. Titles are written for bid relevance, descriptions are built for Quality Score, and the feed exists to win auctions, with less consideration for user search behaviors.
As user behavior shifts, search surfaces favor stronger semantic alignment between queries and product data. A title stuffed with paid-friendly modifiers or branded terms isn't the same as a title that mirrors how someone conversationally searches for a product.
We tested this with a large ecommerce brand. Our agency's AI SEO team partnered with the commerce team to launch a dedicated product feed for free organic listings, with titles and descriptions optimized specifically for organic visibility rather than replicating what was already running in the paid feed.
After the organic feed was pushed live:
Organic listing CTR increased 10% month over month, alongside a 4% lift in purchase rate.
A product-level test saw a 92% increase in revenue for free listings, with visibility up 83% and add-to-cart up 14%.
The organic optimization changes alone drove 35,000 impressions at a 1.4% CTR, 55% higher than the CTR seen in paid for the same time period.
Rather than replace our paid feed strategy, we recognized that organic and paid shopping solve different problems and have different needs that require optimizing accordingly.
Organic feed titles should reflect how your customers actually search, not how your bidding strategy is structured.
Not every feed attribute carries equal weight. If you're building a dedicated organic feed or just auditing your existing feed for gaps, here's where to start.
Titles are the highest-impact lever
Google's algorithm heavily favors feed titles when matching products to queries, and its own documentation emphasizes including important attributes to "better match search queries and drive performance lift." Consider how a customer might describe what they're looking for in a conversational way, and how that aligns with product attributes.
Google's Merchant Center documentation reinforces the point that your feed strategy should map to how your customer actually shops, helping improve their search journey.
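A conversational title can be assembled from the structured attributes you already have. The helper below is a hypothetical sketch, not part of Google's tooling; the 150-character cap matches Merchant Center's documented title limit, while the ordering heuristic (brand, product type, then key attributes) is an assumption about how shoppers phrase queries.

```python
def build_organic_title(brand, product_type, attributes, max_len=150):
    """Assemble a feed title from structured attributes in the order a
    shopper might phrase a query.  The 150-char cap matches Merchant
    Center's title limit; the ordering is an illustrative heuristic,
    not a Google requirement."""
    parts = [brand, product_type] + list(attributes)
    title = " ".join(p for p in parts if p)  # skip empty attributes
    return title[:max_len]
```

For example, `build_organic_title("Acme", "Rain Jacket", ["Men's", "Waterproof", "Navy"])` yields a query-shaped title rather than a bid-optimized one.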
Global Trade Item Numbers (GTINs) are non-negotiable
Google's GTIN documentation makes clear that products with correct GTINs receive significantly more visibility. Industry data has consistently shown that properly matched products can drive up to 40% more clicks. They're also the primary signal for aggregating product reviews across sources.
Donβt overlook images
They're still the most common source of Merchant Center disapprovals. Products with both standard and lifestyle images typically see significantly higher engagement.
If budget or bandwidth has kept better product images on the back burner, Google's Product Studio can help handle some of the editing, so you can test and improve creative at scale without a full reshoot. It's also a way for SEO and creative teams to collaborate on feed-specific assets and testing.
Optimize key product attributes: product_highlight and product_detail
product_highlight lets you add scannable benefit statements that appear in expanded Shopping views. For instance, "water-resistant for light rain commutes" is doing more work than "high-quality material" for both the shopper and the AI.
product_detail provides structured specifications that power Google's faceted filters in organic product grids.
The same semantic work SEOs are doing to optimize product detail pages (PDPs) for conversational search, like defining ideal buyers, naming use cases, and articulating compatibility, should inform feed attributes.
Product and content teams already understand what drives someone to buy. That context should be in the feed, not just on a brand's PDPs.
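Sketched as data, a feed item carrying both attributes might look like this. The sub-fields of product_detail (section_name, attribute_name, attribute_value) follow Merchant Center's documented structure; the SKU, values, and the Python representation are invented for illustration.

```python
# Illustrative feed item (as a Python structure) showing benefit-led
# product_highlight entries and structured product_detail specs.
# Values are invented; product_detail sub-fields follow Merchant
# Center's documented attribute structure.
feed_item = {
    "id": "SKU-1042",
    "product_highlight": [
        "Water-resistant for light rain commutes",
        "Packs into its own chest pocket",
    ],
    "product_detail": [
        {"section_name": "Materials",
         "attribute_name": "Shell fabric",
         "attribute_value": "Recycled ripstop nylon"},
        {"section_name": "Fit",
         "attribute_name": "Cut",
         "attribute_value": "Regular"},
    ],
}
```

Note the highlights are written as shopper-facing benefits, while the details are machine-filterable specs, which is exactly the split the two attributes are designed for.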
Your feed is also your agentic commerce foundation
Here's what makes this investment compound: the feed optimization work done today for organic shopping visibility will also help build brand readiness for agentic commerce standards and applications.
Google's Universal Commerce Protocol, announced in January, is a framework that enables AI agents to discover products, build carts, and complete transactions directly inside AI Mode and Gemini. The shopper may never land on the brand website to make a purchase. UCP isn't a replacement for Google Merchant Center, because it's built directly on top of GMC data.
Feeds are how products enter the Shopping Graph. The Shopping Graph is the dataset AI agents query when processing a shopping request. The new native_commerce attribute added to feeds is what signals that a product is eligible for the UCP-powered "Buy" button in traditional and AI-driven Google services.
Google has also announced the eventual rollout of several new Merchant Center attributes designed specifically for conversational commerce:
Product FAQs.
Use cases.
Compatible accessories.
Product substitutes.
These are additions to an existing GMC feed that give AI agents the contextual understanding they need to match products to natural-language queries like "what's a good waterproof jacket for bike commuting?" These new conversational attributes are rolling out to a small group of retailers first.
This is where feed data and on-page content need to stay tightly aligned. Search surfaces cross-reference a brand's feed against:
Structured data.
PDP content.
Other sources to validate findings.
When those layers contradict each other, trust erodes at the domain level.
Product feed strategy and optimization is an opportunity for genuine cross-team collaboration to test, execute, and measure visibility. A holistic approach to managing product details across every surface will benefit brands in both traditional and AI-driven search.
SEOs bring the keyword intelligence, semantic understanding, and knowledge of how AI systems match queries to content.
Commerce and marketplace teams own the product data, product information management, and relationships with retailers.
Paid teams have the feed infrastructure, the tools, and years of experience managing feed health at scale.
These teams must work together to coordinate their insights and effectively establish an AI SEO operating system. The product feed sits at that intersection: an owned asset managed by commerce infrastructure that directly feeds AI-powered visibility.
The first step is to pull a current feed and compare organic titles to paid titles. The second step is getting the right people in the room to build something better. SEO is most successful when more channels align toward the same goal: better brand visibility.
The March 2026 core update finished rolling out today after 12 days and 4 hours, completing Googleβs first broad ranking update of the year.
What happened. Google confirmed the rollout ended at 06:12 PDT, per its Search Status Dashboard. The update began March 27 and impacted search rankings globally.
Google previously said this was "a regular update designed to better surface relevant, satisfying content for searchers from all types of sites."
The timeline. Google originally estimated the March 2026 core update would take up to two weeks to complete.
Why we care. Now that the rollout is complete, you can assess impact with more confidence. Analyze ranking and traffic changes, identify winners and losers, and adjust your content strategy based on what the update appears to reward.
Previous core updates. Here's a timeline and our coverage of recent core updates:
Apple's next-gen MacBook Neo should feature up to 12GB of memory and an A19 Pro silicon upgrade. According to MacRumors, citing a report from Tim Culpan, Apple plans to release a new MacBook Neo in 2027. This new model will reportedly feature Apple's new A19 Pro processor, the same chip as the Apple iPhone 17 […]
The full map for Forza Horizon 6 has been revealed, and it's a big one. From the vast Tokyo City to the snowy mountains and everything in between, here's how it's all laid out.
OpenNOW is quickly gaining popularity as a quality GeForce Now app alternative, all thanks to its open-source nature. It still has some limitations compared to the official software, but it's on the right path ... assuming NVIDIA's legal team doesn't get involved.
The Forza Horizon 6 preview build is but a tiny snapshot of what we can expect, but even so, I found four new features that add a little change to the way you'll play the game.
SK hynix has started shipping its PQC21 cSSD, the first product built on its 321-layer QLC NAND flash technology. The product is available in 1 TB and 2 TB capacities, with Dell as the first customer starting this month. QLC (Quad-Level Cell) stores four bits per cell, which maximizes storage density per unit area but has historically come with a write performance penalty compared to TLC (Triple-Level Cell). SK hynix addresses that with SLC caching: incoming data is written to faster SLC-mode regions first before being transferred to the QLC cells, smoothing out the performance gap for typical workloads.
According to market research from IDC (International Data Corporation), QLC NAND is expected to grow from 22% of the global cSSD market in 2025 to 61% by 2027, so the timing of this launch makes sense. SK hynix says it will expand beyond Dell to other major customers as production scales up.
The next evolution of tactical RPGs has arrived. Annulus is available now across Steam, the App Store, and Google Play, bringing a bold new entry to the modern strategy RPG landscape. Blending innovative battlefield design, deep strategic gameplay, and striking visual direction, Annulus sets out to redefine the genre with a fresh, art-driven approach to tactical combat.
Reinventing Tactical Combat from the Ground Up
At the heart of Annulus lies a distinctive battlefield system - a bold departure from traditional design. This new structure transforms how players approach positioning, movement, and control, turning every encounter into a dynamic tactical puzzle. Rather than relying on static strategies, players must constantly adapt: anticipating enemy actions, optimizing spatial advantage, and executing carefully timed decisions. The result is a combat system that feels both intuitive and deeply strategic, rewarding foresight, creativity, and precision at every turn.
Liquid Swords, the studio founded by Christofer Sundberg and led by the senior team whose past work helped create genre-defining action experiences like Just Cause and Mad Max, released SAMSON: A Tyndalston Story on Steam and the Epic Games Store. It costs $24.99 and delivers a brutal, and succinct, story where every choice matters and the city always remembers.
Samson McCray comes back to Tyndalston after spending time in jail to find the city taken over by a new designer drug called White Whisper. For dealers, it's a lucrative business, and for the customers, it's a slow death. Tyndalston's a city built from equal parts ambition, decay, and deeply personal grudges, with a debt that grows every single day and a sister being used as leverage by the people he owes. In Samson's hunt, he uncovers something bigger and more gruesome than he could ever imagine.
The repairability scores come from the Failing the Fix 2026 study by the US Public Interest Research Group (PIRG) Education Fund, a nonprofit consumer advocacy group.
The lightweight laptop β available in four colors and two configurations β was built around Apple's A18 Pro processor, the chip that powered last year's iPhone 16 Pro. In the Neo, that silicon wasn't newly fabricated; instead, Apple repurposed remaining batches from the initial iPhone run, according to Ben Thompson of Stratechery.
Rolling out from April 7 on desktop Chrome (download here), the vertical tabs feature gives users the option to move the browser's tab strip from the top of the window to a sidebar on the left.
Many of these job cuts are blamed on AI, but some experts say that it's actually caused by bad business decisions or corporate pivots. Still, they do not discount the disruption that AI will have on the job market, even as some companies buck the trend and hire more junior roles.
The Asus ROG Xbox Ally is the cheapest Windows-based PC gaming handheld you can get right now, but some technical hitches and a mediocre processor don't help it shine next to the Steam Deck OLED.
Hreflang has long been a core mechanism in international SEO, directing users to the right regional version of a page. That approach worked when search engines primarily returned static results.
AI-driven synthesis changes that. Instead of returning lists of links, AI systems construct answers. They neither need nor want your perfectly implemented hreflang tags. They aren't looking for instructions on which page to serve. They're trying to determine which answer is best supported across sources.
Your content has to hold up when the model compares it against everything it's seen, regardless of language or origin. If it doesn't, it won't be used.
What hreflang does and doesn't do
We need to address a fundamental misunderstanding of the hreflang attribute. Hreflang has always been a switcher, not a booster.
If your brand lacked organic authority in Australia before implementing the tag, adding the en-au attribute wouldn't magically improve your rankings in Sydney. Its only function was to ensure that if you did rank, the user saw the correct regional version.
In AI search, this "you vs. you" dynamic has become a liability. While traditional search still relies on these tags to organize traffic, AI models often bypass them during the synthesis phase. If a brand's U.S.-based .com site possesses decades of authority, the AI's internal logic may determine that the U.S. site is the true source of information.
Consequently, even when a user in Berlin searches in German, the AI may synthesize an answer based on the U.S. data and simply translate it on the fly, effectively ghosting the brand's localized German site despite perfectly implemented hreflang tags.
The double-blind: Query fan-out vs. entity compression
AI models don't just answer the query you see. They expand it into dozens of hidden checks, comparing sources, validating claims, and pulling in information across languages to see what aligns.
ChatGPT often translates and evaluates queries in English even when the user searches in another language, research from Peec AI shows. This reinforces how query fan-out operates across markets. If your local entity doesn't hold up in that broader comparison, it doesn't get used.
A second issue happens before retrieval even begins. During training, LLMs compress what they see so it can be stored and reused at scale.
When multiple regional pages look too similar, they don't stay separate. They're folded into a single representation, also known as canonical tokenization.
Local details, such as phone numbers, office locations, and market-specific references, don't always survive that process. They're treated as minor variations rather than meaningful signals.
By the time the model is asked a question, your local site is often no longer competing. In many cases, it's already been absorbed into the global one.
To compete globally, expand your strategy to include signals that resonate with AI's data supply chain.
1. Build locally aligned infrastructure
Meta tags tell systems what you intend. Infrastructure often tells them what to believe. Datasets like Common Crawl use geographic heuristics, IP location, and domain structure to make sense of content at scale. That happens early in the process, before anything resembling ranking.
This means your content may already be placed in a market before the model ever evaluates it. If your regional domains aren't supported by local infrastructure or delivery, you're sending mixed signals. Those are hard to recover from later.
2. Break the compression threshold
To break the semantic gravity that leads to entity compression, you need what I would call a clear "knowledge delta." Most global teams fail here because they think localization means translation. It doesn't.
There's no universally accepted magic number for unique content. From a semantic vector perspective, I speculate that at least 20% of the content on a local page must be unique to keep the model from collapsing your local identity into your global one.
To address this, front-load market-specific data, such as regional shipping logistics, local tax identifiers, and native case studies, into the first 30% of your page. This provides the mathematical proof the model needs to cite your local URL as a distinct authority.
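The 20% threshold above is speculative, but the underlying idea, measuring how far a local page diverges from its global counterpart, can be approximated. This stdlib-only sketch uses crude token overlap in place of real embeddings; a production audit would compare semantic vectors instead.

```python
def divergence(local_text, global_text):
    """Crude lexical divergence: the share of the local page's unique
    tokens that do not appear on the global page.  A stdlib-only
    stand-in for an embedding comparison -- illustrative only."""
    local_tokens = set(local_text.lower().split())
    global_tokens = set(global_text.lower().split())
    if not local_tokens:
        return 0.0
    return len(local_tokens - global_tokens) / len(local_tokens)
```

A local page that is a straight translation of the global one will score near zero once both are in the same language; genuinely local content (regional logistics, tax identifiers, native case studies) pushes the score up.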
3. Anchor your entity in semantic neighborhoods
AI models interpret market relevance by looking at the company you keep in the text. Incorporate geographic anchoring by referencing local neighborhoods, regional landmarks, or specific transit hubs (e.g., "located near the Alexanderplatz station" in Berlin).
These co-occurrence signals pull your brand's vector embedding toward the specific local coordinate in the model's training data, creating a geographic fence that helps the AI disambiguate your local office from your global headquarters.
4. Build local link consensus
The origin of your links is a primary signal of market authority. During the fan-out phase, AI models look for regional consensus.
This is one of the areas where traditional link building logic starts to break. It's not just about getting links. Consider where those links originate, along with their authority and contextual relevance.
If your Australian page has backlinks primarily from U.S.-based websites, the model has little evidence that you actually belong in or are relevant to the Australian market. Local sources, including high-trust local publications and location-specific news outlets, change that. Without them, you're often treated more like a visitor than a participant.
5. Incorporate linguistic and authoritative nuances
LLMs pick up on regional language nuances far more than most teams expect. This is where simple translation starts to break down. Unique market-specific or colloquial terms, formatting, and even small legal references signal whether something actually belongs in a market.
Use the terms people in that market actually use: things like "incl. GST," local identifiers like ABN, and even spelling differences. Without these signals, the page may be technically and linguistically correct, but it won't register as truly local.
6. Capture the invisible long-tail
As mentioned, LLMs often generate multiple incremental queries during their research phase. These invisible queries may focus on local friction points, such as "How does this product comply with [name of local regulation]?"
By incorporating local FAQ clusters that address these nuances, you ensure your local URL survives the fan-out check, where your global .com is too generic to be cited in a localized answer.
Expand your SEO reporting beyond traditional rank tracking. Incorporate AI citation audits by using a local VPN to query the most popular generative engines in your target markets.
If the AI consistently pulls from your global .com domain for a local query, it's a clear signal that your local domain lacks the necessary evidence chain. Identify where this market drift is occurring and reinforce those specific pages with more unique local data and infrastructure signals.
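The audit data itself has to be gathered by hand (or via VPN-routed queries), but tallying it is simple. Everything in this sketch, the function name and the (market, cited_domain) pair format, is hypothetical plumbing:

```python
from collections import Counter

def market_drift(citations):
    """Given (market, cited_domain) pairs from manual AI citation
    audits, return the share of local queries answered from the
    global .com per market.  Illustrative plumbing only -- the pair
    format is an assumption, not any tool's output."""
    totals, global_hits = Counter(), Counter()
    for market, domain in citations:
        totals[market] += 1
        if domain.endswith(".com"):   # treat .com as the global site
            global_hits[market] += 1
    return {m: global_hits[m] / totals[m] for m in totals}
```

A market where the ratio stays near 1.0 is one where the local domain is being ghosted and needs reinforcing first.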
Hreflang and traditional technical signals still shape how search engines organize and deliver content, but they don't determine what AI systems use.
AI models evaluate which sources to use based on evidence of local relevance. Without a distinct presence in each market, they default to the version of your brand they trust most, which often isn't the one you intended.
Translation alone doesn't establish that presence. Your content needs to demonstrate that it belongs in the market it's meant to serve.
You're facing a major shift as familiar manual targeting levers disappear in favor of AI-driven discovery. Platforms' automated tools are collapsing campaign types, obscuring data, and replacing manual targeting with intent-based algorithms.
This is a shift from selection to prediction. You won't adapt by holding onto old controls; you'll adapt by learning to engineer the inputs that replace them. Here's how to make sure you have the tools to stay on top.
The end of manual targeting as you knew it
You previously relied on granular keyword lists, demographic filters, and custom exclusions to target ideal customers. You told platforms exactly who to target and paid to access that inventory.
Now, platforms have eliminated those controls:
Google collapsed campaign types into Performance Max, removing keyword-level targeting in favor of "asset groups" and "audience signals" (suggestions, not directives).
Meta launched Advantage+, automating demographic and interest targeting so your role shifts from selector to signal provider.
Microsoft extended the same model to Bing, confirming this is an industry-wide shift, not a single-platform experiment.
Targeting didn't disappear; it moved inside the platform's black box. The algorithm now targets based on data within its own ecosystem.
Platforms are clear: manual segmentation is gone, and automation is here to stay.
The rise of audience engineering
If targeting is now internal to the algorithm, your role changes. It's less about selecting your audience and more about engineering it.
From targeting to teaching
The distinction is critical. Traditional targeting focused on selecting audiences. Audience engineering focuses on instructing the algorithm through high-quality conversion signals, precise creative, and first-party data. It teaches AI systems who to find and what to optimize for.
Here's how this changes your workflow:
In the past, to target CFOs, you might use job title filters and negative keyword lists. With audience engineering, you instead upload high-quality data (e.g., "deal closed" signals) to define a high-value prospect. You also tailor creative to CFO-specific pain points, teaching the AI to reach people who engage with that message.
The new competitive discipline
If you fight the algorithm and resist this shift, you'll struggle. If you embrace it, you'll succeed by optimizing conversion signals, refining creative, and strengthening your data infrastructure.
As manual levers disappear, the gap between strong and average performance comes down to signal quality. Audience engineering is what closes that gap.
The three levers that now drive targeting
You must optimize three critical inputs the AI uses to segment for you:
1. Conversion signal quality
Tell the algorithm what matters. If you optimize for cheap, top-of-funnel leads, it will get efficient at finding people who fill out forms but never buy; that's not what you want.
Focus on meaningful business outcomes, not top-of-funnel metrics. Integrate Offline Conversion Imports (OCI) and the Conversions API (CAPI) to feed data on final sales, not just initial clicks. With value-based bidding, you teach the algorithm to prioritize users who drive revenue, effectively targeting high-value customers without using demographic checkboxes.
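A value-based conversion signal is ultimately just an event that carries revenue. The sketch below builds a CAPI-style Purchase payload; the field names follow Meta's public Conversions API schema, but treat it as illustrative plumbing, not a drop-in client (no request is sent, and a real integration also needs a pixel ID and access token).

```python
import hashlib
import time

def capi_purchase_event(email, value, currency="USD"):
    """Build a Conversions API-style Purchase event carrying revenue
    value -- the signal value-based bidding learns from.  Field names
    follow Meta's public CAPI schema; this sketch only constructs the
    payload and makes no network call."""
    # CAPI requires customer emails to be normalized and SHA-256 hashed.
    hashed_email = hashlib.sha256(email.strip().lower().encode()).hexdigest()
    return {
        "event_name": "Purchase",
        "event_time": int(time.time()),
        "action_source": "website",
        "user_data": {"em": [hashed_email]},
        "custom_data": {"currency": currency, "value": value},
    }
```

Feeding final-sale events like this, rather than form fills, is what steers the algorithm toward users who actually drive revenue.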
2. Creative as a targeting mechanism
In a world without demographic filters, your creative becomes your primary targeting mechanism. The specificity of your message does the filtering.
If your creative speaks broadly, the AI shows it broadly. If it speaks to a niche pain point, the AI finds users who resonate with that pain point.
Build ad sets around motivations, not product categories.
3. First-party data as competitive moat
Your customer lists, CRM data, and engagement signals are the foundation the algorithm learns from.
This data replaces third-party signals and becomes a critical competitive advantage. You're giving the algorithm a cheat sheet to identify your best customers.
How this plays out in real campaigns
The shift to AI-driven targeting isnβt theoretical. As an agency managing over $215 million in annual paid media spend, weβve tested this across platforms and validated it with performance data. Hereβs what weβve learned:
Advantage+ Audiences in practice
A long-time client had a well-established view of its target audience based on years of campaign performance and customer data. Campaigns used manual age caps and layered targeting to protect efficiency.
When we transitioned those campaigns to Advantage+ Audiences, manual exclusions were removed, allowing the algorithm to optimize based purely on conversion signals and creative performance.
During testing, Meta identified and scaled into an older demographic that had previously received minimal budget. This segment delivered a 37% higher CTR than the campaign average and drove stronger downstream conversion performance.
As spend shifted into this audience, conversions came at a lower cost per result while total revenue increased. Broader targeting improved return on ad spend (ROAS) compared to the prior manual strategy.
This reflects a broader trend with Advantage+ Audiences. Paired with strong conversion goals, accurate data signals, and high-quality creative, it consistently identifies high-value segments that manual targeting restricts or misses.
Microsoft PMax Placement Transparency and Advanced Audience Signal Targeting
For another client, we implemented a Microsoft PMax test, using advanced audience targeting and first-party data to reach high-intent prospects across Bing, Outlook, MSN, and the Microsoft Audience Network.
With in-platform placement insights, we monitored performance closely and reacted quickly early on. The campaign drove a 10% increase in conversion rate, a 14% decrease in cost per lead, and a 4x increase in form fills in the first month, followed by another 2x the next month.
This reinforced a key principle: automation performs best with strategic human oversight. While we fed strong audience signals and conversion data, performance drifted as the system expanded into less efficient placements. With Microsoft support and ongoing monitoring, we excluded underperforming placements and refined targeting without over-constraining the campaign.
By letting PMax handle scale and optimization, while maintaining disciplined oversight and guardrails, we preserved efficiency and improved overall performance.
The risks nobody is talking enough about
Automated targeting is powerful, but not benevolent. It optimizes for the math you give it. Here are pitfalls to avoid.
Garbage in, garbage out
This is the most important risk. Poorly defined conversion events, incomplete data pipelines, or low-quality first-party data limit performance and train the algorithm on the wrong outcomes.
If you feed it noise, it will scale that noise, wasting budget on low-quality traffic.
If your goal is too broad or lacks strong quality signals, the algorithm will maximize volume, even when that volume doesn't drive real business value.
The self-reinforcement trap
If your seed data is biased, the AI will keep optimizing toward that bias, potentially missing valuable adjacent audiences. This "sampling bias" in training data is a real, underappreciated risk in automated systems.
Automation without oversight
Platforms have a financial incentive to push broader automation. Without your oversight and willingness to intervene, campaigns can drift from your business goals. "Set it and forget it" fails. You need to monitor campaigns and nudge them back on track when they drift.
Creative complacency
As targeting automates, creative becomes your primary differentiator. Neglect it and you lose.
Build creative that directly answers your audienceβs pain points. Stand out.
How to put audience engineering into practice
So how do you operationalize this? Here are three steps to start engineering your audiences today:
Audit conversion events. Review what you're asking platforms to optimize for. Make sure your signals reflect real business outcomes like revenue.
Restructure creative around intent signals. Ask: what does someone need to believe to convert? Let that drive your messaging. Build asset groups around specific barriers or desires to push the AI to find people who hold those beliefs.
Set guardrails before you let the algorithm learn. Automation works best within clear boundaries. Define performance thresholds before launch. Monitor for audience drift and intervene when results diverge from your goals. AI is a tool, not a replacement for strategy.
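The guardrail step can be made concrete with light automation. As a minimal sketch (the metric names and thresholds here are hypothetical, not from any ad platform's API), a recurring guardrail check might look like:

```python
# Hypothetical campaign guardrail check. Metric names and thresholds are
# illustrative; in practice you'd pull these from your platform's reporting export.

def check_guardrails(metrics: dict, max_cpl: float, min_conv_rate: float) -> list:
    """Return human-readable alerts for any metric outside its bounds."""
    alerts = []
    if metrics["cost_per_lead"] > max_cpl:
        alerts.append(f"CPL {metrics['cost_per_lead']:.2f} exceeds cap {max_cpl:.2f}")
    if metrics["conversion_rate"] < min_conv_rate:
        alerts.append(f"Conv rate {metrics['conversion_rate']:.1%} below floor {min_conv_rate:.1%}")
    return alerts

# Example: a campaign drifting into expensive placements trips the CPL alert.
alerts = check_guardrails(
    {"cost_per_lead": 42.0, "conversion_rate": 0.035},
    max_cpl=35.0,
    min_conv_rate=0.03,
)
```

The point is the workflow, not the code: define the thresholds before launch, run the check on a schedule, and have a human intervene whenever the alert list is non-empty instead of letting the campaign keep spending.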
The future belongs to audience engineers
The era of manual targeting is over, but precision matters more than ever. Audience engineering is your competitive advantage. By teaching algorithms who to target and what matters, you unlock AI's full potential and win in this evolving landscape.
Intel adds "Gaming Support" to its Arc Pro B70 and Arc Pro B65 GPUs with its newest Arc GPU drivers With the release of its Arc Graphics Driver 32.0.101.8629 WHQL, Intel has given its Arc Pro B70 and Arc Pro B65 GPUs official "gaming support". This means that users of Intel's "Big Battlemage" GPUs will […]
Xbox is reportedly preparing new Forza Horizon 6 themed accessories, including a limited edition controller and headset, with pricing details already emerging ahead of launch.
Arctic today released the Xtender Black (clear glass), a spacious ATX mid-tower case, and a new variant of the Xtender series that the company launched in August 2025. The case features a pillarless front-left corner, and clear tempered-glass panels along the front- and left side panels. Black SECC steel makes up the overall sheet metal component of the case. The case comes with or without a pre-installed vertical GPU mount, and includes five of Arctic's latest ARGB LED fans pre-installed: that's two P12 Pro (120 mm), and three P14 Pro Reverse (140 mm).
The case supports motherboards up to E-ATX dimensions, with room for graphics cards up to 48.2 cm in length. There's room for up to two 420 mm (420 mm x 140 mm) radiators: one along the top panel and one along the front-right panel; the rear panel has room for two 120 mm fans or a 240 mm radiator. The Xtender Black (clear glass) is normally priced at $159.99, but is up on Amazon at an introductory price of $119.99. Its VG (vertical GPU) variant is priced at $189.99 (introductory price of $142.99). The Accelero Vertical GPU Mount can be separately purchased at $51.99 (introductory price of $38.99).
Microsoft is working on stripping out the last vestiges of the classic Win32 user interface in Windows 11, and the company could completely remove the classic Win32 Control Panel in a future update. March Rogers, partner director of design at Microsoft, in a post on X, said that his team is working on migrating all the old controls from the Win32 Control Panel over to the modern Settings app. This will be a long-drawn-out process as the company wants to ensure the lack of the old Control Panel doesn't break any critical functionality, particularly with control panel applications that are part of device drivers, such as those provided by printer and network adapter manufacturers.
"We're working our way through migrating all the old control panel controls into the modern Settings apps. We're doing it carefully because there are a lot of different network and printer devices & drivers we need to make sure we don't break in the process," Rogers said. Control Panel is a classic Win32 shell application, technically "control.exe," with various software and device drivers including their own applications with the extension *.cpl which run under control.exe. The modern Settings app, on the other hand, is based on UWP (universal Windows platform), and the latest one conforms to Microsoft's WinUI 3 architecture, also known as "fluent design."
Around two dozen high-resolution photos of the Moon, taken by the Artemis II mission crew, are now available to download on NASA's website and the agency's Flickr page. During the mission, four astronauts observed portions of the Moon's far side that humans have never seen directly.
According to Cox Automotive, sales of new electric vehicles in the US were down 28% year on year in Q1 2026 after the Trump administration killed off the $7,500 consumer tax credit.
Taiwan's semiconductor industry alliance is urging the government to expand its strategic stockpiles of LNG and helium, find alternative suppliers, and restart nuclear power plants to ensure stability during times of crisis.
An epic deal over at HP's website is knocking over $1,600 off the price of this Omen Max gaming PC, fitted with an RTX 5090, 9800X3D, 32GB RAM, and a 1TB SSD, all for just $3,808.69 if you purchase it with a monitor or a $39 accessory.
Rythm adds a bouncer to your email so you control who reaches you. It builds a guest list from your Gmail or Outlook contacts, lets known senders through, and files unknown senders into a separate folder you can check on your terms. Strangers can pay a small cover charge to reach your inbox, with paid messages marked PAID and funds sent to your wallet. Rythm scans messages only to detect payment proofs and discards contents, never storing or sharing any email content.
DaySet calculates your daily guilt-free spend after all your bills and subscriptions are accounted for. Enter your income and expenses once and get one number every morning telling you exactly what you can freely spend that day. Unspent amounts roll over to tomorrow. It also includes an AI coach, recipe photo scanner, tax deduction tracker with PDF export, goals, habits, and a bill calendar with reminders. It's built for anyone who wants to stop guessing and start owning every day.
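DaySet doesn't publish its formula, but the mechanics it describes (spread what's left after bills across the month, roll unspent money forward) can be sketched in a few lines. All numbers and the 30-day month below are illustrative assumptions, not DaySet's actual logic:

```python
def daily_allowance(monthly_income: float, monthly_bills: float, days: int = 30) -> float:
    """Evenly spread whatever remains after bills across the month."""
    return (monthly_income - monthly_bills) / days

def todays_number(base: float, rollover: float) -> float:
    """Unspent money from yesterday rolls into today's guilt-free spend."""
    return base + rollover

base = daily_allowance(3000.0, 2100.0)       # 900 left over 30 days = 30.0/day
today = todays_number(base, rollover=12.5)   # yesterday's surplus raises today to 42.5
```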
The Fragmented State of Modern Enterprise Identity
Enterprise IAM is approaching a breaking point. As organizations scale, identity becomes increasingly fragmented across thousands of applications, decentralized teams, machine identities, and autonomous systems.
The result is Identity Dark Matter: identity activity that sits outside the visibility of centralized IAM and
Proton VPN has launched a new update for iOS devices, introducing improved Shortcuts, Quick Actions, and Siri integration to make securing your mobile connection faster and easier than ever.
The ASUS Zenbook A16 is a Copilot+ PC by definition, but its 5-star success proves no one cares. Major reviews are hailing its 18-core Snapdragon power, lightweight build, and MacBook-beating performance while completely ignoring Microsoft's "Copilot+ PC" label.
TypeFart is a small Windows app that plays a fart sound on every keypress and weird sounds on the touchpad, but you'll need a subscription plan for the action.
Packaging has shifted from a back-end afterthought to the center of Intel's manufacturing strategy as AI workloads push designers to stitch together many specialized dies into a single system. At its Rio Rancho, New Mexico, site β once home to a shuttered Fab 9 that sat idle for years β...
AMD's Ryzen 5 5500X3D extends AM4's life once again, but is it worth it? We tested 14 games to see how this cut-down 3D V-Cache chip stacks up against Zen 3, older Ryzen parts, and newer CPUs.
A premium Asus ROG Strix X870E motherboard, 9800X3D, 32GB Corsair Vengeance RAM, a free AIO cooler, and a copy of Crimson Desert are yours for just $1054.98
The UK National Cyber Security Centre says that Russian state hackers have been exploiting vulnerable small office and home office routers since 2024 to overwrite their DHCP and DNS settings
The Be Quiet! Pure Power 13 M 1200W combines exceptional build quality from CWT with Platinum-level efficiency and a comprehensive 10-year warranty, though its limited connectivity and premium pricing demand careful consideration.
Taiwan's National Security Bureau claims that China is intensifying efforts to steal semiconductor process technologies and other chip-related know-how from Taiwan as international restrictions get more severe.
Corsair has launched a new case customizer for its Frame 4000D that allows buyers to change almost anything about the enclosure, from the side panel to the front-panel I/O configuration.
Axiom enables enterprise teams to turn complex decisions into action quickly. It centralizes procurement and alignment workflows, lets AI agents research options, propose criteria, and score vendors against documentation and RFPs, and generates audit-ready Architectural Decision Records.
Use Axiom to compare human intuition with data-driven scores, collaborate asynchronously to resolve gaps, and approve outcomes with clear traceability. Replace weeks of meetings with days of structured, transparent evaluation.
Artificial Intelligence (AI) company Anthropic announced a new cybersecurity initiative called Project Glasswing that will use a preview version of its new frontier model, Claude Mythos, to find and address security vulnerabilities.
The model will be used by a small set of organizations, including Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, […]
An ex-Microsoft engineer recently detailed how Azure's rushed launch, talent exodus, and AI hype left Microsoft's cloud fragile and struggling to compete.
MSI, a world leader in gaming hardware, today unveiled the MAG Infinite S AI 2nd, a gaming desktop built for players who want dependable performance, smart cooling, and clear upgrade paths. It features processors up to the Intel Core Ultra 7 265 and graphics up to the NVIDIA GeForce RTX 5070 Ti, and pairs DDR5 memory with M.2 Gen 5 SSD storage slots to deliver fast load times, smooth gameplay, and headroom for future gaming titles.
Uncompromising Performance: Intel Core Ultra 7 & NVIDIA GeForce RTX 5070 Ti
The MAG Infinite S AI 2nd is meticulously engineered to push gaming performance to its limits, powered by the Intel Core Ultra processor and NVIDIA GeForce RTX 50 series graphics. Intel's advanced hybrid architecture delivers the raw processing muscle required for the most demanding modern titles, while the Blackwell GPU architecture of the RTX 50 series graphics unlocks unprecedented frame rates and visual fidelity that were previously out of reach.
Intel has released its latest Arc GPU graphics driver, version 101.8629 WHQL, bringing gaming support to its newest Arc Pro B70 and Arc Pro B65 discrete GPUs. These WHQL drivers focus primarily on enabling gaming on the professional-grade cards, so they include minimal new game optimizations and retain some existing issues. Because the "Battlemage" architecture has been around for a while and previous optimizations carry over, the recently announced BMG-G31-based GPUs required only a simple driver enablement rather than the lengthy per-game work typically needed; performance on the new GPUs comes from linearly scaling the Xe2 cores within the GPU. When workstations take a break from their usual AI development or inference tasks on Arc Pro B70 GPUs, they can seamlessly transition to gaming. Interestingly, with gaming now an option, the Arc Pro B70 becomes Intel's largest discrete gaming GPU.
Corsair has silently raised the prices on its AI Workstation 300 mini PCs. The top-end Ryzen AI Max+ 395 model is now $3,399, or $400 pricier than it was just a couple months ago. That increase comes amid spiraling RAM and storage prices thanks to the AI boom.
Naftiko turns existing data and APIs into governed, reusable capabilities for AI. Teams declare what they consume and expose in YAML specs, run them with an open-source engine, and publish them to a runtime where discovery, composition, and observability are built-in. Policy-driven controls, identity propagation, and audit trails keep agents inside trust boundaries while reuse metrics and consistent packaging reduce duplication and speed delivery.
Apricot AI provides 24/7 tech support through a lightweight Windows taskbar app. It reads your hardware, drivers, and software to deliver personalized, step-by-step fixes in seconds instead of generic search results.
For $19/month, you get unlimited questions across common issues like WiβFi, printers, slow PCs, drivers, app errors, and more. Apricot AI keeps your data private and uses system info only to answer your questions, so you can solve problems fast without appointments or jargon.
Hexys is a privacy-first behavioral recovery platform built to break compulsive digital habits. Its name comes from the ancient Greek "hexis," Aristotle's concept of stable character built through repeated practice. Every check-in, honest journal entry, and day you show up is a deposit toward who you are becoming. Hexys encrypts your sensitive data on your device with AES-256-GCM zero-knowledge encryption before it reaches our servers, storing only ciphertext. Features include a journal, Arcos AI companion, streak tracking, XP progression, anonymous accountability pods, and a content blocker.
The North Korea-linked persistent campaign known as Contagious Interview has spread its tentacles by publishing malicious packages targeting the Go, Rust, and PHP ecosystems.
"The threat actor's packages were designed to impersonate legitimate developer tooling [...], while quietly functioning as malware loaders, extending Contagious Interviewβs established playbook into a coordinated
Learn how to craft AI website builder prompts that produce professional, agency-quality designs by studying elite inspiration sources and mastering effective prompting techniques.
Yesterday, Qualcomm introduced its Snapdragon X2 Elite series of chips to consumers in partnership with major OEMs like ASUS and HP. On the same day, ASUS released additional laptop models based on Intel and AMD SoCs. However, just a day after some media outlets published their reviews, ASUS raised the prices of its products through its launch partner distributor, Best Buy, significantly affecting some reviewers' conclusions. The increases reportedly range from $100 to $350, depending on the laptop model and configuration, regardless of platform. This is a substantial change: a $350 increase on a $1,000 model like the Zenbook 14 raises the advertised price by 35%. Reviewers, such as those at TechPowerUp, evaluate a model based on its value; a budget model is judged from a budget buyer's perspective, while for a workstation model a small price increase may not be a dealbreaker. In the $1,000 range, however, these increases push the laptops into a completely different category.
Below is the price change, broken down by model. On the left is the launch pricing listed by ASUS, and on the right is the new pricing from Best Buy as of today.
How well do your gut decisions actually hold up? Convexly is a decision intelligence platform that measures this. Log predictions with probabilities, resolve outcomes, and see how your confidence matches reality. It calculates Brier scores, calibration curves, and runs Monte Carlo simulations to stress-test your choices. Start with a free 2-minute calibration quiz, then track real decisions to improve over time.
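Convexly's internals aren't described, but the Brier score it reports is a standard metric: the mean squared gap between your stated probabilities and the 0-or-1 outcomes, where lower is better. A minimal illustration:

```python
def brier_score(forecasts: list, outcomes: list) -> float:
    """Mean squared error between predicted probabilities and binary outcomes."""
    assert len(forecasts) == len(outcomes)
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# A calibrated 70% call that comes true scores far better than an
# overconfident 95% call that doesn't.
score = brier_score([0.7, 0.95, 0.5], [1, 0, 1])
```

A perfectly confident correct forecast scores 0, and always answering 50% scores 0.25 no matter what happens, which is why calibration curves are usually read alongside the raw score.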
Iran-affiliated cyber actors are targeting internet-facing operational technology (OT) devices across critical infrastructure in the U.S., including programmable logic controllers (PLCs), cybersecurity and intelligence agencies warned Tuesday.
"These attacks have led to diminished PLC functionality, manipulation of display data and, in some cases, operational disruption and financial
Research Rocket helps founders validate ideas before building. Launch waitlist landing pages and smoke tests in minutes, run pre-launch surveys and concept tests, and get AI-scored demand signals with clear insight summaries. Use built-in tools for card sorting, tree testing, interview guides, and qualitative analysis to map user needs and make evidence-based decisions.
BeatMusic is an online AI music generation platform for creators, musicians, and content producers. No music theory or expensive equipment neededβjust describe what you want and get professional-quality songs in minutes. It offers 20+ professional tools including AI Cover to transform any song with 100+ vocal styles and genres, AI Music Video Generator to turn static images into music videos, and AI Singing Photo to make anyone in a picture sing your song.
Mygomseo is an AI marketing agent that helps you rank across Google and AI search. It scans your site in seconds, runs 40+ technical checks, connects to Search Console, and uncovers issues and keyword gaps. It learns your brand voice, plans a content calendar, writes SEO articles, and auto-publishes to 13+ platforms including WordPress, Shopify, and Webflow. Mygomseo tracks rankings, backlinks, and anomalies, delivers reports, and answers questions on demand so you grow search visibility with minimal effort.
Ka.I Health builds Kai Companion, a local safety app that triggers a help chain. It sends a personalized SMS with your location to selected contacts and automatically calls a primary contact so you get support fast on iOS.
According to The New York Times, testing suggests that approximately one in 10 Google AI search overviews contains false information. Given that the search engine processes roughly 5 trillion queries per year, users could be exposed to more than 57 million inaccurate answers each hour, nearly 1 million per minute.
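The per-hour and per-minute figures follow directly from the two stated numbers, assuming (as the article's own math implicitly does) that every query surfaces an overview. A quick check with a 365-day year:

```python
QUERIES_PER_YEAR = 5e12   # roughly 5 trillion, per the article
ERROR_RATE = 0.10         # about 1 in 10 overviews, per the NYT testing

per_hour = QUERIES_PER_YEAR / (365 * 24) * ERROR_RATE   # just over 57 million
per_minute = per_hour / 60                              # roughly 951,000
```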
With the early access launch of Morbid Metal, an indie-developed hack-n-slash roguelite, just around the corner, the game's lead developer, Felix Schade, has taken to X to confirm that players will not need an Ubisoft+ account or the Ubisoft launcher to play the game. Apparently, it had been a frequent question asked of the developer, given that the game is being published by Ubisoft.
Third-party launchers can be a pain at the best of times, but they tend to be particularly maligned when combined with an online account requirement. They also potentially hamper a game's compatibility with Linux devices, which is a particular sticking point for Valve's Steam Deck. At the time of writing, Morbid Metal has a Platinum rating on ProtonDB, suggesting that Linux compatibility will not be an issue at launch.
Since the launch of Marathon, there have been a number of reports and complaints about cheating in Bungie's PvPvE extraction shooter, and in response to these complaints, Bungie has taken to social media site X to detail its anti-cheat systems and explain what the development team plans to do to improve the situation going forward. Bungie starts off by reiterating its zero-tolerance policy for cheating in Marathon, confirming that once a cheater is detected, they will be banned from the game forever, but it goes on to explain that cheating is a "continuous cycle of monitoring, improving, and responding."
Bungie goes on to say that it has already implemented some improvements to its anti-cheat systems, but that the development team is working on several new features that will make it easier to report cheaters and easier to provide evidence of that cheating. For starters, it is making it easier to report cheaters both in-game and with web tools, and it is working on tools to give players feedback when their reports result in actions from the anti-cheat system. Bungie is also working on tools, like username privacy options, that will make tactics like stream sniping in competitive matches more difficult. The studio encourages players to report cheaters and toxic players with as much evidence as possible, including clips and VODs. The increased complaints of cheating in Marathon come despite the game's use of BattlEye anti-cheat, which has incidentally locked out Steam Deck and Linux users from playing the game.
The Apple MacBook Neo has been a bit of a shock to the system in the laptop world, thanks to its low barrier to entry and surprisingly solid performance for everyday tasks. However, one of the biggest criticisms levelled against the MacBook Neo is its mobile A18 Pro chip and limited 8 GB of RAM. According to recent rumors, though, Apple may upgrade the internals of the MacBook Neo in 2027, bumping it up to an A19 Pro, the SoC featured in the latest iPhone 17 Pro smartphones.
The A19 Pro is still a mobile SoC, and it is slated to use the same 5-core GPU configuration as the A18 Pro in the current MacBook Neo, but that SoC upgrade also means that the MacBook Neo would no longer be limited to just 8 GB of LPDDR5X. Instead, as the rumors suggest, the MacBook Neo 2027 will see a 50% memory increase to 12 GB total memory. So far, the MacBook Neo seems to have been such a hit in sales that the Cupertino giant has already started increasing production of the laptop.
The macOS networking stack has a bug that creates a 49.7-day-long countdown to disaster that currently requires a reboot to fix, as discovered by AI service provider Photon.
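The report doesn't name a cause, but 49.7 days is the signature interval of an unsigned 32-bit millisecond counter wrapping around; treating that as an assumption rather than a confirmed diagnosis, the arithmetic is:

```python
# A 32-bit unsigned counter of milliseconds overflows after 2**32 ms.
wrap_ms = 2 ** 32
wrap_days = wrap_ms / 1000 / 86400   # ms -> seconds -> days
# wrap_days comes out to about 49.71, matching the reported 49.7-day countdown
```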
APPROXINATION is an AI agent-focused search engine that delivers answers powered by an "Inverse Arena," a system where personal AI agents continuously test, compare, and rank the best tech tools to give you data-driven recommendations.
DutyDesk helps importers and trade professionals look up US and EU tariff rates, calculate total landed costs, and classify products. You can search by name or HTS code to see full rates including trade actions and fees, or use AI classification backed by GRI rules and official rulings. Set alerts for rate changes, organize codes by client or shipment, and access data via a REST API. Data sources include USITC, CBP, the Federal Register, TARIC, EUR-Lex, and EU BTI.
Wall-E and Eve are appearing at Disneylandβs Pixar Place Hotel for a limited time in April 2026, as Disney brings back a rare character robot duo and hints at Imagineeringβs next wave of interactive experiences.
The app highlighted the popularity of its public discussions during March Madness, though Threads and X still have more active users during live events.
There may be opportunities for wellness brands that want to engage with people beyond the confines of a doctorβs office, according to a new study from the company.
It's no secret that prices for electronics are skyrocketing across the board, and shortly after it seemed as though there may be some hope on the horizon, it has been revealed by VideoCardz that the ASUS AMD Radeon RX 9070 XT GPUs have seen a significant price increase in US markets. All three ASUS RX 9070 XT models listed on PCPartPicker have seen a substantial price increase in recent days, with the average increase coming out to around 17%.
The ASUS Prime OC Radeon RX 9070 XT saw its price go from $819.99 to $959.95 at both B&H and ASUS's own online store. It currently retails for the same $959.95 at Amazon, too. Similarly, the black version of the Prime OC RX 9070 XT and the ASUS TUF Gaming OC Radeon RX 9070 XT have increased from $799.99 to $939.99 and from $849.99 to $989.99, respectively. Curiously, the ASUS Prime OC RX 9070 XT can still be had at Newegg and Amazon for its lower $799.99 price, suggesting that this is effectively a pricing change implemented by ASUS as new GPUs were sent out to retailers.
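The roughly 17% average is easy to verify from the listed prices:

```python
# Old and new US prices for the three ASUS RX 9070 XT models.
old_new = {
    "Prime OC": (819.99, 959.95),
    "Prime OC (black)": (799.99, 939.99),
    "TUF Gaming OC": (849.99, 989.99),
}
increases = {model: new / old - 1 for model, (old, new) in old_new.items()}
average = sum(increases.values()) / len(increases)   # about 0.17, i.e. ~17%
```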
When Blizzard rebranded Overwatch 2 to just Overwatch, it also announced a slew of new characters coming in 2026. One of those characters, Anran, sparked an uproar in the community, with players calling her design generic and a stark deviation from the character previewed in cinematics leading up to her introduction. Blizzard acknowledged the community feedback, announced a redesign, and has now shown off Anran's final design ahead of her launch in Season 2 on April 14.
The character's new look features a number of changes, mostly focused on the facial shape, to make her design match her personality, which Overwatch game director Aaron Keller describes as "confident, determined, fierce, and a natural-born leader." The changes include a more focused expression, darker shading on her face, a more defined jawline, a wider mouth and smile, and posing that makes her appear more confident. In general, critics of Anran's initial design seem appreciative of the changes, although, as with anything related to gaming, there are still complaints about her looks.
The Artemis II crew used an iPhone on their mission, and suddenly nothing about Apple's iconic product seems more important, not even the possibility of a foldable device.
Japan is advancing floating data center plans using converted ships, with Hitachi support, to address land shortages and growing AI infrastructure demand.
Google CEO Sundar Pichai said AI models could expose more software vulnerabilities and agreed it was plausible AI is affecting zero-day exploit markets.
Google is giving advertisers new visibility into whether its automated recommendations actually drive performance, a long-standing blind spot in the platform.
What's happening. A new "Results" tab within Recommendations shows the incremental impact of bidding and budget changes after they've been applied, allowing marketers to evaluate outcomes instead of relying on assumptions.
How it works. The feature attributes performance changes to specific recommendations, helping advertisers understand what effect adjustments like budget increases or bid strategy shifts had on results.
Why we care. Marketers can now validate whether recommendations improved performance, making it easier to decide which automated suggestions are worth adopting in the future.
Between the lines. Google has a vested interest in encouraging adoption of its recommendations, so providing performance data could build trust, but it also raises questions about how that impact is measured.
The catch. Advertisers may question whether the reported results are fully objective or skewed toward showing positive outcomes, given Google's incentives.
What to watch. How detailed and transparent the reporting becomes, and whether advertisers see mixed or negative results alongside wins.
Bottom line. Google is moving from "trust us" to "here's the proof," but advertisers will be watching closely to see how impartial that proof really is.
First seen. This update was first spotted by Arpan Banerjee, who shared the new tab on LinkedIn.
Google is giving advertisers more control over how AI generates ad copy, making it easier to scale campaigns without losing brand consistency.
What's happening. Google Ads is rolling out a beta feature that allows marketers to copy text guidelines from existing campaigns and apply them to new ones, eliminating the need to rewrite brand rules from scratch.
How it works. Advertisers can replicate approved tone, style and messaging rules across campaigns in one click, ensuring AI-generated ads stay aligned with brand standards while reducing setup time.
Why we care. The feature helps teams launch campaigns faster by reusing what already works, while maintaining consistency across large accounts where multiple campaigns run simultaneously.
Between the lines. This shift reflects a growing demand from marketers to "train" AI systems rather than rely on them blindly, effectively turning brand guidelines into reusable inputs for automation.
Bottom line. AI is speeding up ad creation, but control is becoming the real differentiator, and Google is starting to hand more of it back to advertisers.
First spotted. Paid media expert Arpan Banerjee surfaced this update when he shared the alert on LinkedIn.
UK publisher Kwalee and independent studio Out of the Blue are pleased to announce that the Lovecraftian narrative puzzle adventure Call of the Elder Gods will launch on May 12, 2026. The game is coming to PC (via Steam), Nintendo Switch 2, PlayStation 5, and Xbox Series X|S, and is available day one with Xbox Game Pass. A sequel to 2020's award-winning Call of the Sea, Call of the Elder Gods is a single-player, first-person puzzle adventure with a strong narrative focus.
Players step into the roles of Professor Harry Everhart and newcomer Evangeline Drayton, solving intricate puzzles driven by logic, observation, and environmental interaction. Together, they journey from New England to the Australian desert, the frozen Arctic, and the ancient city of Pnakotus, searching for missing loved ones while confronting personal grief.
A little over a year after the release of Assassin's Creed Shadows, Ubisoft has shipped the Title Update 1.1.10, which, aside from the usual bug fixes, adds PSSR 2 support to the game for PS5 Pro players and a number of quality-of-life updates and gameplay changes across all of the game's platforms. The full changelog is available via an Ubisoft news post.
Ubisoft has not detailed the exact visual upgrades wrought by the addition of PSSR 2, but we can likely expect smoother, higher frame rates with sharper upscaling, as has been seen in other games like Resident Evil: Requiem and Cyberpunk 2077. As of the new update, all players will be able to access the Bo staff, which was previously locked behind the Claws of Awaji expansion. The Switch 2 version of Assassin's Creed Shadows also now features mouse and keyboard support, and the laundry list of bug fixes includes UI fixes for damage indicators, a fix for an unintentional +100% stat cap in some cases, issues with fast travel points not being available, and progression getting stuck at 97.89% despite all content being completed.
According to longtime Intel watcher Jaykihn, Nova Lake's integrated graphics will be built around Xe3, the same generation used in Panther Lake integrated GPUs. Jaykihn had previously suggested that Nova Lake would include an Xe4 media component but has since walked that back, stating that "there is nothing Xe4 on...
Intel is working on its own neural texture compression with similar compression performance to Nvidia's counterpart. Best of all, Intel has a fallback version of its compression tech that will work with GPUs that don't come with Intel's XMX engine.
Anthropic's latest frontier AI model, Claude Mythos Preview, is so adept at finding software vulnerabilities that the lab is holding it back to allow companies and institutions to proactively patch their products against the 'thousands' of bugs it has already uncovered.
ZeroTwo lets you access the combined capabilities of Claude, Perplexity, ChatGPT, Manus, and Higgsfield. These top AI platforms each have unique features that give them special abilities beyond their models. Now you can use all of them without paying for several subscriptions. Perplexity's agentic search, Claude's agentic connector, ChatGPT's apps, and Higgsfield's AI tools for creatives are all available on one platform.
The platform also offers deep research, canvas mode, and shared access to threads and projects. Plans include unlimited messages, expanded memory, priority performance, and team features for businesses.
OrbitMeet is a browser-based AI meeting co-pilot that listens to your meetings in real time, surfaces questions you might miss at 75-second intervals, and builds your summary as you talk, with no plugins or installation.
It detects action items by speaker name, generates follow-up documents such as emails, memos, and action trackers in seconds, and works across Zoom, Teams, Google Meet, or in-person meetings. It's designed for consultants, founders, and distributed teams working in multiple languages. A free plan is available, with Pro at $20.5 CAD/month.
It's officially marathon season, and if you're looking for new gear, I've rounded up our top-rated running shoes and smartwatches that are currently on sale at Amazon.
Google says its AI-powered advertising tools are starting to deliver meaningful results, including major revenue gains for some retailers, as it experiments with how ads work in AI-driven search.
The big picture. Fears that AI chatbots like ChatGPT would disrupt Googleβs core search business havenβt materialized, and instead the companyβs ads business continues to grow, suggesting AI may be expanding how people search rather than replacing it.
By the numbers:
Alphabet Inc. surpassed $400 billion in revenue in 2025.
Q4 ad revenue: $82.28 billion (+13.5% YoY).
YouTube ads: $11.38 billion (+~9% YoY).
What's happened. Google is embedding ads into its AI-powered search experiences, including AI Mode powered by Gemini. It is also introducing new ad formats designed for conversational queries, plus tools that let brands shape how they appear in AI-generated answers. A new "business agent" feature enables companies like Poshmark and Reebok to control how their products are represented.
Driving the results. AI-driven campaigns like Performance Max and AI Max match ads to more detailed and conversational search intent. Google says queries in AI Mode are often two to three times longer than traditional searches, giving the system more context to connect users with relevant products. Aritzia, for example, reported an 80% increase in revenue after adopting AI Max.
How it works. The system scans a retailerβs website and creative assets, interprets user intent from conversational queries, and dynamically matches products and messaging in real time. This is increasingly important given that 15% of daily searches are entirely new (according to Google) and cannot be predicted through traditional keyword targeting.
Why we care. Google is shifting from keyword-based ads to intent-driven, AI-matched advertising, meaning campaigns can reach consumers with far more precision at the moment theyβre ready to buy. As search becomes more conversational and unpredictable, advertisers who rely on traditional targeting risk falling behind those using AI-driven formats that automatically adapt to new user behavior.
Commerce push. Google is also advancing its commerce strategy through a Universal Commerce Protocol developed with Shopify, which allows purchases to happen directly within AI conversations.
What they're saying. Google positions itself as a "matchmaker" rather than a retailer, emphasizing that AI helps deliver more relevant and personalized ads while allowing brands to maintain control over their messaging and build user trust by showing the right product at the right moment.
What's next. Google says it has no current plans to introduce ads directly into Gemini but will continue testing and expanding advertising within AI Mode, including more personalized offers and AI-driven shopping experiences.
Bottom line. AI is not replacing search but reshaping it, and for Google that shift is making advertising more conversational, more targeted and, in some cases, significantly more profitable.
Google Search is evolving beyond links and answers into a system that completes tasks, potentially fundamentally changing how users interact with the web. Thatβs according to Alphabet CEO Sundar Pichai, speaking on the Cheeky Pint podcast.
Why we care. Google is signaling a move from information retrieval to task execution.
Search becoming agentic. Traditional search behavior is already changing and will continue to, Pichai said.
βIf I fast-forward, a lot of what are just information-seeking queries will be agentic in Search. Youβll be completing tasks. Youβll have many threads running.β
Pichai also described a future where Google Search acts less like a list of results and more like a system that coordinates actions:
βSearch would be an agent manager in which youβre doing a lot of things. I think in some ways, I use Antigravity today, and you have a bunch of agents doing stuff. I can see search doing versions of those things, and youβre getting a bunch of stuff done.β
AI Mode is already changing queries. Users are already adapting their behavior in Googleβs AI-powered search experiences, Pichai said:
βBut today in AI Mode in Search, people do deep research queries. That doesnβt quite fit the definition of what youβre saying. But people adapted to that. I think people will do long-running tasks.β
Search vs. Gemini overlap. Despite the rise of Gemini, Pichai said Google isn't replacing Search with a chatbot. Instead, the two will coexist and diverge (echoing what Liz Reid said last month):
βWe are doing both Search and Gemini. They will overlap in certain ways. They will profoundly diverge in certain ways. I think itβs good to have both and embrace it.β
Googleβs AI Overviews answered a standard factual benchmark correctly 91% of the time in February, up from 85% in October, according to a New York Times analysis with AI startup Oumi.
Why we care. Weβve watched Google shift from linking to sources to summarizing them for more than two years. This report suggests AI Overviews are improving, but still mix correct answers, weak sourcing, and clear errors in ways that can mislead searchers and reshape which publishers get visibility and clicks.
The details. Oumi tested 4,326 Google searches using SimpleQA, a widely used benchmark for measuring factual accuracy in AI systems, the Times reported. It found AI Overviews were accurate 85% of the time with Gemini 2 and 91% after an upgrade to Gemini 3.
The bigger problem may be sourcing. Oumi found that more than half of the correct February responses were βungrounded,β meaning the linked sources didnβt fully support the answer.
That makes verification harder. The answer may be right, but the cited pages may not clearly show why.
What changed. Accuracy improved between October and February, but grounding worsened. In October, 37% of correct answers were ungrounded; in February, that rose to 56%.
Examples. The Times highlighted several misses:
For a query about when Bob Marleyβs home became a museum, Google answered 1987; the correct year was 1986, according to the Times, and the cited sources didnβt support the claim or conflicted.
For a query about Yo-Yo Ma and the Classical Music Hall of Fame, Google linked to the organizationβs site but still said there was no record of his induction.
In another case, Google gave the correct age at Dick Dragoβs death but misstated his date of death.
Googleβs response: Google disputed the Times analysis, saying the study used a flawed benchmark and didnβt reflect what people actually search. Google spokesperson Ned Adriance told the Times the study had βserious holes.β
Google also said AI Overviews use search ranking and safety systems to reduce spam and has long warned that AI responses can contain mistakes.
Microsoft has announced April's wave of Xbox Game Pass additions, and it includes the Call of Duty: Modern Warfare reboot, Hades 2, and many other titles.
Cyberpunk 2077 has been out since 2020, but CD Projekt Red's dedication to the game has not waned: a new April 8 update brings enhancements on the PS5 Pro. As detailed in a new PlayStation Blog post, the update delivers a slew of visual upgrades. The biggest change is that the game will now use PSSR to upscale to 4K, with ray-traced lighting, shadows, and reflections. The update will be free for anyone playing on a PlayStation 5 Pro.
The Cyberpunk 2077 PS5 Pro Enhanced version will feature three gameplay modes, giving gamers the choice to optimize for visuals or performance. Ray Tracing Pro mode enables all RT features, including RT reflections, ambient occlusion, skylight, shadows, and emissive lighting, with a frame rate target of 40 FPS on VRR displays or 30 FPS without VRR. Performance mode has the highest frame rate target, at 90 FPS with "high image fidelity," although CDPR doesn't specify which features are enabled. Meanwhile, Ray Tracing mode targets 60 FPS with "select ray tracing enhancements" enabled, although CDPR again doesn't specify resolution or RT features for this mode.
PeaZip 11.0 refines one of the most capable free archivers with faster browsing, smoother drag-and-drop across tabs, and a cleaner, more responsive UI. The update also improves scaling, adds flexible icon rendering, and introduces batch archive testing, alongside the usual fixes and cleanup.
The newly announced Netflix Playground is an all-in-one app designed to give children a curated gaming experience built around familiar cartoon characters. The streaming giant describes it as an ever-growing library of instantly playable games for kids aged 8 and under.
Shadow OS is the first decision-making app built on 64 hexagrams, the same system Carl Jung studied for over two decades and called his most significant method for surfacing what the unconscious already knows. Other decision apps use random spinner wheels, AI chatbots validate whatever you say, and astrology apps offer forecasts open to interpretation. Shadow OS gives you one committed answer: move forward, hold, or pull back.
BeMusic AI is a free AI music generator that turns text prompts into fully produced, royalty-free songs in under 30 seconds. Choose from 50+ genres, adjust mood, tempo, and energy, and download high-quality WAV or MP3 for videos, games, podcasts, and ads. It also offers tools to write lyrics, create instrumentals, convert audio to MIDI, edit MIDI, make AI covers, remove vocals, extend tracks, and analyze songs. Use it to avoid copyright issues and keep full ownership of every track.
The Russia-linked threat actor knownΒ as APT28 (aka Forest Blizzard) has been linked to a new campaign that has compromised insecure MikroTik and TP-Link routers and modified their settings to turn them into malicious infrastructure under their control as part of a cyber espionage campaign since at least MayΒ 2025.
The large-scale exploitation campaign hasΒ been codenamedΒ
The Nikon D5's still-unbeaten low-light performance and proven build quality made it NASA's choice for the Artemis II mission's most important photographs.
The news follows a report from Nikkei Asia on Tuesday that raised concerns the companyβs foldable iPhone could be delayed due to challenges during the phoneβs engineering test phase.
Google has begun placing sponsored ad units directly inside the Images tab of mobile search results β a new placement that eligible campaigns can access without any changes to existing keyword targeting.
Whatβs happening.Β When a user navigates to the Images tab within Google Search on mobile, they may now see sponsored units appearing within the image grid. Each unit shows a full image creative as the primary visual alongside text, and is clearly labelled βSponsoredβ β consistent with how Google labels ads elsewhere in search results.
How it works.Β Eligible campaigns can serve into the Images tab without any changes to keyword targeting or campaign structure. The placement draws from existing image assets, meaning advertisers running Search or Performance Max campaigns with strong visual creative are best positioned to benefit. No separate image-only campaign setup is required.
Why we care.Β This is a meaningful expansion of Googleβs paid search real estate. For product-led and catalog-heavy advertisers, the Images tab is where purchase-intent discovery often starts β and now ads can appear right in that moment. If your campaigns already use strong image assets, you may be picking up incremental impressions without lifting a finger.
The big picture.Β Early indications suggest this placement behaves more like a visual discovery surface than classic paid search. Expect high impression volume but lower click-through rates β more in line with display or Shopping than traditional text ads. That said, the assist value in multi-touch conversion paths could be significant, particularly for retail and direct-to-consumer brands. Treat it as upper-funnel reach, not a last-click channel.
What to watch.Β Google has not made a formal announcement, and there is no dedicated reporting breakdown for Images tab placements yet. Monitor your impression share and segment data closely to understand whether this placement is contributing β and whether itβs eating into organic image visibility for competitors.
First seen. The placement was spotted by Google Ads expert Matteo Braghetta, who shared the sighting on LinkedIn. No official documentation has been published by Google at the time of writing.
Over 30% of outbound clicks go to just 10 domains, with Google alone taking more than 20%, according to a new Semrush study published today.
ChatGPT also relies less on the live web, triggering search on 34.5% of queries, down from 46% in late 2024.
The big picture. ChatGPTβs growth has plateaued, and its role in how users navigate the web is evolving unevenly.
Referral traffic from ChatGPT grew 206%, comparing January 2025 to January 2026.
The details. Most ChatGPT referral traffic still goes to a small set of sites, even as more sites receive some traffic.
Google accounts for 21.6% of all ChatGPT referral traffic.
The next nine domains bring the top 10 to just over 30% of referrals.
Most other sites get a long tail of minimal traffic.
The number of domains receiving referrals expanded, peaking at around 260,000 in 2025 before settling near 170,000.
Why we care. Visibility in ChatGPT doesnβt translate evenly into traffic, and youβll likely see marginal referral impact. The decline in search-triggered queries also limits your chances to earn citations and traffic.
When ChatGPT searches. It defaults to pre-trained knowledge and uses web search in specific cases, including:
User requests for sources.
Questions about recent events.
Situations where the model lacks confidence.
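As an illustration of those triggers, here is a toy decision rule. This is my own sketch, not OpenAI's documented logic; the keyword lists and the 0.6 confidence threshold are assumptions for illustration only.

```python
# Illustrative sketch (not OpenAI's actual logic): a rule-of-thumb model
# of when a chat assistant falls back to live web search.
def should_search(prompt: str, model_confidence: float) -> bool:
    """Return True when a query is likely to trigger web search."""
    text = prompt.lower()
    # Trigger 1: the user explicitly asks for sources.
    asks_for_sources = any(kw in text for kw in ("source", "cite", "link"))
    # Trigger 2: the question concerns recent events.
    recent_events = any(kw in text for kw in ("today", "latest", "this week", "2026"))
    # Trigger 3: the model lacks confidence (threshold is an assumption).
    low_confidence = model_confidence < 0.6
    return asks_for_sources or recent_events or low_confidence

print(should_search("What is the latest Snapdragon chip?", 0.9))  # True
print(should_search("What is the capital of France?", 0.95))      # False
```

Under this sketch, a stable fact the model already knows stays in pre-trained knowledge, while recency cues or a request for citations routes the query to search.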
Behavior shift. Most ChatGPT prompts still donβt resemble traditional search queries.
Between 65% and 85% of prompts donβt match standard keywords, reflecting more complex, conversational inputs.
Meanwhile, engagement is deepening. Queries per session jumped 50% in late 2025.
About the data. Semrush analyzed more than 1 billion lines of U.S. clickstream data from October 2024 to February 2026 across a 200 million-user panel, tracking prompts, referral destinations, and search usage.
Malware ploys from bad actors are getting more elaborate, as axios maintainer Jason Saayman explains how the registry's hijacking was weeks in the making and involved a fake Teams update that delivered a trojan.
2026 has thus far been a busy year for gaming mice, with hits like the Razer Viper V4 Pro and VXE's upcoming Logitech G305 alternative already launched. Now, SteelSeries appears to be making a comeback in the gaming mouse market with the as-yet-unreleased Rival Pro and Rival Pro Mini, which have shown up on Reddit in what appears to be an accidental early leak. If the retail packaging in the Reddit post is anything to go by, the mice will have a couple of nifty features to set them apart from the rest of SteelSeries's line-up.
Despite the Rival moniker, the Rival Pro Mini looks a lot more like the SteelSeries Prime wireless mouse than the rest of the SteelSeries Rival gaming mice. The Rival Pro Mini will weigh in at 49 g and use the PixArt PAW 3950 sensor that has become ubiquitous in flagship gaming mice in recent years. The Rival Pro Mini's main clicks will be optical switches with a 100 million-click MTBF rating. One of the standout features is the "Infinite Power" swappable battery system, which is similar to those used by Angry Miao in the Infinity AM series and Glorious in the Model O3 Wireless. The Rival Pro and Pro Mini will also have 8 kHz wireless connectivity and 100% PTFE skates.
Late last week, we reported on a new series of rowhammer bit-flip attacks targeting GDDR6-based NVIDIA GPUs. Most of these attacks can be mitigated by enabling IOMMU through the BIOS, which restricts the memory regions the GPU can access on the host system, thereby closing the primary attack path. However, researchers from the University of Toronto have introduced "GPUBreach," which can bypass IOMMU and enable CPU-side privilege escalation, unlike the previous "GDDRHammer" and "GeForge" attacks. In most typical server, workstation, and even PC configurations, IOMMU restricts the GPU's access to the CPU's physical addresses, preventing direct memory access. These are the typical DMA-based attacks that the Input-Output Memory Management Unit protects users from. However, the new "GPUBreach" operates differently.
For example, "GPUBreach" exploits memory-safe bugs in the actual GPU driver and corrupts them. When IOMMU confines the GPU's direct memory access to driver-assigned buffers, the new exploit corrupts metadata within these permitted buffers. This causes the driver, which has kernel privileges enabled on the CPU host, to perform out-of-band writes to the buffer, effectively bypassing any protection IOMMU can offer. This logic is built into the kernel by default, as the GPU driver is one of the most trusted components of the operating system. Hence, IOMMU bypass is possible when the metadata is corrupted. Since "GPUBreach" grants an attacker full root privilege escalation, the attack differs significantly from previous rowhammer attacks.
Speaking to Windows users on X this week, Microsoft's Director of Design, March Rogers, said the company is working to address several UI issues across Windows 11. To that end, all settings options are being consolidated in a single location, ensuring users will no longer need to switch between the...
The 2TB SanDisk Extreme Pro UHS-II SD card costs $2,000, bringing its price per GB to nearly $1, making it more than four times more expensive than much faster microSD Express cards.
A compact office mini PC powered by Intel's 12th Gen Core i5-12600H processor with plenty of upgrade potential and power enough for a three-monitor setup.
Xbox has unveiled the batch of Xbox Game Pass titles for the month of April, and it includes some absolute bangers like The Elder Scrolls 4: Oblivion Remastered, Day Z, and more.
In his guidelines, Russia's Ministry of Digital Development highlighted some limitations in VPN detection, which could also help residents navigate the crackdown.
After a wave of screen-only cameras and even the removal of viewfinders in recent updates of various models, I asked you if you'd buy a camera without a viewfinder. Here's what you said.
Sony has revealed more about its 2026 True RGB TV tech, which replaces the traditional backlight with individually controlled red, green, and blue LEDs for greater brightness, color accuracy, and control.
What happens when a controversial lawyer rewrites rules of justice with dangerous brilliance? Here's how to watch Avvocato Ligas from anywhere in the world.
Amazon launched a big tech sale over the Easter bank holiday weekend, but there's still time to score several of the best deals before they're gone β I've picked out the 18 top offers.
Fancy Bear, also known as APT28, has taken over thousands of residential home routers to steal passwords and authentication tokens in a wide-ranging espionage operation.
Google is rolling out new Google Maps features that make it easier to contribute photos, reviews, and local insights, while adding Gemini-powered caption suggestions.
Local Guides redesign. Contributor profiles are getting more visibility. Total points now appear more prominently, Local Guide levels are easier to spot, and badge designs have been refreshed.
Top contributors will also stand out more in reviews with new gold profile indicators.
AI caption drafts. Google is also introducing AI-generated caption drafts. Gemini analyzes selected images and suggests text you can edit or discard.
Caption suggestions are available in English on iOS in the U.S., with Android and broader global expansion planned.
Media sharing. Google Maps now shows recent photos and videos directly in the Contribute tab, making uploads faster.
If you enable media access, Google Maps will suggest images from your camera roll that are ready to post with a tap.
This feature is now live globally on iOS and Android.
Why we care. Google is making it easier to create and scale fresh local content, which can directly affect rankings and visibility. At the same time, stronger contributor signals may influence which reviews users trust and which businesses win clicks.
Google once attributed two of Barry Schwartzβs Search Engine Land articles to me β a misclassification at the annotation layer that briefly rewrote authorship in Googleβs systems.
For a few days, when you searched for certain Search Engine Land articles Schwartz had written, Google listed me as the author. The articles appeared in my entityβs publication list and were connected to my Knowledge Panel.
What happened illustrates something the SEO industry has almost entirely overlooked: that annotation β not the content itself β is the key to what users see and thus your success.
How Google annotated the page and got the author wrong
Googlebot crawled those pages, found my name prominently displayed below the article (my author bio appeared as the first recognized entity name beneath the content), and the algorithm at the annotation gate added the βPost-Itβ that classified me as the author with high confidence.
This is the most important point to bear in mind: the bot can misclassify and annotate, and that defines everything the algorithms do downstream (in recruitment, grounding, display, and won). In this case, the issue was authorship, which isnβt going to kill my business or Schwartzβs.
But if that were a product, a price, an attribute, or anything else that matters to the intent of a user search query where your brand should be one of the obvious candidates, when any aspect of content is inaccurately annotated, youβve lost the βranking gameβ before you even started competing.
Annotation is the single most important gate in taking your brand from discover to won, whatever query, intent, or engine youβre optimizing for.
Indexing (Gate 4) breaks your content into semantic chunks, converts it, and stores it in a proprietary format. Annotation (Gate 5) then labels those chunks with a confidence-driven βPost-Itβ classification system.
Itβs a pragmatic labeler and attaches classifications to each chunk, describing:
What that chunk contains factually.
In what circumstances it might be useful.
The trustworthiness of the information.
Importantly, itβs mostly unopinionated when labeling facts, context, and trustworthiness. Microsoftβs Fabrice Canel confirmed the principle that the bot tags without judging, and that filtering happens at query time.
What does that mean? The bot annotates neutrally at crawl time, classifying your content without knowing what query will eventually trigger retrieval.Β
Annotation carries no intent at all. Itβs the insight that has completely changed my approach to βcrawl and index.β
That clearly shows you that indexing isnβt the ultimate goal. Getting your page indexed is table stakes. Full, correct, and confident annotation is where the action happens: an indexed page that is poorly annotated is invisible to each of the algorithmic trinity.
The annotation system analyzes each chunk using one or more language models, cross-referenced against the web index, the knowledge graph, and the modelsβ own parametric knowledge. But it analyzes each chunk in the context of the page wrapper.Β
The page-level topic, entity associations, and intent provide the frame for classifying each chunk. If the page-level understanding is confused (unclear topic, ambiguous entity, mixed intent), every chunk annotation inherits that confusion. Even more importantly, it assigns confidence to every piece of information it adds to the βPost-Its.β
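To make the "Post-It" idea concrete, here is a hypothetical sketch of a per-chunk annotation record. The field names, the confidence values, and the way page-level ambiguity degrades chunk confidence are my own illustration of the behavior described above, not a documented schema.

```python
# Hypothetical sketch of a chunk-level "Post-It" annotation.
# Field names and confidence logic are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class PageContext:
    topic: str            # page-level topic frames every chunk
    entities: list[str]   # entity associations
    intent: str           # page-level intent

@dataclass
class ChunkAnnotation:
    facts: list[str]                  # what the chunk states factually
    useful_for: list[str]             # circumstances where it might apply
    trust: float                      # trustworthiness of the information
    confidence: dict[str, float] = field(default_factory=dict)  # certainty per label

def annotate(chunk_text: str, page: PageContext) -> ChunkAnnotation:
    """Toy annotator: a clear page context yields higher chunk confidence."""
    # Confused page-level understanding cascades into every chunk label.
    base = 0.9 if page.topic and page.intent else 0.5
    return ChunkAnnotation(
        facts=[chunk_text],
        useful_for=[page.intent] if page.intent else [],
        trust=base,
        confidence={"facts": base, "context": base},
    )

clear = PageContext(topic="running shoes", entities=["Nike"], intent="transactional")
print(annotate("Pegasus 41 costs $139.", clear).confidence["facts"])  # 0.9
```

The point of the sketch is the cascade: the same chunk annotated under an empty or ambiguous `PageContext` would carry lower confidence on every label it receives.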
The choices happen downstream: each of the algorithmic trinity (LLMs, search engines, and knowledge graphs) uses the annotation to decide whether to absorb your content at recruitment (Gate 6). Each has different criteria, so you need to assess your own content for its βannotatabilityβ in the context of all three.
And a small but telling detail: Back in 2020, Martin Splitt suggested that Google compares your meta description to its own LLM-generated summary of the page. When they match, the systemβs confidence in its page-level understanding increases, and that confidence cascades into better annotation scores for every chunk βΒ one of thousands of tiny signals that accumulate.
Annotation is the key midpoint of the 10-gate pipeline, where the scoreboard turns on. Everything before it is infrastructure: "Can the system access and store your content?" Everything after it is competition.
When you consider what happens at the annotation gate and its depth, links and keywords become the wrong lens entirely. They describe how you tried to influence a ranking system, whereas annotation is the mechanism behind how the algorithmic trinity chooses the content that builds its understanding of what you are.
The frame has to shift. Youβre educating algorithms. They behave like children, learning from what you consistently, clearly, and coherently put in front of them. With consistent, corroborated information, they build an accurate understanding.
Given inconsistent or ambiguous signals, they learn incorrectly and then confidently repeat those errors over time. Building confidence in the machineβs understanding of you is the most important variable in this work, whether you call it SEO or AAO.
βConfianceβ (confidence) is the signal that drives how systems understand content. Slide from my SEOCamp Lyon 2017 presentation.
In 2026, every AI assistive engine and agent is that same child, operating at a greater scale and with higher stakes than Google ever had. Educating the algorithms isnβt a metaphor. Itβs the operational model for everything that follows.
5 levels of annotation: 24+ dimensions classifying your content at Gate 5
When mapping the annotation dimensions, I identified 24, organized across five functional categories. After presenting this to Canel, his response was: βOh, there is definitely more.β
Of course there are more. This taxonomy is built through observation first, then naming what consistently appears. The [know/guess] distinctions follow the same logic: test hypotheses, eliminate what doesn't hold up, and keep what remains.
The five functional categories form the foundation of the model. They are simple by design β once you understand the categories, the dimensions follow naturally. There are likely additional dimensions beyond those mapped here.
What follows is the taxonomy: the categories are directionally sound (as confirmed by Canel), while the specific dimension assignments reflect observed behavior and remain incomplete.
Level 1: Gatekeepers (eliminate)
Temporal scope, geographic scope, language, and entity resolution. Binary: pass or fail.Β
If your content fails a gatekeeper (wrong language, wrong geography, or ambiguous entity), it is eliminated from that queryβs candidate pool instantly. The other dimensions donβt come into play.
Level 2: Core identity (define)
Entities, attributes, relationships, sentiment.Β
This is where the system decides what your content means:
Who is being discussed.
What facts are stated.
How entities relate.
What the tone is.Β
Without clear core identity annotations, a chunk carries no semantic weight in any downstream gate.
Level 3: Selection filters (route)Β
Intent category, expertise level, claim structure, and actionability.Β
These determine which competition pool your content enters.
Is this informational or transactional?Β
Beginner or expert?Β
Wrong pool placement means competing against content that is a better match for the query, and youβve lost before recruitment or ranking begins.
Level 4: Confidence multipliers (rank)
Verifiability, provenance, corroboration count, specificity, evidence type, controversy level, and consensus alignment. These scale your ranking within the pool.Β
This is where validated, corroborated, and specific content outranks accurate but unvalidated content.Β
The multipliers explain why a well-sourced third-party article about you often outperforms your own claims: provenance and corroboration scores are higher.
Confidence has a multiplier effect on everything else and is the most powerful of all signals. Full stop.
Level 5: Extraction quality (deploy)
Sufficiency, dependency, standalone score, entity salience, and entity role. These determine how your content appears in the final output.Β
Is this chunk a complete answer, or does it need context? Is your entity the subject, the authority cited, or a passing mention?Β
Extraction quality determines whether AI quotes you, summarizes you, or ignores you.
Across all five levels, a confidence score is attached to every individual annotation. Not just what the system thinks your content means, but how certain it is.
Clarity drives confidence. Ambiguity kills it.
Canel also confirmed additional dimensions I had not initially mapped: audience suitability, ingestion fidelity, and freshness delta. These sit across the existing categories rather than forming a sixth level.
In 2022, Splitt named three annotation behaviors in a Duda webinar that map directly onto the five-level model. The centerpiece annotation is Level 2 in direct operation:Β
βWe have a thing called the centerpiece annotation,β Splitt confirmed, a classification that identifies which content on the page is the primary subject and routes everything else β supplementary, peripheral, and boilerplate β relative to it.Β
βThereβs a few other annotationsβ of this type, he noted.Β
Annotation runs before recruitment, which means a chunk classified as non-centerpiece carries that verdict into every gate that follows. Boilerplate detection is Level 3: content that appears consistently across pages β headers, footers, navigation, and repeated blocks β enters a different competition pool based on its structural role alone.Β
"We figure out what looks like boilerplate and then that gets weighted differently," Splitt said.
Off-topic routing closes the picture. A page classified around a primary topic annotates every chunk relative to that centerpiece, and content peripheral to the primary topic starts its own competition pool at a disadvantage before recruitment begins.
Splittβs example: a page with 10,000 words on dog food and a thousand on bikes is βprobably not good content for bikes.β The system isnβt ignoring the bike content. Itβs annotating it as peripheral, and that annotation is the routing decision.
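A minimal sketch of that boilerplate weighting, assuming a simple repeat-across-pages heuristic: blocks that appear on most pages of a site are flagged and would be weighted differently downstream. The 50% threshold is invented for illustration; Google has not published its actual mechanism.

```python
# Toy boilerplate detector: blocks repeated across many pages of a site
# (headers, footers, navigation) get flagged for separate weighting.
from collections import Counter

def find_boilerplate(pages: list[list[str]], threshold: float = 0.5) -> set[str]:
    """Return blocks appearing on more than `threshold` of the pages."""
    counts = Counter()
    for blocks in pages:
        counts.update(set(blocks))  # count each block at most once per page
    cutoff = threshold * len(pages)
    return {block for block, n in counts.items() if n > cutoff}

site = [
    ["(c) 2026 Acme", "nav: home/shop", "Dog food review"],
    ["(c) 2026 Acme", "nav: home/shop", "Bike sizing guide"],
    ["(c) 2026 Acme", "nav: home/shop", "Trail shoes compared"],
]
print(find_boilerplate(site))  # the two repeated structural blocks
```

The unique article bodies survive; only the structurally repeated blocks are routed into the boilerplate pool.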
The multiplicative destruction effect: When one near-zero kills everything
In Sydney in 2019, I was at a conference with Gary Illyes and Brent Payne. Illyes explained that Googleβs quality assessment across annotation dimensions was multiplicative, not additive.Β
Illyes asked us not to film, so I grabbed a beer mat and noted a simple calculation: if you score 0.9 across each of 10 dimensions, 0.9 to the power of 10 is 0.35. You survive at 35% of your original signal. If you score 0.8 across 10 dimensions, you survive at 11%. If one dimension scores close to zero, the multiplication produces a result close to zero, regardless of how well you score on every other dimension.
Payneβs phrasing of the practical implication was better than mine: βBetter to be a straight C student than three As and an F.β
The beer mat went into my bag. The principle became central to everything Iβve built since.
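The beer-mat arithmetic is easy to reproduce. A few lines make the multiplicative destruction effect visible: one near-zero dimension wipes out strong scores everywhere else.

```python
# Beer-mat math: quality across annotation dimensions multiplies,
# so a single near-zero dimension collapses the whole score.
from math import prod

def survival(scores: list[float]) -> float:
    """Multiplicative aggregate across annotation dimensions."""
    return prod(scores)

print(round(survival([0.9] * 10), 2))           # 0.35 -- straight 0.9s survive at 35%
print(round(survival([0.8] * 10), 2))           # 0.11 -- straight 0.8s survive at 11%
print(round(survival([0.95] * 9 + [0.01]), 4))  # 0.0063 -- nine As and one F
```

Nine scores of 0.95 with a single 0.01 end up far below ten unremarkable 0.8s, which is exactly the "straight C student" principle.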
The multiplicative destruction effect has a direct consequence for annotation strategy: the C-student principle is your guide.
A brand with consistently adequate signals across all 24+ dimensions outperforms a brand with brilliant signals on most dimensions and a near-zero on one. The near-zero cascades.
A gatekeeper failure (Level 1) eliminates the content entirely.
A core identity failure (Level 2) misclassifies it so badly that high-confidence multipliers at Level 4 are applied to the wrong entity.
An extraction quality failure (Level 5) produces a chunk that the system can retrieve but can't deploy usefully. The failure doesn't have to be dramatic to be fatal.
At the annotation stage, misclassification, low confidence, or a near-zero on one dimension will kill your content and take it out of the race.
Nathan Chalmers, who works at Bing on quality, told me something that puts this in a different light entirely. Bing's internal quality algorithm, the one making these multiplicative assessments across annotation dimensions, is literally called Darwin.
Natural selection is the explicit model: content with a near-zero on any fitness dimension is selected against. The annotations are the fitness test. The multiplicative destruction effect is the selection mechanism.
How annotation routes content to specialist language models
The system doesn't use one giant language model to classify all content. It routes content to specialized small language models (SLMs): domain-specific models that are cheaper, faster, and paradoxically more accurate than general LLMs for niche content.
A medical SLM classifies medical content better than GPT-4 would, because it has been trained specifically on medical literature and knows the entities, the relationships, the standard claims, and the red flags in that domain.
What follows is my model of how the routing works, reconstructed from observable behavior and confirmed principles. The existence of specialist models is confirmed. The specific cascade mechanism is my reconstruction.
The routing follows what I call the annotation cascade. The choice of SLM cascades like this:
Site level (What kind of site is this?)
Refined by category level (What section?)
Refined by page level (What specific topic?)
Applied at chunk level (What does this paragraph claim?)
Each level narrows the SLM selection, and each level either confirms or overrides the routing from above. This maps directly to the wrapper hierarchy from the fourth piece: the site wrapper, category wrapper, and page wrapper each provide context that influences which specialist model the system selects.
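A sketch of that cascade, under my stated assumptions (the registry, the level keys, and the model names are all hypothetical illustration, not a documented API):

```python
# Hypothetical registry of specialist models, keyed by increasingly
# specific context. Each level refines the choice from the level above.
SLM_REGISTRY = {
    ("medical",): "medical-generalist-slm",
    ("medical", "cardiology"): "cardiology-slm",
    ("medical", "cardiology", "statins"): "statins-slm",
}

def route(site_topic, category=None, page_topic=None):
    """Walk site -> category -> page, keeping the most specific match.

    Each level either narrows the routing or leaves the previous
    level's choice in place. If nothing matches at all, the system
    falls back to a general-purpose model: the low-confidence path.
    """
    chosen = "generalist-llm"  # the fallback / failure mode
    key = ()
    for part in (site_topic, category, page_topic):
        if part is None:
            break
        key = key + (part,)
        chosen = SLM_REGISTRY.get(key, chosen)
    return chosen
```

Under this sketch, a cardiology page on a medical site routes to the cardiology specialist, while a site the registry knows nothing about gets the generalist and its weaker confidence.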
The system deploys three types of SLM simultaneously for each topic. This is my model, derived from the behavior I have observed: annotation errors cluster into patterns that suggest three distinct classification axes.
The subject SLM classifies by subject matter (what is this about?), routing content into the right topical domain.
The entity SLM resolves entities and assesses centrality and authority: who are the key players? Is this entity the subject, an authority cited, or a passing mention?
The concept SLM maps claims to established concepts and evaluates novelty, checking whether what the content asserts aligns with consensus or contradicts it.
When all three return high confidence on the same entity for the same content, annotation cost is minimal and the confidence score is very high. When they disagree (the subject SLM says "marketing," but the entity SLM can't resolve the entity, and the concept SLM flags the claims as novel), confidence drops, and the system falls back to a more general, less accurate model.
The key insight? Generalist LLM annotation is the failure mode. The system wants to use a specialist. It defaults to a generalist only when it can't route to a specialist. Generalist annotation produces lower confidence across all dimensions.
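The triad-agreement behavior can be sketched as a simple vote (the labels, thresholds, and return values are invented for illustration, not observed internals):

```python
def triad_confidence(subject_label, entity_label, concept_label):
    """Combine three classifier verdicts into a routing decision.

    When all three axes agree, confidence is high and the specialist
    annotation stands; any disagreement drops confidence and triggers
    fallback to a generalist model.
    """
    labels = [subject_label, entity_label, concept_label]
    agreement = max(labels.count(lab) for lab in labels) / len(labels)
    if agreement == 1.0:
        return agreement, "specialist"  # unanimous: cheap and confident
    return agreement, "generalist"      # disagreement: costly fallback
```

Unanimity is the cheap path; a single dissenting axis is enough to push the content onto the lower-confidence generalist track.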
The practical implication
Content that's category-clear within its first 100 words, uses standard industry terminology, follows structural conventions for its content type, and references well-known entities in its domain triggers SLM routing.
Content that's topically ambiguous or terminologically creative gets the generalist. Lower confidence propagates through every downstream gate.
Now, this may not be exactly how the SLMs are applied as a triad (and it might not even be a trio). However, two things strike me:
Observed outputs behave as if it were.
If the system doesn't function this way today, this is how it would logically be built.
First-impression persistence: Why the initial annotation is the hardest to correct
Here is something I've observed over years of tracking annotation behavior. It aligns with a principle Canel confirmed explicitly for URL status changes (404s and 301 redirects): the system's initial classification tends to stick.
When the bot first crawls a page, it selects an SLM, runs the annotation, assigns confidence scores, and saves the classification. The next time it crawls the same page, it logically starts with the previously assigned model and annotations. I call this first-impression persistence.
The initial annotation is the baseline against which all subsequent signals are measured. The system doesn't re-evaluate from scratch. It checks whether the new crawl is consistent with the existing classification, and if it is, the classification is reinforced.
Canel confirmed a related mechanism: when a URL returns a 404 or is redirected with a 301, the system allows a grace period (very roughly a week for a page, and between one and three months for content, in my observation) during which it assumes the change might revert. After the grace period, the new state becomes persistent. I believe the same principle applies to content classification: a window of fluidity after first publication, then crystallization.
I have direct evidence for the correction side from the evolution of my own terminology. When I first described the algorithmic trinity, I used the phrase "knowledge graphs, large language models, and web index." Google, ChatGPT, and Perplexity all picked up on the new term and defined it correctly.
A month later, I changed the last element to "search engine" because it occurred to me that the web index is what all three systems feed off, not just the search system itself. At the point of correction, I had published roughly 10 articles using the original terminology.
I went back and invested the time to change every single one, updating every reference and leaving zero traces. A month later, AI assistive engines were consistently using "search engine" in place of "web index."
The lesson is that change is possible, but you need to be thorough: any residual contradictory signal (one old article, one unchanged social post, one cached version) maintains inertia proportionally. Thoroughness, not time, is the unlock.
A rebrand, career pivot, or repositioning is the practical example. You can change the AI model's understanding and representation of your corporate or personal brand, but it requires thoroughly and consistently pivoting your digital footprint to the new reality.
In my experience, the pivot can turn "on a sixpence," within a week. I've done this with my podcast several times. Facebook achieved the ultimate rebrand from an algorithmic perspective when it changed its name to Meta.
The practical implication
Get your annotation right before you publish. The first crawl sets the baseline. A page published prematurely (with an unclear topic or ambiguous entity signals) crystallizes into a low-confidence annotation, and changing it later requires significantly more effort than getting it right the first time.
Annotation-time grounding: The bot cross-references three sources while classifying your content
The system doesn't annotate in a vacuum. When the bot classifies your content at Gate 5, it cross-references against at least three sources simultaneously. This is my model of the mechanism. The observable effect (that annotation confidence correlates with entity presence across multiple systems) is confirmed from our tracking data.
The bot carries prioritized access to the web index during crawling, checking your content against what it already knows:
Who links to you.
What context those links provide.
How your claims relate to claims on other pages.
Against the knowledge graph, it checks annotated entities during classification: an entity already in the graph with high confidence means annotation inherits that confidence, while absence starts from a much lower baseline.
The SLM's own parametric knowledge provides the third cross-reference: each SLM compares encountered claims against its training data, granting higher confidence to claims that align, flagging contradictions, and giving lower confidence to novel claims until corroboration accumulates.
This means annotation quality isn't just about how well your content is written. It's about how well your entity is already represented across all three parts of the algorithmic trinity. An entity with strong knowledge graph presence, authoritative web index links, and consistent SLM-domain representation gets higher annotation confidence on new content automatically.
The flywheel: better presence leads to better annotation, which leads to better recruitment, which strengthens presence, which improves future annotation.
Once again, better to have an average presence in all three than to have a dominant presence in two and no presence in one.
And this is why knowledge graph optimization (what I've been advocating for over a decade) isn't separate from content optimization. They are the same pipeline. Your knowledge graph presence directly improves how accurately, verbosely, and confidently the system annotates every new piece of content you publish.
If you're thinking "Knowledge graph? That's just Google," think again.
In November 2025, Andrea Volpini intercepted ChatGPT's internal data streams and found an operational entity layer running beneath every conversation: structured entity resolution connected to what amounts to a product graph mirroring Google Shopping feeds.
OpenAI is building its own knowledge graph inside the LLM. My bet is that they will externalize it, for several reasons: a knowledge graph inside an LLM doesn't scale; an LLM will self-confirm, so the value is limited; a standalone knowledge graph can be updated in real time without retraining the model; and a knowledge graph is only useful at scale when it stays current.
The algorithmic trinity isn't a Google phenomenon. It's the architectural pattern every AI assistive engine and agent converges on, because you can't generate reliable recommendations without a concept graph, structured entity data, and up-to-date search results to ground them.
Why Google and Bing annotate differently from engines that rent their index
Google and Bing own their crawling infrastructure, indexes, and knowledge graphs. They can afford grace periods, schedule rechecks, and maintain temporal state for URLs and entities over months.
OpenAI, Perplexity, and every engine that rents index access from Google or Bing operate on a fundamentally different model. They have two speeds:
A slow Boolean gate (Does this content exist in the index I have access to?)
A fast display layer (What does the content say right now when I fetch it for grounding?)
The Boolean gate inherits Google's and Bing's annotations. Whether your content appears at all depends on whether it was recruited from the index those engines draw from, and that recruitment depends on annotation and selection decisions made by the algorithmic trinity. But what these engines show when they cite you is fetched in real time.
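That two-speed split can be sketched like this (both the rented index lookup and the live fetch are hypothetical stand-ins, not real APIs):

```python
def answer_with_citation(query, rented_index, live_fetch):
    """Two-speed model for engines that rent their index.

    Presence is a slow Boolean inherited from the rented index
    (someone else's annotation and recruitment decisions); what gets
    displayed is whatever the page says at fetch time.
    """
    url = rented_index.get(query)  # slow gate: was it ever recruited?
    if url is None:
        return None                # not in the rented index: invisible
    return live_fetch(url)         # fast layer: current page content
```

The point of the sketch: the `None` branch is decided months earlier by the index owner, while the returned text can change on every re-fetch.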
The practical implication
For Google and Bing, you're optimizing for annotation quality with the benefit of grace periods and gradual reclassification. For engines that don't own their index, Boolean presence is inherited from the rented index and is slow to change, but the surface-level display changes every time they re-fetch.
That means what you see in the results is not a direct measure of your annotation quality. It's a snapshot of your page at the moment of fetch, and those two things may have nothing to do with each other.
How to optimize for annotation quality: The six practical principles
The SEO industry has spent two decades optimizing for search and assistive results: what happens after the system has already decided what your content means. We should be optimizing for annotation.
If the annotation is wrong, everything downstream suffers. When the annotation is accurate, verbose, and confident, your content has a significant advantage in recruitment, grounding, display, and, ultimately, won.
1. Trigger SLM routing
Make your topic category obvious within the first 100 words. Use standard industry terminology. Follow structural conventions. Reference well-known entities. The goal: specialist model, not generalist.
2. Write for all three SLMs
Clear signals for subject (what is this about?), entity (who is the authority?), and concept (what established ideas does this connect to?). Ambiguity on any axis reduces confidence.
3. Get it right before publishing
First-impression persistence means the initial annotation is the hardest to change. Publish only when topic, entity signals, and claims are unambiguous.
4. Build the flywheel
Knowledge graph presence, web index centrality, LLM parameter strengthening, and correct SLM-domain representation all feed annotation confidence for new content. Invest in entity foundation, and every future piece benefits from inherited credibility.
5. Eliminate noise when correcting
Change every reference. Leave zero contradictory signals. Noise maintains inertia proportionally.
6. Audit for annotation, not just indexing
A page can be indexed and still misannotated. If the AI response is wrong about you, the problem is almost certainly at Gate 5, not Gate 8.
Annotation is the gate where most brands silently lose. The SEO industry doesn't yet have a vocabulary for it. That needs to change, because the gap between brands that get annotation right and brands that don't is the gap between consistent AI visibility and permanent algorithmic obscurity.
Why annotation matters so much and why it should be your main focus
You've done everything within your power to create the best possible content that maps to the intent of your ideal customer profile. You have methodically optimized your digital footprint, and your data feeds every entry mode simultaneously (pull, push discovery, push data, MCP, and ambient), so they are all drawing from the same clean, consistent source.
So, content about your brand has passed through the DSCRI infrastructure phase, survived the rendering and conversion fidelity boundaries, and arrived in the index (Gate 4) intact. Phew!
Now it gets classified. Annotation is the last moment in the pipeline where you have the field to yourself. Every decision in DSCRI was absolute: you vs. the machine, with no competitor in the frame.
Annotation is still absolute. The system classifies your content based on your signals alone, independently of what any competitor has done. Nobody else's data changes how your entity is annotated.
But this is the last time you aren't competing. From recruitment onward, everything is relative. The field opens, every brand that passed annotation enters the same competitive pool, and the advantage you carried through the absolute phase becomes your starting position in the competitive race you have to win.
That means:
Get annotation right, and you start ahead, with confidence that compounds through every downstream gate in RGDW.
Get it wrong, and the multiplicative destruction effect does its work: a near-zero on one annotation dimension cascades through recruitment, grounding, display, and won. No amount of excellent content, structural signals, or entry-mode advantage recovers it.
Warning: First-impression persistence (remember, the first time you are annotated is the baseline) means you don't get a clean retry. Changing the baseline requires thoroughness, time, and more effort than getting it right on the first crawl.
Annotation isn't the gate that most brands focus on. It's the gate where most brands silently lose.
This is the eighth piece in my AI authority series.
SOVOL is about to enter its multi-material era. SOVOL has started teasing "something new": a new 3D printer that promises to be both "multi-material" and "multi-colour". Until now, SOVOL has specialised in single-colour/material 3D printing solutions, promising "open-source freedom" and a wealth of customisation options. Based on their teaser image, SOVOL's new 3D printer appears […]
Lenovo and HP have both devised on-device AI assistants designed to make your digital life easier. I dug into the features to discover similarities and differences.
Intel has officially announced its participation in Elon Musk's "Terafab" project, which aims to reimagine chip manufacturing. Specifically, Intel Foundry plans to join this ambitious initiative, leveraging its significant manufacturing capabilities as one of the strategically important companies in the U.S. However, the specifics of Intel's involvement remain unclear, as it is not yet known how Intel will officially contribute to the Terafab project. Terafab aims to produce 1 terawatt per year of compute power for AI and robots serving xAI, SpaceX, and Tesla, and Intel has stated that it will assist in designing silicon, manufacturing it, and providing some of the world's most advanced packaging technologies, such as EMIB. It is likely that some of Intel's facilities, which are currently being expanded, will become part of the network needed for the Terafab project, while the Terafab facility itself conducts custom work guided by Intel.
The goal of Terafab is to consolidate the entire chip manufacturing process under one roof. The plant is expected to integrate several stages of semiconductor production at a single site, including logic fabrication, memory, packaging, testing, and mask production. This setup is unusual, as these steps are typically spread across multiple specialized facilities and companies. The original idea behind Terafab is that consolidating these processes could accelerate development by enabling engineers to design, test, and revise chips with fewer delays, essentially allowing for rapid prototyping. This contrasts with the traditional, lengthy process of manufacturing chips at one site, packaging them at another, and testing them in-house. Elon Musk visited Intel's CEO Lip-Bu Tan over the past weekend, securing a deal.
The feature is now rolling out as part of the latest Play Store update, version 50.7.24-31. Google recently confirmed the release through its official support documentation, following months of limited testing.
For the Postal Service, which reported a $9 billion net loss last year, the deal averts what could have been a serious revenue shock. Amazon accounts for nearly 15% of USPS package deliveries nationwide, translating to about $6 billion annually. Any major pullback by the tech company, which already...
Java 26 is here with fresh language features, faster performance, stronger security, and a wave of library and tooling upgrades. Early developer reaction has been upbeat, with many praising Java's steady pace of meaningful improvements.
Nikkei Asia reported that Apple has encountered unexpected setbacks during the engineering test phase of its first foldable iPhone, raising the possibility of production delays. Sources familiar with the matter told the publication that the early test production phase has thrown up more problems than anticipated and will require extra...
Zortos293 uploaded an open-source GeForce Now client to GitHub, allowing gamers to connect to Nvidia's service without being tracked by the tech giant.
PanelShot generates realistic AI personas, shows them your website, and delivers structured feedback in minutes. Pick audience segments or create your own, select a research rubric, and let AI evaluate screenshots, copy, and accessibility to surface insights. Review an executive summary and per-page analysis, replay the same personas on new versions, track sentiment trends over time, and chat with any persona for deeper understanding, all for cents per persona.
REWRITE is a 30-day interactive story and voice-first coaching platform that measures personal transformation through your voice. You follow the narrative, talk with an AI coach by text or voice, and see objective signals like stress, confidence, engagement, cognitive load, and authenticity. After the story, daily prompts and monthly voice reports track your progress, giving data you can act on. Coaches get a dashboard with client trends, attention flags, and AI-generated prep notes.
A high-severity security vulnerability has been disclosed in Docker Engine that could permit an attacker to bypass authorization plugins (AuthZ) under specific circumstances.
The vulnerability, tracked as CVE-2026-34040 (CVSS score: 8.8), stems from an incomplete fix for CVE-2024-41110, a maximum-severity vulnerability in the same component that came to light in July 2024.
A government block on Telegram in Iraq has triggered a massive 1,200% surge in Proton VPN sign-ups as citizens look for workarounds. The company now warns residents against downloading sketchy apps that could put their data in danger.
Google is rolling out new features to make it easier for users to contribute local knowledge to Maps. Most notably, Gemini can now create captions when users are looking to share a photo or video about a place.
Many of today's PPC tools were designed to be easily accessible to ecommerce. That doesn't mean lead gen can't take advantage of them, but it does mean more intentional application is required.
Lead gen with AI still requires a creative approach, and many conventional ecommerce tools still apply, but not always in the same way.
Here are the priorities that matter most for succeeding with lead gen using AI.
Disclosure: I'm a Microsoft employee. While this guidance is platform-agnostic, I'll reference examples that lean into Microsoft Advertising tooling. The principles apply broadly across platforms.
1. Fix your conversion data first
This is the single most important thing you can do as AI becomes more embedded in media buying.
Between evolving attribution models, privacy changes, different platform connections, and shifts in how consumers engage with brands, it's reasonable to ask whether your data is still telling an accurate story.
Start by auditing your CRM or lead management system. Make sure the data you pass back to advertising platforms is clean, consistent, and intentional.
In most cases, data issues stem from human choices rather than technical failures. Still, there are a few technical checks that matter:
Confirm conversions are firing consistently.
Regularly review conversion goal diagnostics.
Validate that lead status updates and downstream signals are actually flowing back.
If AI systems are learning from your data, you want to be confident that the feedback loop reflects reality.
2. Make landing pages easy to ingest and easy to understand
Lead gen campaigns often have multiple conversion paths, which can be helpful for users. But from an AI perspective, ambiguity is a risk.
Your landing pages should make it clear:
What action you want the user to take.
What happens after action is taken.
Which conversions matter most.
Redundant or unclear conversion paths can confuse both users and systems. If AI crawlers detect that anticipated outcomes are inconsistent, they may begin to question the accuracy of what your site claims to do. That can limit eligibility for certain placements.
Language clarity matters just as much. Avoid jargon, eccentric terminology, or internally focused phrasing when describing your services. Clear, plain language makes it easier for AI systems to understand who you are, what you offer, and how to match creative to the right audience.
A practical test: Put your website content into a Performance Max campaign builder and review how the system attempts to position your business. If you agree with the messaging, imagery, and framing, your site is likely easy to understand. If not, that feedback is valuable.
You can also paste your site content into AI assistants and ask them to describe your business and services. If the response aligns with reality, you're in a good place. If it doesn't, that's a signal to refine your content.
Behavioral analytics tools, like Clarity, can help you understand exactly how humans are engaging with your site and how often AI tools are crawling your site.
3. Plan for long conversion cycles
Lead gen has always struggled with long conversion cycles. That challenge doesn't go away, and in some ways it becomes more pronounced.
AI-driven systems increasingly weigh sentiment, visibility, and contextual signals, not just last-click performance. If all of your budget and reporting focuses on immediate traffic, you may miss meaningful impact higher in the funnel.
That means:
Budgeting intentionally across awareness, consideration, and conversion.
Applying the right metrics at each stage.
Looking beyond traffic as the primary success indicator.
In many lead gen models, citations, qualified leads, and eventual revenue tell a more accurate story than clicks alone.
4. Build and maintain a feed
You may not think you have a "feed" in your lead gen setup, but that absence can put you at a disadvantage.
Feeds help AI systems understand your business structure, services, and site architecture. Even if you don't have hundreds of pages, a simple, well-maintained feed in an Excel document can provide valuable context when uploaded to ad platforms.
Example of a feed for lead gen
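A minimal sketch of what such a feed might contain (the columns, services, and URLs are illustrative, not any platform's required schema), generated as CSV in Python:

```python
import csv
import io

# Illustrative lead gen feed rows; real column names should follow
# your ad platform's own feed specification.
rows = [
    {"service": "Roof inspection", "category": "Home services",
     "url": "https://example.com/inspection", "price_note": "Free quote"},
    {"service": "Roof replacement", "category": "Home services",
     "url": "https://example.com/replacement", "price_note": "From $8,000"},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=list(rows[0].keys()))
writer.writeheader()
writer.writerows(rows)
feed_csv = buf.getvalue()  # upload-ready CSV text
```

The same structure works fine in an Excel sheet; the point is clear, specific columns with one row per service.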
Feed hygiene matters. Use clear, specific columns. Follow platform standards for text, images, and categorization. Make sure all relevant categories are represented.
On the local side, claim and maintain all map profiles. Ensure information is accurate and consistent. If you use call tracking in map placements, review your labeling carefully. AI systems may pull data from map listings or your website, and mismatches can create attribution confusion, particularly for phone leads.
Account for potential AI-driven inflation in reporting, whether youβre looking at map pack data, direct reporting, or site-level performance. Any changes you make should also be reflected correctly in your conversion goals.
5. Pressure-test your creative for clarity
Creative assets may be mixed, matched, or shortened using AI. In some cases, you may only get one headline to explain who you are and why someone should contact you.
If your value proposition requires three headlines, or a headline plus a description, to make sense, thatβs a risk.
Review your existing creative and identify assets that stand on their own. You should have at least some options where a single headline clearly communicates:
What you do
Who you help
Why it matters
If that clarity isn't there, AI-driven placements can quickly become confusing.
Most of the actions that matter today are things strong advertisers already do: clean data, clear messaging, intentional budgeting, and disciplined execution. What changes is how attribution may shift, and how much weight systems place on different signals.
The fundamentals still win. The difference is that AI makes weaknesses more visible and strengths more scalable.
If you focus on clarity, accuracy, and alignment across your funnel, you give both people and systems the best possible chance to understand your business, and that's where sustainable performance comes from.
We got our hands on the new ASUS Zenbook A16 for in-depth testing, and it's clear that the new Windows laptop is gunning for Apple's lightweight MacBook Air 15. Here's how the two devices compare in terms of design, features, displays, performance, efficiency, and pricing.
A combination of Qualcomm's phenomenal generational performance gains and refinements to ASUS' already stellar Zenbook design has crafted a practically perfect Windows laptop.
Major suppliers are continuing to phase out production of mature products below DDR4, according to TrendForce's latest research on the memory industry. As supply tightens structurally, DRAM prices have already posted significant cumulative increases in recent months.
TrendForce forecasts that consumer DRAM contract prices will continue to rise by 45-50% QoQ in 2Q26 after taking into account ongoing supply reductions, order transfers, and the slower pace of capacity expansion among Taiwanese suppliers.
StarTech.com, a global provider of performance connectivity solutions for IT professionals, announced the release of its next-generation Driverless Multi-monitor USB-C Docking Stations for Windows environments utilizing multi-stream transport (MST) with HDMI and DisplayPort compatible models. Built for enterprise Windows environments, the docks support enterprise mixed hardware platforms including Intel, AMD and Snapdragon-based systems while enabling driverless deployment and up to 100 W of power delivery.
Key features include:
Dual 4K 60 Hz display support or Dual 4K 60 Hz + one 4K 30 Hz with the triple display dock.
Driverless deployment for faster rollout and less troubleshooting.
USB ports up to 10 Gbps.
Mountable design with integrated security lock slots.
Introducing GHS Eternal and GHS Eternal RGB, wired gaming headsets that prove great gear doesn't have to cost a fortune. GHS Eternal and GHS Eternal RGB are the first entry into the new Gaming Headset line, and round out the Glorious product portfolio, covering keyboards, mice, accessories, and now gaming audio.
GHS Eternal and GHS Eternal RGB add to the Glorious Eternal product lineup alongside Model O Eternal, to bring high-quality and affordable gear to gamers around the world. The Eternal lineup exists because great gear shouldn't cost a fortune, and GHS Eternal and GHS Eternal RGB extend that ethos into the world of gaming audio.
ASUS today announced the U.S. availability of its latest Zenbook lineup, headlined by the all-new Zenbook A16. Setting a new standard for groundbreaking performance, the Zenbook A16 debuts as the fastest Snapdragon-powered laptop on the market, equipped with the top-of-the-line Snapdragon X2 Elite Extreme processor for unprecedented local AI capabilities. The new Zenbook series, which also features the Zenbook A14, Zenbook S16, and Zenbook S14, is unified by Ceraluminum, an ASUS-exclusive material that combines the refined touch of ceramic and the strength of aluminium to offer a unique tactile experience and lasting durability. As fully certified Copilot+ PCs, these devices are built to harness the full potential of local AI, furthering ASUS's commitment to deliver future-ready computing today and beyond.
ASUS Zenbook A16
ASUS Zenbook A16 (UX3607), featuring the latest Snapdragon X2 Elite Extreme processor, which combines 18 cores and up to 80 TOPS of NPU performance to unlock the next era of AI-enhanced computing, bridges the gap between ultra-portability and uncompromised performance. With a remarkable leap in CPU and GPU performance, while also optimized for superior battery efficiency, the Zenbook A16 delivers fluid, lag-free performance across every scenario, including media editing and rendering as well as productivity tasks. The laptop also features a vibrant 16-inch 3K 120 Hz ASUS Lumina OLED display, six super-linear speakers, and a comprehensive array of full-size I/O ports. Despite its expansive display, the laptop's sleek, all-Ceraluminum chassis weighs just 2.65 lbs.
Every year, the moto g stylus stands apart as the only smartphone in its price tier to offer a true stylus experience, giving users a precise, intuitive way to capture ideas the moment inspiration strikes. This year, Motorola builds on that foundation with the new moto g stylus - 2026, now featuring a built-in active stylus, and marks an important expansion of its portfolio with the moto pad - 2026. Together, these devices are designed to support creativity, productivity, and play across screens.
moto g stylus - 2026: Active pen within reach
From focused study sessions to well-earned downtime, today's devices need to move as fast as inspiration does. The integrated active stylus on the moto g stylus - 2026 delivers next-level precision for note-taking, gaming, and creative expression. The new active stylus responds to tilt and pressure in supported apps, enabling broader shading, finer lines, and more natural strokes, bringing a pen-on-paper feel to everyday tasks.
Corsair is proud to unveil the newest additions to the modular FRAME Series case family: the FRAME 4000X RS and the FRAME 4000D WOOD RS. The FRAME 4000X RS features an all-new ventilated front panel with 64 built-in RGB LEDs for a customizable light show, while the FRAME 4000D WOOD RS sports a front panel made with real wood for great airflow and a natural look. Both cases deliver new aesthetic options to the FRAME 4000 Series lineup while offering excellent cooling performance and easy upgradeability.
The all-new FRAME 4000X RS was created for DIY PC builders who want a great looking PC with RGB lighting and excellent airflow. It includes the new RGB Flow front panel that features 64 built-in RGB LEDs for a customizable light show with effective airflow. The RGB lights on the front panel can be connected to the motherboard's +5V RGB header for easy lighting management via motherboard software.
ZOWIE, a leading global esports brand and part of BenQ Corporation, has been named by Riot Games as the official monitor for the VALORANT Champions Tour Americas (VCTA). This collaboration is grounded in a shared commitment to competitive performance and player-first standards. Trusted by professional FPS players worldwide, ZOWIE monitors are engineered for precision, responsiveness, and consistency under tournament conditions, delivering a performance benchmark that meets the demands of top-tier competition.
ZOWIE's best in class XL2566X+ Gaming Monitor will be used on stage during VCT Americas competitions, giving pro players elite performance when it matters most. The XL2566X+ features a 400 Hz Fast TN Panel with native FHD and DyAc 2 technology to deliver industry-leading motion clarity, and clear, sharp visuals with enhanced color modes. Designed specifically for FPS games, ZOWIE monitors provide stable, predictable performance, empowering players to Strive for Perfection.
H.264 has so far carried a flat annual cap of $100,000 for large subscription platforms. That may sound like a lot, but for these companies, the numbers are so small that most of them probably forgot it even existed on their balance sheet. Well, that comfortable arrangement just got a...
The Asus Zenbook A16 is a lightweight housing for Qualcomm's Snapdragon X2 Elite Extreme, but comes with compromises in build quality and battery life.
A city councilor's home in Indianapolis was shot at, allegedly over his support for a data center project. Neighborhood groups that oppose the project have condemned the shooting, while authorities have yet to determine who is behind the crime.
ByWordy is an AI writing workspace for creating contracts, articles, and other documents in your own voice. The platform offers jurisdiction-aware legal documents generated from templates, with e-sign capabilities. You can draft, rewrite, and refine with an AI editor. Legal templates are free to use, and credits are offered upon signing in.
CloverNut centralizes operations for music labels, publishers, workshops, and other creative businesses. Manage artists, products, and releases; build public homepages; and support eight languages with real-time API sync. Handle streaming links for Spotify and Apple Music, create press kits, and control team access with roles. Flexible plans scale from solo creators to enterprises.
Vala is an AI financial intelligence app that turns transactions into clear insights and practical actions. It connects bank accounts, categorizes expenses, tracks subscriptions, and helps manage shared spending for a simple, complete view of your finances.
Vala also offers goal tracking, budget savings tools, and real-time alerts for bills or unusual spending. With visual insight cards and guided suggestions, it helps individuals, couples, and families understand patterns and make better decisions without manual tracking or complex budgeting.
An active campaign has been observed targeting internet-exposed instances running ComfyUI, a popular stable diffusion platform, to enlist them into a cryptocurrency mining and proxy botnet.
"A purpose-built Python scanner continuously sweeps major cloud IP ranges for vulnerable targets, automatically installing malicious nodes via ComfyUI-Manager if no exploitable node is already
Polaroid's Hi-Print 3x3 portable printer-cum-frame breathes analog life into your smartphone pics, producing square prints and giving them a home on display.
A fan has discovered what appears to be an early model of the main character from Rockstar Games' canceled project, Agent, within the leaked source code for Grand Theft Auto 5.
The budget proposal would force CISA to operate with a significantly lower budget than in previous years, citing the administration's claims that the agency's election misinformation programs were used to "target the President."
On a recent episode of Equity, we talked to Arena Private Wealth to explore a growing trend: family offices bypassing VCs to gain direct exposure to AI startups, turning them from passive investors into active participants.
For most people, "Mad Men" means the TV show. But the phrase points to something more specific: Madison Avenue in the 1950s and '60s, when agencies grew brands through persuasion, positioning, and earned trust in a world of scarce media channels and powerful gatekeepers. If you wanted attention, you bought your way in, then made your product the obvious choice.
When the internet arrived and Google made the chaos navigable, an entire industry was built on getting brands found. Search and SEO became one of the most commercially valuable disciplines in marketing.
That model isn't disappearing. But something new is taking shape on top of it, and most of the industry is still using the wrong language to describe what's happening.
AI is exposing everything SEO has neglected. Brands that win recommendations from AI systems won't do so by publishing more content. They'll win through positioning, persuasion, and corroborated proof.
In other words, they'll win the way Madison Avenue always did.
SEO was never really about content
One of the strangest things about the current industry conversation is how many people talk as if the job of SEO is to create content. It isn't. Not for most businesses.
If you're a publisher, content is the product. Traffic is the commercial engine. But for most brands, content never did what people thought.
Early on, people wrote content for customers, and it worked. Then it changed. Content became a keyword vehicle. "Get people to our site" replaced good marketing comms.
Traffic became a proxy for exposure. It worked because search rewarded retrieval: type a query, get a page, get a click. All you needed to sell that model was the belief that any traffic was good traffic, and that this traffic somehow led to revenue your agency could keep delivering.
That model is now under serious pressure.
Google and ChatGPT are increasingly taking the click. Every serious large language model is trying to satisfy informational intent before the user reaches the source. They aren't trying to be better search engines. They're trying to make search engines unnecessary, and that's the entire point.
There's too much information on the web. People don't want to open 10 tabs and read five near-identical blog posts to find a basic answer. They want the answer. The AI systems exist precisely to give it to them.
So if informational retrieval gets absorbed into the interface, what remains? Marketing. That's the part many SEOs are still not fully grappling with.
The cleanest way to understand this shift is through the "4 Ps" of marketing: product, price, place, and promotion.
Traditional SEO has been, almost entirely, a place discipline. It's been about getting your products, services, or information onto the digital shelf when people go looking.
Keyword rankings are shelf position. Paid search is just a more expensive version of the same principle. In commercial search, you pay for premium placement in a digital aisle.
That still matters enormously.
Buyer-intent search remains valuable. Google hasn't solved its commercial transition to a fully AI-led interface, and won't overnight. Search is too important to Google's revenue to disappear fast. But another layer is emerging above it, and this is the layer that most agencies aren't yet equipped to compete on.
As AI systems become the first interaction point for more users, the game shifts from being present to being preferred.
Users don't just search. They ask. They describe a problem. They want the best CRM for a mid-market SaaS company, the best estate agent in their area, the best sandwich shop near the office. And the system responds with recommendations.
If classic SEO was about rankings, the next phase is about recommendations. If classic SEO was about digital placement, the next phase is about shaping preference. And recommendation, in practice, is advertising.
Not a display banner. Not a 30-second TV spot. But advertising in the oldest and most commercially powerful sense: influencing the choice someone makes before they've even consciously made it.
An AI-generated recommendation is an invisible ad unit. It doesn't bill by impression.
Why AI recommendations hit differently
When an LLM recommends a brand, it can't know with certainty what will work best. So it infers. It weighs signals: past success, prominence, reviews, case studies, corroborating sources, and repeated associations between a brand and a specific type of problem.
Humans do something almost identical.
Where performance is clearly bounded, we can identify a winner. We know who won the Oscar. We know which film topped the box office.
But when performance isn't obvious in advance, we rely on proxies. We ask friends, read reviews, and scan for authority. We use familiarity, logic, and social proof to estimate what is likely to be right.
That's exactly the territory AI recommendation is now entering: the consideration-set problem. If I ask an LLM to find me a reliable accountant for a small business, I'm not asking it to retrieve a blog post. I'm asking it to build me a shortlist.
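The shortlist-building described above can be sketched as a toy scoring model. This is purely illustrative (it is not any vendor's actual ranking logic), and every brand name, signal, and weight below is hypothetical: a weighted sum over the kinds of proxy signals the text lists, with the top scorers forming the consideration set.

```python
# Toy consideration-set model: weigh proxy signals (reviews, case studies,
# corroborating mentions, problem-specific positioning) and keep the top-k.
# All names, signals, and weights are hypothetical illustrations.

SIGNAL_WEIGHTS = {
    "avg_review": 2.0,            # average review score, 0-5
    "case_studies": 1.0,          # published case studies for this problem
    "independent_mentions": 0.5,  # corroborating third-party mentions
    "problem_match": 3.0,         # 1 if positioned for this exact buyer problem
}

def score(brand: dict) -> float:
    """Weighted sum of a brand's proxy signals."""
    return sum(SIGNAL_WEIGHTS[k] * brand.get(k, 0) for k in SIGNAL_WEIGHTS)

def shortlist(brands: list, k: int = 3) -> list:
    """Return the top-k brand names by signal score -- the consideration set."""
    ranked = sorted(brands, key=score, reverse=True)
    return [b["name"] for b in ranked[:k]]

brands = [
    {"name": "Acme Accounting", "avg_review": 4.8, "case_studies": 3,
     "independent_mentions": 12, "problem_match": 1},
    {"name": "Generic Books Ltd", "avg_review": 4.1, "case_studies": 0,
     "independent_mentions": 2, "problem_match": 0},
    {"name": "SmallBiz Ledger Co", "avg_review": 4.5, "case_studies": 5,
     "independent_mentions": 8, "problem_match": 1},
]

print(shortlist(brands, k=2))  # the two brands with the strongest evidence
```

The point of the sketch is the shape of the decision, not the numbers: a brand with no corroboration and no problem-specific positioning never makes the cut, however adequate its product.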
Unlike traditional search, the recommendation layer is invisible to brands unless they test for it actively. You don't see the prompt or the source chain. You don't even know why one brand made the cut and another didn't.
But the commercial effect is real, possibly stronger than anything traditional search produced. If you're in the recommendation set, you're in the running. If you're absent, you've lost the sale before the conversation started.
The first practical consequence: your website can no longer function like a polite digital brochure. Despite being optimized for search, many commercial web pages simply:
Introduce the company.
Gesture vaguely at services.
Bury differentiation under generic corporate language.
Treat the page as an endpoint for a ranking rather than a persuasive asset.
Still, they're weak where it matters most: actual selling.
In the Mad Men era of SEO, your landing pages and service pages need to function like sales pages, not in a cheesy direct-response way, but in the strategic sense that they must clearly answer four things:
Who is this for?
What problem does it solve?
Why is it different?
Why choose it over the alternatives?
This comes down to positioning, which is key to GEO. If seven brands do broadly the same thing, the model needs distinctions. It needs enough clarity to say: this brand is best for X kind of buyer with Y kind of problem because it does Z better than everyone else.
Your website copy must surface real performance attributes: the specific things you genuinely do better or more distinctively than competitors. Your pages must become machine-readable arguments for preference.
Copywriting is back
Actual commercial copywriting (not fluffy brand storytelling or word count for its own sake) identifies a target customer, sharpens the problem, articulates the value, and makes the offer easy to recommend.
Good copy isnβt optional.
Take a local sandwich shop. The old SEO conversation runs to "best sandwich near me," local pack, and review acquisition. It's useful, but limited.
The GEO version starts with the shop's actual performance attributes.
Is it the speed?
The handmade bread?
The office catering?
The locally sourced produce?
Those claims must be clear on the website first. Then they need corroboration everywhere else:
Reviews that mention the sourdough specifically.
A local food blogger's write-up.
Inclusion in "best lunch spots" roundups.
They're specific, repeated, retrievable evidence of why this shop is the right recommendation for a particular type of customer.
Scale that logic to a B2B software company, and the principle holds: pages that clearly explain who the product is for, which problems it solves, and why it outperforms rivals. Then build brand mentions, earn customer reviews, and gain trade-press coverage (the body of evidence to support recommending you to buyers) and let the AI find it.
That's pretty much GEO in a nutshell.
Keywords don't disappear, but they lose their throne
Keywords are a human workaround: approximations of intent, built for a retrieval system that needed exact string matching. LLMs process fuller context, layered needs, and comparative requirements. They move from keyword matching toward problem understanding.
Keyword research still matters for classic search, paid search, and buyer-intent pages. But the center of gravity shifts.
Instead of asking only "what terms should we rank for?", the better question is: what attributes make us the right recommendation for the buyer we actually want, and what evidence exists across the web to support that claim?
The future of SEO is starting to look like the old agency model, as the work is increasingly promotional. Once your website clearly expresses your positioning, the challenge becomes promoting that position across the wider web through credible, repeated, relevant signals.
Digital PR.
Traditional PR.
Expert commentary.
Case studies.
Reviews.
Listicles.
Awards.
Trade press.
Brand mentions.
Conference speaking.
Events.
Creator coverage.
Product comparisons.
Original data studies that other people actually cite.
These are the things you go after, create, and encourage. Sadly, many "AI visibility" conversations flatten this into nonsense.
The goal isn't merely to have content cited by AI. It's to gather enough market evidence that AI systems repeatedly encounter your brand in the right contexts, with the right associations.
The work stops being optimization and becomes maximization: building the largest possible volume of persuasive, corroborated, retrievable evidence that your brand is a sensible recommendation for a specific kind of buyer.
That's a fundamentally different model from anything the SEO industry has been selling. It's promotional and strategic brand marketing.
SEOs need to grow up. There's still significant value in buyer-intent search, technical site architecture, entity clarity, internal linking, and structured data. SEOs are well placed to monitor recommendation environments, test prompts, and identify where visibility is being won or lost.
But the identity crisis is real. Many agencies were built for a world of rankings, informational blogs, and monthly traffic graphs. They aren't equipped to lead a world defined by positioning, copy, PR, brand evidence, and recommendation science.
Tracking brand citations inside AI outputs isn't a complete strategy. It's a temporary metric.
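The prompt-testing work the article describes can be sketched in a few lines: run a panel of buyer-style prompts through a model and count how often each tracked brand appears in the answers. In this sketch, `ask_model` is a stand-in returning canned text so the example is self-contained; in practice it would wrap whatever LLM API you use, and all prompts and brand names here are hypothetical.

```python
# Sketch of recommendation-visibility testing: ask buyer-style questions,
# then count brand mentions across the model's answers.
from collections import Counter

PROMPTS = [
    "What's the best CRM for a mid-market SaaS company?",
    "Recommend a reliable accountant for a small business.",
]
TRACKED_BRANDS = ["Acme CRM", "Northwind Books"]  # hypothetical brands

def ask_model(prompt: str) -> str:
    # Stand-in for a real LLM call; returns canned answers for this sketch.
    canned = {
        PROMPTS[0]: "For mid-market SaaS, Acme CRM is a common pick.",
        PROMPTS[1]: "Northwind Books and Acme CRM both get strong reviews.",
    }
    return canned[prompt]

def mention_counts(prompts, brands) -> Counter:
    """Count how often each tracked brand appears across model answers."""
    counts = Counter()
    for p in prompts:
        answer = ask_model(p).lower()
        for b in brands:
            if b.lower() in answer:
                counts[b] += 1
    return counts

print(mention_counts(PROMPTS, TRACKED_BRANDS))
```

Real monitoring would add prompt variation, repeated sampling (model outputs are non-deterministic), and fuzzier matching than a substring check, but the measurement loop is the same.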
Winning agencies look like hybrid commercial strategy firms: part SEO, part copywriting, part PR, part brand strategy, part technical infrastructure. They know how to protect buyer-intent search revenue today while building the fame, clarity, and corroborated authority that earns recommendation tomorrow.
This is the Mad Men model of SEO. Persuasion, positioning, and clear claims backed by public proof matter again. And the job is to become recommended by AI.
LG Electronics, the #1 OLED Gaming Monitor Brand in the USA, today announced pricing and availability for two new additions to its 2026 UltraGear Evo gaming monitor lineup: the LG UltraGear Evo GX9 (model 39GX950B-B), the world's first 39-inch 5K2K curved OLED gaming monitor, and the LG UltraGear Evo GM9 (model 27GM950B-B), a 27-inch 5K Hyper Mini LED gaming monitor. Both monitors bring next-generation display performance and AI-powered features to competitive and immersive gaming, giving players sharper visuals, faster response times, and smarter connectivity than previous generations. Both are available for pre-order today at LG.com: the LG UltraGear Evo GX9 at $1,799.99 and the LG UltraGear Evo GM9 at $1,199. Pre-orders placed through May 3 include the option to add LG Premium Care, extending the standard warranty by two years, for only $1.
LG UltraGear Evo GX9βWorld's First 39-Inch 5K2K Curved OLED
From the #1 OLED Gaming Monitor Brand in the USA, the LG UltraGear Evo GX9 brings impeccable OLED picture performance to a size and scale not previously available, with a near-instant 0.03 ms (GtG) response time on a 39-inch 5K2K canvas. As the world's first 39-inch 5K2K (5120 x 2160) curved OLED gaming monitor, it pairs a 21:9 ultrawide format with a 1500R curve and 143 PPI pixel density, offering a wider, more panoramic view and crisp text clarity that pulls players deeper into the action.
Zyxel Networks, a leader in delivering secure and AI-powered cloud networking solutions, today announced the launch of the WBE665S BE22000 12-stream Wi-Fi 7 Triple-Radio NebulaFlex Pro ruggedized access point. The new solution presents MSPs and installers with an opportunity to address the rising demand for fast, reliable wireless connectivity within industrialized and challenging environments. Combining a durable IP67-rated weatherproof design and AI-powered cloud management, the WBE665S is designed for professional installers deploying networks in demanding locations.
In warehousing and distribution, manufacturing, cold storage, large-scale retail, and other sectors, Wi-Fi is now being extended into zones that were once considered too harsh for wireless connectivity. Forklift trucks run connected tablets, IoT sensors track the movement of consignments and goods, and handheld barcode scanners drive greater efficiency and accuracy. In these environments, hazards such as extreme temperatures, humidity, and dust are common, and dropped connections, downtime, and hardware failures can disrupt operations.
Indianapolis City-County Council member Ron Gibson, a Democrat who has held his position since 2023, recently expressed support for rezoning related to a proposed $500 million data center project. Two large buildings, from Los Angeles-based startup Metrobloks, will be built on a 14-acre site located in the Martindale-Brightwood neighborhood of...
TSNC is being positioned as a practical path for developers who already ship BC-compressed assets and want to squeeze more data into the same storage, bandwidth, or VRAM budgets without rethinking their pipelines.
Save $200 on this awesome AMD build from iBuyPower, featuring an AMD Ryzen 7 7800X3D, RTX 5070, 32GB of DDR5 RAM, and a 2TB SSD, all for just $2,049 right now.
In the rapid evolution of the 2026 threat landscape, a frustrating paradox has emerged for CISOs and security leaders: identity programs are maturing, yet the risk is actually increasing.
According to new research from the Ponemon Institute, hundreds of applications within the typical enterprise remain disconnected from centralized identity systems. These "dark
When talking about credential security, the focus usually lands on breach prevention. This makes sense when IBM's 2025 Cost of a Data Breach Report puts the average cost of a breach at $4.4 million. Avoiding even one major incident is enough to justify most security investments, but that headline figure obscures the more persistent problems caused by recurring credential
I'm getting a mid-career executive MBA. Last week, in class, we discussed the interaction between automation and advertising. The lecture covered why A/B testing in Meta is less valuable now, since Facebook can auto-optimize faster and better than marketers can on their own.
A classmate took the logical leap and asked the professor, "If digital channels have more data and more processing power, why don't advertisers just give them a URL and a credit card and let them go wild?"
The argument has real merit. Google, Meta, and LinkedIn have access to more data than any agency ever will. Their optimization engines are improving fast. Handing them a budget and a URL and walking away isn't entirely crazy.
But that means we'd need to have faith in the channels to optimize media in a business's best interests, and there's a long, proud history of that not being the case.
1. The opt-in that wasn't
About six years ago, we met with a Google rep who pitched a product that introduced broader, more aggressive targeting and bidding. We listened to the pitch and said no. We didn't want to try it. The reps turned it on anyway.
What happened next was what we predicted. The campaigns spent significantly more money and didn't generate any additional conversions.
We had to comp the client for the wasted spend, which was bad enough. But what made it worse was the principle of the thing: we hadn't agreed to this. Google made unauthorized changes to our account.
When I tried to get the money back, Google's position was that we'd set our campaign budgets at a certain level, and they were within their rights to spend up to that amount. That framing ignores that a budget cap is a ceiling, not an invitation.
Our agency methodology is to never hit a budget cap. We set those numbers based on the strategy we'd approved, not the one they decided to test. I hounded them for weeks, but never got any resolution. It still makes me angry.
The reps were clearly incentivized to get adoption of the new feature. When it didnβt work, there was no accountability and no recourse. We were left covering the cost of a decision we explicitly declined.
What's being misrepresented
Budget caps were treated as implicit consent to spend. A product we declined was activated without authorization, and when it failed, the platform pointed to our own settings as justification.
The incentive structure rewarded the reps for turning it on. There was no corresponding mechanism to make the advertiser whole when it didnβt work.
2. The gross margin whiteboard pitch
This was years ago, on a successful retainer. A pair of senior Google reps sat across from us and asked what our client's gross margin was. Around 50%, we said. They went to the whiteboard and wrote out: if overall revenue/2 - overall media cost >= 0, then we should keep spending money on ads.
On the surface, the math looks right. In practice, it has two problems.
It assumes the reported conversions are incremental, meaning they wouldn't have happened without the paid ad. A substantial portion of any Google campaign's reported conversions, particularly in brand and retargeting, are users who were already going to convert.
The model assumes a flat cost curve, where the 500th conversion costs the same as the 50th. It does not. Marginal returns fall as you scale. The last dollars of spend are always the least efficient, but they're exactly what this pitch is designed to help Google access. (They should have said: marginal revenue/2 - marginal cost = 0 is the profit-maximization condition.)
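The gap between the whiteboard rule and the marginal rule is easy to show with numbers. The figures below are illustrative only (a made-up, flattening revenue curve at a 50% margin): the average rule (revenue/2 - total media cost >= 0) keeps saying "spend more" even after the last increment of spend is losing money.

```python
# Illustrative diminishing-returns curve: cumulative spend in $10k steps
# against cumulative revenue, at a 50% gross margin. All numbers made up.
spend_levels = [10_000, 20_000, 30_000, 40_000]
revenue_at =   [60_000, 100_000, 120_000, 125_000]  # curve flattens

for i, (spend, revenue) in enumerate(zip(spend_levels, revenue_at)):
    avg_ok = revenue / 2 - spend >= 0          # the whiteboard rule
    if i == 0:
        marginal_profit = revenue / 2 - spend
    else:
        d_rev = revenue - revenue_at[i - 1]    # revenue from the last step
        d_cost = spend - spend_levels[i - 1]   # cost of the last step
        marginal_profit = d_rev / 2 - d_cost   # margin on the *last* $10k
    print(f"spend ${spend:,}: average rule says keep spending={avg_ok}, "
          f"profit on last $10k=${marginal_profit:,.0f}")
```

At $40,000 of spend the average rule still passes (revenue/2 exceeds total cost), but the final $10,000 step returned only $5,000 of revenue, i.e. $2,500 of margin, for a $7,500 loss on that increment. That is the overspend the pitch is designed to capture.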
What's being misrepresented
The model treats all reported conversions as incremental and assumes cost per conversion is constant across spend levels. Both assumptions are wrong, and together they can justify significant overspend.
3. The "higher CPCs buy better clicks" pitch
This one still happens all the time. The pitch is that if you raise your CPCs, you'll get access to higher-quality traffic. The implied logic is that conversion rate is influenced by CPC, and that if your investment isn't high enough, you're missing the best clicks.
There's a version of this that has some truth to it. Higher CPCs can mean higher ad positions, which can mean higher impression frequency against the same users. More frequency can drive higher aggregate conversion rates, because repeated exposure matters.
But the argument glosses over the other side of that equation.
Higher frequency has diminishing marginal returns.
The third impression is worth less than the first. The tenth is worth a lot less.
The cost curve isn't flat. You're paying more per click at every step.
In practice, raising CPCs to chase quality traffic is almost always correlated with substantially worse overall return on ad spend.
This is a variant of the marginal return problem seen across these cases. The pitch frames the upside without acknowledging the cost curve. More spend gets positioned as access to better outcomes, when it often delivers the same outcomes at a higher price.
What's being misrepresented
CPC and conversion rate are presented as if higher bids unlock better traffic. In most cases, the incremental cost outpaces the incremental return. The pitch frames diminishing returns as an opportunity, rather than a constraint.
4. "The algorithm just needs to learn"
"If your Meta campaigns are underperforming, it's because the algorithm just needs more time to learn."
"Don't make changes, and don't reduce budget, just give the platform more data."
This is sometimes true. Machine learning systems need volume to optimize effectively, and premature intervention can reset progress.
But "it needs to learn" has become a catch-all explanation that's almost impossible to disprove in the short run. It explains away poor CPAs, delays accountability, and keeps spend flowing when a reasonable advertiser might otherwise pull back and reassess.
There's rarely a clear definition of when the learning phase ends, which makes it a moving target. The learning phase ends when performance improves. If performance doesn't improve, more learning is prescribed.
What's being misrepresented
A real technical concept is being used in ways that resist falsification. When there's no defined endpoint and no stated criteria for success, "it needs to learn" serves as a blank check for budgetary continuity.
5. The metric pivot: When conversions fail, sell sentiment
In many cases, YouTube or display campaigns aren't driving measurable conversions. The rep's suggestion: let's look at brand measurement. We can measure recall rates, positive sentiment, and intent to purchase. These are real signals of brand health, and they matter in the long run.
But the shift from conversion to sentiment metrics tends to occur when conversion metrics are poor, not as a principled measurement strategy. Brand lift surveys measure awareness under controlled conditions, but they rely on self-reported intent and don't connect to downstream revenue.
Recall is almost never translated into a cost per point of lift that can be compared across the media plan. You end up with a number that's positive and presented as evidence of success, with no agreed-upon framework for what sufficient lift would look like.
What's being misrepresented
A softer metric is substituted for a harder one after the harder one fails. Brand lift is a legitimate measurement tool when defined upfront as a success criterion. Introduced afterward, it functions as a consolation prize.
6. Upper funnel combined with lower funnel for a blended average
Upper-funnel and lower-funnel campaigns serve different purposes and perform differently on a cost-per-acquisition basis. When a channel reports blended CPA across all campaign types, an average that looks acceptable can hide the fact that some portion of the media plan is wildly inefficient at the margin.
The argument for blending is that upper-funnel spend creates the conditions for lower-funnel performance. That is plausible, but plausibility isn't the same as demonstrated causality.
Often, it's assumed the upper funnel is directly contributing and that, in aggregate, the system is profitable and fully incremental. This is never the case.
What's being misrepresented
Aggregate CPA can look fine while specific segments of spend have no measurable return. Blending is a reporting choice, and it can obscure where money is and isn't working.
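A quick arithmetic sketch shows how blending hides the problem. The campaign names, spend, and target figures below are made up for illustration: the blended CPA comes in under a notional $100 target even though half the budget is converting at sixteen times the cost of the other half.

```python
# Illustrative only: blended CPA masking a wildly inefficient campaign type.
campaigns = {
    "lower_funnel_search": {"spend": 40_000, "conversions": 800},  # $50 CPA
    "upper_funnel_video":  {"spend": 40_000, "conversions": 50},   # $800 CPA
}

total_spend = sum(c["spend"] for c in campaigns.values())
total_conv = sum(c["conversions"] for c in campaigns.values())
blended_cpa = total_spend / total_conv

print(f"blended CPA: ${blended_cpa:.2f}")  # under a notional $100 target
for name, c in campaigns.items():
    print(f"{name}: ${c['spend'] / c['conversions']:.2f} per conversion")
```

The blended number (about $94) looks healthy; only the per-campaign breakdown reveals that the upper-funnel half of the plan is producing conversions at $800 each.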
7. View-through conversions: The numbers that shouldn't count
A view-through conversion is counted when a user sees an ad, doesn't click it, and then converts within some attribution window, often 24 hours or more. Platforms report these alongside click-through conversions by default.
For retargeting campaigns, which by definition serve ads to people who have already visited your site, view-through attribution is particularly problematic. These users were likely going to return and convert regardless. The ad may have had nothing to do with it.
The issue isn't that view-throughs aren't meaningful. For a cold audience, some brand-influenced conversions happen without clicks.
The issue is that those conversions are almost never broken out proactively (you have to ask). And when you remove view-throughs from retargeting campaigns, the ROAS numbers can change dramatically.
We've seen cases where removing VTAs cuts reported conversions by more than half. I would note that, by moving to incremental measurement options, Meta has become substantially more transparent.
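The size of that swing is easy to see with a worked example. The figures below are invented to mirror the "more than half" case the text describes: view-through revenue dominates the reported number, so stripping VTAs collapses the ROAS.

```python
# Illustrative retargeting campaign: reported ROAS with view-through
# conversions included vs. click-through only. All figures made up.
spend = 10_000
click_through_revenue = 18_000
view_through_revenue = 27_000   # users who saw an ad but never clicked

reported_roas = (click_through_revenue + view_through_revenue) / spend
click_only_roas = click_through_revenue / spend

print(f"reported ROAS (with VTAs): {reported_roas:.1f}x")
print(f"click-only ROAS:           {click_only_roas:.1f}x")
```

Here the campaign reports 4.5x ROAS, but with view-throughs removed it drops to 1.8x, and since retargeting audiences were already likely to convert, even that click-only figure overstates true incrementality.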
What's being misrepresented
View-through conversions inflate reported performance, particularly in retargeting, where incrementality is already low. Default reporting includes them without flagging the methodological problem.
8. The competitive benchmark pitch
This one is a pattern. A channel rep brings industry benchmark data to a meeting showing that your competitors are spending at a level above your current budget. The implication is clear: you're being outspent, and you should close the gap.
Industry benchmarks are among the most valuable inputs a channel can provide. Knowing where you sit relative to the market is useful context for planning. The problem is how they get deployed. More often than not, benchmark data shows up as a tool to expand media spend, not as a neutral input into strategy.
And it works. CEOs and CMOs are particularly susceptible to this framing. Nobody wants to hear that a competitor is outspending them.
The emotional pull of "they're investing more than you" is hard to counter with a measured conversation about marginal returns or strategic fit. The benchmark becomes the argument, and the argument is almost always "spend more."
What gets lost is any discussion of whether:
The competitor's spend is actually working for them.
Your business model and margins support the same level of investment.
The benchmark even reflects an apples-to-apples comparison.
Competitive spend data without context is just a number that makes your budget feel inadequate.
What's being misrepresented
Benchmark data is real, but it's selectively introduced to justify budget increases rather than treated as one input among many. The framing skips over whether the comparison is meaningful and relies on competitive anxiety to sell.
9. The default settings trap
This one is hard to frame as a single incident because itβs everywhere. Iβve talked to so many people trying to break into the industry, or launch their first campaigns, and the story is almost always the same.Β
They follow the platformβs setup guide, accept the default settings, and end up opted into programs that have close to zero chance of being successful.
This is true across pretty much every major channel.Β
LinkedIn defaults you into audience network inventory that runs outside the LinkedIn feed.Β
Google opts you into display inventory when youβre trying to run search. Broad match keywords are set way too far out of the box. Suggested CPCs are astronomical.Β
Googleβs geographic targeting defaults to βpresence or interestβ rather than actual location.Β
Each of these defaults, taken individually, could be defended as a reasonable starting point. Taken together, they create a setup that maximizes the platformβs revenue from day one, before the advertiser knows whatβs happening.
A new advertiser following the guided setup is accepting a configuration that the platform designed, and the platform's incentives aren't aligned with efficient spend.
This one is genuinely difficult to solve. Platforms need to provide default settings, and they can't expect every new advertiser to understand every option.
But there's something predatory about the gap between what people think they're signing up for and what they're getting. The defaults are revenue-optimized for the channel, not performance-optimized for the advertiser.
What's being misrepresented
Setup guides and default settings are presented as best practices when they're actually configurations that favor the platform's revenue. New advertisers trust the guided experience, and have no reason to suspect the defaults are working against them.
10. The tracking gap
Privacy regulations and platform changes have created real limitations in conversion tracking. GDPR and Apple's App Tracking Transparency aren't invented problems.
We have less visibility than we used to, and the platforms have responded by layering probabilistic modeling and modeled conversions on top of deterministic tracking.
But the tracking gap has also become a convenient shelter for underperformance. The argument goes like this:
"The conversions are happening, we just can't see them all yet. There's latency in the data."
"There are limits to what can be tracked. We need a longer attribution window."
"We need more time for the modeled data to populate. And in the meantime, here are some proxy metrics that we think are directionally valid, so let's keep pushing."
Each of those can be true in isolation. Modeled conversions take time to appear. Attribution is harder than it was five years ago. Proxy metrics can be useful when direct measurement breaks down.
The problem is when all of these caveats get stacked together and used to justify sustained spend in the absence of any measurable result. At some point, "the data will come in" stops being a reasonable expectation and becomes an article of faith.
The tracking gap is real, but it cuts both ways. If you can't measure the result, you also can't prove the spend is working. The platform's default position is to assume it is, and keep going. The advertiser's job is to ask what happens if the modeled conversions never materialize, and what the fallback plan looks like if they don't.
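One way to keep that fallback conversation concrete is to track what share of reported conversions is modeled rather than observed. A minimal sketch, with a hypothetical 40% threshold and hypothetical numbers:

```python
def modeled_share(observed: int, modeled: int) -> float:
    """Fraction of reported conversions that are modeled, not directly observed."""
    total = observed + modeled
    return modeled / total if total else 0.0

def tracking_gap_check(observed: int, modeled: int,
                       max_modeled_share: float = 0.4) -> str:
    """Flag when reported performance leans too heavily on modeled data.

    The 0.4 threshold is an illustrative assumption; pick one that matches
    your own tolerance for unverified results.
    """
    share = modeled_share(observed, modeled)
    if share > max_modeled_share:
        return f"{share:.0%} of conversions are modeled; treat reported ROAS as unproven"
    return f"{share:.0%} modeled; reported performance is mostly observed"

print(tracking_gap_check(observed=120, modeled=180))
```

The number itself doesn't settle the argument, but it moves the discussion from "the data will come in" to a measurable quantity you can watch over time.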
What's being misrepresented
Legitimate tracking limitations are used to defer accountability indefinitely. When measurement is hard, the platform's recommendation is always to maintain or increase spend, never to reduce it. The uncertainty gets resolved in the channel's favor by default.
None of this is an argument that agencies are irreplaceable in their current form. We used to question tCPA, and now it's a preferred bidding strategy. Automation handles execution-level work that used to require skilled practitioners. In-house teams are viable for more companies than they used to be.
But the argument for fully autonomous, channel-run advertising assumes the channel will optimize for your outcomes rather than revenue. Even if we imagine new profit-sharing contracts, this assumption carries real risk.
And I'm not blaming reps or the channels. They believe in their products, but they're also measured on metrics that create a predictable drift in how they frame data. I should note that agencies struggle with misaligned incentives as well.
The advertiser's job, with or without an agency, is to keep asking the inconvenient questions.
What is the marginal return at this spend level?
What percentage of conversions are view-throughs?
What does performance look like if we exclude brand search?
Are we measuring incrementality, or are we measuring correlation and calling it causation?
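The first and third of those questions reduce to arithmetic that platform dashboards rarely surface. A sketch using hypothetical spend and revenue figures:

```python
def marginal_roas(spend_levels: list[float], revenue_levels: list[float]) -> list[float]:
    """ROAS of each incremental slice of spend, not the blended average."""
    pairs = list(zip(spend_levels, revenue_levels))
    return [(r1 - r0) / (s1 - s0)
            for (s0, r0), (s1, r1) in zip(pairs, pairs[1:])]

def roas_excluding_brand(total_rev: float, brand_rev: float,
                         total_spend: float, brand_spend: float) -> float:
    """Performance with brand search stripped out."""
    return (total_rev - brand_rev) / (total_spend - brand_spend)

# Hypothetical figures: blended ROAS at $30k spend looks fine (60k / 30k = 2.0),
# but the last $10k of spend returned only $0.50 per dollar.
spend = [10_000, 20_000, 30_000]
revenue = [30_000, 55_000, 60_000]
print(marginal_roas(spend, revenue))  # [2.5, 0.5]
```

This is exactly the gap between the blended number a rep will show you and the marginal number that should drive the next budget decision.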
Maybe the answer to everything is eventually full automation. But the entity building the machine shouldn't be the one telling you when it's ready.
For years, Salesforce Marketing Cloud was the safe choice.
Powerful. Enterprise. Trusted.
But lately, weβre hearing something different:
"Our data is too tangled to activate."
"We're locked into contracts."
"We're stuck sending the same emails on repeat."
"Everything is Band-Aids and duct tape; I don't know how we can move without breaking everything."
"We feel stuck."
Sound familiar? If so, this fireside chat is for you.
We've helped dozens of brands migrate off Salesforce and into modern, composable engagement architectures built for real CRM performance. Not because it's trendy, but because marketers needed more speed, flexibility, and innovation.
In this April 14 session, we'll cover:
Why brands feel stuck (and why it's more common than you think).
What's happening inside the Salesforce ecosystem.
The biggest misconceptions about migrating.
Understanding the martech landscape.
What life actually looks like after moving to a modern platform like Braze.
How CMOs and martech leaders should think about platform decisions over the next 3 to 5 years.
How to get the rest of your org on board with making a move.
The steps to take now to set yourself up for migration success.
To be clear: this isn't a Salesforce-bashing session.
It's a candid conversation about innovation velocity, marketing ownership, and what the next era of marketing actually requires.
Disclaimer: To ensure a candid and open conversation, the live session is open only to brand-side marketing leaders. Registrants who are not verified brand-side marketing leaders will not be permitted to attend the live session. However, the recorded session will be made available to all registrants upon completion of the event.
Intel's first CPUs to integrate Nvidia graphics chiplets are reportedly called "Serpent Lake", and they could launch in late 2028. Last year, Intel struck a deal with Nvidia that would allow them to "build and offer to the market x86 system-on-chips (SOCs) that integrate NVIDIA RTX GPU chiplets." According to the leaker Jaykihn, Intel's first […]
Turtle Beach's VelocityOne Race KD3 Racing Wheel and Pedals is currently one of the company's best accessories for emulating real-life racing on Xbox and PCs, and it's now on sale for 44% off.
Microsoft is changing how OneDrive handles deleted files. Soon, cloud deletions won't hit your local Recycle Bin, forcing you to use the web for recovery.
Copilot's terms of use indicate that it's for entertainment purposes only. However, Microsoft has indicated that the phrasing is legacy language from when Copilot originally launched as a search companion service in Bing.
Sonnet Technologies, a US-based long-time provider of connectivity solutions for Mac, Windows, and Linux systems, has released two new Thunderbolt 5 docking stations: the Echo 20 SecureDock and Echo 21 SuperDock. Both target professional users looking for high-speed I/O, with the key difference being built-in storage support on the higher-end model. The two docking stations are mostly identical in terms of connectivity. Both feature three Thunderbolt 5 ports, a host connection capable of up to 140 W of power, and two downstream ports for peripherals and daisy-chaining. There are also nine USB 3.2 Gen 2 ports split between Type-A and Type-C, a 10 Gigabit Ethernet port with backward compatibility, and dual display outputs via HDMI and DisplayPort. Depending on the host system, the docks can drive up to four displays, with support reaching 8K at 60 Hz on Windows systems.
The Echo 21 adds an internal M.2 NVMe slot taking drives up to 8 TB, with transfer speeds reaching 3,300 MB/s. This makes it suitable for local media storage or backup use cases without requiring external drives. Both models feature high-res audio I/O at up to 24-bit/192 kHz, plus full-size SD and microSD card readers. Compatibility covers Apple M-series Macs, older Intel Macs with Thunderbolt 3, and Windows or Chromebook machines with Thunderbolt 4, 5, or USB4. The Echo 20 SecureDock is available now at $449.99, with the Echo 21 SuperDock following in late May at $499.99.
Errol Segal has been an LA Dodgers season ticket holder for over 50 years, but that run has come to an end as the team transitions to all-digital season tickets.
Robert Hallock, Intel's vice president and general manager of client segment technical marketing, confirmed in an interview with Club386 that the Raptor Lake lineup remains "a big part" of Intel's client segment strategy and will stay in production alongside newer chips.
New academic research has identified multiple RowHammer attacks against high-performance graphics processing units (GPUs) that could be exploited to escalate privileges and, in some cases, even take full control of a host.
The efforts have been codenamed GPUBreach, GDDRHammer, and GeForge.
GPUBreach goes a step further than GPUHammer, demonstrating for the first time that
STALKER 2 is getting some free content ahead of its Cost of Hope DLC this summer. GSC Game World has confirmed that STALKER 2 is getting a free content update this month. This update is "Sealed Truth", which will allow players inside the X-18 Lab. STALKER fans should already be aware of the Lab X-18, […]
Based on the satirical novel and film franchise, Starship Troopers: Ultimate Bug War! puts the player in the shoes of Major Dietz as they fight back an army of alien arachnids. Or try things out from the bug's perspective and destroy all humans. Either way, you're in for a good time.
IBASE Technology Inc., a leading manufacturer of embedded and edge computing solutions, launches the MBB1002, a powerful AI-ready eATX motherboard engineered to accelerate next-generation edge AI and data-intensive applications. Powered by AMD EPYC Embedded 8004 series processors, it delivers exceptional multi-core performance and outstanding power efficiency, enabling faster AI inference, real-time analytics, and high-throughput computing at the edge.
Built for scalability and performance, the MBB1002 supports up to 576 GB DDR5-4800 ECC memory for reliable, high-speed data processing. Five PCIe Gen 5 x16 slots unlock unmatched flexibility for integrating GPUs and AI accelerators, empowering system integrators to scale performance based on evolving workload demands. With dual 10GbE LAN and high-speed NVMe storage support, the platform ensures ultra-fast data transfer and seamless system responsiveness for mission-critical deployments.
ASUS today announced the ProArt Router PRT-BE5000 and ProArt Switch PQG-U1080, introducing networking solutions into the ProArt family of Creator-First devices designed for modern studios. Joining the existing ProArt lineup of laptops, displays, graphics cards, motherboards, and other creator-focused products, these new devices help build a more complete studio infrastructure for creators. Combining dual-band Wi-Fi 7 connectivity, intelligent traffic prioritization, and high-speed multi-gigabit wired expansion, the ProArt Router PRT-BE5000 and ProArt Switch PQG-U1080 enable quick file transfers, cloud collaboration, and stable connections across multiple creative devices.
The ProArt Router PRT-BE5000 delivers dual-band Wi-Fi 7 with throughput performance of up to 5000 Mbps plus Multi-Link Operation (MLO), and dual 2.5G WAN/LAN connectivity for flexible, high-speed wired connections. Creator-First adaptive QoE intelligently prioritizes creative traffic in real time, helping to ensure fast file transfers, smooth cloud collaboration, and streaming across connected devices, in harmony with other network activity. ASUS Smart Home Master software further simplifies network segmentation through dedicated SSIDs for IoT devices and VPN connections, enabling more intuitive management across studio and personal environments.
Motherboards, who needs them? Not Breadboarding Labs, which recently drafted plans for a retro Intel 80386 (i386) PC build using solderless breadboards.
Giraffe Gold lets you build ownership of a physical gold, silver, or platinum bar through small monthly contributions and automatic round-ups. Connect your bank and spending card, set a contribution starting at $50, and watch your bar balance grow in real time with market prices.
When you hit the bar price, Giraffe Gold purchases from certified refiners and ships your bar fully insured to your door. The platform uses Plaid for secure, read-only connectivity and partners with Upstate Coin & Gold and ShipSecure to ensure authenticity and safe delivery.
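The accumulation mechanic described above can be sketched as a simple loop. The bar price and contribution amounts below are hypothetical, chosen only to show how contributions plus round-ups converge on a purchase threshold:

```python
def months_to_bar(bar_price: float, monthly: float,
                  avg_roundups_per_month: float = 0.0) -> int:
    """Months until monthly contributions plus average round-ups reach the bar price."""
    balance, months = 0.0, 0
    while balance < bar_price:
        balance += monthly + avg_roundups_per_month
        months += 1
    return months

# Hypothetical: a $2,400 silver bar at the $50/month minimum,
# plus roughly $10/month in spending round-ups.
print(months_to_bar(2_400, monthly=50, avg_roundups_per_month=10))  # 40
```

The real platform values balances at live market prices, so the actual timeline would shift with the metal's price; this sketch assumes a fixed bar price for simplicity.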
Stay the Week helps homeowners privately share lake houses, cabins, beach houses, ski condos, and second homes with friends and family. Invite guests to a private booking page to see availability, request dates, and receive automatic confirmations and reminders. Owners control availability, blackout dates, and access from a simple dashboard, with directions and property info attached to each booking, replacing messy text threads with a clean, controlled process.
The UK's cybersecurity workforce has nearly tripled, but headcount growth is masking a deeper crisis - privacy teams remain critically understaffed, underfunded, and underpowered just as threats intensify.
FORMLOVA is a chat-first form service powered by MCP. Create forms from ChatGPT, Claude, or Cursor in under a minute, then manage response routing, follow-up emails, reminders, analytics, and CRM handoffs from the same conversation. It integrates with 118 tools across 24 categories, focusing on the 95% of form work that happens after the form exists.
Most AI form tools stop at creation. FORMLOVA was built by a solo founder with years in digital marketing who knew the real burden was in post-publish operations. It's free to start with unlimited forms and responses.
If you're running a website on Cloudflare's free or pro plan and don't have time to babysit logs or tune WAF rules, Detect7 fits. It gives you automated, intelligent protection that works in the background without needing you to be a security expert. You set it up once, and it handles the rest: detecting threats, escalating from challenge to block, learning traffic patterns, and managing Cloudflare firewall rules and IP lists for you. It analyzes 100% of your origin logs in real time, learns your normal patterns, and auto-blocks threats with adaptive rules pushed to your Cloudflare integration.
A China-based threat actor known for deploying Medusa ransomware has been linked to the weaponization of a combination of zero-day and N-day vulnerabilities to orchestrate "high-velocity" attacks and break into susceptible internet-facing systems.
"The threat actor's high operational tempo and proficiency in identifying exposed perimeter assets have proven successful, with recent
Spawnbase turns recurring work into reliable AI-powered workflows. Teams describe goals in plain language, then build agentic flows with triggers, AI steps, and app actions across seven providers and 200+ models. Configure nodes, test each run, and deploy on schedules or events while monitoring performance. Connect Slack, GitHub, HubSpot, Notion, Jira, and more, and pay only for executions with credit-based pricing.
Threat actors are exploiting a maximum-severity security flaw in Flowise, an open-source artificial intelligence (AI) platform, according to new findings from VulnCheck.
The vulnerability in question is CVE-2025-59528 (CVSS score: 10.0), a code injection flaw that could result in remote code execution.
"The CustomMCP node allows users to input configuration settings for connecting