

Today, 1 April 2026: Tech

Intel "Wildcat Lake" Core 300 Series Specifications Surface

31 March 2026 at 23:34
Intel's "Wildcat Lake" processors, part of the Core 300 series non-Ultra family, have been leaked by a reputable source Jaykihn0 on X, revealing the entire lineup across various configurations and SKUs. The lineup includes six SKUs across the Core 3, Core 5, and Core 7 tiers, all designed to operate within a 15 to 35 W TDP range. Each model features a hybrid core configuration, pairing two "Cougar Cove" P-cores with four low-power efficiency cores, completely omitting the traditional "Darkmont" E-cores. Boost clocks range from 4.3 GHz on the entry-level Core 3 304 up to 4.8 GHz on the Core 7 360. All six SKUs share 6 MB of L3 cache, a single NPU tile, and integrated Xe3 graphics. The leak suggests that Intel is bringing architecture closely related to the Core Ultra 300 "Panther Lake" mobile platform into the embedded and industrial space, or perhaps into low-cost laptop configurations that don't require the power of "Panther Lake," appealing to buyers seeking budget-friendly options.

The 2P+0E+4LPE core layout is a deliberate trade-off, prioritizing efficiency over raw multithreaded performance, which suits the thermal constraints common in edge and IoT deployments. NPU performance figures range between 15 and 17 TOPS across the lineup. While this won't power the largest LLMs, it may be more than sufficient for on-device inference in industrial or automation settings. The Core 3 304 deserves special mention: it reduces to a single P-core and one Xe graphics unit, creating a clear cost-optimized option at the bottom of the lineup. SIPP certification, important for buyers needing stable, long-lifecycle platform support, is available on the Core 7 360 and Core 5 330 but not consistently across the lineup. Notably, there is no vPro support on any SKU, clearly distinguishing "Wildcat Lake" from Intel's enterprise mobile portfolio.

NVIDIA Launches Auto Shader Compilation for Faster Game Loading and Less Stuttering

31 March 2026 at 22:37
The NVIDIA App update today introduced some interesting features, such as DLSS 4.5 dynamic multi-frame generation and a 6x mode. Additionally, the app now includes a new beta version of NVIDIA Auto Shader Compilation (ASC). This feature takes DirectX 12 shaders from games and quietly compiles them while the system is idle or not running any graphically intensive tasks. Typically, when you start a game, you have to wait for all assets to load and shaders to compile before you can begin playing. However, with ASC, NVIDIA aims to shorten this process by pre-compiling shaders to reduce loading times and, interestingly, decrease in-game stuttering, which can occur when shaders don't load properly. NVIDIA states that this feature is opt-in within the NVIDIA App and can be enabled by navigating to the Graphics Tab > Global Settings > Shader Cache. Once in the menu, users can access a range of settings, including the option to turn on Auto Shader Compilation.

Since ASC uses a separate folder, users will need to allocate sufficient disk space to store the shaders that ASC will access. In the NVIDIA App, gamers can choose the "Compile Now" option to pre-compile all game shaders immediately by clicking on three dots, or they can wait for the system to do it automatically when it becomes idle. As compiling shaders requires some computing power, there are settings to control system utilization, with the default set to medium. The NVIDIA App will also display the date of the last compilation. Interestingly, ASC will perform its functions once a game is downloaded and after a new driver update is installed for optimal performance. NVIDIA requires GeForce Game Ready Driver 595.97 WHQL or newer for ASC to work, and more optimizations are expected as the beta testing concludes in the coming weeks.

We go hands-on with Nvidia's DLSS 4.5 Dynamic Multi Frame Generation and its 5X and 6X multipliers: more generated frames, now tailor-made for your monitor's refresh rate

31 March 2026 at 22:22
The arrival of Nvidia's DLSS 4.5 Dynamic Multi Frame Generation mode and its extended 5X and 6X multipliers promises more control and higher generated frame rates for GeForce RTX 50-series graphics cards. We went hands-on to see just how far AI can stretch one input frame.

The Card Shop Store – Buy, sell, vault, and fractionally own graded trading cards


The Card Shop Store is a marketplace for buying, selling, and vaulting sports, TCG, and entertainment trading cards. It supports direct sales and auctions, offers storefronts for sellers, and features CardShares for fractional physical ownership. You can browse graded and raw cards, track conditions and prices, and manage secure transactions. Use the web or mobile apps to list inventory, join breaks and auctions, and keep high-value cards safe in vault storage.

View startup

Tonimus AI – Automate content, posting, and engagement while tracking real revenue


Tonimus automates social media growth for creators by generating, posting, and engaging in your brand voice while reporting revenue and personalized insights. Instead of guessing, creators know which platform earns money, how authentic their audience is, and how they compare across their genre, all based on real data. Tonimus not only tells you how many followers you have but also what they're worth and what to do next. It shows creators exactly which content drives revenue and automates creating more of it.

View startup

AnveVoice – Add voice AI that navigates your site and completes tasks


AnveVoice brings voice-first conversations to your website so visitors can speak naturally and get things done. It listens, understands intent, and acts on the page by scrolling, navigating, filling forms, and booking meetings while remembering preferences across sessions.

Embed a single script to add it to Shopify, WordPress, Webflow, Wix, Squarespace, React, or any site. A dashboard tracks sessions, conversions, and usage in real time so you can monitor performance and scale with transparent, token-based pricing.

View startup

Yesterday, 31 March 2026: Tech

Android Developer Verification Rollout Begins Ahead of September Enforcement

Google on Monday said it's officially rolling out Android developer verification to all developers to combat the problem of bad actors distributing harmful apps while "hiding behind anonymity." The development comes ahead of a planned verification mandate that goes into effect in Brazil, Indonesia, Singapore, and Thailand this September, before it expands globally next year. As part of this

YouTube adds AI creator matching and ad formats to its partnerships platform

31 March 2026 at 22:50

YouTube used its NewFront presentation to unveil a significant upgrade to its Creator Partnerships platform, adding Gemini-powered creator matching, stronger measurement tools, and new ways to run creator content as paid ads.

Why we care. Influencer marketing has become a core part of many brands’ strategies, but finding the right creators at scale and proving ROI remain pain points. This update tackles both of those friction points.

Gemini-powered matching cuts through the noise of three million creators, while the ability to run creator content as paid Shorts and in-stream ads makes performance measurable like any standard campaign, backed by a reported 30% conversion lift.

How it works. The updated platform uses Gemini to recommend creators from a pool of more than three million YouTube Partner Program members, filtered by campaign goals. Advertisers get more control over who they work with and better visibility into how those partnerships perform.

The big new feature. A revamped Creator Partnerships boost lets brands run creator-made content directly as Shorts and in-stream ads, formats YouTube says deliver an average 30% lift in conversions.

The big picture. The announcement builds on BrandConnect, YouTube’s existing creator monetization infrastructure, showing that the platform is doubling down on the creator economy as a growth lever for advertisers, not just a content strategy.

What’s next. Brands interested in the updated tools can watch the full NewFront presentation on YouTube for more details.

AI search engines cite Reddit, YouTube, and LinkedIn most: Study

31 March 2026 at 22:36

Reddit ranks as the most-cited domain in AI-generated answers, followed by YouTube and LinkedIn, based on a new analysis of 30 million sources by Peec AI, an AI search analytics tool.

The findings. Reddit was the most-cited source across ChatGPT, Google AI Mode, Gemini, Perplexity, and AI Overviews. YouTube, LinkedIn, Wikipedia, and Forbes also ranked in the top five. Review platforms like Yelp and G2 appeared often in recommendation queries.

The research showed which domains models rely on:

  • ChatGPT favored Wikipedia, Reddit, and editorial sites like Forbes.
  • Google leaned toward platforms like Facebook and Yelp.
  • Perplexity emphasized Reddit, LinkedIn, and G2 for B2B queries.

Why we care. To win in AI search, you need authority beyond your site. Brands that appear consistently across trusted third-party platforms are more likely to be cited.

Why these sources? AI systems prioritize perceived authority plus authentic user input:

  • Reddit leads because it captures real user discussions.
  • YouTube dominates video citations via transcripts and descriptions.
  • Wikipedia serves as both a live source and a training dataset.

About the data. The analysis covered 30 million sources across ChatGPT, Google AI Mode, Gemini, Perplexity, and AI Overviews, measuring domains directly cited in answers to isolate what shapes responses.

The study. Top domains cited by AI search: Analysis based on 30M sources


(PR) Sony and TCL Sign Definitive Agreements for Partnership in the Home Entertainment Field

31 March 2026 at 21:33
Sony Corporation ("Sony") and TCL Electronics Holdings Limited (together with its subsidiaries, "TCL") today announced that Sony and TCL have entered into legally binding definitive agreements for a strategic partnership in the home entertainment field. This follows the memorandum of understanding announced on January 20, 2026, pursuant to which both parties have been conducting discussions.

Under this partnership, Sony will establish a wholly owned subsidiary (the "Preparatory Company") to assume its home entertainment business, and TCL will subscribe to a portion of the Preparatory Company's shares, forming a joint venture (the "New Company") with TCL holding 51% and Sony holding 49% of the shares. The New Company will succeed to Sony's home entertainment business, which includes product development and design, manufacturing, sales and logistics, and customer service for products such as Consumer TVs (BRAVIA), B2B Flat Panel Displays (B2B BRAVIA), B2B LED Displays, projectors, and home audio equipment such as home theater systems and audio components. The New Company is expected to operate this integrated business globally.

TrueConf Zero-Day Exploited in Attacks on Southeast Asian Government Networks

A high-severity security flaw in the TrueConf client video conferencing software has been exploited in the wild as a zero-day as part of a campaign targeting government entities in Southeast Asia dubbed TrueChaos. The vulnerability in question is CVE-2026-3502 (CVSS score: 7.8), a lack of integrity check when fetching application update code, allowing an attacker to distribute a tampered update,

Google Gemini may adapt AI answers to match user tone: Report

31 March 2026 at 21:25

A newly published, unverified report claims Google’s Gemini AI is instructed to mirror user tone and validate emotions while grounding its responses in fact and reality.

Why we care. If accurate, AI-generated search responses may vary based on how a query is phrased, not just on the information available.

What’s new. The report centers on the inherent tension in the system-level instructions guiding how Gemini responds. The report, published by Elie Berreby, head of SEO and AI search at Adorama, suggested that Gemini is instructed to:

  • Match the user’s tone, energy, and intent.
  • Validate emotions before responding.
  • Deliver answers aligned with the user’s perspective.

What it means. The “overly supportive mandate frequently overrides the factual grounding,” Berreby wrote. So instead of acting as a neutral aggregator, AI answers may:

  • Reinforce negative framing (“Why is X bad?”).
  • Reinforce positive framing (“Why is X great?”).

If public perception is negative, AI may amplify it. As the report suggests:

  • AI reflects existing sentiment signals.
  • It doesn’t “balance” them the way blue links often do.

Query framing. The emotional framing of a query affects:

  • Which sources get cited.
  • How summaries are written.
  • The overall tone of the answer.

Google’s AI Overviews already show tone shifts, often aligning with query intent beyond keywords. This report offers a possible explanation.

Unverified. Google hasn’t confirmed the leak. As Berreby noted in his report: “I’ve decided to share only a fraction of the leaked internal system information with the general public. I’m not sharing any sensitive data. This isn’t a zero-day exploit. This is a tiny leak.”

The report. This Gemini Leak Means You Can’t Outrank a Feeling

Google expands Merchant Center loyalty features to 14 countries and AI surfaces

31 March 2026 at 21:17

Google is giving retailers more firepower to promote loyalty program benefits directly within product listings, expanding the program internationally and into its newest AI-powered shopping experiences.

What’s new. Merchants can now highlight member pricing and exclusive shipping options directly on listings. Loyalty annotations have also expanded to local inventory ads and regional Shopping ads, making it easier to promote in-store or geography-specific perks.

Why we care. The more you can personalize an offer for a shopper, the better. Embedding member perks into the moment of purchase discovery, rather than requiring a separate loyalty app or webpage, makes programs more visible and more likely to drive sign-ups.

By the numbers. According to Google, some retailers have reported up to a 20% lift in click-through rates when showing tailored offers to existing loyalty members.

The big picture. Loyalty benefits will now appear on Google’s AI-first surfaces, including AI Mode and Gemini, putting member offers in front of shoppers at an entirely new layer of the search experience.

Where it’s available. The expansion covers 14 countries: Australia, Brazil, Canada, France, Germany, India, Italy, Japan, Mexico, Netherlands, South Korea, Spain, the UK, and the US.

How to get started. Merchants activate the loyalty add-on in Merchant Center, configure member tiers, and set up pricing and shipping attributes. Connecting Customer Match lists in Google Ads is required to display strikethrough pricing and shipping perks to known members.

Don’t miss. US merchants can apply to join a pilot that uses Customer Match as a relationship data source for free listings, potentially expanding loyalty reach without additional ad spend.

Google explains how crawling works in 2026

31 March 2026 at 20:52

Gary Illyes from Google shared more details on Googlebot, Google’s crawling ecosystem, fetching, and how it processes bytes.

The article is titled "Inside Googlebot: demystifying crawling, fetching, and the bytes we process."

Googlebot. Google has more than one crawler; it operates many crawlers for many purposes, so referring to Googlebot as a single crawler is no longer entirely accurate. Google documents many of its crawlers and user agents in its official list.

Limits. Recently, Google spoke about its crawling limits. Now, Gary Illyes has dug into them in more detail. He said:

  • Googlebot currently fetches up to 2MB for any individual URL (excluding PDFs).
  • This means it crawls only the first 2MB of a resource, including the HTTP header.
  • For PDF files, the limit is 64MB.
  • Image and video crawlers typically have a wide range of threshold values, which largely depend on the product they’re fetching for.
  • For any other crawlers that don’t specify a limit, the default is 15MB regardless of content type.
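The byte limits above can be collected into a small lookup for reasoning about them. This is an illustrative sketch only; the key names and helper function are ours, not any Google API:

```python
# Per-fetch byte caps as described by Google, expressed as a lookup.
# Keys and the helper below are hypothetical conveniences.
FETCH_CAPS_MB = {
    "googlebot_html": 2,   # first 2 MB per URL, HTTP headers included
    "googlebot_pdf": 64,   # PDFs get a larger 64 MB cap
}
DEFAULT_CAP_MB = 15        # crawlers with no documented limit

def fetch_cap_bytes(kind: str) -> int:
    """Return the fetch cap in bytes for a given crawler/content kind."""
    return FETCH_CAPS_MB.get(kind, DEFAULT_CAP_MB) * 1024 * 1024
```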

Then what happens when Google crawls?

  1. Partial fetching: If your HTML file is larger than 2MB, Googlebot doesn’t reject the page. Instead, it stops the fetch exactly at the 2MB cutoff. Note that the limit includes HTTP request headers.
  2. Processing the cutoff: That downloaded portion (the first 2MB of bytes) is passed along to Google’s indexing systems and the Web Rendering Service (WRS) as if it were the complete file.
  3. The unseen bytes: Any bytes that exist after that 2MB threshold are entirely ignored. They aren’t fetched, they aren’t rendered, and they aren’t indexed.
  4. Bringing in resources: Every referenced resource in the HTML (excluding media, fonts, and a few exotic files) will be fetched by WRS using Googlebot, just like the parent HTML. Each has its own separate, per-URL byte counter and doesn’t count towards the size of the parent page.
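The partial-fetch behavior in steps 1-3 can be sketched in a few lines. This is a toy model of what the article describes, not Googlebot's actual code; note how the cap applies to headers plus body together, so oversized pages are silently truncated rather than rejected:

```python
CAP = 2 * 1024 * 1024  # 2 MB, counting the HTTP headers

def partial_fetch(headers: bytes, body: bytes, cap: int = CAP) -> bytes:
    """Toy model of Googlebot's partial fetching: the page is never
    rejected for being too large; bytes past the cap simply never
    reach indexing or rendering."""
    return (headers + body)[:cap]
```

A 3 MB HTML document therefore reaches the indexing pipeline as its first 2 MB only, and anything past that offset is invisible to Google.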

How Google renders these bytes. When the crawler accesses these bytes, it passes them over to WRS, the Web Rendering Service. “The WRS processes JavaScript and executes client-side code similar to a modern browser to understand the final visual and textual state of the page. Rendering pulls in and executes JavaScript and CSS files, and processes XHR requests to better understand the page’s textual content and structure (it doesn’t request images or videos). For each requested resource, the 2MB limit also applies,” Google explained.

Best practices. Google listed these best practices:

  • Keep your HTML lean: Move heavy CSS and JavaScript to external files. While the initial HTML document is capped at 2MB, external scripts and stylesheets are fetched separately (subject to their own limits).
  • Order matters: Place your most critical elements, like meta tags, <title> elements, <link> elements, canonicals, and essential structured data, higher up in the HTML document. This makes it unlikely that they end up below the cutoff.
  • Monitor your server logs: Keep an eye on your server response times. If your server is struggling to serve bytes, Google’s fetchers will automatically back off to avoid overloading your infrastructure, which will drop your crawl frequency.
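One quick way to act on the "order matters" advice is to check the byte offset of each critical element against the cap. The helper below is a hypothetical sketch (the marker strings and function name are ours, and a production check would parse the HTML properly rather than search for substrings):

```python
CAP = 2 * 1024 * 1024  # Googlebot's 2 MB fetch cap

def critical_tags_within_cap(html: str, cap: int = CAP) -> dict:
    """Report whether each critical marker appears within the first
    `cap` bytes of the UTF-8-encoded document; find() returns -1 for
    markers that are missing entirely, which maps to False."""
    data = html.encode("utf-8")
    markers = ("<title", 'rel="canonical"', "application/ld+json")
    return {m: 0 <= data.find(m.encode()) < cap for m in markers}
```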

Podcast. Google also published a podcast episode on the topic, walking through crawling, fetching, and the bytes it processes.

'Only limited by the physics': inside Apple’s AirPods Max 2 and the H2 chip upgrade

Five years after its debut, Apple's AirPods Max 2 arrives with the same iconic design but a completely rebuilt interior, and the engineers behind it say the H2 chip's headroom means the best may be yet to come.

Whoop’s valuation just tripled to $10 billion

31 March 2026 at 20:58
The fitness tracking startup just closed a $575 million Series G with Cristiano Ronaldo and LeBron James among its investors. The obvious question looming over a round of this size at this valuation: Is an IPO coming?

Xbox Ally and Xbox Ally X's native SSDs are too tiny for storing games; fortunately, this amazing 1TB microSD with 245MB/s read speeds is now 41% off

31 March 2026 at 18:11
The SDSQXH9-1T00-GZ6MA model of the SanDisk 1TB Extreme microSD UHS-I Card is one of the best ways to upgrade your Xbox Ally/Xbox Ally X's storage size, and it's now on sale for a 41% discount thanks to World Backup Day.

Xbox celebrates its 25th anniversary with a Fanta partnership offering exclusive rewards for Forza Horizon 6, Halo, Call of Duty and more

31 March 2026 at 17:57
Xbox has officially unveiled its 25th anniversary partnership with Fanta, offering themed in-game rewards across Halo, Diablo IV, Call of Duty, World of Warcraft, and Forza Horizon 6, alongside prize giveaways and live event experiences.

(PR) Razer Introduces the Pro Type Ergo Wireless Keyboard Series

31 March 2026 at 20:19
Razer, the leading global lifestyle brand for gamers, today unveiled the Razer Pro Type Ergo, an ergonomic wireless keyboard designed to make long hours at the desk feel more natural and less fatiguing, while helping users get more done with less effort.

Pro Type Ergo is Razer's answer to a productivity market that has barely moved: a split-ergonomic keyboard that feels familiar from the first keystroke, cuts strain over time, and builds powerful workflow tools directly into the layout. For professionals who live on their keyboard, it is built to support comfortable, focused work all day, every day.

Eidos Montreal Cancels Unannounced Open-World Adventure Game 7 Years in Development as Studio Rocked by Layoffs

31 March 2026 at 20:14
Eidos Montreal announced earlier this week in a LinkedIn post that it was laying off 124 of its employees and that its studio director, David Anfossi, was leaving. The studio explained that the layoffs are the result of necessary cost-cutting measures and evolving project needs. The layoffs affect both production and support teams, and the studio says the cuts are necessary to allow it to concentrate its efforts where it can be most effective. Now, new reporting suggests that the layoffs may have been partially caused by a game whose ballooning budget had put financial strain on the studio.

Following the layoffs, Insider Gaming reports that Eidos, which had previously worked on Deus Ex: Mankind Divided and Marvel's Guardians of the Galaxy, has also cancelled Wildlands, an in-development open-world action-adventure game that the studio had been working on for seven years. The publication reports that Wildlands had a somewhat troubled development cycle prior to its cancellation, with the team having gone through four different game engines and burned through hundreds of millions of dollars in budget. These reports are further backed up by Jason Schreier's comments on Reddit. Further, according to Insider Gaming's sources, the game was in the debugging phase and nearing completion before Embracer, Eidos's parent company, shut it down.

Intel Binary Optimization Tool Changes Code Execution with Heavy Vectorization

31 March 2026 at 20:07
The Intel Binary Optimization Tool (BOT) has been launched alongside the "Arrow Lake Refresh" series of processors, which includes the Core Ultra 5 250K Plus and Core Ultra 7 270K Plus models. While the tool is beneficial for gamers looking to extract a few extra frames from their setups, it may be a nightmare for makers of benchmarking tools like Geekbench by Primate Labs. Recent Primate Labs testing found that BOT changes how .exe applications execute, and the company concluded that Geekbench will now flag BOT-enhanced runs. In deeper testing, however, Primate Labs discovered that Intel's BOT may deliver significant boosts in some workloads, such as Object Remover and HDR, increasing performance by up to 30%. This is thanks to the deep vectorization that BOT performs behind the scenes to optimize performance.

For example, Primate Labs used Intel's own Software Development Emulator (SDE) to measure how many instructions were executed and which types of instructions the program executed. Without BOT, Geekbench 6 required a total of 1.26 trillion instructions to finish, while a BOT-enhanced run completed with 1.08 trillion instructions. This is an impressive 14% reduction. However, when examining the execution by type, we see that BOT makes heavy use of vector instructions like SSE2 and AVX2. The number of scalar instructions needed to execute a program fell from 220 billion to 84.6 billion, while the number of vector instructions increased from 1.25 billion to 18.3 billion, a 13.7x increase. This means that Intel BOT finds a way to turn inefficient scalar code into vectorized instructions that are processed much more efficiently inside Intel CPUs. These measurements indicate a more complex behind-the-scenes process than was originally believed. The Geekbench v6.7 update will include a flag for BOT, allowing future Geekbench results to be easily distinguished as BOT-enhanced or not.
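The scalar-to-vector shift Primate Labs measured can be illustrated with a toy model: a SIMD unit such as AVX2 handles several lanes per instruction, so vectorizing a loop shrinks the instruction count even though each vector instruction does more work. The sketch below counts pseudo-instructions in pure Python and is only an analogy for what BOT does at the machine-code level:

```python
def scalar_add(a, b):
    """One 'instruction' per element, like unvectorized scalar code."""
    out, ops = [], 0
    for x, y in zip(a, b):
        out.append(x + y)
        ops += 1
    return out, ops

def vector_add(a, b, width=8):
    """One 'instruction' per width-wide chunk, like an 8-lane AVX2 add."""
    out, ops = [], 0
    for i in range(0, len(a), width):
        out.extend(x + y for x, y in zip(a[i:i + width], b[i:i + width]))
        ops += 1
    return out, ops
```

For 80 elements, scalar_add issues 80 pseudo-instructions while vector_add issues 10; producing the same output with far fewer, heavier instructions is exactly the trade-off BOT exploits.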

(PR) Blaze Entertainment Announces Evercade Nexus Retro Gaming Handheld

31 March 2026 at 20:02
Blaze Entertainment is proud to announce the Evercade Nexus, the newest retro gaming handheld console from Evercade. Evercade continues to champion physical cartridges as the medium to relive the classic gaming experience, bringing more top-quality names to the ever-growing ecosystem with Rare's Banjo-Kazooie and Banjo-Tooie included. The latest iteration of the Evercade gaming experience draws on the feedback of Evercade fans and the demands of gaming in the current age, while also keeping the simplicity and ease of use that Evercade provides, and the continuing commitment to the nostalgia and experience of using and collecting physical cartridge media.

The Evercade Nexus is built to play with an ultra-bright 5.89" screen, the biggest ever screen on an Evercade, with a peak brightness of over 500 nits. The new, larger design allows for dual analogue sticks, giving Evercade players the full experience of 64 and 32-bit games, and helping recreate the feel of arcade-style gaming in your hands. All of this in a new larger form factor that is still light and comfortable to use and travel with, and a sleek new look with a black color scheme and a customizable RGB light-up logo.

Xbox App and Game Bar Overlay Get Nifty Features for Gaming Handhelds

31 March 2026 at 19:41
Since the launch of the Steam Deck and the subsequent competing Windows gaming handhelds, Microsoft has been working on improving its UI for gaming consoles, culminating with the recent adoption of the Windows Full Screen Experience, which was later renamed to Xbox Mode. The latest update to Microsoft's gaming experience, however, comes by way of the Xbox app and its overlay, as spotted by ROGAllyLife. These new features will be available to everyone using compatible hardware and the Xbox app, although they are still in the preview version of the app, so they may only reach mainline status in a few weeks. The biggest update is a new display widget that was added to the Xbox Game Bar overlay, which adds controls like display refresh rate, resolution, projection mode, and Auto Super Resolution, allowing users to test different display configurations without leaving their games.

Users can now also change notification placement in the Xbox app, allowing them to see notifications without completely disrupting the gaming experience. The Xbox app now allows for eight notification placement options (three positions along each screen edge), and this can also be customized from the Game Bar overlay instead of necessitating a potentially game-breaking app switch. These updates are just the most recent in Microsoft's efforts to make handheld gaming more feasible on Windows, but it remains to be seen how Microsoft will change the regular Windows 11 experience after its recent promise to address quality and usability complaints.

AMD Quietly Renames Anti-Lag 2 to "FSR Latency Reduction 2.0"

31 March 2026 at 19:37
AMD has quietly renamed its Anti-Lag 2 technology as part of the FSR package, now calling it "FSR Latency Reduction 2.0." This move aligns with AMD's recent trend of rebranding FSR-related technologies. The AMD Radeon marketing team has successfully unified FidelityFX Super Resolution under the FSR branding, although Anti-Lag 2 was previously an exception, bundled with other AMD technologies. The advanced graphics technology, once known as FidelityFX Super Resolution, is now simply called "FSR." This change is reflected on AMD's official product page, which describes FSR as "formerly AMD FidelityFX Super Resolution." However, AMD has not formally announced this rebranding. These changes occurred before the official launch of the FSR "Redstone" product in late December last year. Now, every new announcement features the standard FSR language, suggesting that this renaming might be part of a broader update to Anti-Lag 2.

Since Anti-Lag 2 is aimed at gamers, it is now included in the FSR package as "FSR Latency Reduction 2.0." With FSR Redstone, AMD has already grouped four technologies under the FSR "Redstone" name: FSR Upscaling, FSR Frame Generation, FSR Ray Regeneration, and FSR Radiance Caching. If the renaming becomes more than just a label update, FSR Latency Reduction 2.0 could become the fifth component of the FSR "Redstone" suite. Technologies like AMD Anti-Lag 2 are specifically designed to reduce latency by improving CPU and GPU coordination. Even without frame generation, it can lower latency in a game, but it may be especially useful when synthetic frames are involved, helping to keep latency at a level where any added delay is far less noticeable.

YoloLiv YoloCam S3 Review: A 4K powerhouse

31 March 2026 at 17:00
YoloLiv's YoloCam S3 is a small, sturdy 4K/30 fps webcam that delivers excellent video β€” once you spend some time fiddling with the software. It's got a large sensor, a wide 82-degree field of view, and lightning-fast autofocus, but you'll need to plug it into a USB 3.0 port for it to function.

Manuscript – AI-powered publishing tool that never writes for you


Manuscript is two things. For publishing houses, it's a tool that streamlines the entire editorial process and makes it 10 times more efficient. It uses AI ethically, handling the tedious parts of editing while keeping the artful, human side of publishing exactly where it belongs: with humans.

For authors, Manuscript is a full workspace that gives you a complete toolbox but leaves the writing entirely to you. Think of it as a Scrivener alternative built for the 21st century, one that will never write for you.

View startup

TapHum – A presence app for the people who matter most


TapHum is a presence app that lets you tell someone you're thinking of them with a single tap. No messages, no emojis, no pressure to reply. Just tap their circle and they instantly feel it through a gentle vibration and a warm glow on their phone. It removes the need for words while keeping connections warm and effortless.

Each person in your circle gets their own glowing orb you can personalize with a custom color and nickname. Build daily streaks by tapping each other, see your shared timeline grow over time, and connect through QR codes or invite links. TapHum is for the people you don't need words with, like partners, parents, and best friends who just need to know you're there.

View startup

59% of SEO jobs are now senior-level roles: Study

31 March 2026 at 19:43

SEO hiring is shifting toward senior, strategy-led roles as AI reshapes search and expands the scope of the job. A new Semrush analysis of 3,900 listings shows companies now prioritize leadership, experimentation, and cross-channel visibility over pure technical execution.

Why we care. SEO hiring, career paths, and required skills are changing. Entry roles focus on execution, while most demand sits at the leadership level: owning strategy across search, AI assistants, and paid channels, with clear revenue impact.

What changed. Senior roles dominated, accounting for 59% of listings. Mid-level roles, such as specialists (15%) and managers (10%), trailed far behind.

  • Companies are shifting budget toward strategy as AI tools absorb more execution work.

The skills shift. In-demand capabilities extend beyond traditional SEO into coordination, testing, and decision-making:

  • Project management appeared in more than 30% of listings.
  • Communication led non-senior roles at 39.4%.
  • Experimentation appeared in 23.9% of senior roles compared with 14% of other roles.
  • Technical SEO appeared in about 6% of listings.

Tools and channels. The SEO tech stack now spans analytics, paid media, and data.

  • Google Analytics appeared in up to 47.7% of listings.
  • Google Ads appeared in 29% of listings.
  • SQL demand grew at the senior level.
  • AI tools like ChatGPT were increasingly listed.

AI expectations: AI literacy is moving from optional to expected:

  • 31% of senior roles mentioned AI.
  • Nearly 10% referenced LLM familiarity.
  • AI search concepts such as AEO appeared more often.

Pay and positioning: SEO is increasingly treated as a business function.

  • The median salary for senior roles reached $130,000, compared to $71,630 for others. Some listings were much higher.
  • Degree preferences skewed toward business and marketing.

Remote work is now standard. More than 40% of listings offered remote options, with little difference by seniority.

About the data: Semrush analyzed 3,900 U.S.-based SEO job listings from Indeed as of Nov. 25. Roles were deduplicated, segmented by seniority, and analyzed using semantic keyword extraction.

The study. What 3,900 SEO Job Listings Reveal for 2026: Experiments, AI, and Six-Figure Salaries

Technical SEO for generative search: Optimizing for AI agents

31 March 2026 at 19:00

Technical SEO extends beyond indexing to how content is discovered and used, especially as AI systems generate answers instead of listing pages.

For generative engine optimization (GEO), the underlying tools and frameworks remain largely the same, but how you implement them determines whether your content gets surfaced or overlooked.

That means focusing on how AI agents access your site, how content is structured for extraction, and how reliably it can be interpreted and reused in generated responses.

Agentic access control: Managing the bot frontier

From a technical standpoint, robots.txt is a tool you already use in your SEO arsenal. You need to name the right crawlers in the file to grant specific bots their own access rights.

For example, you may want a training model like GPTBot to have access to your /public/ folder, but not your /private/ folder, and would need to do something like this:

User-agent: GPTBot
Allow: /public/
Disallow: /private/

You’ll also need to decide between model training and real-time search and citations. You might consider disallowing GPTBot and allowing OAI-SearchBot.

Within your robots.txt, you also need to consider Perplexity and Claude standards, which are tied to these bots:

Claude

  • ClaudeBot (Training)
  • Claude-User (Retrieval/Search)
  • Claude-SearchBot

Perplexity

  • PerplexityBot (Crawler)
  • Perplexity-User (Searcher)
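Putting the bot names above together, a combined robots.txt might look like the sketch below. Which bots you allow is a policy decision; the split here (block training crawlers, allow retrieval and search bots) is purely illustrative:

```txt
# Block model training crawlers
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

# Allow real-time search and retrieval bots
User-agent: OAI-SearchBot
Allow: /

User-agent: Claude-User
Allow: /

User-agent: PerplexityBot
Allow: /
```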

Adding to your agentic access is another new protocol: llms.txt, a markdown-based standard that provides a structured way for AI agents to access and understand your content.

While it’s not integrated into every agent’s algorithm or design, it’s a protocol worth paying attention to. Perplexity, for example, publishes its own llms.txt that you can use as a reference. You’ll come across two flavors of llms.txt:

  • llms.txt: A concise map of links.
  • llms-full.txt: A full aggregate of your text content, so agents don’t have to crawl your entire site.
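The llms.txt convention is simple markdown: an H1 title, a blockquote summary, and H2 sections of annotated links. A hypothetical sketch (the example.com URLs and section names are placeholders):

```markdown
# Example Brand

> One-line description of what this site covers and who it serves.

## Docs
- [Getting started](https://example.com/docs/start.md): Setup guide
- [API reference](https://example.com/docs/api.md): Endpoints and parameters

## Optional
- [Blog](https://example.com/blog.md): Longer-form articles
```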

Even if Google and other AI tools aren’t reading llms.txt yet, it’s worth adopting for future use; John Mueller has commented on it publicly.

Extractability: Making content ‘fragment-ready’

GEO focuses more on chunks of information, or fragments, to provide precise answers. Bloat is the enemy of extractability, and AI retrieval struggles with:

  • JavaScript execution.
  • Keyword-optimized content rather than entity-optimized content.
  • Weak content structures that fail to provide clear, concise answers.

You want your core content visible to users, bots, and agents. Achieving this goal is easier when you use semantic HTML, such as:

  • <article>
  • <section>
  • <aside>

The goal? Separate core facts from boilerplate content so your site shows up in answer blocks. Keep your context window lean so AI agents can read your pages without truncation. Creating content fragments will feed both search engines and agentic bots.
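As a sketch of that separation, core facts live in article and section elements while boilerplate sits in an aside (the content here is invented):

```html
<article>
  <h1>What is generative engine optimization?</h1>
  <section>
    <h2>Definition</h2>
    <p>GEO is the practice of structuring content so AI systems can
       extract and cite it accurately.</p>
  </section>
  <aside>
    <!-- Related links, promos, and other boilerplate stay outside the core answer -->
    <p>Related reading and newsletter sign-up live here.</p>
  </aside>
</article>
```

An agent that only keeps the article and section content still gets a complete, citable fragment.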

Dig deeper: How to chunk content and when it’s worth it

Structured data: The knowledge graph connective tissue

Schema.org has been a go-to for rich snippets, but it’s also evolving into a way to connect your entities online. What do I mean by this? In 2026, you can (and should) consider making these schemas a priority:

  • Organization and sameAs: A way to link your site to verified entities about you, such as Wikipedia, LinkedIn, or Crunchbase.
  • FAQPage and HowTo: Sections of low-hanging fruit in your content, such as your FAQs or how-to content.
  • significantLink: A property that tells agents, “Hey, this is an authoritative pillar of information.”
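An Organization block with sameAs might look like the following JSON-LD sketch (the brand name and profile URLs are placeholders):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Example Brand",
  "url": "https://example.com",
  "sameAs": [
    "https://en.wikipedia.org/wiki/Example_Brand",
    "https://www.linkedin.com/company/example-brand",
    "https://www.crunchbase.com/organization/example-brand"
  ]
}
</script>
```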

Connecting information and data for agents makes it easier for your site or business to be presented on these platforms. Once you have the basics down, you can then focus on performance and freshness.



Performance and freshness: The latency of truth

AI is constantly scouring the internet to maintain a fresh dataset. If the information goes stale, the platform becomes less valuable to users, which is why retrieval-augmented generation (RAG) must become a focal point for you.

RAG allows AI models, like ChatGPT, to inject external context into a response through a prompt at runtime. You want your site to be part of an AI’s live search, which means following the recommendations from the previous sections. Additionally, focus on factors such as page speed, server response time, and errors.

In addition to RAG, add “last updated” signals to your content. The <time datetime=""> element is one way to achieve this, along with date properties in your schema markup, which are critical components for:

  • News queries.
  • Technical queries.
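A minimal sketch of a machine-readable freshness signal using the time element (the date and microdata markup are illustrative; a dateModified property in JSON-LD works too):

```html
<article itemscope itemtype="https://schema.org/Article">
  <p>Last updated:
    <time itemprop="dateModified" datetime="2026-03-31">March 31, 2026</time>
  </p>
</article>
```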

You can now start measuring your success through audits to see how your efforts are translating into real results for your clients.

Dig deeper: How to keep your content fresh in the age of AI

Measuring success: The GEO technical audit

You have everything in place and ready to go, but without audits, there’s no way to benchmark your success. A few audit areas to focus on are:

  • Citation share: Rankings still exist, but it’s time to focus on mentions as well. You can do this manually, but for larger sites you’ll want to use tools like Semrush.
  • Log file analysis: Are agents hitting your site? If so, which agents are where? You can do this through log analysis and even use AI to help parse all of the data for you.
  • The zero-click referral: Custom tracking parameters can help you identify traffic origins and “read more” links, but they only paint part of the picture. You also need to be aware that agents may append your parameters, which can impact your true referral figures.
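The log-file analysis above can start as simply as counting which AI bots hit which paths. A minimal Python sketch (the sample log lines and bot list are illustrative; adapt the regex to your server’s log format):

```python
import re
from collections import Counter

# AI crawler user-agent substrings to look for (illustrative list)
AI_BOTS = ["GPTBot", "OAI-SearchBot", "ClaudeBot", "Claude-User",
           "PerplexityBot", "Perplexity-User"]

def count_agent_hits(log_lines):
    """Tally hits per AI bot and per (bot, path) pair from access-log lines."""
    hits = Counter()
    paths = Counter()
    for line in log_lines:
        # Assumes a common/combined log format: ... "GET /path HTTP/1.1" ... "User-Agent"
        m = re.search(r'"(?:GET|POST) (\S+)', line)
        for bot in AI_BOTS:
            if bot in line:
                hits[bot] += 1
                if m:
                    paths[(bot, m.group(1))] += 1
    return hits, paths

sample = [
    '1.2.3.4 - - [31/Mar/2026] "GET /public/guide HTTP/1.1" 200 512 "-" "Mozilla/5.0 (compatible; GPTBot/1.0)"',
    '5.6.7.8 - - [31/Mar/2026] "GET /pricing HTTP/1.1" 200 204 "-" "PerplexityBot/1.0"',
]
hits, paths = count_agent_hits(sample)
print(hits["GPTBot"], hits["PerplexityBot"])  # 1 1
```

From there, feeding the tallies into a spreadsheet, or an LLM, for pattern-spotting is straightforward.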

Measuring success shows you the validity of your efforts and ensures you have KPIs you can share with clients or management.

Scaling GEO into 2027

Preparing your GEO strategy for 2027 requires changes in how you approach technical SEO, but it still builds on your current efforts. You’ll want to automate as much as you can, especially in a world with millions of custom GPTs.

Manual optimization? Ditch it for something that scales without requiring endless man-hours.

Technical SEO was long the core of ranking a site and ensuring you provided search bots and crawlers with an asset that was easy to crawl and index.

Now? It’s shifting.

Your site must become the de facto source of truth for the world’s models, and this is only possible by using the tools at your disposal.

Start with your robots.txt and work your way up to structure, fragmented data, and extractability. Audit your success over time and keep tweaking your efforts until you see positive results. Then, scale with automation.

Health data giant CareCloud says hackers accessed patients’ medical records

31 March 2026 at 18:50
CareCloud, a major provider of medical records storage, said hackers accessed one of its repositories of patient data earlier in March. It provides technology for more than 45,000 providers covering millions of patients.

The company behind ClassPass and Mindbody just got a lot bigger with a $7.5B merger

31 March 2026 at 18:00
The merger is a sign that the fitness industry is continuing to move toward consolidation to compete at a larger scale. Recent moves include MyFitnessPal acquiring Cal AI, an AI calorie counting app, and Strava buying two apps: cycling app The Breakaway and running app Runna.

Exclusive: Runway launches $10M fund, Builders program to support early-stage AI startups

31 March 2026 at 18:00
Runway is launching a $10 million fund and startup program to back companies building with its AI video models, as it pushes toward interactive, real-time “video intelligence” applications.

Google Explains Googlebot Byte Limits And Crawling Architecture

31 March 2026 at 19:28

Google's Gary Illyes published a blog post explaining how Googlebot works as one client of a centralized crawling platform, with new byte-level details.

The post Google Explains Googlebot Byte Limits And Crawling Architecture appeared first on Search Engine Journal.

Nvidia DLSS 6x and Dynamic Frame Generation have arrived

The new Nvidia App beta enables DLSS 4.5 Dynamic Frame Generation and 6x Frame Generation With the newest Nvidia App beta, Nvidia has officially released DLSS 4.5 Dynamic Frame Generation and 6x Frame Generation. These options are available as DLSS Overrides on the Nvidia App and should become available to all RTX 50 series GPU […]

The post Nvidia DLSS 6x and Dynamic Frame Generation have arrived appeared first on OC3D.

(PR) Slimbook Updates Kymera Desktop Line with More Options

31 March 2026 at 18:53
We continue to improve our Kymera desktop line, not only in hardware but also in the way you can explore and configure each system. We are presenting new product pages for our lines, along with the return of one of the most requested formats by the community.

Kymera Cristal: power you can see
The Slimbook Kymera Cristal features a design that doesn't hide its power, it showcases it. Thanks to its tempered glass front and side panels, every component of the system becomes part of the design, proudly displaying the hardware with precision. This is a solution aimed at those seeking a system that combines extreme performance with striking aesthetics. From RGB-lit configurations to more understated setups, Kymera Cristal allows you to create an environment that reflects your style.

Analysts Predict PS6 and Xbox Helix Prices As High As $999

31 March 2026 at 18:38
We recently heard from industry insiders that a $699 PlayStation 6 may still be theoretically possible, even with the current hardware market conditions resulting in steep prices for components like memory and storage. Shortly following that report, though, industry analyst Matt Piscatella (via GamesRadar+) predicted that both the PlayStation 6 and the Xbox Helix consoles may cost as much as $999. He largely blames AI industry demand and inflated hardware prices for the price increase, but noted that there isn't much certainty in the current hardware market, whether you're considering launch dates or pricing. Dr. Serkan Toto, CEO of Kantan Games, a consultancy firm, noted that, with the recent price increases to the PS5 line-up, Sony may have "baked in potential future fluctuations...instead of raising prices more frequently and over a longer period of time."

Toto goes on to say that "I think $999 at least for one variant of the PS6 is not impossible," potentially alluding to a PS6 Pro, if current industry prices are anything to go by, but Joost van Dreunen, a video games professor at NYU, argues that "we're quickly moving towards a world in which a $1,000 console will be the norm, and console gaming will be a luxury expenditure." Van Dreunen goes on to predict that we may see the next-gen consoles start at a 50% higher price than the current generation, which would mean a $600 starting price for the base PS6 digital edition and somewhere in the region of $750 for the disc drive model. On the Microsoft side of things, this would put the base model Xbox Helix somewhere around $450, while the "Series X" version would be around $750. Sony is also slated to release a standalone handheld game console that has been commonly referred to as the PlayStation Portable, but there is no indication of pricing on that.

Sony Suspends SD and CFexpress Memory Cards Production

31 March 2026 at 18:14
Sony suspended orders for almost its entire lineup of SD and CFexpress memory cards. The company is citing the global semiconductor shortage that has made it impossible to meet demand. The move, announced by Sony Japan and spotted by PetaPixel, effectively pauses shipments to both partners and direct customers starting March 27. The suspension covers nearly the company's entire lineup, including CFexpress Type A and Type B cards, as well as higher-end SD offerings such as TOUGH-branded models. Lower-tier SD cards are also affected, suggesting the shortage isn't limited to premium components. Sony says supply is unlikely to meet demand "for the foreseeable future," and has stopped accepting new orders from distributors and through its own store.

A few exceptions remain. The 960 GB CFexpress Type B card is still in production, alongside some entry-level SF-UZ series SD cards, though the latter are already largely phased out in certain regions. More specifically, on the CFexpress side, all Type A capacities are affected (240 GB, 480 GB, 960 GB, and 1920 GB), along with the 240 GB and 480 GB Type B cards. On the SD side, the entire TOUGH lineup (64 GB, 128 GB, 256 GB), standard V60 cards across all capacities, and even budget V30 64 GB and 128 GB options are suspended. Existing inventory is still moving through the supply chain, so cards will remain available at retail for now, but restocking will stop once that supply runs out. Sony hasn't provided a timeline for resuming production, stating it will monitor component availability before making a decision.

NVIDIA DLSS 4.5 Dynamic Multi-Frame Generation and 6x Mode Officially Arrive

31 March 2026 at 17:29
NVIDIA has finally launched its long-teased Dynamic Multi Frame Generation (MFG) and Multi Frame Generation 6x mode today through a new NVIDIA app beta update. This marks the full public release of NVIDIA's DLSS 4.5 technology suite, which enables the GPU to generate up to five additional frames following each traditionally rendered frame using generative AI. Using the new MFG 6x mode results in a 6x performance uplift, meaning a game that traditionally runs at 60 FPS can now reach 360 FPS. Users will need to enable "beta and experimental features" in the NVIDIA app's Settings menu, and the GeForce Game Ready Driver 595.79 WHQL or newer is required to access all features. This will give a limited set of games (for now) a massive performance uplift, which includes ARC Raiders Flashpoint, Marvel Rivals Season 7, 007 First Light, CONTROL Resonant, and Tides of Annihilation. More games will get the official support as NVIDIA is working with game studios.

However, for setups where a monitor is maxed out at 240 Hz or 144 Hz, as many gaming panels are, using 6x MFG would be overkill. This is where Dynamic MFG comes into play. The technology determines which MFG multiplier is needed based on the display's refresh rate capability and the input framerate from the upscaler. NVIDIA calls this the "automatic transmission" for MFG, drawing a parallel to modern vehicle automatic transmission systems that switch gears based on demand. In graphically intensive scenarios, the multiplier can scale up to 4x, 5x, or 6x, while lighter scenes like settings menus or static sequences may only require a 2x multiplier to hit the target frame rate.

Vertex AI Vulnerability Exposes Google Cloud Data and Private Artifacts

Cybersecurity researchers have disclosed a security "blind spot" in Google Cloud's Vertex AI platform that could allow artificial intelligence (AI) agents to be weaponized by an attacker to gain unauthorized access to sensitive data and compromise an organization's cloud environment. According to Palo Alto Networks Unit 42, the issue relates to how the Vertex AI permission model can be misused.

The push layer returns: Why ‘publish and wait’ is half a strategy

31 March 2026 at 18:00

In 1998, submitting a website to search engines was manual, methodical, and genuinely tedious. I remember 17 of them: AltaVista, Yahoo Directory, Excite, Infoseek, Lycos, WebCrawler, HotBot, Northern Light, Ask Jeeves, DMOZ, Snap, LookSmart, GoTo.com, AllTheWeb, Inktomi, iWon, and About.com.

Each had its own form, process, and wait time, and its own quiet judgment about whether your URL was worth including. We submitted manually, 18,000 pages in all. Yawn.

Google was barely a year old when we were doing this. But they were already building the thing that would make submission irrelevant.

PageRank meant Google followed links, and a site that other sites linked to would be found whether it submitted or not. The other 17 engines waited to be told about content. Google went looking, and within a few years, they got so good at finding content that manual submission became the exception rather than the norm.

You published, you waited, the bots arrived. For 20 years, that was the deal, and SEO optimized for a crawler that would show up sooner or later.

The irony is that we’re now shifting back. Not because Google got worse at finding things, but because the game has expanded in ways that pull alone can’t cover, and the revenue flowing through assistive and agentic channels doesn’t wait for a bot.

Your opportunities to skip gates

Pull isn’t the only entry mode

The pull model (bot discovers, selects, and fetches) remains the dominant entry mode for the web index. What’s changed is that pull is now one of five entry modes into the AI engine pipeline (the 10-gate sequence through which content passes before any AI system can recommend it), not the only one.

The pipeline has expanded: new modes have been added alongside the existing model rather than replacing it, and the single entry mode that was the norm for 20 years has become five.

What follows is my taxonomy of those five modes, with an explanation of the advantages each one gives you at the two gates that determine whether content can compete: indexing and annotation.

The five entry modes differ by gates skipped, signal preserved, and revenue reached

Mode 1: Pull model

Traditional crawl-based discovery where all 10 pipeline gates apply and the bot decides everything. You start at gate zero and have no structural advantage by the time your content gets to annotation (which is where that content starts to contribute to your AI assistive agent/engine strategy). You’re entirely dependent on the bot’s schedule and the quality of what it finds when it arrives.

Mode 2: Push discovery

The brand proactively notifies the system that content exists or has changed, through IndexNow or manual submission.

Fabrice Canel built IndexNow at Bing for exactly this purpose: “IndexNow is all about knowing ‘now.’” It skips discovery, improves the chances of selection, and gets you straight to crawl. The content still needs to be crawled, rendered, and indexed, because IndexNow is a hint, not a guarantee.

You win speed and priority queue position, which means your content is eligible for recommendation days or weeks earlier than a competitor who waited for the bot. In fast-moving categories, that window is the difference between being in the answer and being absent from it.
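An IndexNow submission is a small JSON POST. A hedged Python sketch of building the payload (the key and URLs are hypothetical; per the protocol, the key file must also be served from your host to prove ownership):

```python
import json

def build_indexnow_payload(host, key, urls):
    """Assemble the JSON body for an IndexNow POST submission."""
    return {
        "host": host,
        "key": key,
        "keyLocation": f"https://{host}/{key}.txt",  # key file proves ownership
        "urlList": urls,
    }

payload = build_indexnow_payload(
    "www.example.com",
    "c6d7e8f9a0b1",  # hypothetical API key
    ["https://www.example.com/new-post", "https://www.example.com/updated-page"],
)
# POST this as JSON to https://api.indexnow.org/indexnow with
# Content-Type: application/json; charset=utf-8
print(json.dumps(payload, indent=2))
```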

Note: WebMCP helps with Modes 1 and 2 by making crawling, rendering, and indexing more reliable, retaining signal and confidence that would otherwise be lost through those three gates.

Because confidence is multiplicative across the pipeline, a higher passage rate at crawling, rendering, and indexing means your content arrives at annotation with significantly more surviving signal than a standard crawl delivers. The structural advantage compounds from there.

Mode 3: Push data

Structured data goes directly into the system’s index, bypassing the entire bot phase. Google Merchant Center pushes product data with GTINs, prices, availability, and structured attributes. OpenAI’s Product Feed Specification powers ChatGPT Shopping, which supports 15-minute refresh cycles.

Discovery, selection, crawling, and rendering don’t exist for this content, and the “translation” at the indexing phase is seamless: it arrives at indexing already in machine-readable format, four gates skipped and one improved. That means the annotation advantage is significant.

This is where the money is for product-led businesses: crawled content arrives as unstructured prose the system has to interpret, while feed content arrives pre-labeled with explicit machine-readable entity type, category, and attributes. By structuring the data and injecting it directly into indexing, you’re solving a huge chunk of the classification problem at annotation, which, as you’ll see in the next article, is the single most important step in the 10-gate sequence.
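To make the contrast concrete, here is a hedged sketch of what pre-labeled feed data looks like, using Merchant Center-style attribute names (all values are invented, and exact field names vary by feed specification):

```json
{
  "offerId": "sku-1042",
  "title": "Example trail running shoe",
  "gtin": "00012345678905",
  "price": {"value": "129.99", "currency": "USD"},
  "availability": "in stock",
  "googleProductCategory": "Apparel & Accessories > Shoes",
  "link": "https://www.example.com/products/sku-1042"
}
```

Every attribute a crawler would have to infer from prose is already an explicit key-value pair.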

As the confidence pipeline shows, each gate that passes at higher confidence compounds multiplicatively, so this is where you can get the “3x surviving-signal advantage” I outline in “The five infrastructure gates behind crawl, render, and index.”

Mode 4: Push via MCP

Model Context Protocol (MCP), a standard that lets AI agents query a brand’s live data during response generation, allows agents to retrieve data from brand systems on demand.

In February 2026, four infrastructure companies shipped agent commerce systems simultaneously. Stripe, Coinbase, Cloudflare, and OpenAI collectively wired a real-time transactional layer into the agent pipeline, live with Etsy and 1 million Shopify merchants.

Agentic commerce is key. MCP skips the entire DSCRI pipeline and then operates at three levels, each entering the pipeline at a different gate:

  • As a data source at recruitment.
  • As a grounding source at grounding.
  • As an action capability at won, where the transaction completes without a human in the loop.

The revenue consequences are already real: brands without MCP-ready data are losing transactions to those with it, because the agent can’t access their inventory, pricing, or availability in real time when it needs to make a decision. This is where you see multi-hundred percent gains in the surviving signal.
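Under the hood, MCP runs on JSON-RPC 2.0. An agent checking live availability might issue a tools/call request like this sketch (the tool name and arguments are hypothetical, defined by whatever MCP server the brand exposes):

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "check_inventory",
    "arguments": {"sku": "sku-1042", "zip": "10001"}
  }
}
```

If your systems can’t answer a query like this in real time, the agent moves on to a competitor that can.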

MCP is already simultaneously push and pull, depending on context.Β 

There’s a dimension to Mode 4 that most people don’t think about much: the agent querying your MCP connection isn’t always a Big Tech recommendation system. It’s increasingly the customer’s own AI, acting as their purchasing agent, evaluating your inventory and pricing in real time, with their credit card behind the query, completing the transaction without them opening a browser.

When your customer’s agent (let’s say OpenClaw-driven) comes knocking, agent-readable is the entry requirement. Agent-writable, the capacity for an agent to act, not just retrieve, is where you’ll make the conversion. The brands without writable infrastructure will be losing transactions to competitors whose systems answered the query and handled the action.

Mode 5: Ambient

This is structurally different from the other four. Where Modes 1 through 4 change how content enters the pipeline, ambient research changes what triggers execution of the final gates.

The AI proactively pushes a recommendation into the user’s workflow without any query: Gemini suggesting a consultant in Google Sheets, a meeting summary in Microsoft Teams surfacing an expert, and autocomplete recommending your brand.

Ambient is the reward for reaching recruitment with accumulated confidence high enough that the system fires the execution gates on the user’s behalf, without being asked. You can’t optimize for ambient directly. You earn it, and the brands that earn it capture the 95% of the market that isn’t actively searching.

Several people have told me my obsession with ambient is misplaced, theoretical, and not a real thing in 2026. I’ve experienced it myself already, but the clearest demonstration came at an Entrepreneurs’ Organization event where I was co-presenting with a French Microsoft AI specialist.

He demonstrated on Teams an unprompted push recommendation: a provider identified as the best solution to a problem his team had been discussing in the meeting. Nobody explicitly asked. Copilot listened, understood the problem, evaluated options, and push-recommended a supplier right after the meeting. Ambient isn’t theoretical. It’s running on Teams, Gmail, and other tools we all use daily, right now.



Every mode converges at annotation

Five entry modes, each with a different starting point, and they all converge at annotation. Annotation is the key to the entire pipeline. Every algorithm in the algorithmic trinity (LLM + knowledge graph + search) doesn’t use the content itself to recruit; it uses the annotations on your chunked content, and nothing reaches a user without being recruited.

Why is that important? Because accurate, complete, and confident annotation drives recruitment, and recruitment is competitive regardless of how content entered. A product feed arriving at indexing with zero lost signal competes at recruitment with a huge advantage over every crawled page, every other feed, and every MCP-connected competitor that entered by a different door.

You control more of this competition than most practitioners assume, but skipping gates gives you a structural advantage in surviving signal. It doesn’t exempt you from the competition itself.

That distinction matters here because annotation sits at the boundary. It’s the last absolute gate: the system classifies your content based on your signals, independently of what any competitor has done. Nobody else’s data changes how your entity is annotated. That makes annotation the last moment in the pipeline where you have the field entirely to yourself.

From recruitment onward, everything is relative. The field opens, every brand that passed annotation enters the same competitive pool, and the advantage you carried through the absolute phase becomes your starting position in a winner-takes-all race. Get annotation right, and you have a significant head start. Get it wrong, and no matter how much work you do to improve recruitment, grounding, or display, it will not catch up, because the misclassification and loss of confidence compound through every gate downstream.

Nobody in the industry was talking about this in 2020. I started making the point then, after a conversation on the record with Canel, and it still isn’t getting the attention it deserves.

Annotation is the key

Annotation is your last chance before competition arrives.

Search is one of three ways users encounter brands, and it’s the least valuable

The research modes on the user’s side have expanded, too. The SEO industry has traditionally focused on just one: implicit, when the user types a query. There was always one more: explicit brand queries, and now we have a third. Each research mode is defined by who initiates and what the user already knows.

Explicit research is the deliberate query, where the user asks for a specific brand, person, or product, and the system returns a full entity response (the AI rΓ©sumΓ© that replaces the brand SERP).Β 

This is the lowest-confidence mode of the three, because the user has already signaled very explicit intent: you’re only reaching people who already know your name. Bottom of the funnel, decision. Algorithmic confidence is important here to remove hedging (β€œthey say on their website,” β€œthey claim to be…”) and replace it with absolute enthusiasm (β€œworld leader in…,” β€œrenowned for…”).

Implicit research removes the explicit query. The AI introduces the brand as a recommendation (or advocates for you) within a broader answer, and the user discovers the brand because the system considers it relevant to the conversation, staking its own credibility on the inclusion. Top- and mid-funnel, awareness and consideration. Algorithmic confidence is vital here to beat the competition and get onto the list when a user asks β€œbest X in Y market” or be cited when a user asks β€œexplain topic X.”

Ambient research requires the highest confidence of all. The system pushes the brand into the user’s workflow with no query and no explicit request: the algorithm is making a unilateral decision that this user, in this context, at this moment, needs to see your brand. That requires very significant levels of algorithmic confidence.

The format is small: a sentence, a credential, a contextual mention. The audience reached is the largest: people not yet in-market, not yet actively looking, who encounter your brand because the AI decided they should. And the kicker is that your brand gets the sale before the competition even starts.

For me, this is the structural insight that inverts how most brands prioritize, and where the real money is hiding. They optimize for implicit research, where competition is highest, the target you need to hit is widest, and the work is hardest.

Most SEOs underestimate explicit research (where profitability is highest) and completely ignore ambient, which reaches the 95% who aren’t yet looking and requires the deepest entity foundation to trigger. I call this the confidence inversion, first documented in May 2025: the smallest format requires the highest investment, and it reaches the most valuable audience.

How algorithmic confidence affects the three research modes in AI

The entity home website is the single source that feeds every mode

In 2019, AI engineers spent 80% to 90% of their time collecting, cleaning, and labeling data, and the remaining 10% to 20% on the work they actually wanted to do. They wryly called themselves data janitors. Today, Gartner estimates 60% of enterprises are still effectively stuck in the 2019 model, manually scrubbing data, while the companies that got organized early compound their advantage.

The same split is happening with brand content and entity management, for the same reason. Every push mode described in this article draws on data: product attributes for merchant feeds, structured entity data for MCP connections, and corroborated identity claims for ambient triggering.

If that data lives in scattered, inconsistent, contradictory sources, every push attempt is expensive to implement, structurally weak on arrival, and liable to contradict the previous one. Inconsistency is the annotation killer: the system encounters two different versions of who you are from two different push moments, and confidence drops accordingly.

The framing gap, where your proof exists but the algorithm can’t connect it to a coherent entity model, is a direct consequence of disorganized data, and it costs you in recommendation frequency every day it persists.

The entity home website (the full site structured as an education hub for algorithms, bots, and humans simultaneously, built around entity pillar pages that declare specific identity facets) becomes the single source that feeds every mode simultaneously.

Pull, push discovery, push data, MCP, and ambient all draw from the same clean, consistent, non-contradictory data. You build the structure once, maintain it in one place, and you’re ready for push and pull modes today, and any to come that don’t yet exist.

Using your entity home website to feed the bots

AI handles 80%, humans protect the other 20%

That foundation is only as strong as the corrections made to it. How this works in practice depends on where you’re starting from. For enterprises, the website typically mirrors an internal data structure that already exists:

  • Product catalogs.
  • CRM records.
  • Service definitions.
  • Organizational hierarchies.

The website becomes the public representation of structured data that lives inside the business, and the primary challenge is integration and maintenance.

For smaller businesses and personal brands, the direction often runs the other way: building the entity home website well is what forces you to figure out how your business is actually structured, what you genuinely offer, who you serve, and how everything connects. The website imposes discipline.

We’re doing exactly this: centralizing everything as the structured data representation of the entire brand (personal or corporate). Getting the foundation right (who we are, what we offer, who we serve) is generally the heaviest lift. Building N-E-E-A-T-T credibility on top of that foundation is now comparatively straightforward, and every new push mode draws from the same organized source.

Here’s where using AI fits into this work. It can handle roughly 80% of the organization: extracting structure from existing content, proposing taxonomies, drafting entity descriptions, mapping relationships, and flagging gaps. What it does poorly, and what humans need to correct, are the three failure modes that propagate silently through every downstream gate:

  • Factual errors, where something is simply wrong.
  • Inaccuracies, where something is approximately right but imprecise enough to mislead.
  • Confusions, where two different concepts are conflated, or an entity is ambiguous between interpretations.

Confusion is the sneakiest because it looks like data, passes automated quality checks, enters the pipeline with apparent confidence, and then causes annotation to misclassify in ways that compound through every gate downstream.

Alongside the errors sit the missed opportunities, which are equally costly and considerably less obvious:

  • Lost N-E-E-A-T-T credibility opportunities, where the systems underestimate or undervalue the entity because credibility signals exist but aren’t structured, corroborated, or framed in a way the algorithmic trinity can read. The authority exists, but the machine doesn’t understand it.
  • Annotation misclassification, where the entity is indexed coherently but placed in the wrong category, meaning it competes for the wrong queries entirely and never appears in the contexts where it should win. Correctly classified competitors take the recommendation: your brand is present in the pipeline, but absent from the competition that matters to your business.
  • Untriggered deliverability, where understandability is solid and credibility has crossed the trust threshold, but topical authority signals haven’t accumulated densely enough to push the entity across the deliverability threshold for proactive recommendation. The machine knows who you are and trusts you. It just doesn’t advocate for you yet.

The human doing the correction and optimization work is the competitive advantage: finding where the errors and missed opportunities actually are, fixing the former, and acting on the latter.

The errors are surreptitious. The opportunities are non-obvious. Finding both is the work that compounds.

Organize once, feed every mode that exists and every mode to come

The push layer is expanding. The brands that organize their data now β€” not perfectly, but consistently, and with a system for maintaining it β€” are building the infrastructure from which every current and future entry mode draws.

The brands still publishing and waiting for the bot (Mode 1) are optimizing for the least advantageous mode in a five-mode landscape, and that disadvantage gap widens with every passing cycle.

This is the seventh piece in my AI authority series.

ChatGPT enables location sharing for more precise local responses

31 March 2026 at 17:43

OpenAI now allows users of ChatGPT to share their device location so that ChatGPT can know more precisely where the user is and serve better answers and results based on that location.

The feature is called location sharing. OpenAI wrote: “Sharing your device location is completely optional and off until you choose to enable it. You can update device location sharing in Settings > Data Controls at any time.”

What it does. If ChatGPT knows your location, it can return better local results. OpenAI wrote:

  • β€œPrecise location means ChatGPT can use your device’s specific location, such as an exact address, to provide more tailored results.”
  • β€œFor example, if you ask β€œwhat are the best coffee shops near me?”, ChatGPT can use your precise location to provide more relevant nearby results. On mobile devices, you can choose to toggle off precise location separately while keeping approximate device location sharing on for additional control.”

Privacy. OpenAI said, “ChatGPT deletes precise location data after it’s used to provide a more relevant response.” Here is how ChatGPT uses that information:

  • β€œIf ChatGPT’s response includes information related to your specific location, such as the names of nearby restaurants or maps, that information becomes part of your conversation like any other response and will remain in your chat history unless you delete the conversation.”

Does it work. Maybe not as well as you’d expect. Here is an example from Glenn Gabe:

I shared about the "Near Me ChatGPT Update" the other day and just let ChatGPT use my device location. This is supposed to enhance results for local queries. I just asked for the "best steakhouses near me" and several of the restaurants are ~45 minutes away. Both restaurants… pic.twitter.com/gRkMeuzMQt

β€” Glenn Gabe (@glenngabe) March 30, 2026

Why we care. Making ChatGPT’s local results better is a big deal in local search and local SEO. Knowing the user’s location, and better yet their precise location, gives ChatGPT much more context to work with.

Hopefully this will result in ChatGPT responding with more useful local results for users.

5-step Google Business Profile audit to improve local rankings

31 March 2026 at 17:00

Google Business Profile (GBP) may be getting shoved down the SERPs by ads and AI Overviews more than ever, but it’s still a top source of inbound leads for local businesses β€” and one of the fastest ways to improve rankings with simple fixes.

Here’s a five-step audit to find and fix the gaps most businesses miss.

1. Evaluate Google review velocity and recency

It’s a common misconception that the business with the most Google reviews wins in Google Maps ranking. While a high review count provides social proof, Google’s algorithm has more of a β€œwhat have you done for me lately?” attitude.

The number of reviews you get each month, and how recent your last review was, often outweigh the total count for the all-important map pack positions. We call these metrics review velocity and review recency.

Think about it like this: If you have 500 reviews but haven’t received a new one since 2024, a competitor with 100 fresh reviews from the last month will likely blow past you.

So, how do you measure your review velocity and recency? Analyze competitors to see how top-ranking businesses perform on those metrics.

Follow these steps:

  • Run a geo-grid ranking scan: Identify which competitors are outranking you for your top keywords.
  • Analyze the last 30 days: Note how many reviews they received this month, and when their most recent one was posted.
  • Benchmark your data: Create a simple table comparing your monthly count and recency.
  • Recommended tools: Places Scout, Local Falcon, or Whitespark for automated grid scans and review data.
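The velocity and recency math above is simple enough to script while you wait on tooling. Here is a minimal Python sketch of that benchmark, using made-up review dates for your profile and one competitor (in practice these dates would come from a tool like Places Scout or Whitespark):

```python
from datetime import date

def review_metrics(review_dates, today):
    """Return (velocity, recency): reviews received in the last
    30 days, and days since the most recent review."""
    velocity = sum(1 for d in review_dates if (today - d).days <= 30)
    recency = min((today - d).days for d in review_dates) if review_dates else None
    return velocity, recency

# Hypothetical benchmark data
today = date(2026, 3, 31)
yours = [date(2024, 11, 2), date(2025, 1, 15)]           # stale profile
competitor = [date(2026, 3, 29), date(2026, 3, 20), date(2026, 3, 5)]

print("You:       ", review_metrics(yours, today))
print("Competitor:", review_metrics(competitor, today))
```

Run monthly across your top competitors, this gives you the simple comparison table the audit calls for: if their velocity is consistently higher and their recency consistently lower than yours, that is the gap to close.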

You don’t just need more reviews. You need to match or exceed the consistency of top-ranking listings.

Lead Gen Reviews Performance

You can automate this with Places Scout API data. That’s what our agency does, tracking it consistently to keep clients ahead of competitors. Automated charts make it easier to see how you stack up.

Automating with Places Scout API data

Dig deeper: Local SEO sprints: A 90-day plan for service businesses in 2026

2. Add keywords to your business name

Including keywords in your business name is one of the most powerful local ranking signals. Sometimes a profile will rank in the map pack based solely on its name, beating out businesses with better reviews and higher recency.

Google’s algorithm hasn’t fully filtered out this type of keyword targeting, so it remains an opportunity. Take this business: only 21 reviews, yet it ranks first in the map pack for an extremely competitive term, thanks to the keywords in its business name.

AC repair dallas

You can’t simply keyword-stuff your name, though. Google can verify your legal name and take action to remove keywords from your profile — or worse, require reverification or suspend it. Your best option is a legal DBA (doing business as) certificate, also known in some areas as a trade name or fictitious name certificate.

For example, if your legal name is β€œSmith & Sons,” you’re missing out. Registering a DBA as β€œSmith & Sons HVAC Repair” allows you to update your GBP name while technically adhering to Google’s guidelines.

  • Competitor analysis: Are your competitors outranking you simply because their name contains the keyword? If yes, you need to take action to match those tactics.
  • Make it legal: Check your local Secretary of State website. Filing a DBA is an effective SEO tactic for moving from Position 4+ into the map pack for certain keywords.
  • Update business website: Update your website with the new name. Google uses website content to verify business details and may update your GBP accordingly. Make sure it only finds the new name, not outdated versions.

3. Optimize categories (primary vs. secondary)

Choosing the wrong primary category for your GBP is a leading reason businesses fail to rank. If you’re a personal injury lawyer but your primary category is set to “trial attorney,” you’re fighting an uphill battle to rank for highly competitive terms like “personal injury lawyer.”

How to pick the best primary category:

  • Competitor analysis: Use Chrome extensions like Pleper or GMB Everywhere to see exactly which primary categories the top-ranking businesses are using.
  • Max out secondary categories: You have 10 total slots. Fill all of them with relevant subcategories.
  • Check off all relevant services: Under each category, Google lists specific services. Select the ones relevant to your business.

Personal injury attorney - Google Search

Dig deeper: How to pick the right Google Business Profile categories



4. Improve your GBP landing page

Many businesses link their GBP to their homepage and stop there. For multi-location businesses, this is a mistake. You should link to a dedicated local landing page optimized for your top keywords that mentions the city your GBP address is in.

Linking your GBP to a hyper-local city page (e.g., /tampa-plumbing/ instead of the homepage) reinforces β€œentity alignment.” When the information on your GBP matches a unique, highly relevant page on your site, Google’s confidence in your location increases, often leading to a jump in the local pack. Make sure your GBP landing page is optimized with all your services and links to dedicated service pages to boost your listing for service-specific searches.

Watch out for the diversity update. Sometimes a business ranks well in the map pack, but its website is nowhere to be found in organic results. This is often due to Google’s diversity update.

If you suspect you’re being filtered out organically, try linking your GBP to a different localized interior page. This is often a quick fix that helps your site reappear in organic search. Here’s an example of a client I recently helped beat the diversity update with a simple GBP landing page swap.

GBP landing page swap results

Dig deeper: Google’s Local Pack isn’t random – it’s rewarding β€˜signal-fit’ brands

5. Understand proximity and city borders

Your business’s physical location within the city and its proximity to the city center are extremely strong ranking signals. They’re not easy to manipulate, since moving your office, store, or warehouse is rarely practical. However, you need to know your “ranking radius” and how much room there is to improve rankings for certain keywords within it.

Identify the ranking ceiling in your market. I use Local Falcon’s Share of Local Voice (SoLV) metric to do this. If your top competitors only have a 53% SoLV, as in this example, it’s unlikely you’ll be able to get more than that either.

Competitor Report - Local Falcon

This shows when you’ve β€œmaxed out” a keyword and need to target new keywords or open a new location outside that radius. It can also show there’s room to improve β€” and that you need to increase your SoLV score.

Keep in mind that certain keywords are harder to improve based on where your business is physically located. If your map pin sits outside the Google-defined border of a city, you will struggle to rank for explicit terms like “Plumber Tampa FL” and, more broadly, for searches within that city’s borders. Always do this analysis on a keyword-by-keyword basis.

Tip: In the current local search landscape, expanding your physical footprint, and verifying more GBPs, is the most reliable way to grow visibility. Max out your current GBPs first, then look for your next location.

Dig deeper: The proximity paradox: Beating local SEO’s distance bias

Prioritize where you can win now

This is a strong starting point, but it’s just the beginning. From review strategy and category selection to city borders and the diversity update, every detail counts.

Between overreaching ads and ever-expanding AI Overviews, staying proactive with your GBP strategy is the only way to keep your leads flowing from the map pack. Build your GBP foundation, max out your current locations, and strategize new locations to keep your business in the top spot across your service area.

(PR) Toshiba Begins Sampling of 30-34 TB SMR Nearline Hard Disk Drives

31 March 2026 at 16:58
Toshiba Electronic Devices & Storage Corporation (“Toshiba”) has announced the M12 Series of 3.5-inch nearline hard disk drives (HDDs) for hyperscale and cloud service providers operating large-scale data centers. The new series uses Shingled Magnetic Recording (SMR) technology to deliver storage capacities ranging from 30 to 34 TB. Sample shipments have begun, and Toshiba plans to follow in the third quarter of 2026 with samples of M12 drives that use Conventional Magnetic Recording (CMR) to deliver capacities of up to 28 TB.

Today is World Backup Day, the annual international initiative to remind companies and individuals of the importance of backing up and protecting their data. That need is now greater than ever, as the constant expansion of digital services and video content distribution, the widespread adoption of cloud services, and, most recently, the increasing use of data-hungry AI and data science, are driving forward immense growth in the volumes of data generated and stored worldwide.

NVIDIA's "Rubin Ultra" Reportedly Faces Issues With CoWoS-L Packaging

31 March 2026 at 16:50
NVIDIA is reportedly experiencing manufacturing issues with its next-generation "Rubin Ultra" GPU design, one of the company's most ambitious chip development projects, due to the limitations of modern packaging technology. The world's largest company is already shipping customer samples of the standard "Rubin" GPUs, with mass shipments set to begin this summer. However, the current roadmap for the upgraded "Rubin Ultra" design may be encountering technological limitations, as NVIDIA's design goals are too ambitious for TSMC's packaging capabilities. Reportedly, NVIDIA plans to double the regular "Rubin" two-die package with 8 HBM4 modules into a new "Rubin Ultra" package that will include four silicon dies and 16 HBM4E modules in a single package. This configuration is scheduled for 2027, but the sheer volume of silicon may be too much for TSMC's packaging, according to Global Semi Research.

In a typical CoWoS package, TSMC usually combines multiple smaller dies and multiple HBM memory modules into a unified package that supports the entire AI build-out. However, with the ambitious "Rubin Ultra" design, NVIDIA planned to use CoWoS-L, which was expected to handle the design and concept that "Rubin Ultra" was based on. It is rumored, however, that in a 2+2 die packageβ€”meaning four dies as in this architectureβ€”TSMC is encountering warping issues. The packageβ€”which includes a substrateβ€”is bending in multiple directions, causing the compute dies of "Rubin Ultra" to not make complete contact with the underlying substrate. This instability means that TSMC has to explore alternatives within its packaging portfolio. One of these alternatives is a panelized approach called CoPoS, which stands for Chip-on-Panel-on-Substrate.

(PR) QNAP Introduces QSW-M7230-2X4F24T L3 Lite 100 GbE Managed Switch

31 March 2026 at 16:44
QNAP Systems, Inc., a leading computing, networking, and storage solution innovator, today announced the launch of the QSW-M7230-2X4F24T, a new L3 Lite managed 100 GbE switch designed for enterprise network upgrades, high-performance storage environments, large-scale media production, virtualization, and AI-driven workloads. The new switch enables organizations to build a scalable 100 GbE core network while maintaining cost efficiency and protecting existing infrastructure investments.

As data-intensive applications continue to accelerateβ€”from AI computing and virtualization to collaborative media workflowsβ€”enterprises are increasingly challenged to evolve beyond 10GbE networks without incurring disruptive, large-scale replacements. The QSW-M7230-2X4F24T addresses this transition by providing a flexible, multi-speed architecture that allows enterprises to introduce higher-speed connectivity where it matters most, while expanding the core network over time.

(PR) Noctua and Asetek Announce Flagship AIO Liquid Coolers Complete PVT Phase, Targeted for Q2-2026 Launch

31 March 2026 at 16:33
Since the announcement of their collaboration at Computex 2025, Noctua, a leading quiet PC cooling brand, and Asetek, a pioneer in all-in-one (AIO) liquid cooling, have continued to advance their flagship AIO liquid coolers. The products have now successfully completed the Production Validation Test (PVT) phase, confirming performance and manufacturing readiness ahead of the planned Q2 2026 launch.

The Asetek Emma (G8) V2 pump operates at a nominal speed of approximately 3,600 RPM (Β±300 RPM). Through collaboration with Noctua, several key performance aspects have been enhanced. Firstly, a triple-layer noise-reduction pump cover reduces both air-borne noise and structure-borne vibrations. Secondly, a dedicated mode switch allows users to choose between three different pump speed profiles to fine-tune performance-to-noise characteristics.

(PR) Advantech Unveils SQRAM DDR5 7200 MT/s 64 GB Industrial Memory Modules

31 March 2026 at 16:26
Advantech (TWSE: 2395), a global leader in IoT intelligent systems and embedded platforms, today announced the expansion of its SQRAM DDR5 7200 MT/s industrial memory module series. Designed to meet the escalating data demands of Edge AI, the new modules offer a 12.5% performance increase over previous generations and a groundbreaking 64 GB per-module capacity, setting a new benchmark for stability and scalability in outdoor deployments.

12.5% Faster, Up to 64 GB per Module
The DDR5 7200 MT/s delivers a 12.5% performance increase compared to the previous DDR5 6400 generation. In addition to higher bandwidth, each module supports up to 64 GB capacity using 32 Gb IC technology. This enables AI PCs and high-end workstations to scale system memory up to 256 GB, fully addressing the growing demands of data-intensive Edge AI and computing applications.

the greatest expedition – A biking adventure around the world by a woman and man, live journey


The greatest expedition is a live reality adventure show that showcases the real world up close while traveling across the globe by bike. Two riders, a female and a male, travel each continent in a month on a reputable motorcycle provided by the company, then move on to another continent after completing their journey. There's prize money for each successful continent trip, and the couple that completes all continents wins a grand prize. The entire journey is recorded live, with interviews of interesting people they meet shared daily and a weekly episode of each couple's journey.

View startup

Plot Party – Create a drama with AI


Plot Party turns your ideas into visual storyboards and videos in minutes. Its AI agent selects the right models and keeps characters, styles, locations, and assets consistent across scenes. Build and tweak shots on a canvas, then polish with a native editor for trimming and subtitles. Create single stories, expand into a series, and publish worlds to engage your audience.

View startup

Silver Fox Expands Asia Cyber Campaign with AtlasCross RAT and Fake Domains

Chinese-speaking users are the target of an active campaign that uses typosquatted domains impersonating trusted software brands to deliver a previously undocumented remote access trojan named AtlasCross RAT. "The operation covers VPN clients, encrypted messengers, video conferencing tools, cryptocurrency trackers, and e-commerce applications, with eleven confirmed delivery domains impersonating

The AI Arms Race – Why Unified Exposure Management Is Becoming a Boardroom Priority

The cybersecurity landscape is accelerating at an unprecedented rate. What is emerging is not simply a rise in the number of vulnerabilities or tools, but a dramatic increase in speed. Speed of attack, speed of exploitation, and speed of change across modern environments. This is the defining challenge of the new era of digital warfare: the weaponization of Artificial Intelligence. Threat actors

A 6-point scorecard for AI-ready product pages

31 March 2026 at 16:00

AI search engines like ChatGPT, Google AI Mode, and Perplexity are changing how consumers discover and purchase products online. If your product pages aren’t optimized for these AI assistants, you could be missing out on a growing source of traffic and revenue.

The challenge? AI assistants don’t evaluate product pages in the same way traditional search engines do. They need to fully understand your products so they can confidently recommend them to different users with different needs.

To help you assess how well your product pages are optimized for AI search, here’s a simple scorecard covering the six most important factors.

1. Product specifications

Does the product page clearly display the product’s attributes and specifications?

AI assistants need clearly stated specifications to better understand your products and match them to customer needs. If a shopper asks an AI assistant for β€œan airline-friendly crate for a 115-pound dog,” the AI must be able to see the maximum weight limit of a product before it will recommend it. Without clear specifications, some products won’t get recommended, even if they’re actually a perfect match.

Amazon does this really well, and it’s likely one of the many keys to their strong performance in AI search. Just look at all the helpful specifications they clearly lay out on their product pages.

Amazon dog crate product page

Action item: Go through your product pages and make certain all applicable specifications are clearly displayed. Don’t bury them in the main product description or other marketing copy. Clearly lay them out in a structured table or bulleted list.

2. Unique selling points

Are the product’s unique benefits clearly described?

AI needs to understand both what makes your product stand out and why your products should be recommended over the competition. If a product page reads like every other industry website, AI assistants have no compelling reason to recommend the listed products.

Think about it from the AI’s perspective: If a user asks β€œwhat’s the best L-shaped sofa,” the AI will look for products with clear differentiators (hidden storage, machine-washable, modular parts, durability, etc.). The characteristics that make your product stand out should be explicitly stated on the page.

Here’s a great example from Home Reserve. Their product pages have a section called β€œKey Features” that lists the unique selling points that separate them from the competition.

Action item: Make sure your product pages clearly state what makes them better and why it matters to the customer. Keep your key features specific. Generic selling points like β€œhigh-quality craftsmanship” or β€œpremium materials” are too vague and don’t give AI assistants enough information to establish a clear differentiation.

Dig deeper: How AI-driven shopping discovery changes product page optimization

3. Use cases and target audience

Are the product’s intended use cases and audience clear?

AI assistants don’t match products to keywords β€” they match products to people and their unique needs. When a user asks ChatGPT, β€œwhat’s the best desk for a small apartment,” the AI looks for products intended for compact spaces, small rooms, or apartment living.

If a product page only describes the desk’s dimensions without connecting them to a particular use case, AI assistants may not recommend the product when users ask about those scenarios.

Any given product could have a multitude of use cases and audiences. A standing desk could be ideal for remote workers, people with back pain, gamers, or small business owners outfitting a home office. If a product page only speaks to one of these audiences, it might not get recommended to the others in AI search.

Action item: For each product, include the top three to five specific use cases or audience segments on the page. Go beyond demographics and think about situations, pain points, and goals.



4. FAQ section

Does the product page include an FAQ section answering common questions about the product?

AI assistants always try to connect products with the right buyer. When a user asks a question like, β€œwhat’s the best waterproof sealant for a flat roof,” the AI looks for information on product pages demonstrating they’re a good fit for the particular use case.

This is what makes FAQ content so valuable. A well-structured FAQ section can give AI assistants additional confidence that the product is a good fit for the user and worthy of a mention. The more specific and detailed your FAQ answers are, the more prompts your product can match within AI search.

For example, Liquid Rubber sells mulch glue and waterproof sealants. They do a great job of providing a clear list of frequently asked questions on their product pages.

FAQs on mulch glue

This type of FAQ content can help their products get recommended more often when users ask ChatGPT specific questions:

  • What’s the best VOC mulch glue?
  • Can I get mulch glue that will last up to 12 months?
  • Is there a mulch glue that delivers within one week?

Action item: Review your customer support inquiries, product reviews, competitor pages, and relevant Reddit threads to identify the most common customer questions. Then add these questions directly to your product pages with clear and concise answers.

Dig deeper: AI citations favor listicles, articles, product pages: Study

5. Product reviews

Does the product page display customer ratings and review counts?

AI assistants will recommend highly rated products with strong reputations. A product with 500+ reviews and a 4.8-star rating is a much safer recommendation than a product with zero reviews or a low rating.

Just ask ChatGPT for product recommendations, and you’ll see the product ratings front and center. Take, for example, the prompt, β€œWhat’s the best medium roast caramel flavored coffee?”

ChatGPT What’s the best medium roast caramel flavored coffee

It’s clear that ChatGPT relies heavily on product reviews and only recommends products with a high rating. When you click on any of these products, you’ll see that product ratings and the number of reviews are clearly displayed on the product page.

Bones Coffee Company - Salted caramel product page

Note: Your product’s rating in ChatGPT may differ from what’s on your product page. This is because ChatGPT calculates an aggregate rating across multiple merchants (e.g., Walmart, Target, etc.), rather than only pulling from your product page.

But having a strong rating isn’t enough β€” you need a lot of reviews as well. I recently reviewed 1,000 ecommerce-focused prompts and found that the median number of reviews was 156. So, if you want to increase your chances of getting recommended by ChatGPT (and other AI assistants), aim for at least 150+ product reviews.

Action item: Make sure your product pages clearly display customer ratings, review counts, and (ideally) some actual reviews. Third-party review platforms like Yotpo, Judge.me, and Shopper Approved can solicit product reviews from customers for you.

Dig deeper: How to make ecommerce product pages work in an AI-first world

6. Product structured data

Does the product page include structured data for price, availability, reviews, and other key attributes?

It’s easier for AI search engines to understand information presented in a clear structure (e.g., tables, lists). And nothing is more structured than JSON-LD structured data (also known as schema markup).

Product structured data
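Product structured data is usually embedded as JSON-LD inside an HTML script tag of type application/ld+json. As a minimal sketch, here is how a hypothetical product record covering the price, availability, and review fields discussed in this section could be assembled and serialized in Python (every value below is made up; the field names follow schema.org’s Product, Offer, and AggregateRating types):

```python
import json

# Hypothetical product record using schema.org vocabulary
product_jsonld = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Heavy-Duty Travel Dog Crate",
    "description": "Airline-friendly crate rated for dogs up to 120 lb.",
    "sku": "CRATE-120",
    "offers": {
        "@type": "Offer",
        "price": "189.99",
        "priceCurrency": "USD",
        "availability": "https://schema.org/InStock",
    },
    "aggregateRating": {
        "@type": "AggregateRating",
        "ratingValue": "4.8",
        "reviewCount": "212",
    },
}

# The resulting string is what would sit inside the page's
# <script type="application/ld+json"> block.
print(json.dumps(product_jsonld, indent=2))
```

The key point from the experiment below still applies: keep the values in this markup identical to what the visible page says, since AI assistants appear to read it as just another block of text rather than validating it as schema.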

There’s a common claim in AI SEO that structured data is some kind of magic bullet for AI visibility. The reality is more nuanced.

Structured data experiment

An interesting experiment conducted by SEO consultant Dan Taylor tested the impact of structured data on AI search. He included a physical address for a made-up company in the JSON-LD structured data, but didn’t include it anywhere in the page content itself. Then, when he asked ChatGPT for the address, it still pulled it from the structured data.

This experiment shows that AI assistants are indeed crawling structured data. But they’re not necessarily parsing it the same way a traditional search engine would. Instead, they’re simply treating it as another source of text on the page.

If the content in your schema is relevant to a user’s prompt, AI assistants will pick it up. But it doesn’t matter whether the schema is valid or completely made up.

Where structured data helps most

So, if AI assistants treat structured data like any other text, is it still worth adding it to your product pages? The short answer is β€œyes.”

Presenting important product information clearly and in a well-structured format always helps AI assistants understand your product pages. But the real advantage is in the product cards found within the AI responses.

Google uses its Knowledge Graph data in its AI systems, and this type of structured data, or schema markup, can feed into it. There are also reports of ChatGPT using Google Shopping data for its product recommendations.

Structured data benefits

So, the main advantage of structured data is how it plays into Google’s Knowledge Graph of products, which can directly impact product recommendations across Google AI Overviews, AI Mode, and even ChatGPT.

With the rise of agentic commerce, product data will only become more important as AI agents rely on it to compare, evaluate, and even purchase products on behalf of users.

Putting the scorecard to work

Here's a quick overview you can use to audit your product pages:

Once you've scored your highest-priority pages, any gaps become the priority on your AI product optimization roadmap. Tackle the "No" items first, since those represent the biggest missed opportunities, then work on upgrading the "Partial" scores.

This type of product optimization is still a blind spot for many ecommerce brands, which means every factor you improve is a chance to get recommended where competitors don't. The sooner you close these gaps, the harder it becomes for them to catch up.

Intel's Pure P-core "Bartlett Lake" Made to Run on Regular Z790 Motherboard via BIOS Mod

31 March 2026 at 15:50
Intel Core 200 "Bartlett Lake" is probably the most interesting processor gamers can't buy. Built on the Intel 7 node and designed for Socket LGA1700, "Bartlett Lake" is a non-hybrid, pure P-core chip: a monolithic die with 12 "Raptor Cove" P-cores and no E-core clusters. The 12-core/24-thread chip was launched earlier this month as an edge AI PC processor exclusive to the commercial and industrial PC OEM markets. It is not drop-in compatible with consumer Intel Z790 chipset motherboards, or at least that was the plan.

A motherboard UEFI firmware mod by "kryptonfly" got a consumer ASUS Z790-AYW OC motherboard to POST with an Intel Core 9 273QPE "Bartlett Lake" processor. The modder used Claude AI to mod the UEFI firmware of the board without tripping safeguards that prevent the motherboard from booting with modded firmware. The 273QPE is a 12-core/24-thread pure P-core processor with 2 MB of L2 cache per core, and 36 MB of shared L3 cache. Its uncore components and iGPU are carried over from "Raptor Lake-S." The 273QPE has a base frequency of 3.30 GHz, an all-core boost frequency of 5.30 GHz, and a single-core TVB frequency of 5.90 GHz. The chip has 125 W processor base power, and 250 W maximum turbo power. You can watch kryptonfly's firmware mod video from the source link below.

(PR) ASUS Announces the ExpertBook P5 G1 in 14 and 16-inch Sizes

31 March 2026 at 15:06
ASUS today announced ASUS ExpertBook P5 G1, a powerful and versatile business laptopβ€”with 14-inch and 16-inch display optionsβ€”designed to support the productivity needs of modern professionals. Combining dependable performance from up to an Intel Core Ultra 7 processor and a sleek and lightweight design, ASUS ExpertBook P5 G1 is engineered to deliver a reliable computing experience in offices, hybrid work environments, and for professionals on the move.

ASUS ExpertBook P5 G1, with its choice of 14-inch or 16-inch form factors, provides a flexible workspace in a highly portable design. The lightweight chassis starts at just 1.29 kg, making it easy to carry between meetings, offices or travel destinations. A 70Wh battery supports extended productivity throughout the workday, while the durable design meets MIL-STD-810H US military-grade standards, ensuring reliability in everyday business environments.

Modder gets Intel's OEM-only 'Bartlett Lake' CPU to post on a regular Asus Z790 motherboard: BIOS was edited by Claude AI to make the Core 9 273QPE boot

An enthusiast has managed to get Intel's non-consumer Bartlett Lake CPU to POST on a regular Z790 motherboard, thanks to AI. Claude modded the original BIOS of the board to detect a Core 9 273QPE and boot with it, but the setup hasn't gotten past the POST screen yet, and the modder is currently facing various error codes.

SprintDrip – Plan sprints, align async, and ship faster with AI insights


SprintDrip helps startups and small teams plan sprints, manage work, and stay aligned without the usual agile overhead. Set up fast, run async standups and retros, and replace status meetings with quick updates and real-time collaboration. Its AI copilot, Xia, turns updates and project data into summaries, insights, and actionable roadmaps, so you see what’s working and ship faster. Track progress and performance without micromanaging, with a simple workflow built for modern teams.

View startup

Bondary – AI dating copilot that helps you see who someone is before it gets serious


Bondary is an AI dating copilot that helps you see who someone really is before things get serious. Unlike general AI, Bondary creates profiles and tracks your dating life over time, remembering what you said weeks ago, connecting dots across conversations, and surfacing what you might be choosing to overlook.

View startup

RPCS3 just made it easier to UpRes PlayStation 3 games

RPCS3 now allows game resolution changes without game restarts. RPCS3 is the best place to play many PlayStation 3 classics. Why? The simple answer is that many PS3 games are playable there with higher resolutions and framerates than their original PS3 versions. That means many PS3 games now look better and run smoother on a […]

The post RPCS3 just made it easier to UpRes PlayStation 3 games appeared first on OC3D.

Xbox games are getting tougher to fit onto a console, but these World Backup Day bargains for 1TB, 2TB, and 4TB Expansion Cards will solve that issue

31 March 2026 at 14:05
World Backup Day is here to remind everyone to keep their precious files safely backed up, and one of the best ways to do that for Xbox is through various Storage Expansion cards that are now on sale.

Intel Readies Core Ultra 5 250KF Plus for April 3, Save $15 if You Don't Need iGPU

31 March 2026 at 13:58
Intel earlier this month debuted the Core Ultra 7 270K Plus and Core Ultra 5 250K Plus desktop processors at launch prices of $299 and $199, respectively. At the time, the company hadn't launched "KF" variants of the two chips, which lack integrated graphics and are priced around $15 less than their regular "K" counterparts. It turns out that Intel is planning to launch the Core Ultra 5 250KF Plus, while there's no sign of a "Core Ultra 7 270KF Plus." The 250KF Plus is almost identical to the 250K Plus, except it comes with the iGPU disabled, which you won't miss if you plan on using a graphics card.

As with most "KF" SKUs from the past, the Core Ultra 5 250KF Plus will be priced around $15 less than the regular Core Ultra 5 250K Plus. Intel's own 1,000-unit tray quantity pricing for the chip ranges between $174 and $184. Given how tight memory pricing is, and given that you'll need an aftermarket cooler, the $15 saving might come in handy. Then again, integrated graphics is nice to have if your graphics card is bricked due to a burnt power connector and you need something to light your screen up for troubleshooting or during an RMA. The Core Ultra 5 250KF Plus is based on the "Arrow Lake" microarchitecture, and packs a 6P+12E core configuration, with 3 MB of L2 cache per P-core, 4 MB of shared L2 cache for each of the three E-core clusters, and 30 MB of L3 cache shared among the six P-cores and three E-core clusters. The Core Ultra 5 250KF Plus should start selling from April 3, 2026.

(PR) CD Projekt Red Partners With Zero Latency VR to Bring the World of Cyberpunk 2077 Into Immersive VR

31 March 2026 at 13:50
Zero Latency VR, the undisputed leader in immersive entertainment and the mastermind behind the world's largest true location-based free-roam VR network, has announced a new collaboration with CD PROJEKT RED to bring the award-winning universe of Cyberpunk 2077 into its warehouse-scale VR format.

Cyberpunk 2077 is an open-world, action-adventure role-playing game set in Night City, a dark future megalopolis obsessed with power, glamour, and body modification. Players take on the role of a cyber-enhanced mercenary named V, who faces the most powerful forces in the city in a fight for glory and survival. Created by the studio behind The Witcher series of games, Cyberpunk 2077 has reached a global audience since its launch in 2020, earning acclaim for its storytelling, gameplay, and the immersive nature of its open world.

(PR) Turris Launches the Omnia NG Wired 10 Gbps Router

31 March 2026 at 13:45
The CZ.NIC Association, the Czech national domain administrator, presents Turris Omnia NG Wired - a rack-mountable model offering 10 Gbps connectivity and the Turris OS operating system based on OpenWrt/Linux. It builds on the security principles of the Turris project and features a quiet, passive-cooling design. The device is intended for businesses, institutions, and demanding users seeking a powerful and sustainable network foundation while supporting European technologies, open source, and digital sovereignty.

Designed for rack installation: 10G/2.5G connectivity in a compact package
Turris Omnia NG Wired is built for racks and spaces like server rooms and network cabinets. Wi-Fi can be provided by separate access points, while the router stays in the backroom.

(PR) Masters of Albion - A Conversation With the Creators Behind the Scenes Video Released

31 March 2026 at 13:38
The Behind The Scenes Trailer offers an in-depth look into the creation of Masters of Albion. It features personal and detailed interviews with Peter Molyneux, Mark Healey and Russ Shaw, as they reflect on their history as collaborators and the creative processes behind MoA, all supported by brand new in-game capture.

Created entirely in-house, the documentary showcases previously unseen areas of Albion's world, behind-the-scenes footage of key development moments, and candid stories from the team's past. Alongside this, viewers can expect new gameplay insights, a closer look at the game's evolving systems, and a tone that reflects both the humour and ambition of the studio… including, at one point, a rogue chicken.

(PR) MSI EdgeXpert Achieves NVIDIA-Certified Systems Status, Fully Supporting NVIDIA AI Enterprise

31 March 2026 at 13:26
MSI announced that its next-generation AI platform, MSI EdgeXpert, has officially become an NVIDIA-Certified System. This validation ensures the hardware has undergone rigorous testing by NVIDIA engineers for performance, functionality, scalability, and security. Most importantly, it brings MSI EdgeXpert into the supported ecosystem of NVIDIA AI Enterprise (NVAIE), strengthening its capability to support enterprise-grade generative AI, AI agents, and high-performance edge AI workloads.

Establishing Hardware Trust Standards through Rigorous Testing
NVIDIA-Certified Systems establish a broad hardware trust standard. MSI EdgeXpert has passed extensive evaluations, including deep learning training with TensorFlow and PyTorch, high-throughput inference with TensorRT and Triton, and system-level security testing. While the certification covers a wide range of hardware reliability, its support for NVIDIA AI Enterprise is one of the most significant values, helping organizations move efficiently from proof of concept to real-world deployment.

NVIDIA GeForce NOW Brings 4K 90 FPS Streaming to Apple Vision Pro

31 March 2026 at 13:11
NVIDIA's latest GeForce NOW update has introduced 90 FPS streaming to various VR headsets, and Apple users are in for a treat. For the Apple Vision Pro headset, GeForce NOW will deliver 90 FPS at 4K resolution, offering a noticeable improvement for anyone using NVIDIA's game streaming service with their Vision Pro headset. The Apple Vision Pro features two displays, one for each eye, capable of running at a resolution of 3,660 × 3,200 and up to 120 Hz. It's great news that NVIDIA has updated its GeForce NOW service to officially support 4K resolution at 90 FPS. While it's unclear how many gamers use the Apple Vision Pro as their gaming display, the addition of official support suggests the number is significant. Available as part of the Ultimate package, members can stream at 90 FPS on other VR headsets as well, but at lower resolutions.

Additionally, everyone can stream at 1080p and 90 FPS, while 1440p is reserved for Pico and Meta Quest. Currently, only the Apple Vision Pro can handle 4K and 90 FPS output from GeForce NOW. Although not many games can run at 4K resolution and 90 FPS on their own, NVIDIA's DLSS technology can boost the frame rate and deliver impressive visuals, ensuring a smooth 4K mode at 90 FPS. Finally, NVIDIA has also scheduled the rollout of H.265 video decoding support for browsers, which will greatly enhance streaming efficiency and visual quality from NVIDIA's virtual gaming server.

Roasted.CV – Analyze and auto-fix your resume, then tailor and apply to jobs


Roasted helps you get interviews by analyzing your resume, fixing issues, and showing exactly what to improve. It offers an AI resume builder, voice-to-resume, ATS-friendly templates, PDF export, public sharing, and detailed feedback. You can create tailored CVs and cover letters, match jobs, and apply with one click. Job Autopilot searches, customizes, and applies on your behalf while you track progress.

View startup

Verve Intelligence – Validate your startup idea with investor-grade AI analysis


Verve Intelligence delivers objective startup idea validation in about 30 minutes. Use it to size markets, map competitors, define target segments, assess risks, and receive a "what would work" persona, MVP, and technical scope. It also provides guides on interpreting signals that match historical patterns.

It runs 14 parallel research streams, including adversarial agents that stress-test assumptions, then compiles a 50+ page investor-grade report with a GO, PIVOT, or NO-GO verdict, cited sources, and transparent scoring. Access AI debates, rationale, a personalized industry glossary, and more.

View startup

Noctua x Asetek confirm flagship AIO Liquid Cooler launch window

Noctua's upcoming CPU liquid cooler has passed its Production Validation Test and is ready for its Q2 launch. Noctua and Asetek have confirmed that their upcoming all-in-one (AIO) CPU liquid cooler is ready for its Q2 2026 launch. The CPU cooler has passed Production Validation Testing, meeting the cooling requirements of both companies, and is […]

The post Noctua x Asetek confirm flagship AIO Liquid Cooler launch window appeared first on OC3D.

Acer Intros FA300 M.2 NVMe Gen 5 SSD

31 March 2026 at 12:00
Acer today introduced the FA300, a mid-range M.2 NVMe Gen 5 SSD. The drive brings PCIe Gen 5 speeds to a wider audience, and is based on a DRAMless controller. The company doesn't specify the controller type. Popular DRAMless Gen 5 controllers include Phison E31T and Silicon Motion SM2504XT. The FA300 comes in 1 TB and 2 TB capacity variants, which differ in performance. Both variants offer up to 11 GB/s of sequential reads, but while the 1 TB variant offers up to 9.7 GB/s sequential writes, the 2 TB variant goes a bit further, posting up to 10 GB/s sequential writes.

In terms of random access performance, the 1 TB Acer FA300 offers up to 1.4 million IOPS 4K random reads, with up to 1.6 million IOPS 4K random writes, while the 2 TB variant offers up to 1.7 million IOPS for both 4K random reads and writes. The company does not specify the 3D NAND flash type used. The 1 TB model is rated for 750 TBW (TB written) write endurance, while the 2 TB model offers 1,500 TBW. Both models are backed by 5-year warranties. Acer did not specify pricing, because it tends to be dynamic in the current market environment, but expect the FA300 to be among the more affordable Gen 5 SSDs.

Axios Supply Chain Attack Pushes Cross-Platform RAT via Compromised npm Account

The popular HTTP client known as Axios has suffered a supply chain attack after two newly published versions of the npm package introduced a malicious dependency that delivers a trojan capable of targeting Windows, macOS, and Linux systems. Versions 1.14.1 and 0.30.4 of Axios have been found to inject "plain-crypto-js" version 4.2.1 as a fake dependency. According to StepSecurity, the two
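
For teams consuming Axios through npm and unable to audit immediately, one stopgap (a sketch, not an official remediation) is pinning the dependency away from the affected releases with an overrides entry in package.json, supported since npm 8. The pinned version below is an assumption for illustration; substitute whichever release your team has verified as clean:

```json
{
  "overrides": {
    "axios": "1.14.0"
  }
}
```

After reinstalling, `npm ls axios` shows which versions remain in the dependency tree, and searching the lockfile for "plain-crypto-js" confirms whether the malicious package is present.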

Rec Room Shuts Down Following Early 2026 Layoffs Despite "Reaching Over 150 Million Players"

31 March 2026 at 09:27
Rec Room, a social VR game largely built around user-generated content and social mini-games, has joined the long line of games meeting their doom in 2026. According to the game's developer, Rec Room will shut down on June 1, 2026, due to issues with sustainability and profitability. This comes after the studio announced earlier this year that it was laying off roughly 50% of its development team and scaling back the game's scope for similar reasons. As the studio explains in the announcement, it "never quite figured out how to make Rec Room a sustainably profitable business," and its "costs always ended up overwhelming the revenue" it earned.

This failure to find profitability comes in spite of some rather impressive player-count claims. According to the studio, Rec Room reached over 150 million players and creators, and players spent a cumulative 68 thousand years in the game. The studio blames a "recent shift in the VR market, along with broader headwinds in gaming," which have made profitability all the more difficult. The game will officially shut down on June 1, 2026, along with the official website, servers, and player accounts. Players will be able to keep playing until then, all first-party content will be 80% off, and a RecRoomPlus account will no longer be required for certain cosmetic features.

Location Risks – The only risk map that adapts to your worldview


Location Risks is a global risk intelligence platform that maps over 300 location-based risk factors onto an interactive map. From environmental contamination to financial freedom, it helps you visualize combined hazards for any location across more than 230 countries.

The platform offers access to historical data with AI-powered risk estimates. Users can customize policy metrics to match their personal priorities, such as firearm rights, off-grid living restrictions, or crypto friendliness, making it the first risk tool that adapts to your worldview, not just geography.

View startup

DwellRecord – Dwell Record helps homeowners track home records, assets, and upkeep


Dwell Record is a home recordkeeping platform that helps homeowners organize possessions, documents, receipts, warranties, maintenance, and home improvements in one place. With photos, scanning, document uploads, and OCR, it makes it easy to keep important home information captured and searchable.

Dwell Record is built for real life, whether you are staying organized, tracking maintenance, preparing to sell, or making sure you have the records you need for insurance claims. It helps homeowners create a clear history of their home without the usual hassle.

View startup

PollenTracker – Get a clear yes/no for going outside from pollen, AQI, and weather


PollenTracker gives a simple yes/no answer to "Should I go outside today?" by combining live pollen counts, air quality, and weather for 200+ US and UK cities. It updates every 15 minutes and shows clear risk levels so you can plan your day and manage allergies. You can browse the map, compare cities, and check local forecasts without creating an account.

View startup

Upvote – Boost any Reddit post with drip-fed upvotes from aged accounts


Upvote sells safe, drip-fed Reddit upvotes to push your posts higher in subreddits. Every upvote comes from 1-8 year aged accounts with real karma for a 99% stick rate, with instant start and free 24-hour replacements. Choose 10-1,000 upvotes starting at $0.10 each, paste your post URL, and track delivery in a dashboard. Pay with card, PayPal, or crypto, and get 24/7 human support.

View startup

ScribbleScan – Transcribe your handwriting no matter how messy it is


Scan your handwritten notes and search them easily! ScribbleScan can recognize most handwriting, even messy scrawly notes written in a hurry. Snap a photo of notebooks, worksheets, or whiteboards and quickly extract accurate text you can copy, search, and share. You can also add printed notes, business cards, flyers, or coupons and search them too. Available on iOS and Android.

View startup

Right Suite – Find out who will buy, what to charge, and what to say before spending $$$


Most founders make their biggest go-to-market decisions based on gut feel. They pick a price because it feels right, write copy that sounds good to them, and send cold emails using templates they found online. Right Suite tests all of this against simulated buyers first so you know what works before you commit.

There are seven tools, one for each decision: who to sell to, how to position against competitors, what to charge, whether your copy converts, whether your cold outreach will get replies, which channel to focus on, and whether your ad will stop the scroll. One credit is used per simulation, and credits work across all seven tools.

View startup

$700 PS6 Still Possible but Chances Are Slim with $800+ Xbox Incoming

31 March 2026 at 02:52
Microsoft recently revealed that Xbox Project Helix would be its next-gen gaming console and that it would play both PC and Xbox games. However, previous leaks tipped a rather high price tag for the upcoming console generation; some expect it to be upwards of $800. While this may affect Microsoft's sales, it also has bigger implications for the wider console and gaming hardware market. According to well-known hardware leaker KeplerL2 on the NeoGAF forums, the reduced pressure from Microsoft may be enough for Sony to raise prices on the PS6. There have also been rumors that Sony is planning a PlayStation 6 Portable, a standalone device that could theoretically fill the gap left by Sony in the more affordable console space if this price prediction is true.

The leaker explains that their "current BOM estimate for PS6 is ~$760, so I would say $699 is still possible with a reasonable subsidy. The question is if Sony will even bother now that Xbox is not direct competition anymore." These comments come at a time when hardware prices have skyrocketed, and availability is low, thanks to the ongoing DRAM crisis largely caused by increased demand from the AI industry. This bill of materials cost estimate also assumes that there will be no decrease in memory or hardware costs, which is currently an unknown, with all the talk about AI being a bubble. Those same hardware shortages and price increases have been the justification for predictions of console delays and numerous cancellations by PC hardware manufacturers. It also recently came to light that Sony will be scaling back its PC port efforts, meaning fewer Sony exclusive games will end up on PC, giving players more of an incentive to buy the PS6, even if prices are somewhat inflated.

Crimson Desert Overtakes Pokémon Pokopia in Metacritic User Reviews

31 March 2026 at 01:55
When Crimson Desert launched, its reviews, especially those from very early reviewers, were good, if less than stellar; the player retention figures, however, painted a more positive picture. Now, it looks as though the reviews have caught up to those high early player counts, at least on Metacritic. As of the time of writing, the review aggregator ranks Crimson Desert as the second highest-rated game of 2026 so far. With its 8.8 user score, Crimson Desert only trails Resident Evil Requiem, which holds a strong lead at 9.4 points. Following Crimson Desert are Pokémon Pokopia with 8.5 points, then Resident Evil Village Gold Edition, Resident Evil 7: Biohazard Gold Edition, and Cairn, with 8.5, 8.4, and 8.3 points respectively.

When it comes to the Metascore, however, Pokopia still holds onto its number-one spot with 89 points, trailed by Resident Evil: Requiem, also at 89 points, and Mewgenics, with a score of 88. Meanwhile, Crimson Desert has not yet received enough critic reviews for an overall Metascore, but it currently has a PC Metascore of 77, based on very divided reviews, with some outlets giving it as low as 50 points and others as much as 100. It's clear Crimson Desert will have a lot of competition, though, given what a packed year 2026 is shaping up to be for the gaming industry. Aside from already-launched hits like Resident Evil, Pokémon Pokopia, and Slay the Spire 2, massive launches like GTA VI, Pragmata, and Gears of War: E-Day are also slated for 2026.

Some are celebrating, but World of Warcraft's move away from the "Horde" and "Alliance" faction split will be seen as a historic mistake

Not only does it undermine decades of lore and world-building, but player identity is how you reach your audience when they're not gaming. WoW's iconic faction split is under threat, and that's a bad thing, actually.

Estimates Say GTA VI May Have Cost Over $3 bn, with $2.1 bn in Salaries Alone

31 March 2026 at 00:29
It's no secret that the development of GTA VI has been a mammoth undertaking, but the exact figures have been a bit hazy, with previous estimates from early 2026 putting the game's total budget at around $2 billion. Now, thanks to an investigation by an internet sleuth, u/Due-Vanilla-8294 on Reddit, it seems as though Rockstar may have exceeded the $3 billion mark, with speculation putting the game's budget as high as $5 billion.

These estimations were based on recent financial filings by Rockstar and its parent company, Take-Two, wherein it was revealed that Rockstar had spent as much as $2.1 billion on salaries alone at its Rockstar North location since GTA VI went into full-scale production in 2019. Given that this is only one location's figures and that Rockstar had already started work on GTA VI long before it went into full-scale production, it has been speculated that, by the time it launches, Rockstar may have spent as much as $5 billion on GTA VI. So far, it seems as though GTA VI's high budget may have been worth it, with the game's launch trailer racking up 275 million views since it launched in 2024. GTA VI is also expected to launch at a substantial price premium, with previous rumors floating a $90+ MSRP.

Feroce – AI health coach in WhatsApp that reads your wearable data


Feroce is an AI health coach in WhatsApp that connects your wearables, calendar, and lab results to deliver daily personalized guidance. It builds a permanent memory across sleep, stress, activity, nutrition, biometrics, and lifestyle, then coaches you with morning briefings, a Pulse Score, proactive alerts, and instant meal analysis. It integrates devices like Apple, Garmin, Oura, Fitbit, WHOOP, and more, applies evidence-based rules to your data, and safeguards privacy with end-to-end encryption and EU servers.

View startup

BookMerang – Connect with readers nearby and swap or boomerang physical books


BookMerang connects readers to exchange physical books in their city, either as permanent swaps or as boomerangs you return after reading. Create a profile, list up to three books, set a wishlist and reader status, and rate swaps with mini-boomerangs in a verified community. Libraries and bookstores can launch branded digital profiles with shelves, rentals, themes, and verified badges. Track reads, share reviews, follow other readers, and personalize your virtual room with posters, collectibles, and skins while discovering your next book match.

View startup

OpenAI Patches ChatGPT Data Exfiltration Flaw and Codex GitHub Token Vulnerability

A previously unknown vulnerability in OpenAI ChatGPT allowed sensitive conversation data to be exfiltrated without user knowledge or consent, according to new findings from Check Point. "A single malicious prompt could turn an otherwise ordinary conversation into a covert exfiltration channel, leaking user messages, uploaded files, and other sensitive content," the cybersecurity company said in

Before yesterdayTech

(PR) JetStor Delivers 80 PB High-Density Archive for Government Agency Using WD's Ultrastar Drives

30 March 2026 at 22:46
JetStor today announced the successful deployment of an 80 PB high-density archive for a government agency, one of its largest on-premises archive deployments of its kind in the public sector. Designed to meet rigorous security demands for long-term data retention, JetStor's POD-based architecture was paired with WD Ultrastar 26 TB enterprise-class hard drives to deliver a secure, high-density, cost-efficient storage foundation built to retain data at scale.

Public-sector IT teams face a distinct challenge at this scale: growing storage capacity economically without disrupting active production networks or retraining operations staff. JetStor addressed this by standardizing the entire deployment into repeatable POD building blocks, each anchored by WD Ultrastar DC HC590 26 TB SAS 7,200 RPM drives, across 132 JetStor XS3324D 4U 24-bay systems and 3,200 drives total. The result is a dual Fibre Channel fabric architecture that supports seamless expansion while keeping infrastructure management straightforward and predictable from day one.

C Dance 2.0 – Generate text-to-video with stable motion and native audio sync


C Dance 2.0 is an AI video generator powered by Seedance 2.0. It lets you create text-to-video, image-to-video, and video-to-video content with smooth, stable motion, precise creative control, and native audio-video sync. You can choose aspect ratios and durations, add sound effects, and iterate quickly with instant variations. Creators use it for cinematic scenes, ads, product demos, and short-form content, with flexible pay-as-you-go or annual credit plans.

View startup

pdfzus – Combine and organize PDF files in your browser


pdfzus is a simple web app for merging, sorting, and compressing PDF files directly in the browser. It is designed for people who want to prepare clean PDF documents without complicated software, forced sign-ups, or cluttered workflows. Many users only need to combine a few files, arrange them in the right order, and send the final document. pdfzus focuses on doing that part well, working especially well for applications, office documents, email attachments, and other everyday PDF tasks while keeping files on the user's device for a more privacy-friendly experience.

View startup

Google removes Search Engine Land article after false DMCA claim

30 March 2026 at 22:25
Google DMCA hammer

Google removed a Search Engine Land article (Report: Clickout Media turned news sites into AI gambling hubs, published March 26) from its search results after a copyright complaint (that appears, to us, to be entirely false). Meanwhile, a similar DMCA filing led to the takedown of the original Press Gazette investigation.

What happened. A DMCA notice filed March 27 claimed Search Engine Land copied content "word for word" and used proprietary images.

  • The complaint led Google to begin removing the article from search results globally.
  • The notice identified the complainant as β€œUS Webspam,” with no clear public attribution.

The context. The removed article reported that Clickout Media allegedly used expired or acquired domains to publish AI-generated gambling content.

The claim details. Here's the message we received via Google Search Console on March 27:

Description of claim: The infringing news website has blatantly and willfully violated copyright law by copying our entire content word for word, including all images, which are solely owned by our company. This includes the complete replication of our original written material, as published on our official website, along with the proprietary visuals accompanying it. Despite multiple good-faith efforts to resolve this matter amicably, the infringing party (hereinafter referred to as "Infringer") continues to unlawfully publish and distribute our copyrighted content without permission. This is a direct and flagrant breach of our rights and a clear violation of Google's copyright policies. We hereby demand the immediate removal of this infringing material from Google search results to protect our intellectual property.

You can read the DMCA complaint here.

What doesn't add up. The Search Engine Land article contains no images, contradicting the complaint. Also:

  • A search of its text shows no evidence of copied content.
  • The notice claims "multiple good-faith efforts" to resolve the issue, but no outreach was received before filing.
  • The complaint was submitted one day after publication.

What Google says. Google’s standard policy is to remove content upon receiving a valid copyright complaint, with an option for publishers to file a counter notice. The company has not commented on this specific case.

Why we care. This shows how DMCA takedowns can be weaponized to suppress reporting, including coverage of search spam and site reputation abuse. Legitimate content can be temporarily removed from search results due to unverified claims, and the resolution can take weeks or longer.

What's next. We'll watch whether this article is DMCA'd and removed too, along with the Press Gazette's and those of anyone else covering the story.

Reactions. Here's some reaction from X:

theholycoins isn't owned by clickout (it's one of the sites that would actually do negative reporting into their scams, so they probably picked one of those posts and said they were them/the original author of your dmca'd piece)

the rabbit hole on clickout goes a lot deeper than…

— 🐈‍⬛ (@undercover) March 30, 2026

I'm surprised this was approved by Google… I've seen them come back with rejected DMCA notices when it was clear the site was infringing copyright. This is a BS DMCA takedown that doesn't even make sense. Very interesting case… I have a feeling the article will surface again… https://t.co/Zi8hUV8g14

— Glenn Gabe (@glenngabe) March 29, 2026

🆕 A totally irrelevant site has DMCAed Search Engine Land's reporting page about ClickOut Media spamming Google's search results!

Weird enough DMCA requested was accepted by Google and now this URL https://t.co/DV8TR1NRLk from Search Engine Land isn't showing up in search… pic.twitter.com/dGbJ04KbQG

— Gagan Ghotra (@gaganghotra_) March 29, 2026

ICYMI:

Last week @pressgazette published an investigative report about a media company that acquires online publishers and exploits their domain authority for SEO shenanigans.

This is the same company that acquired a portion of @Cointelegraph to host casino & gambling content,… pic.twitter.com/duFkS7MBiP

— Afik Rechler (@kifakrec) March 29, 2026

Update, March 31. The Press Gazette and Search Engine Land articles, which were removed due to the bogus DMCA complaints, are now back in Google Search.

Microsoft lets merchants update store names and domains in Merchant Center

30 March 2026 at 22:20
Microsoft Ads: How it compares to Google Ads and tips for getting started

Microsoft Advertising now allows e-commerce merchants to edit their Merchant Center store name and domain directly within the platform — no support ticket required.

Why we care. Store details like names and URLs change as businesses rebrand or restructure. Previously, updating these required manual intervention. Self-serve control reduces friction and keeps campaigns running more smoothly during transitions.

How it works β€” the details:

  • Store name changes go through editorial review before going live. During review, ads keep running under the existing approved name — so there's no interruption to campaigns.
  • Domain/URL changes require merchants to verify ownership of the new domain before the switch takes effect. Ads continue serving on the old domain in the meantime. Once approved, product URLs must be updated to reflect the new domain.
  • Reusing names or domains is allowed — as long as the store name clears editorial checks and the domain is verified and confirmed as merchant-owned.

The bottom line. The update gives ecommerce advertisers more autonomy over their store settings while building in safeguards — editorial review and domain verification — to prevent abuse and maintain ad quality.

Reddit Pro opens to all publishers, adds new features in public beta

30 March 2026 at 21:38
Reddit command center

Reddit today opened its Pro publishing tools to all publishers, removing the waitlist and offering free access in a public beta to expand distribution and engagement.

Why we care. Reddit Pro gives you a centralized tool to track where your content spreads, streamline posting, and find the right communities. It transforms Reddit from a manual posting exercise into a structured distribution channel.

The details. You can now sign up for Reddit Pro, verify your domain (typically within three business days), and access the Links tab. With Reddit Pro, you can:

  • Track where your content is shared across Reddit.
  • Auto-import articles via RSS for quick posting.
  • Get AI-powered recommendations on relevant communities.

Reddit also added features based on early feedback:

  • Community snapshots show rules, stats, and top discussions.
  • Community notes let you track strategy and context.

By the numbers. Reddit reported more than 55 billion views of publisher-related conversations in 2025. Publishers testing since September saw:

  • Median post views up 46%.
  • Profile views nearly doubled.
  • Median comments up 48%.

What else. Reddit is expanding profile flairs to all Pro users, letting you organize posts on your profile so users can browse coverage and engage with stories.

Reddit's announcement. Helping publishers thrive on Reddit

Microsoft confirms this year’s Xbox Games Showcase alongside Gears of War: E-Day Direct

Microsoft prepares for "the return of Xbox." Asha Sharma, Microsoft's recently installed CEO of Xbox, has unveiled this year's Xbox Games Showcase, which will be presented on Sunday, June 7th, followed by a Gears of War: E-Day Direct. Gears of War: E-Day will be shown in detail after Microsoft's main Xbox showcase. […]

The post Microsoft confirms this year’s Xbox Games Showcase alongside Gears of War: E-Day Direct appeared first on OC3D.

June Xbox Game Showcase Revealed, Gears of War: E-Day Direct Event Follows

30 March 2026 at 21:28
Microsoft has officially announced its next Xbox Summer Games Showcase, which will take place on June 7, 2026, showing off the latest of what Microsoft has in store for gamers in 2026. Microsoft hasn't yet revealed what will appear at the showcase, but it did confirm that a Gears of War: E-Day showcase will follow immediately after the main event. There, we're likely to see at least a new trailer and perhaps a release date for the upcoming Xbox shooter, which was revealed at the 2024 Summer Xbox Games Showcase and has made little noise since, despite its 2026 release date.

The Xbox Game Showcase is slated to start at 10 AM PT on June 7, which is 14:00 UTC. It's already been all but confirmed via the game's Steam store page that Gears of War: E-Day will launch for both PC and Xbox, although not much else is known about the game other than that it will be a prequel set 14 years before the original Gears of War, and that it will be built on Unreal Engine 5.

ARC Team and Krafton Kill PUBG: Blindspot Mere Months After Launch

30 March 2026 at 20:46
PUBG: Blindspot launched at the beginning of February as a new 5v5 top-down shooter from the same studio and publisher as the original PUBG, but now, less than two months after the launch of the game, the developer, ARC Team, has announced that the new free-to-play tactical shooter will be shutting down on March 30, 2026. ARC Team says that, although the developers had tried to explore ways to improve the game based on player feedback, the studio is "no longer able to sustainably provide the level of experience we set out to deliver through Early Access."

PUBG: Blindspot has a fairly acceptable 72% positive rating on Steam, but player counts are rather low, with an all-time peak of just 3,251 concurrent players and a 24-hour peak of just 236 at the time of writing. PUBG: Blindspot is already delisted from Steam, and players will no longer be able to access the game. This is only one of a number of recent game closures, alongside Highguard and several titles still in development, such as those caught up in the recent Ubisoft reshuffle. Riot also recently laid off a number of staff from its 2XKO fighting game, citing sustainability. Taken together, these moves suggest that studios are fighting for revenue more than ever and struggle to sustain games that aren't immediately successful.

DeepLoad Malware Uses ClickFix and WMI Persistence to Steal Browser Credentials

A new campaign has leveraged the ClickFix social engineering tactic as a way to distribute a previously undocumented malware loader referred to as DeepLoad. "It likely uses AI-assisted obfuscation and process injection to evade static scanning, while credential theft starts immediately and captures passwords and sessions even if the primary loader is blocked," ReliaQuest researchers Thassanai

Google Ads Editor bug links structured snippet languages across accounts

30 March 2026 at 20:50
Google Ads may be over-crediting your conversions - A 7-day test tells a different story

A bug in Google Ads Editor is causing structured snippet extensions copied between accounts to remain unintentionally linked. When advertisers change the language in one account, it can automatically update the same extension in another.

Why we care. This bug creates hidden inconsistencies for advertisers managing multi-market campaigns, especially when different languages are required across accounts.

What advertisers are seeing. The issue surfaced while digital marketer Marcin Wsół was managing Czech and Slovak e-commerce accounts. Changing the snippet language in one account triggered the same change in the other.

  • The extensions appear separate but behave as if synced.

Zoom in. Using the Google Ads web interface can temporarily correct the issue; however, further edits in Editor may cause the language settings to toggle again.

Also. The bug isn't limited to cross-account use. PPC News Feed founder Hana Kobzová found that copying structured snippets within the same account can also lead to incorrect language settings after edits.

Between the lines. Advertisers relying on bulk edits in Editor may unknowingly overwrite localization settings, leading to mismatched messaging across markets.

Bottom line. Until the bug is fixed, advertisers should double-check structured snippet languages after copying or editing in Google Ads Editor — especially when working across accounts or regions.

First seen. The error was first spotted by Wsół and subsequently reported by PPC News Feed.

New Google TurboQuant algorithm improves vector search speed

30 March 2026 at 20:26
Vector space

Google says a new compression algorithm, called TurboQuant, can compress and search massive AI data sets with near-zero indexing time, potentially removing one of the biggest speed limits in modern search systems.

What it is. TurboQuant is a way to shrink and organize the data that powers AI and search without losing accuracy. It reduces memory use while keeping results precise and cuts the time to build searchable AI indexes to "virtually zero," according to the research paper.

How it works. Modern search converts content into vectors (lists of numbers that represent meaning). Similar ideas sit close together in this numeric space, and search finds the closest matches.

However, these vectors are large and expensive to store and search. TurboQuant addresses this by using much smaller data that behaves almost exactly like the original, through:

  • Smart compression. It rotates the data mathematically to compress it cleanly, like organizing messy items into neat boxes.
  • Error correction. It adds a 1-bit signal to fix small compression errors and preserve accuracy.
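The "smart compression" idea above can be illustrated with a toy sketch (my own simplification, not Google's actual TurboQuant code): applying a random rotation spreads each vector's energy evenly across dimensions, so a crude low-bit rounding step afterwards loses much less information than it would on the raw data.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 64
# A random orthogonal matrix stands in for the "mathematical rotation".
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))

def quantize(v, bits=4):
    """Rotate the vector, then round each coordinate to a small integer grid."""
    r = Q @ v
    scale = np.abs(r).max() / (2 ** (bits - 1) - 1)
    return np.round(r / scale).astype(np.int8), scale

def dequantize(q, scale):
    """Undo the grid scaling, then rotate back."""
    return Q.T @ (q * scale)

v = rng.standard_normal(d)
q, s = quantize(v)
# 4-bit codes take a fraction of the memory of float64 yet stay close
# to the original vector.
err = np.linalg.norm(v - dequantize(q, s)) / np.linalg.norm(v)
print(f"relative reconstruction error: {err:.3f}")
```

In a production system the rotation would be a fast structured transform and the quantizer would carry the extra 1-bit correction signal the paper describes; this sketch only shows why rotating before quantizing helps.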

What it means. Vector search — the system behind semantic search and AI answers — has been slow and expensive at scale. TurboQuant makes it faster and cheaper. Google says it enables faster similarity search, lower memory costs, and real-time processing of massive datasets.

Why we care. Google can evaluate far more documents per query, not just a small subset. If/when Google adopts this in Search, AI Overviews could pull from a broader, more precise set of sources, making it easier to generate instant summaries from large data pools.


Death Stranding 2 pushes past 2 million sales following PC release

Death Stranding 2's PC launch has been a huge success. After almost a year of PlayStation 5 exclusivity, Death Stranding 2: On the Beach has arrived on PC, and it's selling well. To date, Death Stranding 2 has generated over 2 million sales across PC and PlayStation 5. According to Alinea Analytics, Death Stranding 2 […]

The post Death Stranding 2 pushes past 2 million sales following PC release appeared first on OC3D.

Micron plans stacked GDDR memory, but it’s not for gaming

Micron plans to stack GDDR memory to create higher bandwidth/capacity modules. Micron has reportedly begun developing a new form of GDDR memory, hoping to gain an edge over rivals. With its new stacked GDDR modules, Micron hopes to create a product that sits between HBM and GDDR memory, offering users more bandwidth and capacity per […]

The post Micron plans stacked GDDR memory, but it’s not for gaming appeared first on OC3D.

This Windows laptop may have a moderate battery life, but it burns bright with an OLED display, 16GB of RAM, and a 512GB SSD

30 March 2026 at 19:55
Best Buy has an exclusive 30% discount on the ASUS Zenbook 14. Despite being held back by poor battery life, it's a solid pick for workers and casual users thanks to strong performance, a stunning FHD+ OLED display, and a sleek yet durable build.

Get 32GB of high-end DDR5 RAM for just $150 with this Core Ultra 7 270K Plus PC gaming bundle: A perfect way to start or upgrade your build

It's hard to find affordable, high-end RAM for gaming PC upgrades; if you need a full system upgrade, this bundle drops the price on 32GB of DDR5-6400 to just $150. Oh, yeah, you also get Intel's latest gaming CPU and a top-tier ASUS ROG Strix motherboard.

(PR) JBL Launches Xtreme 5 and JBL Go 5 Portable Speakers

30 March 2026 at 20:07
Two fan favorites just got a serious upgrade. JBL Xtreme 5 and JBL Go 5 combine Legendary JBL Sound with a refreshed look and new ambient edge lighting to bring the party wherever you take them. JBL Xtreme 5 now features AI Sound Boost and Smart EQ Mode for rich, powerful sound whether you're listening to music or podcasts, while the compact JBL Go 5 makes stereo pairing even easier with AirTouch. No matter the size, JBL brings next-level sound and the vibes to match.

Turn up the volume
Now with 10% deeper bass and louder sound than the previous gen, your most-loved songs hit harder than ever with JBL Xtreme 5. Its new acoustic design made up of dual tweeters, a subwoofer, and enhanced power output delivers powerful sound, while AI Sound Boost minimizes distortion at high volumes. Switching from your party playlist to a podcast? New AI-powered SmartEQ Mode optimizes sound settings for music or speech, so you'll hear the optimal version of whatever you're listening to.

(PR) Salvation Denied Announcement - Build Big, Fall Hard

30 March 2026 at 19:26
Publisher Digital Vortex Entertainment (part of Utmost Games) and indie developer Firevolt are proud to announce Salvation Denied, a chaotic co-op building sim for 1-4 players where a crew of small, yellow, mischievous construction robots attempts to build massive structures with heavy machinery and absurd tools.

A time-limited open playtest is now available on Steam for one week, with the full game release launching Fall 2026 on PC via Steam and in 2027 on PS5 and Xbox Series X. In Salvation Denied, players work under contract for a mysterious and fanatical client. On a hostile planet filled with extreme environments, a crew of construction robots is tasked with assembling massive experimental structures, though only their obsessive employer seems to know what's coming next.

CachyOS emerges as a fast, gaming-ready Arch Linux OS alternative

30 March 2026 at 19:20

CachyOS is a performance-driven Arch Linux-based distribution that's been grabbing attention lately as more gamers and power users highlight its speed and polished out-of-the-box experience. As Linux gaming continues to gain momentum and become a bigger talking point, CachyOS is increasingly being mentioned as a go-to choice for users who want cutting-edge software without sacrificing responsiveness or control.




Analyzing Elon Musk's TeraFab — A step towards Tesla and SpaceX's partial vertical integration, or an unattainable dream?

Elon Musk's TeraFab has been announced, and the first employees are now being hired. But can this venture scale to all of its terawatt glory? Or will it just help Tesla, SpaceX, and xAI land additional chips they cannot get from regular partners?

Oareo – Capture spaces with LiDAR and generate precise 3D models and floor plans


Oareo is an iOS app for scanning rooms and indoor spaces into clean 3D captures using LiDAR. Capture spaces, review them on-device, and export useful 3D outputs for design, planning, documentation, and spatial workflows. It's built for people who want fast, practical room scanning without a complicated setup.

View startup

CoreForm – Create responsive secure forms with drag-and-drop and integrations


CoreForm lets you build responsive, secure forms in minutes with a drag-and-drop editor. Use conditional logic, quizzes, and calculators to craft dynamic experiences, then track performance with built-in analytics. Connect submissions to thousands of apps via Zapier or webhooks and export data when you need it. CoreForm optimizes load speed, ensures GDPR/CCPA compliance, and removes branding on higher plans so you can collect leads and insights at scale.

View startup

Where paid media optimization should stop in long sales cycles

30 March 2026 at 19:00
Where paid media optimization should stop in long sales cycles

In long sales cycles, a lot of what happens after lead submission involves people. When you optimize campaigns to final sales, you're teaching the ad platform to respond to how well the sales team performed that month rather than lead quality, and that's a problem no amount of campaign changes will fix.

The common advice is to "optimize the full funnel" (i.e., track media spend to revenue, optimize campaigns to sales, etc.). But beyond lead capture, most of what drives sales has little to do with your paid media. It's about who's on the sales team, how busy they are, and dozens of other factors you can't influence through targeting or creative.

When your sales team becomes the signal

I've spent over 15 years in financial services marketing, but this isn't unique to mortgages or insurance. If your sales process relies heavily on people, you'll recognize this immediately.

In most businesses, there's someone like Dave. In my case, he's a mortgage adviser, but in yours, he might be your top enterprise sales rep, your star business development manager, or your best project estimator.

He closes deals at twice the rate of his colleagues, not because he gets better leads, but because he's naturally gifted at building rapport, asking the right questions, and guiding anxious customers through difficult decisions.

However, Dave isn't always there. Sometimes he's on vacation, sometimes he might leave the company for a better opportunity, or sometimes your business hires three more Daves.

The makeup of your sales team likely changes constantly. You might have more experienced closers one month, fewer the next, a recruitment drive that brought in several new starters, or Dave and two of his colleagues leaving within a month of each other. Sales rates can swing dramatically based purely on who's in the office, regardless of lead quality.

This can lead to targeting problems. For example, when the conversion rate drops because Dave's away and a junior team member is covering his accounts, the algorithm sees it as a targeting problem rather than a staffing issue.

If you've set your campaigns to optimize for sales, it thinks, "Our targeting stopped working. These clicks are lower-quality for this conversion action now. We should shift spend away from these audiences."

Eventually, this could result in keywords that were previously working well being turned off, audiences that were driving sales volume no longer being bid for, and a decline in the entire account's performance. But the leads haven't changed, only the team has.

Dig deeper: How to diagnose and fix the biggest blocker to PPC growth

Operational factors that distort your conversion data

It's not just the sales team makeup either. Let's say:

The team gets slammed in Q4 as everyone tries to close before year-end, response times stretch from two days to over a week, and customers get impatient and look elsewhere.

Perhaps market conditions shift, and your most competitive product gets pulled. Or summer vacations mean the team is running short-handed, and some leads go cold before anyone contacts them. Then September comes and everything bounces back to normal.

It goes beyond the day-to-day. Budget approvals get delayed, product ranges change, and planning delays push projects back. The specific reason varies by business, but the effect on your conversion data is always the same.

The algorithm ends up thinking targeting got worse when, in fact, the team was just busy with leads from other sources.

When Dave becomes a superhuman: The Santa Claus Rally

The Santa Claus Rally, also known as the December Effect, is the best example I've seen of how human behavior can throw off algorithmic targeting.

Every December in financial services, something strange happens. In the third week of December, conversion rates from lead to sale spike dramatically. We've seen increases of up to 150% compared to normal weeks.

If campaigns are optimized for sales, the algorithm thinks, "Whatever we're doing this week is working incredibly well!" Then the holiday week arrives, and everything crashes, with conversion rates plummeting to a fraction of normal levels.

None of it has anything to do with paid media. In week three, Dave and his colleagues are in target-hitting panic mode. End-of-year bonuses are on the line, and there's one final push before the holiday break, so they're calling leads faster, following up more aggressively, and closing deals they might typically have let simmer. Dave is working like a machine.

Then the holiday week arrives, and everyone's mentally checked out, customers aren't answering phones, and Dave has finally taken time off. The team that's still at work is thinking more about family get-togethers and less about targets.

The lead quality, targeting, and ads haven't changed. The team is just working at different levels of intensity due to seasonality. The algorithm overpays for normal performance and underbids for identical audiences, purely based on when Dave and his team take their vacations.

Dig deeper: How to analyze your marketing funnel and fix costly drop-offs

Where optimization should actually stop

So if optimizing for sales is being distorted by things outside your control, how should you draw the line? How can you balance this lead distortion and still drive the right type of leads?

The answer is your last point of control, which, for these kinds of sales, means at lead submission. But don't simply count leads. Instead, value them based on both likelihood to convert and the commercial value of the end sale.

The other issue is that most high-value businesses only generate a handful of sales per month, which isn't enough data for automated bidding to learn anything useful. Lead valuation also solves this by providing the platform with hundreds of conversion events rather than a few sales.

This means automated bidding can actually function properly, campaign and audience testing becomes meaningful, and the data stays reliable. You're optimizing to lead quality before Dave and the sales team get involved.

To be clear, importing downstream conversion stages or revenue into ad platforms can be extremely powerful. But optimization to those signals only works when volume is sufficient, conversion lag is manageable, and the sales process is stable.



How to build lead valuation

The starting point is your historical data, ideally 12 months of it, though you can work with six. You need to understand which leads actually closed, what they were worth, and what they had in common at the point of inquiry.

For financial services, it's things like loan amount and term. For B2B, it might be company size or sector. For construction, it's usually project size and urgency.

From there, it's about grouping leads by their likelihood of closing and by typical deal size, and then assigning each group an expected revenue value.

The check to make sure it's working as expected is simple: the total estimated value you assign to your leads over a period should roughly match the revenue they actually generated. If not, the model needs work. Ideally, revisit it at least quarterly as your campaigns and operational factors change.

As an example, you might end up with a high-likelihood lead worth $850, a mid-range lead at $420, and a lower-likelihood lead at $120.

Once you have that, set up your conversion tracking to pass the expected value back to the platform on your conversion action and use value-based bidding (target return on ad spend in Google Ads) to point the algorithm toward the leads that are actually worth chasing.
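The valuation model described above can be sketched in a few lines (the segment names and numbers below are hypothetical, not figures from the article): group historical leads by segment, derive each segment's expected value from its close rate and revenue, and sanity-check that the assigned values roughly match actual revenue.

```python
from collections import defaultdict

# Hypothetical lead history: (segment, closed?, revenue if closed).
leads = [
    ("high_intent", True, 5000), ("high_intent", False, 0),
    ("mid_intent", True, 4200), ("mid_intent", False, 0),
    ("mid_intent", False, 0), ("low_intent", True, 2400),
    ("low_intent", False, 0), ("low_intent", False, 0),
    ("low_intent", False, 0), ("low_intent", False, 0),
]

def expected_values(history):
    """Expected revenue per lead for each segment (close rate x deal size)."""
    totals = defaultdict(lambda: [0, 0.0])  # segment -> [lead count, revenue]
    for segment, closed, revenue in history:
        totals[segment][0] += 1
        totals[segment][1] += revenue if closed else 0
    return {seg: rev / n for seg, (n, rev) in totals.items()}

values = expected_values(leads)

# The article's sanity check: total assigned value over the period
# should roughly match the revenue the leads actually generated.
assigned = sum(values[seg] for seg, _, _ in leads)
actual = sum(rev for _, closed, rev in leads if closed)
print(values)
print(f"assigned {assigned:.0f} vs actual {actual:.0f}")
```

These per-segment values are what you would pass back to the ad platform as conversion values so value-based bidding has something to optimize toward.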

Dig deeper: How to make automation work for lead gen PPC

Optimize for what you can control

"Optimize the full funnel" sounds sensible until you realize how much of that funnel you don't actually control.

You can influence the targeting, the creative, the landing page, and the experience that gets someone to submit a form. After that, it's over to Dave and the sales team, and dozens of other factors that have nothing to do with your campaigns.

When you expect an algorithm to optimize for things it can't see, it will start drawing the wrong conclusions, chasing the wrong audiences, and getting worse over time.

The answer isn't to stop measuring what happens after lead submission. You absolutely should continue measuring, as those numbers can tell you a lot about what's going well and what might need to be corrected for. Remember:

  • When lead quality stays steady, but sales drop, that's an operations issue, not a paid media one.
  • When both drop at the same time, look at your campaigns.
  • When sales spike, but lead quality is flat, that's Dave having a great month, not your targeting.

That visibility is genuinely helpful, but it just shouldn't be what you're optimizing to.

Build lead valuation, feed expected values back to your platform, and let the algorithm do what it's actually good at: finding people who look like your best leads. Leave the rest to Dave.

Know where your control ends, as that's where optimization should stop.

How to build a custom GPT for business (that your team actually uses)

30 March 2026 at 18:00
How to build a custom GPT for business (that your team actually uses)

The OpenAI GPT Store launched in January 2024 with more than 3 million custom GPTs. Ask any team how many they still use, and the answer is usually zero or one.

Most business GPTs fail because they're built like novelties rather than tools. They're too broad, under-tested, and launched without a strategy, so they never become part of a team's workflow.

I've built and audited 12+ custom GPTs across marketing, SEO, and sales teams. The pattern is consistent: a small number get used daily, while most collect dust.

Here's how to build GPTs that do — from validating the right use case to structuring, testing, and launching in a way that drives real adoption.

At a glance: The 15-minute version

If you're ready to jump in, you can start with these steps:

  • Pick one task your team does 3x+ per week that takes 15+ minutes.
  • Complete this sentence: "This GPT helps [role] do [task] by [method]."
  • Write instructions in the Configure tab, not the Create tab.
  • Upload a curated one- to two-page .md knowledge file, not a raw document dump.
  • Add four specific conversation starters. Users who see specific options are significantly more likely to engage than those facing a blank input field. If they can't immediately see what to do, they leave.
  • Test with five questions before anyone else sees it.
  • Share with three teammates. Watch them use it. Iterate within 48 hours.
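To make the Configure-tab step concrete, here is a hypothetical example of what tightly scoped instructions might look like (my own illustration, not an official OpenAI template; the company, file name, and rules are invented):

```
You are a review-response assistant for Acme Co.

Job: draft on-brand replies to negative customer reviews.

Voice: warm, direct, no corporate filler. Maximum 120 words per reply.

Method:
1. Acknowledge the specific complaint in the first sentence.
2. Offer one concrete next step, drawn from escalation-framework.md.
3. Never promise refunds; flag those for a human to handle.

If a review mentions legal threats or safety issues, stop and tell the
user to escalate instead of drafting a reply.
```

Note how the instructions name one role, one task, one method, and an explicit stopping rule, matching the one-job scope this guide recommends.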

Want to see what a well-built business GPT looks like before building your own? Try Marketing Research & Competitive Analysis or MARKETING, both ranked in the GPT Store's Research & Analysis category. I helped build these at Semrush and will reference them throughout; they demonstrate the build patterns covered below.

Need the full framework? Keep reading.

What a business GPT actually is (and what it isn't)

A business GPT is a custom version of ChatGPT configured to do one specific, recurring job for a defined role on your team. Not "an AI assistant." Not "a helpful tool." One job.

Think of it like hiring. A generalist can help with anything. A specialist who does one thing incredibly well is worth 10 times more for that specific task, because they've already internalized the context, the standards, and the constraints you'd otherwise have to explain every single time.

That's what a well-built business GPT does. It already knows your brand voice, output format, and when to stop and escalate instead of guessing.

I've built and audited 12+ custom GPTs across marketing, SEO, and sales teams, and the pattern is consistent: the ones that get used daily are tightly scoped and predictable. The ones that aren't collect dust.

The one-sentence test: If your GPT needs more than one sentence to explain what it does, the use case is still too broad. Narrow it until the answer is obvious.

  • "A GPT that drafts on-brand responses to negative customer reviews using our escalation framework" passes.
  • "A general customer support assistant" doesn't.

That specificity is what makes it useful at the planning stage, where most marketing GPTs fall short.

Marketing GPTs

The same pattern shows up across the best GPTs in the store. Most are novelties. These aren't. Each demonstrates a build pattern you can apply.

Marketing Research & Competitive Analysis

  • Ranked No. 2 in Research & Analysis. Drop in a competitor, an industry, or a business challenge, and you'll get structured frameworks, SWOT analyses, positioning gaps, and audience breakdowns backed by cited sources.
  • The build pattern worth noting: breadth within a defined domain. Most research GPTs do one thing. This one covers the full strategic stack, from competitive analysis to market research to strategic planning, without losing focus because the scope is bounded by "research and analysis" rather than "marketing" broadly.

MARKETING

  • Ranked No. 4 in Research & Analysis. Covers 14+ disciplines, including paid search, programmatic, out-of-home, influencer, and retail media.
  • The build spans the full media mix rather than specializing in one channel. It's useful at the planning stage, where most marketing GPTs fall short. It also shows how conversation starters can guide users to high-value use cases immediately, rather than leaving them staring at a blank input field.

Write For Me

  • Consistently top five globally across all GPT Store categories. This is strongest for blog posts, articles, and long-form content.
  • The build uses front-loaded conversation starters to narrow scope at the session level rather than baking rigid constraints into the instructions. That makes it flexible enough to serve thousands of different users without losing focus.

Data Analyst (by OpenAI)

  • Upload a CSV and receive charts, summaries, and insights without writing a single line of code. This is the clearest live demonstration of Code Interpreter used well.
  • This build demonstrates what the capabilities toggle actually unlocks in practice. Open it first if you want to convince a skeptical stakeholder.

Automation Consultant by Zapier

  • Describe a workflow problem in plain English and receive specific Zapier automation recommendations.
  • The business model pattern here is as instructive as the build pattern: a tool-native GPT that generates qualified leads by solving the exact problem its parent product addresses. This is worth studying if you’re thinking about GPTs as a distribution channel, not just a productivity tool.

Canva

  • Create and edit designs, presentations, and social graphics through conversation.
  • Beyond the practical utility, Canva’s GPT is worth studying as a forward-looking example of where the category is heading. It has evolved from a simple GPT integration to a full native ChatGPT app integration, showing what a mature tool-native deployment looks like when a brand commits to the channel properly.

Validate before you build

The biggest waste in GPT development is building something nobody needed badly enough to actually use. Before writing a single line of instructions, score your idea across four dimensions.

| Criteria | Low (1 point) | Medium (3 points) | High (5 points) |
| --- | --- | --- | --- |
| Frequency | Monthly or less | A few times/week | Multiple times daily |
| Time cost | Under 15 minutes | 15-45 minutes | 1+ hours each time |
| Consistency | Not critical | Moderate | Mission-critical |
| Context required | Generic info works | Some internal data | Deep internal knowledge |

Score interpretation:

  • 16-20 points: Build it this week.
  • 10-15 points: Worth a prototype.
  • Below 10: Skip it. The ROI math won’t justify adoption.
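To make the rubric concrete, here's a minimal Python sketch of the scoring math; the function name and the example inputs are illustrative, not part of any official framework:

```python
# Score a GPT idea on the four validation dimensions.
# Each dimension takes 1 (low), 3 (medium), or 5 (high) points.

DIMENSIONS = ("frequency", "time_cost", "consistency", "context_required")

def score_gpt_idea(**points: int) -> str:
    """Return the build recommendation for a candidate GPT."""
    if set(points) != set(DIMENSIONS) or any(p not in (1, 3, 5) for p in points.values()):
        raise ValueError(f"score each of {DIMENSIONS} as 1, 3, or 5")
    total = sum(points.values())
    if total >= 16:
        return f"{total}/20: build it this week"
    if total >= 10:
        return f"{total}/20: worth a prototype"
    return f"{total}/20: skip it"

# A multiple-times-daily, 15-45 minute, mission-critical task
# that needs some internal data:
print(score_gpt_idea(frequency=5, time_cost=3, consistency=5, context_required=3))
# 16/20: build it this week
```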

The math is simple. A 45-minute task done five times per week is 16 hours per month. Anthropic’s November 2025 productivity research found that the median AI-assisted task delivered an estimated 84% time savings, with most tasks falling somewhere in the 50-95% range.

Even at the conservative end of that range, a well-scoped GPT returns eight to 12 hours per person per month on that one task alone. The St. Louis Fed’s October 2025 survey research backs this up: One-third of workers who use AI tools daily report saving at least four hours every single week. Multiply either number across a team, and the ROI case writes itself.

Tip: Audit your team’s weekly standup notes or Slack threads from the last 30 days. Tasks mentioned repeatedly (especially ones people complain about) are your best GPT candidates. They’re already annoying enough to surface unprompted, which means adoption motivation already exists.

Build it right with the 6-layer framework

Every effective business GPT is built on six layers. Skip one, and the output feels half-baked. Add unnecessary complexity to one, and adoption drops.

Layer 1: Use case (one job. Full stop.)

This is the filter every other decision runs through.

❌ A general coding assistant. 

βœ… A code reviewer that checks React components against our team's style guide.

❌ A marketing helper. 

βœ… A campaign brief generator that outputs our standard five-section brief format from a single one-line input.

If you find yourself adding β€œand also it should…” more than twice during the build, you need two GPTs, not one bigger one.

This is why Marketing Research & Competitive Analysis works. It could easily have tried to write copy, plan campaigns, and do SEO analysis. Instead, it stays in its lane: research and competitive intelligence. That constraint is what makes the output reliable enough to use in real strategy meetings.

Layer 2: Instructions (your most important investment)

Most people underinvest here by an order of magnitude. Your system prompt isn’t a description of what the GPT does. It’s the operating system that controls how it thinks, behaves, and responds.

A weak system prompt produces generic, unreliable output. A strong one turns a blank ChatGPT into a domain expert.

Go straight to the Configure tab. ChatGPT’s conversational builder (the β€œCreate” tab) is fine for quick setup but gives you almost no control over formatting, behavior rules, or conditional logic. The Configure tab is where you actually build the thing.

If you’re already using ChatGPT for SEO workflows, you know how much the quality of your prompts determines the quality of the output. The same principle applies tenfold with system instructions. For a deeper dive on prompt construction for SEO specifically, check out our guide to ChatGPT for SEO.

Structure your instructions in this order:

  • Role definition: Who is this GPT? What’s its point of view? What does it know deeply?
  • Behavioral guidelines: What should it always do? What should it never do?
  • Output format: How should responses be structured? What’s the ideal length? Tables, bullets, prose?
  • Brand voice: What language does your brand use? What language is off-limits?
  • Escalation paths: When should it recommend a resource, a tool, or a human instead of answering?

One formatting trick that actually works: For rules that are truly non-negotiable, write them in ALL CAPS. It sounds aggressive in isolation, but it works. The model reads formatting signals. β€œNEVER recommend a competitor product” lands harder than β€œtry not to mention competitors.” Use it for your three to five most critical behavioral guardrails.

Examples:

❌ Write professional emails to clients. 

βœ… You are a B2B sales rep at a SaaS company. Tone: confident, concise, no buzzwords. NEVER use the word "synergy." Format: Subject line, three short paragraphs, clear single CTA. ALWAYS end with a specific next step, not a vague "let me know."

Budget 10-15 hours of system prompt iteration before you call a GPT production-ready. That’s not a typo. Test against normal cases, edge cases, and adversarial inputs β€” the kinds of things a skeptical user or an off-script question will throw at it.

Layer 3: Knowledge files (what makes it yours)

Without knowledge files, you’ve built a custom-named version of standard ChatGPT. The knowledge layer is what gives your GPT institutional memory: the brand voice, the internal frameworks, the context that doesn’t exist anywhere on the public internet.

What to upload:

  • Brand voice guides and style examples.
  • Internal process docs and frameworks.
  • Competitor positioning notes.
  • Product one-pagers and FAQs.
  • Past high-performing examples of the output you want.

File format matters. Plain text (.txt) and Markdown (.md) outperform PDFs for retrieval accuracy. Never dump a raw 500-page document. The model can’t efficiently parse messy formatting or irrelevant context.

The cheat sheet rule: If a source document is longer than 20 pages, use AI to distill it into a focused, five-to-10-page summary specifically for the GPT to reference. Shorter, curated context outperforms raw data dumps every time.

The transcript trick most teams miss: If your company has recorded webinars, training videos, or internal demos, those transcripts are ready-made knowledge files. Open the video on YouTube, click β€œShow transcript,” toggle off timestamps, copy the full text, paste into a Google Doc, and download as .txt. A 45-minute video becomes a high-quality knowledge source in about 10 minutes.
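If you'd rather script the cleanup, stripping timestamps is a small job. A minimal Python sketch, assuming timestamps sit on their own lines in M:SS or H:MM:SS form (adjust the pattern if your transcript format differs):

```python
import re

def clean_transcript(raw: str) -> str:
    """Remove timestamp-only lines and collapse the remaining text
    into a single block suitable for a .txt knowledge file."""
    kept = []
    for line in raw.splitlines():
        line = line.strip()
        # Skip lines that are only a timestamp, e.g. "0:42" or "1:03:17".
        if re.fullmatch(r"\d{1,2}:\d{2}(:\d{2})?", line):
            continue
        if line:
            kept.append(line)
    return " ".join(kept)

raw = """0:01
welcome to the webinar
0:05
today we cover knowledge files"""
print(clean_transcript(raw))
# welcome to the webinar today we cover knowledge files
```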

Layer 4: Capabilities (enable what you need. Nothing else.)

There are three built-in toggles: Web Browsing, Code Interpreter, and DALL-E. Don’t enable them all β€œjust in case.” Each one adds surface area for the model to go off-script.

| Capability | Enable when | Skip when |
| --- | --- | --- |
| Web Browsing | GPT needs live data: prices, news, current URLs | GPT should only draw from your uploaded knowledge files |
| Code Interpreter | Users will upload CSVs, run analysis, generate charts | GPT is purely text-based |
| DALL-E | GPT creates visual assets as part of the workflow | GPT is analytical or copy-focused |

Code Interpreter is the most underrated of the three. A GPT with it enabled can accept CSV uploads, run analysis, generate charts, and return downloadable files, replacing hours of manual reporting. If any part of your workflow involves structured data, this is worth experimenting with.

A note on web browsing: Web-enabled GPTs will confidently pull and present outdated or wrong information. If accuracy is important, disable web browsing entirely and rely only on your curated knowledge files. You control what’s in them. You can’t control what the web returns.

Layer 5: Actions (one integration for V1)

API connections to external systems β€” CRMs, project management tools, databases, calendars β€” are where GPTs start to feel like real automation infrastructure rather than fancy chat interfaces.

For V1, connect exactly one integration. Not five. Scope creep at the actions layer is where GPT projects stall before launch. Pick the single integration that would deliver the most immediate value, typically where the GPT’s output currently has to be manually copied somewhere else.
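Actions are defined with an OpenAPI schema in the Configure tab. Here is a minimal sketch of what a single-integration V1 might look like; the server URL, path, and field names are placeholders, not a real API:

```yaml
openapi: 3.1.0
info:
  title: Campaign Brief Sync        # one integration, one job
  version: 1.0.0
servers:
  - url: https://api.example.com    # placeholder host
paths:
  /briefs:
    post:
      operationId: createBrief      # the operation the GPT invokes
      summary: Save a finished campaign brief to the project tool
      requestBody:
        required: true
        content:
          application/json:
            schema:
              type: object
              required: [title, body]
              properties:
                title: { type: string }
                body:  { type: string }
      responses:
        "200":
          description: Brief created
```

Keeping V1 to a single operationId also keeps testing simple: every run either created a brief or it didn't.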

Layer 6: Evaluation (test before anyone else sees it)

Write five to 10 test questions before you share the link with anyone. Include normal cases, edge cases, and at least two adversarial inputs, the kinds of questions a frustrated user or an off-topic request would generate.

❌ Hello, what can you do? 

βœ… Here is a furious customer email accusing us of fraud. Draft a response using our de-escalation framework without admitting liability.

Test cases should reflect the hardest version of the job, not the easiest. If the GPT can handle the edge cases, the normal cases will be fine.
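Golden questions are worth keeping in a script so every instruction change gets re-checked. A minimal sketch; the cases and the simple substring checks are illustrative stand-ins (real evaluation usually needs a human reviewer or an LLM grader), and ask() is whatever function calls your GPT:

```python
# Each golden case pairs an input with checks the response must satisfy.
GOLDEN_CASES = [
    # (prompt, substrings the reply must contain, substrings it must not)
    ("Furious customer accuses us of fraud. Draft a reply.",
     ["escalate"], ["we admit"]),
    ("What's our refund window?",
     ["30 days"], []),
]

def run_evals(ask, cases=GOLDEN_CASES):
    """Run every golden case through ask() and return the failed prompts."""
    failures = []
    for prompt, must, must_not in cases:
        reply = ask(prompt).lower()
        ok = all(s in reply for s in must) and not any(s in reply for s in must_not)
        if not ok:
            failures.append(prompt)
    return failures

# Fake GPT for demonstration; swap in a real API call in practice.
def fake_ask(prompt: str) -> str:
    return "I'll escalate this to a human agent. Our refund window is 30 days."

print(run_evals(fake_ask))  # [] means every golden case passed
```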

The most common GPT mistakes (and exactly how to fix them)

| # | Mistake | Why it fails | The fix |
| --- | --- | --- | --- |
| 1 | Scope too broad | Tries to do everything, does nothing well | One GPT = one job. No exceptions. |
| 2 | No example outputs in instructions | GPT guesses your preferred format | Include one to two “golden” examples of ideal output directly in your system prompt |
| 3 | Raw document dumps | Model can’t parse 500-page PDFs reliably | Curate five to 10-page Markdown cheat sheets instead |
| 4 | No conversation starters | Users stare at a blank prompt field and close the tab | Add four specific starters that showcase different use cases immediately |
| 5 | No evaluation before launch | Edge cases surface publicly and erode trust | Write five to 10 test cases before sharing, including adversarial ones |
| 6 | Wrong capabilities enabled | Web Browsing introduces hallucination risk | Enable only what the workflow actually requires |
| 7 | Build and forget | Instructions go stale as your business evolves | Revisit instructions monthly, update knowledge files quarterly |

The department playbook: Highest-ROI opportunities by team

Start with the department that complains most about repetitive work. Their pain is your adoption fuel. A GPT that eliminates a universally hated task markets itself through word-of-mouth faster than anything you could announce in a Slack channel.

Marketing

Campaign copy assistant: Input one brief. Receive ad copy, email subjects, and social captions formatted by channel. Upload your brand guidelines as the knowledge file. This replaces 30-45 minutes of copy concepting per campaign.

Semrush integration opportunity: Feed in keyword data from Keyword Magic Tool to ensure copy is aligned with how your audience searches.

Competitor messaging analyzer: Paste competitor copy or a landing page URL. Get a structured summary of their positioning, the gaps they’re ignoring, and angles your brand can own.

Semrush integration opportunity: Pair with Traffic Analytics data to qualify which competitors are worth analyzing by actual share of voice.

If you want to skip the build and get competitive intelligence right now, Marketing Research & Competitive Analysis handles exactly this workflow out of the box. Drop in a competitor and get a structured SWOT, positioning gaps, and audience breakdown in a single conversation.

SEO

Content brief generator: This turns a keyword into a structured brief covering audience, search intent, recommended outline, and competitor content gaps. It replaces 30-45 minutes of manual brief writing per piece. At 20 briefs per month, that’s 10 to 15 hours returned to your team.

Semrush integration opportunity: Build the brief template around Semrush’s SEO Content Template output. The GPT populates the strategic rationale, Semrush provides the keyword and competitive data.

Technical SEO audit assistant: Paste a page’s content and meta information. Receive a prioritized fix list with title tag rewrites, internal link suggestions, and schema recommendations formatted exactly the way your team tracks them.

Semrush integration opportunity: Pull the audit inputs directly from Semrush’s Site Audit exports.

If you’re already using ChatGPT for SEO work, our collection of SEO prompts for ChatGPT is a good starting point for building the system instructions for either of these GPTs.

Sales

Prospect research brief: Input a company name. Receive a pre-call brief with recent company news, likely buying signals based on firmographic patterns, and tailored talk tracks for the likely objections.

A sales rep I worked with spent 20 minutes per prospect doing this manually before every cold call. The GPT produces the equivalent brief in 90 seconds. That means he spends his actual working hours on the only part that earns commission: the call itself.

Win/loss analyzer: Upload anonymized CRM deal notes. Surface patterns in why deals close or fall apart: which objection categories are fatal, which talk tracks correlate with wins, where in the funnel deals die.

Customer support

Ticket response drafter: Paste a customer ticket. Receive an on-brand draft response using your de-escalation framework. Rep reviews and sends in three minutes instead of 12. At 30 tickets per day, that’s 4.5 hours returned to a support rep’s day.

Policy Q&A bot: Upload your HR handbook or policy documentation. This will answer common employee questions instantly, reducing the repetitive Slack messages that eat 30-60 minutes from HR and ops leads per week.

Operations

OKR reviewer: Paste a team’s OKRs and get scores and rewrites. Are the objectives inspiring? Are key results actually measurable? Enforces rigor at scale without requiring a senior leader to manually review every team’s draft.

Meeting structurer: Input a topic and attendee list. Output a tight agenda with pre-reads, decision points, and follow-up templates. For organizations where meeting bloat is a recognized problem, this one tends to spread fast.

How to prevent your GPT from making things up

Hallucination (the model generating confident-sounding incorrect information) is the single most-cited concern from teams considering custom GPTs. It’s a manageable risk if you build correctly.

Add an explicit guardrail sentence in your instructions. Something like: β€œIf you do not know the answer from the provided knowledge files, say so directly. Do not invent information. Direct the user to [specific resource] instead.” Simple. Effective. Dramatically reduces the instinct to fill gaps with plausible-sounding fabrication.

Disable Web Browsing when accuracy matters. A web-enabled GPT will pull and confidently present outdated, incorrect, or hallucinated source material. If your GPT’s value depends on accuracy, including policy Q&A, compliance guidance, and product specs, turn off Web Browsing entirely and rely only on the knowledge files you’ve curated and can verify.

Test for it systematically before launch. Ask your GPT questions you already know the answers to. Ask it something outside its defined scope. Ask an edge-case question that isn’t covered by your knowledge files. If it confidently fabricates rather than saying β€œI don’t know,” fix the instructions before anyone else encounters it.

The tighter the scope, the lower the hallucination risk. This is another reason the one-job rule isn’t just about UX. It’s about accuracy. A GPT that knows it’s only supposed to answer questions about your return policy has far less surface area to go off-script than one configured as a general business assistant.

How to launch so your team actually adopts it

Building the GPT is half the job. The failure mode most teams hit isn’t a bad build. It’s a bad launch. A GPT nobody can find is a GPT nobody uses.

Phase 1: Build

Define your one-sentence purpose. Write layered instructions with examples. Upload focused knowledge files. Configure one API action maximum for V1. Resist the urge to expand scope.

Phase 2: Test

Create five to 10 golden test questions. Run a pilot with three to five real users. Don’t send them a link and walk away. Watch them use it, note where they stall, and iterate two to three rounds before wider release. The feedback from watching someone use your GPT for the first time is worth more than any amount of solo testing.

Phase 3: Launch

Write your GPT store or sharing copy around the outcome, not the technology. β€œSave 45 minutes on every content brief” outperforms β€œan AI-powered SEO assistant.” Add four conversation starters that showcase different use cases immediately. Users who see specific options to click engage at a significantly higher rate than those staring at a blank input field with no idea where to start.

Phase 4: Promote

Record a two-minute Loom showing a before/after on the specific task the GPT replaces. Share through your team Slack with that before/after story, not a feature list. Create a one-page β€œprompt pack” with the 10 highest-value starting prompts for your GPT.

The discoverability principle: Pin your GPT in the team Slack channel. Add it to onboarding docs. Demo it at the next all-hands. If someone can’t find it and understand what it does in five seconds, they won’t come back after the first session.

Measuring what actually matters

Tracking total conversations is the floor, not the ceiling. Here’s what actually tells you whether your GPT is working:

| Metric | What it tells you | Target |
| --- | --- | --- |
| Return rate | Once is curiosity. Twice is value. Weekly is a habit. | 50%+ returning after first use |
| Conversation depth | Turns per session; longer = higher utility | 4+ turns average for complex tasks |
| Time saved per use | Survey users or compare task completion times | 30-70% reduction vs. manual |
| Team adoption rate | % of target users engaging weekly | 60%+ within 30 days for internal GPTs |
| Downstream action rate | Are users taking the next step you wanted? | Defined per use case |

The ROI one-pager: Hours saved per use Γ— frequency per week Γ— team size Γ— average hourly cost = monthly dollar value. Build this at the 30-day mark. It’s the most powerful artifact you have for justifying continued investment, or making the case for the next GPT.
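The one-pager formula in code, assuming roughly 4.33 weeks per month; the numbers below reuse the content-brief example from earlier and are illustrative, not measured:

```python
def monthly_roi(hours_saved_per_use, uses_per_week, team_size, hourly_cost):
    """Monthly dollar value: hours saved x weekly frequency x team x rate."""
    return hours_saved_per_use * uses_per_week * 4.33 * team_size * hourly_cost

# 45 minutes saved per brief, 5 briefs a week, 4 writers, $60/hour:
print(f"${monthly_roi(0.75, 5, 4, 60):,.0f}/month")  # $3,897/month
```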

Where most B2B teams are right now

Organizations fall into one of five stages:

  1. Exploring: Team members use ChatGPT ad hoc. No shared GPTs exist.
  2. Experimenting: One or two people have built a custom GPT. Usage is informal and person-dependent.
  3. Standardizing: Three to five GPTs are deployed with proper instructions, knowledge files, and evaluation criteria. This is where shared value starts to compound.
  4. Scaling: GPTs are integrated into defined workflows across departments. Usage is tracked. Iteration is systematic.
  5. GPT-Native: GPTs are the default starting point for designing new workflows, not an afterthought.

Most B2B teams are at Level 1 or 2. The biggest ROI jump happens between Level 2 and Level 3. That’s the moment GPTs stop being personal productivity experiments and start becoming team infrastructure.

What separates useful GPTs from the rest

A custom GPT is a workflow infrastructure decision. It compounds over time when scoped correctly, and quietly disappears when it isn’t.

The teams getting real ROI from them aren’t building the most technically sophisticated versions. They’re building focused ones: scoped to one job, launched with enough intentionality that their team can actually find and use them, and iterated based on real usage data, not assumptions.

Start with the task your team complains about most. Score it against the framework. If it hits 12 or above, you have your answer.

Build it this week. Run it for 30 days. That’s when it gets interesting.

Ready to build your GPT? Start with a blueprint

The GPT Blueprint Generator on Thinklet walks you through the validation framework above, generates a custom system prompt for your specific use case, and outputs a ready-to-paste knowledge file, all in one session. It’s built specifically as the hands-on companion to this guide.

Or, if you want to see what a well-built GPT feels like before you commit to building one, start here:

Xbox Games Showcase 2026 will debut Gears of War: E-Day gameplay and bring back Xbox Fanfest β€” Here's when you can expect these events to go live

30 March 2026 at 18:34
Microsoft has announced when the Xbox Games Showcase 2026 will take place. In addition, it has announced that Gears of War: E-Day will be there to reveal its first gameplay footage and that the community-beloved Xbox FanFest will return at the Xbox Games Showcase 2026.

"A computer should be yours" β€” Framework founder compares the MacBook Neo and its underlying philosophy to the most upgradeable Windows laptop on the market

Framework founder Nirav Patel tears down the MacBook Neo and his company's Laptop 12 to see how they compare in terms of upgradeability. He admits there's a lot to love from both approaches, but the underlying product philosophies are quite different.

(PR) Avatar Legends: The Fighting Game Launching July 2

30 March 2026 at 18:23
Gameplay Group International and PM Studios, in collaboration with Paramount, Skydance and Avatar Studios, today announced a global publishing partnership for AVATAR LEGENDS: The Fighting Game. Developed together with Nickelodeon Animation Studios, the fast-paced 1v1 fighter is set to launch July 2 for $29.99 USD SRP on PlayStation 5, Xbox Series X|S, Nintendo Switch 2, Nintendo Switch and PC via Steam, with full cross-play support available at launch. The game's release date was unveiled live at the EVO Awards, marking a major milestone for the fighting game community. Players can pre-order now on Steam and wishlist the game on PlayStation 5 and Xbox Series X|S today.

Set within the iconic worlds of Avatar: The Last Airbender and The Legend of Korra, AVATAR LEGENDS: The Fighting Game delivers fluid combat, expressive bending abilities, and competitive gameplay designed to engage both seasoned fighting game players and newcomers alike. Players are challenged to master the elements across a roster of fan-favorite characters, combining strategic depth with fast-paced, accessible action.

(PR) Shawn Chang Appointed General Manager of ASUS North America

30 March 2026 at 18:12
ASUS today announced the appointment of Shawn Chang as the General Manager of its North America System Business Group. Chang, a veteran ASUS executive with more than two decades of experience across global markets, will now lead the business development and strategic growth initiatives across the United States and Canada.

Under Chang's leadership, ASUS North America will continue to accelerate its momentum across consumer, gaming, and commercial segments. The company remains committed to delivering the award-winning hardware and innovation that has made it a leader in consumer and gaming electronics and delivering the level of advanced security, durability and processing power needed for its commercial line of electronics.

Slimbook Refreshes Its Creative Laptop Series with RTX 5070 and Ryzen AI 9

30 March 2026 at 17:58
Spanish tech company Slimbook has refreshed its Creative laptop series, pairing an AMD Ryzen AI 9 365 with an NVIDIA RTX 5070 in a noticeably smaller chassis that weighs just 1.9 kg. The display is a 15.3-inch IPS panel at 2560 x 1600, 16:10 aspect ratio, 180 Hz refresh rate, 100% sRGB, and 400 nits brightness. The Ryzen AI 9 365 brings a dedicated NPU, while the RTX 5070 runs at up to 115 W TDP with an additional 45 W headroom via Dynamic Boost. A MUX switch lets you toggle between the discrete GPU, integrated Radeon 880M, or hybrid mode depending on what you need. Memory goes up to 128 GB of DDR5-5600 across two user-accessible SO-DIMM slots, and storage tops out at 8 TB via two PCIe 4.0 x4 M.2 slots.

Connectivity is generous: Thunderbolt 4, two USB-A 3.2 Gen 2 on the right, one more on the left alongside a USB-C 3.2 Gen 2, dual HDMI 2.1 outputs, a Mini DisplayPort 2.1a, gigabit Ethernet, and an SD card reader. Wi-Fi is 6E on Windows or 6 on Linux, with Bluetooth 5.2. The 99.9 Wh battery supports fast charging to 40% in 30 minutes. Slimbook offers the Creative with a choice of Linux, Windows, or both. The base configuration starts at €1,799 with 16 GB (2 x 8 GB) of DDR5 RAM and a 500 GB PCIe 4.0 NVMe SSD, with options to scale up memory, storage, and OS from there.

(PR) XGIMI Unveils TITAN Noir Series 4K Projectors

30 March 2026 at 17:53
XGIMI has announced that its flagship TITAN Noir series 4K projectorsβ€”first unveiled at CES 2026β€”are now officially available for pre-order, with early supporters eligible to save up to $3,200 off the retail price. As XGIMI's most advanced and powerful flagship home theater lineup to date, the TITAN Noir series is engineered with cutting-edge RGB triple-laser technology and a precision Dual Iris system, redefining "absolute black" in home cinema and setting a new benchmark for professional-grade at-home projection.

Crafted for home cinema enthusiasts and power users who refuse to compromise on visual excellence, the TITAN Noir series is purpose-built for dedicated home theaters and high-end living spacesβ€”where deep blacks, high brightness, and superior contrast are the cornerstones of an exceptional viewing experience.

(PR) Aurzen Unveils the Portable EAZZE D1 air Projector

30 March 2026 at 17:42
Aurzen today introduced the EAZZE D1 air smart projector, a portable cinema designed for people who want the experience of a smart TV without being tied to a single room. Built for effortless viewing, D1 air runs popular streaming apps like Netflix and YouTube out of the box, fully licensed and ready to playβ€”no extra sticks, no workarounds, and no complicated setup. Power it on, and your favorite content starts immediately, just like turning on a TV.

A recessed USB-C port with 65 W PD support allows D1 air to run from common laptop chargers or portable power banks, making it easy to move from room to room or take entertainment outdoors. The integrated 180-degree gimbal stand keeps cables neatly out of sight while allowing smooth transitions from wall to ceiling projection with a simple adjustment.

(PR) US PC Market Returned to 3% Growth in Q4 2025

30 March 2026 at 17:17
The latest research from Omdia shows that US PC shipments (excluding tablets) grew 3% year-on-year in Q4 2025 to 18.2 million units, reversing two consecutive quarters of annual decline. The return to growth was driven by a combination of the peak of Windows 11 commercial refreshes, holiday-season demand, and vendor efforts to secure inventory ahead of anticipated memory and storage supply constraints in 2026. Full-year 2025 shipments reached 71.5 million units, up 3% from 2024, but 2026 shipments are now forecast to decline 13% year-on-year due to highly constrained supply of memory and storage products.

"Q4 marked a meaningful inflection point for the US PC market," said Kieren Jessop, Research Manager at Omdia. "After two quarters of year-on-year decline, the market returned to growth driven by solid performances across both the consumer and commercial segments. Consumer shipments rose 6% to 8.2 million units - the fourth consecutive quarter of annual growth - underpinned by holiday spending and a product mix shift to more affordable price ranges. The commercial segment grew 4% as enterprises continued their Windows 11 migration, particularly in the final stretch before the Windows 10 end-of-support deadline in October."

Latest Windows 11 update is broken, refuses to install β€” Microsoft pulls latest update over missing files error

Another month, another broken Windows update β€” what's new? Well, this update was supposed to bring "production-quality" improvements, which means it's part of Microsoft's efforts to fix its AI enshittification. Ironically, the update won't even install for most users and has since been pulled with no workaround ready so far.

Asus ROG Strix Morph 96 Wireless Review: Cheaper, but not really?

30 March 2026 at 17:00
Asus' new ROG Strix Morph 96 Wireless keyboard is a well-built wireless mechanical gaming keyboard with great battery life and a compact 96-percent layout. It feels and sounds great and it doesn't rely on Asus' Armoury Crate, and it retails for just $140. But you can get its pricier older sibling for less, right now.

Neosmith AI – Replace your LLM with a custom SLM per agent, cheaper and faster


Neosmith trains a custom Small Language Model from your LLM interaction logs. The SLM handles 80–90% of agent tasks at 40–55% of the cost, and because it's trained on your workload, accuracy improves. One endpoint swap, no MLOps needed, with a free pilot until live in production.

Neosmith captures traces and outcomes to train runtime models that improve with use. Use the dashboard to deploy, version, monitor models, optimize cost and latency, and view end-to-end traces. An intelligent router, evaluation gates with policy enforcement, and auto-fallback keep quality high, while auto-reward tuning balances speed, cost, and accuracy.

CAMAudit – Audit CAM bills for errors and reclaim overcharges in minutes


CAMAudit audits commercial tenants' Common Area Maintenance reconciliation statements to uncover billing errors and help you recover overcharges. Upload your lease and CAM statement, and it uses OCR and clause analysis to pull key terms and map them, then runs 13 rule-based checks to verify math, pro rata shares, exclusions, caps, and fees. In minutes, you get findings plus a dispute letter draft. Scans are free, and you can unlock the full report for a $199 flat fee with no contingency.

⚑ Weekly Recap: Telecom Sleeper Cells, LLM Jailbreaks, Apple Forces U.K. Age Checks and More

Some weeks are loud. This one was quieter, but not in a good way. Long-running operations are finally hitting courtrooms, old attack methods are showing up in new places, and research stopped being theoretical right around the time defenders stopped paying attention. There's a bit of everything this week: persistence plays, legal wins, influence ops, and at least one thing that looks boring

3 SOC Process Fixes That Unlock Tier 1 Productivity

What is really slowing Tier 1 down: the threat itself or the process around it? In many SOCs, the biggest delays do not come from the threat alone. They come from fragmented workflows, manual triage steps, and limited visibility early in the investigation. Fixing those process gaps can help Tier 1 move faster, reduce unnecessary escalations, and improve how the entire SOC responds under pressure

How to build FAQs that power AI-driven local search

30 March 2026 at 17:00

There's no such thing as "too much information" in AI search. The more detail you provide, the less likely your business is to be replaced by third-party sources — or left out entirely.

With the rise of AI search, we know users want answers, and they want them fast. Google Maps has Know before you go and Ask Maps about this place (not to be confused with Ask Maps, the new conversational "AI Mode" in Google Maps), both AI features that let users easily find information about a place without visiting the business's website or social media.

Merchant Center added a new feature, Business Agent, that allows shoppers to chat with brands. Business Agent pulls from the business's product information and website to answer users' questions.

The best way sites can prepare for the continued rollout of features like this is to ensure FAQ content based on customer research (not just standard SEO research) is top of mind.

Why FAQs power answers in Google's AI features

Ask Maps about this place offers preloaded questions and lets users ask their own. If it can't answer, it responds, "There's not enough information about this place to answer your question, but you can try asking another question."

It's a basic Q&A feature right now, but we can reasonably expect it to become more conversational in the future. With the Q&A feature being deprecated on GBPs, this is the replacement. If there isn't information available for the AI to pull from, you're leaving users in the dark.

This doesn't mean you should have Q&As on every page or grab every People Also Ask question from an SEO tool and use it as-is. That approach isn't very strategic, and those questions likely just reflect search volume.

So what about the questions that don't have national search volume? Or the questions that are highly specific to a region or location and its considerations? Think Victorian homes or specific city insurance laws.

To craft an FAQ strategy that can provide helpful information to both AI features and people, you'll need two things:

  • Think outside the box of regular FAQs you'll see across all businesses and SEO tools.
  • Be consistent in how you answer these questions across platforms (website, social media, and third-party review sites like Yelp).

Dig deeper: Local SEO sprints: A 90-day plan for service businesses in 2026

Research the right questions

Most businesses write FAQs based on whatever a tool tells them customers want to know (which is usually based on national, not local, data). The best way to get started is by re-evaluating your FAQ content.

Where does it live? How many places are FAQs answered? Consider all the places your audience is and where they're likely to ask questions or engage with your content.

Look through:

  • Dedicated FAQ pages.
  • Service/Product pages.
  • About Us pages.
  • GBP Q&As.
  • Ask the community on Yelp.
  • Other third-party review sites.
  • Social media content.
  • Social media comments.
  • Customer service call logs.
  • Reviews.

You should also open up Google Maps and check whether there's an Ask Maps about this place feature on your own or your competitors' GBPs. Take note of the questions Ask Maps about this place recommends, and write down any that remain unanswered.

Dig deeper: If your local rankings are off, your map pin may be the reason

Social media

You can work with the client's social media team to ask which questions they receive most frequently. Social media managers will have the most insight into the types of questions they've answered in comments or DMs. If you can work with them and get this information, do it.

You can also just visit the client's social media accounts and review their content. You'll want to look for direct questions people are asking in the comments, and also think about the types of questions people might ask based on the content being posted.

NakedMD is a medspa chain across the U.S. that regularly posts content on TikTok. They posted a before-and-after video for lip injections.


One of the comments is someone asking if they also offer dissolving services, and if you visit their site and search for "dissolver," nothing pops up. NakedMD also didn't respond to the comment, but based on other people's TikToks about their experiences at NakedMD, they can dissolve filler.

Unfortunately, I only found out they dissolve filler from a negative TikTok review of their services. This is an opportunity to make sure they create content about this on the website and social media. It will allow NakedMD to control the narrative about dissolving filler instead of letting potential customers discover they've only done it when clients were unhappy with the results.

Another example of FAQ content from social media is posts that could leave users confused or make them want to know more. This TikTok asked staff to choose Xeomin or Dysport — that's it. All the staff members chose Xeomin, but there wasn't any follow-up on why. Content like this provides another opportunity to ensure these follow-up questions are answered.

Start with the client's social media accounts to find FAQ opportunities. Also, check out competitor social media accounts and general Reddit posts about your client's products or services.

Dig deeper: How to apply 'They Ask, You Answer' to SEO and AI visibility

Customer service call transcripts and reviews

Call transcripts and reviews are your direct line into how customers feel about a client:

  • With transcripts, you'll be able to read and hear the questions customers are asking.
  • With reviews, you get to read exactly what the people who feel strongly about your clients' services or products think.

Both of these datasets offer insights into customers' pain points and priorities. Use both the strengths and weaknesses identified from the transcripts and reviews to create FAQ content.

Let's say you've noticed reviewers often mention the words "emergency," "middle of the night," and "Sunday." Customers are happy that a home service provider is available for their emergencies, no matter the day or time. Make sure the site's content aligns with what users are saying. Maybe that means including "24/7 emergency service, 7 days a week" as an H2 on the homepage and using it as a selling point on service pages. If there was ever any question about your client's service hours, having it mentioned on those pages answers the question implicitly.

While that's a simple example, it's still an easy way to think about how you can use this data to answer potential questions without having to write in a literal FAQ format.

Google is pulling from your on-site content to feed AI-driven answers. While the FAQ format may be best for some questions, it isn’t the only format that will work.
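Where the literal FAQ format does fit, pairing it with FAQPage structured data makes the question-answer mapping explicit for crawlers. A minimal sketch in Python, assuming a hypothetical home-service business (the business, questions, and answers are invented examples — use the ones your own customer research surfaces):

```python
import json

# Hypothetical FAQPage structured data for an invented home-service business.
faq_schema = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "Do you offer 24/7 emergency service?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "Yes. We answer emergency calls 24 hours a day, "
                        "7 days a week, including Sundays and holidays.",
            },
        },
        {
            "@type": "Question",
            "name": "Do you work on Victorian homes?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "Yes. Our technicians are experienced with the older "
                        "wiring and plumbing common in Victorian-era houses.",
            },
        },
    ],
}

# Serialize for embedding in the page.
print(json.dumps(faq_schema, indent=2))
```

The serialized object would typically be dropped into a `<script type="application/ld+json">` tag on the page that carries the matching visible FAQ copy.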

Consistency across platforms

While reviewing existing FAQs, ensure consistency across platforms. If a client is answering a question one way on the website and another way on Yelp, how can someone tell what the real answer is? Inconsistent answers confuse people and LLMs.

As Jason Barnard recently wrote, AI platforms generate responses by sampling from a probability distribution that is influenced by the model's knowledge, its confidence in that knowledge, and the information retrieved at the time of the query.

When an AI system encounters the same information across multiple trusted sources, it becomes more confident in it. On the flip side, if it finds conflicting information or only discovers the answer in one location, its confidence diminishes.

Make sure to include an FAQ review process in your workflow. Regularly audit and flag information related to hours, pricing ranges, availability, and service offerings for frequent review. These areas tend to change the most rapidly, and having outdated information can significantly harm customer trust.

Dig deeper: The proximity paradox: Beating local SEO's distance bias

Just one piece of the AI readiness puzzle

While having an FAQ strategy in place isn't anything new, its importance and the approach have shifted. The rise of AI features like Ask Maps about this place has placed a stronger emphasis on structured, consistent, and explicit service, product, and pricing information.

Review FAQs wherever they may exist and audit for consistency across all digital touchpoints. This will help you prepare for the changes coming to Google Maps and Google Business Profile overall.

Crimson Desert receives Hotfix to boost Nvidia graphics quality

NVIDIA users received a graphics boost in Crimson Desert with the game's new DLSS and Ray Reconstruction Hotfix. Pearl Abyss has just released a new PC hotfix for Crimson Desert on Steam, giving Nvidia users an image quality boost. How? Improvements to Nvidia's DLSS and Ray Reconstruction technologies have boosted image quality when these features […]

The post Crimson Desert receives Hotfix to boost Nvidia graphics quality appeared first on OC3D.

Halo Campaign Evolved to feature all-new story missions

Halo Campaign Evolved is getting new content that the original lacked. It will be a while before gamers see an all-new Halo game. That said, a remake of the first Halo game is coming, featuring new missions/content for gamers to enjoy. With Halo Campaign Evolved, gamers will be able to play three new story missions […]

The post Halo Campaign Evolved to feature all-new story missions appeared first on OC3D.

The State of Secrets Sprawl 2026: 9 Takeaways for CISOs

Secrets sprawl isn't slowing down: in 2025, it accelerated faster than most security teams anticipated. GitGuardian's State of Secrets Sprawl 2026 report analyzed billions of commits across public GitHub and uncovered 29 million new hardcoded secrets in 2025 alone, a 34% increase year over year and the largest single-year jump ever recorded. This year's findings reveal three core trends: AI has

What the 'Global Spanish' problem means for AI search visibility

30 March 2026 at 16:00
The 'Global Spanish' problem in AI search and what it means for visibility

AI search often fails to identify which Spanish-speaking market it's serving. Instead, it blends regional terminology, legal frameworks, and commercial context into a single response, creating answers that don't map to any real market.

The result is answers that mix multiple countries into something no user can actually use. This is the "Global Spanish" problem.

How AI turns 'correct' Spanish into useless answers

Ask a chatbot in Spanish how to file your taxes — cómo puedo declarar impuestos — and watch what happens.

The response is grammatically perfect, well structured, and seemingly helpful. Then, in a single bullet point, it casually lists "RFC, NIF, SSN, según país" — Mexico's tax ID, Spain's tax ID, and America's Social Security Number — as if they were interchangeable items on a shopping list.

Chatbot response to "cómo puedo declarar impuestos" showing RFC/NIF/SSN mixed in a single answer

To be fair, it's improving — early models would confidently give you Mexico's SAT filing process when you were sitting in Madrid, no disclaimer attached. Now they hedge. But hedging by dumping three countries' tax systems into a single bullet point isn't localization. It's surrender dressed up as thoroughness.

The model still can't determine which Spanish-speaking market it's talking to, so it defaults to a vague, one-size-fits-none answer that serves no user well. It's the AI equivalent of a waiter asking a table of 20 people, "What will you all be having?" and writing down "Food."

If your AI answers a Mexican user with Spain's tax logic, you don't have a translation problem. You have a geo- and jurisdiction-inference problem. And in AI-mediated search, that inference is now the foundation on which everything else sits.

Traditional search had these same issues. Google has spent years building systems to handle regional intent, geotargeting, and language variants — and still doesn't get it right every time.

The difference is that generative AI removes the safety net. Instead of 10 blue links where users can self-correct, you get one synthesized answer. And that answer either lands in the right country or it doesn't.

Spanish isn't one market, it's 20+ — and 'neutral' is not neutral

Most Americans hear "Spanish" and imagine a language toggle. Hispanic markets don't work like that.

Spain and Latin America don't just differ in slang. They're distinct in what decides whether a page converts, whether a brand is trusted, and whether an answer is even legally usable.

For example, there are clear differences in the following:

  • Regulators (Hacienda vs. SAT).
  • Legal terms (NIF vs. RFC).
  • Currencies (EUR vs. MXN).
  • Formatting (period vs. comma decimals).
  • Tone and social distance (tú/vosotros vs. usted/ustedes — get it wrong and you're instantly an outsider).
  • Commercial norms (payment rails, installment culture, shipping expectations).
  • Search intent (the same query can map to different products or categories, depending on the country).

Every international SEO knows these differences matter — they affect everything from indexing to conversion. In generative search, they become decisive.

The model doesn't show 10 blue links and let the user decide. It collapses the SERP into a single synthesized answer and chooses what counts as authoritative. If your context signals are ambiguous, the model improvises. That's where "Global Spanish" is born.

Linguists have a name for this: "Digital Linguistic Bias" (Sesgo Lingüístico Digital), documented by Muñoz-Basols, Palomares Marín, and Moreno Fernández in Lengua y Sociedad.

Their research shows how the uneven distribution of Spanish varieties in training corpora produces chatbot responses that ignore specific dialectal varieties and sociocultural contexts. The bias is structural — baked into the training data itself.

Spain represents a minority of the world's Spanish speakers, yet it's often overrepresented in the digital corpora and institutional sources that shape what models "see" as default Spanish.

Meanwhile, many Latin American markets remain comparatively underrepresented in AI investment and data infrastructure. Latin America received only 1.12% of global AI investment despite contributing 6.6% of global GDP.

The result is predictable: The model's most confident Spanish tends to sound geographically specific — even when the user didn't ask for that geography. LLMs are trained on whatever web data is most available, and that data skews heavily toward certain geographies.

In practice, this means a well-written product page from a Mexican SaaS company competes for model attention against decades of accumulated Peninsular Spanish web content and often loses.

Marketers created "neutral Spanish" as an efficiency shortcut, and LLMs treat it as a standard — one that breaks down at scale.

How LLMs break Spanish: 3 failure modes that matter for SEO

The cultural blind spots cluster into three predictable failure modes, each with direct consequences for search performance, trust, and conversion.

1. Dialect defaulting: The most visible failure

When an LLM generates Spanish, it gravitates toward a default variant — usually Mexican for vocabulary, sometimes Peninsular for grammar. It doesn't announce the choice. It just picks one and presents it as "Spanish."

Will Saborio demonstrated this concretely in 2023. Testing GPT-3.5 and GPT-4 with regionally variable vocabulary — "straw" can be pajilla, popote, pitillo, or bombilla depending on the country — ChatGPT consistently defaulted to the most globally popular translation, typically Mexican Spanish.

Even after explicit context-setting prompts (asking for Colombian recipes first), the model couldn't be reliably localized.

A study evaluating nine LLMs across seven Spanish varieties confirmed the pattern at scale: Peninsular Spanish was the variant best identified by all models, while other varieties were frequently misclassified or collapsed into a generic register. GPT-4o was the only model capable of recognizing Spanish variability with reasonable consistency.

But dialect defaulting goes far beyond pronoun mismatch. It's vocabulary (coche/carro/auto), product categorization (zapatillas/tenis), idiomatic expressions, formality register, and the cultural assumptions embedded in every sentence.

A product page that sounds like it was written for Spain signals to a Mexican user that the content wasn't made for their market. In AI discovery, those signals compound. The model learns to associate your content with "outsider" markers and may select other sources for the answer.

(A nuance worth noting: This isn't always binary. A Mexican luxury brand might deliberately use tú in certain contexts. The point isn't rigid rules — it's that the model should make intentional choices, not default ones.)

"The dialect defaulting problem" — diagram showing how one word maps to five different terms across Spain, Mexico, Argentina, Colombia, and Chile, with LLMs defaulting to one variant

2. Format contamination: The silent conversion killer

This one is invisible and arguably more dangerous. It's not about words, it's about numbers.

A documented issue in the Unicode ICU4X ecosystem illustrates the problem: Mexican Spanish (es-MX) uses a period as the decimal separator (1,234.56), but if a system lacks specific es-MX locale data and falls back to generic "es," it applies European formatting (1.234,56).

The number 1.250 could mean one thousand two hundred fifty or one-point-two-five-zero, depending on which locale the system defaults to.
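The fallback behavior can be sketched in a few lines. This is a simplified stand-in for real CLDR locale data, not the actual ICU4X logic; the separator table and locale handling are illustrative only:

```python
def format_amount(value: float, locale: str) -> str:
    """Format a number with locale-specific grouping and decimal separators.

    Simplified (grouping, decimal) pairs: es-MX uses US-style period
    decimals, while generic "es" falls back to European conventions.
    """
    separators = {"es-MX": (",", "."), "es": (".", ",")}
    # Any locale without its own entry silently falls back to generic "es".
    group, dec = separators.get(locale, separators["es"])
    whole, frac = f"{value:,.2f}".split(".")
    return whole.replace(",", group) + dec + frac

print(format_amount(1234.56, "es-MX"))  # 1,234.56
print(format_amount(1234.56, "es"))     # 1.234,56
print(format_amount(1234.56, "es-CL"))  # no es-CL entry: falls back to 1.234,56
```

The last call is the failure mode in miniature: a locale with no data of its own silently inherits the generic "es" conventions, which is exactly the class of default that turns one market's 1,234.56 into another's 1.234,56.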

If you've ever shipped a pricing page with the wrong currency symbol, you know the damage. (I have. It was a Black Friday landing page showing €49,99 to Mexican users who expected $49.99. Support tickets spiked before anyone in the office noticed.)

Now multiply that by AI summaries and assistants. The wrong market default propagates into product answers, generative search snippets, customer support scripts, and "recommended pricing" explanations.

3. Legal and regulatory hallucination: Where it gets dangerous

This is where "Global Spanish" becomes genuinely harmful. If you're producing content in regulated verticals (e.g., finance, health, legal, insurance), it's the kind of error that erodes the E-E-A-T signals that Google relies on.

Spain operates under the EU's GDPR and its national LOPDGDD. Argentina has its Habeas Data law. Colombia has its own framework. Chile is updating its personal data legislation.

Mexico has its own federal privacy law, and as of March 2025, functions previously handled by the INAI have been transferred to the Secretaría Anticorrupción y Buen Gobierno.

An LLM that treats "Spanish-speaking" as a single legal context might answer a privacy question from Madrid by citing Mexican regulators, or advise a Colombian business on using Spanish consumer protection law. The output reads confidently — but it's legally fictional.

In YMYL verticals, this creates legal risk and may result in your content being excluded from AI-generated answers.

Geo-identification failures: When AI gets the country wrong, it gets the Spanish wrong

International SEO used to be a routing problem: Make sure Google shows the right URL. In AI-mediated discovery, the failure shifts upstream. If the system misidentifies geography, it retrieves the wrong market context. "Spanish" then becomes a coin toss between Spain's defaults and Latin America's realities.

Motoko Hunt describes it as "geo-drift" — when a global page replaces a region-specific page in AI-generated answers. AI systems treat language as a proxy for geography, so a Spanish query could represent Mexico, Colombia, or Spain, and without explicit signals, the model lumps them together.

Hunt introduced the concept of "geo-legibility" — making your content's geographic boundaries interpretable during traditional indexing and AI synthesis.

Her critical finding, echoed by practitioners across the industry: hreflang — already one of the most complex and fragile signals in traditional SEO, where it was always advisory rather than deterministic — appears even less influential in AI synthesis.

LLMs don't actively interpret hreflang during response generation. They ground responses based on semantic relevance and authority signals.

Language match without market match

One example from her analysis makes the Spanish problem concrete. International SEO consultant Blas Giffuni typed "proveedores de químicos industriales" (industrial chemical suppliers) into a generative search engine.

Rather than surfacing Mexican suppliers, it presented a translated list from the U.S. — companies that either didn't operate in Mexico or didn't meet local safety and business requirements. The AI performed the linguistic task (translating) while completely failing the informational task (finding relevant local suppliers). That's geo-drift in action: language match without market match.

The scale of the problem

Even within a single country, 78% of U.S. markets receive the same AI-generated recommendation list, regardless of local economic context, per Daniel Martin's analysis of 773 queries across 50 markets.

If this cookie-cutter pattern exists within English across U.S. cities, imagine the scale across 20+ Spanish-speaking countries with distinct legal systems, currencies, and cultural norms.

Semantic collapse: When localized versions disappear

Gianluca Fiorelli calls the endgame "semantic collapse" — the point where localized content versions become indistinguishable to AI retrieval systems, and the strongest version (usually English or U.S.-centric) absorbs the rest.

His framework maps three ways this plays out:

  • The AI retrieves from the wrong market.
  • It translates U.S. content into Spanish rather than using native sources.
  • It serves legal advice from one jurisdiction in another.

All three are happening in Hispanic markets right now.

The concept resonates beyond SEO. The NeurIPS presentation "Artificial Hivemind: The Open-Ended Homogeneity of Language Models (and Beyond)" documents a broader pattern of output homogeneity: open-ended LLM responses are collapsing into the same narrow set of answers across major models — different labs, different training pipelines, same outputs.

If output diversity is shrinking globally, the prospects for preserving regional diversity in Spanish-language answers are sobering.

Why this matters now

These problems existed before AI Overviews. But the expansion of AI-generated search to Spanish-speaking markets is amplifying them at scale.

Google's AI Overviews have expanded to Spain, Mexico, and multiple Latin American countries. The same Spanish-language AI summary can be served across geographies. If it was generated from "generic Spanish" content, it may carry dialect assumptions, formatting conventions, and regulatory references that are incorrect for the user receiving it.

The crawl gap

Log file analysis by Pieter Serraris revealed a compounding factor: OpenAI's indexing bots visit English-language pages significantly more frequently than non-English variants on multilingual sites.

Even when a site has properly localized Spanish content, the AI training pipeline may be systematically undersampling it, reinforcing the English-centric bias at the data ingestion level.

The tokenization tax

The Spanish word desarrollador requires four tokens while the English word "developer" needs just one, according to analysis by Sngular. A typical technical paragraph in Spanish consumes roughly 59% more tokens than the same content in English — higher API costs, reduced context windows, and degraded output quality.

This systemic cost compounds across every interaction, creating an economic bias against non-English content.
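As a back-of-the-envelope illustration of that compounding, here is the arithmetic with a placeholder per-token price and context budget (both invented; the 59% overhead is the Sngular figure cited above):

```python
# Hypothetical cost comparison assuming Spanish needs ~59% more tokens
# than the equivalent English content.
english_tokens = 1_000
spanish_tokens = int(english_tokens * 1.59)  # 1590 tokens for the same content

price_per_1k_tokens = 0.01  # placeholder $/1K tokens, not a real price
english_cost = english_tokens / 1_000 * price_per_1k_tokens
spanish_cost = spanish_tokens / 1_000 * price_per_1k_tokens

context_window = 8_000  # placeholder context budget in tokens
print(f"English: {english_tokens} tokens, ${english_cost:.4f}")
print(f"Spanish: {spanish_tokens} tokens, ${spanish_cost:.4f}")
print(f"Passages that fit: {context_window // english_tokens} EN "
      f"vs {context_window // spanish_tokens} ES")
```

The same budget that holds eight English passages holds only five Spanish ones, and the per-request cost rises proportionally — small per interaction, but applied to every retrieval and generation step.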

The self-reinforcing loop

The combined effect is predictable and vicious — the most-resourced market version (typically U.S. English) accumulates the strongest authority signals, gets retrieved more often, and progressively absorbs the localized versions. Spanish pages receive fewer retrieval opportunities, weaker engagement signals, and eventually become invisible to the AI.

The SEO shift: From ranking pages to shaping entity perception

We've entered a visibility model where being retrievable isn't the same as being selected.

In generative search, what matters is whether the system sees you as authoritative for that context. The margin for error has collapsed. You're competing to be included in a single synthesized answer.

A single Spanish site often underperforms because it doesn't clearly signal a specific market. Generic Spanish signals low confidence, and models avoid it.

The next step is making that context explicit — so it's clear where your content belongs.

Sony suspends SD and CFexpress card orders due to memory shortage

Sony halts CFexpress and SD memory card orders in Japan over global memory shortages. Sony has apologised to its customers in Japan, confirming that it has temporarily suspended orders for several of its CFexpress and SD memory cards. The company has stated that supply will not meet demand for the foreseeable future. As such, the […]

The post Sony suspends SD and CFexpress card orders due to memory shortage appeared first on OC3D.

Russian CTRL Toolkit Delivered via Malicious LNK Files Hijacks RDP via FRP Tunnels

Cybersecurity researchers have discovered a Russian-origin remote access toolkit that's distributed via malicious Windows shortcut (LNK) files disguised as private key folders. The CTRL toolkit, according to Censys, is custom-built using .NET and includes various executables to facilitate credential phishing, keylogging, Remote Desktop Protocol (RDP) hijacking, and reverse tunneling

Xbox 360 devkit bought for $5 at car boot sale came with 2007 beta build of GTA IV with unreleased assets — Version includes cut ferry system, zombies, and more

A pre-release build of GTA IV has been discovered on an Xbox 360 devkit dating back to 2007, just a few months before the game's official release the following year. The build includes various unused assets that were altered or removed for the final release, such as the iconic ferry system. People have even found zombie models for a mini game, alongside NPCs, vehicles, and much more.

Thunder Compute – Rent on-demand GPU instances for AI and machine learning workflows


Thunder Compute provides on-demand dedicated GPU instances with options like RTX A6000, A100 80GB, and H100 at prices far below major clouds. You can customize vCPU, RAM, and storage, then launch in seconds from VS Code, CLI, or the web console. Switch or add GPUs, expand disks, and take snapshots as your workflow changes. Prebuilt templates like Ollama and ComfyUI help you prototype quickly and scale to production with 7–10 Gbps networking.

View startup

True Profit Calculator – See what you actually keep after fees and tax reserve


True Profit Calculator helps sellers understand their actual profit after all deductions. Many sellers underestimate how fees, payment processors, and taxes reduce their margins. This tool calculates true profit after product cost, platform fees (like Etsy, Shopify), payment processor fees, and federal, state, and local taxes. It gives sellers a clearer picture of earnings per sale or what they might earn when setting prices. Currently seeking beta testers and feedback from online sellers.

View startup

Nvidia DLSS 4.5 Dynamic and 6x Frame Generation are launching this week – Zotac confirms

DLSS 4.5 Dynamic and 6x Frame Generation are launching this week. Nvidia's new DLSS 4.5 Dynamic and 6x Frame Generation features will become available to RTX 50 series GPU owners on March 31st. This support will arrive through DLSS Overrides as part of an opt-in Nvidia App beta update. DLSS 6x Frame Generation increases Nvidia's […]

The post Nvidia DLSS 4.5 Dynamic and 6x Frame Generation are launching this week – Zotac confirms appeared first on OC3D.

NVIDIA Readies Rubin-based GeForce RTX 60-series with Massive RT Performance Gains

30 March 2026 at 13:02
The rumor mill has started grinding about NVIDIA's next-generation gaming GPU, and it looks like NVIDIA does not want to allow the current market environment to get in the way of implementing its product roadmap, with a roughly 2-year GeForce generation product launch cadence. The next-generation GeForce RTX 60-series will be powered by the "Rubin" graphics architecture. "Rubin" already debuted on NVIDIA's bread-winning AI GPU series, and is making its way to GeForce, reports RedGaming Tech.

The first slice of rumors about the GeForce RTX 60-series predicts that NVIDIA will stick to a more conservative approach with foundry nodes, and not go with a sub-2 nm nanosheet-based node. GeForce "Rubin" will be built on some variant of the current TSMC 3 nm FinFET node. It need not be the same N3 node that's in use by Apple, Intel, and others; NVIDIA might collaborate with TSMC on an exclusive variant, just the way it created the NVIDIA 4N node derived from TSMC N5. Chips in the series will follow the numbering scheme "GR20x," with "GR202" being the biggest part, powering the flagship product. The 3 nm node will allow NVIDIA to maintain GPU clock speeds ranging from the high-2 GHz to low-3 GHz range, a minor increase over the current "Blackwell."

(PR) QNAP and CyberLink Extend Partnership to Optimize Media Creation with Reliable Storage Solutions

30 March 2026 at 12:13
QNAP Systems, Inc. (QNAP), a leading innovator in computing, networking, and storage solutions, today announced an expanded partnership with CyberLink Corp., a global leader in AI and multimedia software. Through this collaboration, QNAP has been officially selected as a recommended storage partner for CyberLink's PhotoDirector and PowerDirector workflows.

CyberLink's PhotoDirector, PowerDirector, and PowerDVD are trusted by creators worldwide for their AI-driven editing technologies. When paired with QNAP NAS, creators gain a private cloud storage solution that allows them to centrally store, manage, and protect their entire media libraryβ€”without being locked into recurring cloud subscriptions. This seamless combination enables creators to edit freely while retaining full ownership and control of their content and files.

(PR) Next-Level Performance Starts with MSI's MEG X870E UNIFY-X MAX motherboard and AMD Ryzen 9 9950X3D2 Dual Edition Processor

30 March 2026 at 12:11
MSI is excited to introduce full support for the upcoming AMD Ryzen 9 9950X3D2 Dual Edition processor, built on cutting-edge "Zen 5" architecture and powered by next-generation AMD 3D V-Cache technology. Designed for MSI's AM5 motherboard lineup, this 16-core, 32-thread powerhouse boasts a massive 192 MB L3 cache, up to 5.6 GHz boost clocks, and a 200 W TDP. Whether for intense gaming or heavy content creation, it delivers exceptional speed and efficiency, pushing performance to new heights.

MSI also introduces its new flagship MEG X870E UNIFY-X MAX, built for uncompromising performance. Designed with a specialized 2-DIMM layout and OC Engine, it caters to extreme overclocking enthusiasts seeking maximum potential. A powerful 18+2+1 power phase design (110 A SPS), combined with an advanced cooling solution featuring a Direct Touch Cross Heat-pipe and double-sided EZ M.2 Shield Frozr II, ensures stability under the most demanding conditions. The exclusive Tuning Controller simplifies advanced tuning and CMOS reset, making overclocking accessible even for those new to extreme performance tuning. Fully optimized for AMD Ryzen 9000 Series processors with AMD 3D V-Cache technology, and equipped with Full-Speed Wi-Fi 7 and 5G LAN, it delivers blazing-fast, low-latency connectivity.

Zibby – Capture your life and share a lasting, interactive legacy


Zibby is an iOS app that helps you capture your life story and create a shareable legacy on your terms. Many want to preserve their legacy but feel stuck due to lack of time, not knowing where to start, or feeling overwhelmed by memories. Zibby becomes a partner that learns how you think, what matters to you, and helps overcome what’s been holding you back. It integrates across iOS, allowing you to collect memories from group chats, social apps, photos, videos, places you've been, and more. Motivational prompts build a consistent routine, while smart organization turns entries into an interactive journal for family and friends to explore.

View startup

Kinship – Find verified jobs with fit scores, ghost flags, and company signals


Kinship pulls fresh roles from 1,700+ company career pages, filters out about 30% as ghost listings, and scores each one from 0 to 100 based on your work style and energy map. Every match includes a personalized explanation, skills analysis, company research, and AI coaching. You only see roles worth applying to. Free during beta.

View startup

Three China-Linked Clusters Target Southeast Asian Government in 2025 Cyber Campaign

Three threat activity clusters aligned with China have targeted a government organization in Southeast Asia as part of what has been described as a "complex and well-resourced operation." The campaigns have led to the deployment of various malware families, including HIUPAN (aka USBFect, MISTCLOAK, or U2DiskWatch), PUBLOAD, EggStremeFuel (aka RawCookie), EggStremeLoader (aka Gorem RAT), MASOL

Unbiased Ventures – Score pitch decks fast with deterministic, benchmarked analysis


Unbiased Ventures delivers deterministic evaluation of startup pitch decks so investors can make evidence-backed decisions. Its DeckAnalyst scores every deck across seven dimensions, verifies claims with human-in-the-loop review, detects AI-generated content, and benchmarks results against 3,000+ competition-winning decks. The system classifies industry and stage, calibrates weights, and provides audit trails and confidence intervals, helping you compare rivals, spot risks in team and governance, and prioritize due diligence.

View startup

Samsung Readies PCIe 5.0 QLC SSD with a Custom RISC-V Controller

30 March 2026 at 10:00
Samsung has developed a new SSD controller based on the open-source RISC-V instruction set, moving away from the Arm ISA in some of its SSD controllers. With the introduction of the BM9K1 PCIe 5.0 QLC NAND SSD, Samsung has officially created proprietary RISC-V IP that will serve as a foundation for many of the SSDs the company plans to release. Announced at the China Flash Market Summit 2026, the BM9K1 has been showcased with just one metric: sequential read speed. At a maximum of 11.4 GB/s, Samsung has reached impressive speeds for QLC NAND Flash. While the sequential write speed is unknown, it is expected to be around 10 GB/s, varying slightly depending on Samsung's design. Typically, high-performance SSDs use TLC NAND, as seen in Samsung's own 9100 Pro, which we reviewed: it pairs 236-layer 3D TLC V-NAND V8 with a proprietary Samsung Presto 5 nm controller running on Arm-based cores. With the BM9K1, however, Samsung might transition a significant portion of its SSD lineup to a RISC-V based design, offering satisfactory performance with QLC NAND.

Interestingly, Samsung has designed this SSD with considerations for size, power, and AI. For instance, the BM9K1 PCIe 5.0 SSD replaces the previous BM9C1 PCIe 4.0 SSD controller. Although both use QLC NAND, the newer BM9K1 features a new RISC-V controller and a fresh PCIe 5.0 interface, doubling the performance in sequential reads on average. The power efficiency of the new RISC-V design is also improved. Samsung claims a 23% increase in power efficiency, thanks to the flexibility of RISC-V, which allows for greater customization and optimization of the controller firmware to match I/O patterns with the QLC NAND, resulting in nearly a quarter less power consumption. This improvement is expected to have a significant impact on small form factor PCs and client laptops. The main drawback of the design is the use of QLC NAND, but Samsung may introduce TLC NAND SSDs with RISC-V controllers in the future. For now, these remain on the 5 nm Presto Arm-powered controllers.

FontCraft – Turn your handwriting into a real, downloadable font in minutes


FontCraft turns your handwriting into a real, installable font in your browser or on iPad. Draw each character with Apple Pencil, stylus, or mouse, then refine spacing and alignment with live previews as your font takes shape in real time. Export TTF, WOFF2, and OTF for use in desktop apps, websites, and print. Create ligatures and kerning pairs, sync projects in the cloud, and download when ready. Start free, then upgrade for full character sets and commercial licensing.

View startup

IntelCue – AI-powered competitive and market intelligence tracking everything


IntelCue is an AI-powered competitive and market intelligence platform that continuously monitors newsletters, blogs, LinkedIn profiles, news feeds, websites, patents, SEC filings, and more. It detects trending topics, surfaces competitive moves, extracts keywords, and delivers weekly briefings and alerts. Use it directly inside Claude and ChatGPT via the Model Context Protocol to ask questions and get live, sourced answers. Connect your sources, let the AI analyze them, and receive concise insights and content ideas that help you act first.

View startup

AstroSeek – Create your free AI birth chart with insights from a pro astrologer


AstroSeek offers a free birth chart generator that combines AI trained on over 9,000 charts with guidance from an astrologer with 18 years of experience. It calculates precise planetary positions, houses, and aspects, then provides clear personality insights and past patterns with no sign-up required. You can upgrade to unlock deeper career, relationship, health, and transit analysis, plus email Q&A and forecasts.

View startup

Nyle & Moon – Align your wellness routine with real-time astronomy & NASA planetary data


Nyle & Moon grounds self-discovery in true-position astronomy. It integrates JPL DE441 ephemeris data and compensates for Earth’s axial tilt to calculate your exact celestial alignment with mathematical certainty. The platform offers a personalized daily ritual routine for symbolic reflection, a chant tuned to your natal lunar house to help sync your nervous system, and a lunar food guide that adapts to current planetary dietetics. Use precise space data to align daily routines while an intuitive layer guides reflection and action.
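Nyle & Moon's exact pipeline isn't public, but the "compensates for Earth's axial tilt" step it describes corresponds to the standard equatorial-to-ecliptic coordinate transformation. A minimal sketch of that transformation, assuming a mean obliquity of about 23.44° (the function name and constant are illustrative, not from the product):

```python
import math

def equatorial_to_ecliptic(ra_deg, dec_deg, obliquity_deg=23.4367):
    """Convert equatorial coordinates (right ascension, declination) to
    ecliptic coordinates (longitude, latitude), compensating for Earth's
    axial tilt (the obliquity of the ecliptic)."""
    ra, dec, eps = map(math.radians, (ra_deg, dec_deg, obliquity_deg))
    # Ecliptic latitude from the standard spherical-rotation formula.
    sin_beta = (math.sin(dec) * math.cos(eps)
                - math.cos(dec) * math.sin(eps) * math.sin(ra))
    beta = math.asin(sin_beta)
    # atan2 keeps the longitude in the correct quadrant.
    lon = math.atan2(
        math.sin(ra) * math.cos(eps) + math.tan(dec) * math.sin(eps),
        math.cos(ra),
    )
    return math.degrees(lon) % 360.0, math.degrees(beta)

# A point at the vernal equinox lies on the ecliptic itself,
# so both the longitude and the latitude come out zero.
print(equatorial_to_ecliptic(0.0, 0.0))  # (0.0, 0.0)
```

A production system like the one described would feed this step with positions from a full ephemeris such as JPL DE441 rather than hand-entered coordinates, but the tilt compensation itself reduces to this rotation by the obliquity angle.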

View startup

I just saw United Airlines' big plans for the future and, yes, it wants to fly like Apple

After testing United’s latest upgrades, from Starlink Wi-Fi to its new β€œElevated” cabins, a clearer strategy emerges: a push toward a more connected, consistent airline experience that increasingly echoes Apple’s ecosystem approach ahead of its 100th year.

darwintIQ – Discover which trading models work best in evolving markets


darwintIQ is a quantitative trading research platform that analyzes evolving trading models across multiple markets. Instead of evaluating a single fixed strategy, the platform continuously ranks many model variants on recent market data, helping traders explore which approaches currently perform best under changing market conditions. Insights can be integrated into custom workflows, bots, or MetaTrader via API.

View startup

SeedDance – AI video generation platform supporting text-to-video and image-to-video


SeedDance is an AI video generation platform supporting text-to-video and image-to-video with multiple AI models including Veo, Seedance, Kling, Sora, Wan, and more. Describe any scene, character, or story in natural language — SeedDance will transform it into a cinematic video with synchronized audio, physics-accurate motion, and stunning visual fidelity. Upload a photo, illustration, or product shot and bring it to life with realistic motion, camera movement, and native audio. Maintain character consistency across every frame. SeedDance is designed for everyone — from professional filmmakers to first-time creators.

View startup
