
Intel Stock Surges to All-Time High on Foundry Revival and Strong CPU Demand

Intel's stock is one of the best-performing semiconductor names of 2026, with the company's share price reaching a new all-time high of $94.10 at the time of writing. This is remarkable, considering that about a year ago Intel's stock hit a decade low of $17.67 per share. That marks growth of more than 400% in a single year for a company that is one of the most strategically important players in the United States' push for sovereign semiconductor manufacturing. Intel's rise began with investment from the United States government, aimed at supporting the only company left in the U.S. conducting R&D and advanced silicon manufacturing. Since then, Intel has been on an upward trajectory, and the share price shows no signs of slowing down.

Contributing to this success is the revival of Intel's Foundry business, which is on track to attract many external customers. Intel Foundry recently achieved a significant milestone by improving yields across all major foundry nodes currently in high-volume manufacturing. This includes the Intel 4, Intel 3, and 18A nodes, which power the majority of Intel's product portfolio. In the latest Q1 2026 earnings call, Intel CFO David Zinsner noted that the company continues to improve yields on its older nodes, such as Intel 4 and Intel 3, while refining the yield of the current top-performing 18A node to reduce waste and increase the number of functional chips, even in larger dies. Additionally, Tesla signed on as Intel's first major 14A customer for Elon Musk's Terafab AI chip complex in Austin, indicating that the foundry's success in attracting external clients is just beginning.

Bazzite Staggers Fedora 44 Roll-Out: Desktops First, Handheld Users To Wait

Following the official release of Fedora 44, Bazzite's lead developer Kyle Gospodnetich has announced the release of Bazzite 44, the atomic, gaming-focused distribution based on Fedora, which brings GNOME 50 and KDE Plasma 6.6 as well as many of the same improvements as the workstation OS. However, there are some notable differences for the atomic distribution; namely, the Linux kernel is still on version 6.19.x. Other notable changes and updates in Bazzite 44 include an updated KDE Plasma login manager for the KDE versions, a new version of the Bazaar app store, improved image security for ISOs, built-in support for Elgato 4K capture cards, and the removal of QEMU and ROCm. Bazzite 44 also includes the latest ASUS Linux patches for asusctl, which provides access to LED customization, fan control, and various BIOS, boot, and power settings.

While the desktop images of Bazzite 44 are already available, the handheld versions have been delayed, with the developer stating that "we are slow-rolling this update due to the nature and amount of changes present in it to ensure that the vast majority of our existing users have a good experience." He also indicates that there will be more news about Bazzite 44's handheld images "in the coming weeks," so it seems the delay will be more than just a few days. While this may be somewhat disappointing to hear, it also means that the developers will have more time to test and validate Bazzite 44 ahead of release, hopefully delivering a more stable OS as a result. Bazzite's development team has also promised to ship the VRAM management patch that recently made waves in the Linux gaming world once the distribution moves to kernel version 7.

(PR) SEMI Reports Worldwide Silicon Wafer Shipments Increase 13% Year-on-Year in Q1 2026

The SEMI Silicon Manufacturers Group (SMG) reported today, in its quarterly analysis of the silicon wafer industry, that worldwide silicon wafer shipments increased 13.1% year-on-year to 3,275 million square inches (MSI) from the 2,896 MSI recorded during the same quarter of 2025. Sequentially, shipments declined 4.7% quarter-over-quarter from the 3,437 MSI recorded during the fourth quarter of 2025, in line with typical seasonality.

"Silicon wafer demand related to AI data centers continues to be strong, including advanced logic and memory, and also now extending to power management devices," said Ginji Yada, Chairman of SEMI SMG and Managing Executive Officer, General Manager, Sales and Marketing Division at SUMCO Corporation. "Overall, silicon wafer demand has improved, but the recovery is not uniform. Many device companies have noted improvements in the industrial semiconductor segment, and this is creating a more broad-based recovery as wafer inventory is absorbed. Weaker smartphone and PC shipments in the first quarter of this year may show the impact of tighter supply of memory due to AI high bandwidth memory (HBM) allocation decisions."

Nacon's Collapse Claims First Casualty as Greedfall Makers Spiders Shut Down, and Its Other Insolvent Studios Could Be Next

Three distinct gaming characters from Spiders Studio games are shown, with the text 'SPIDERS' superimposed across them.

Yesterday, we reported on a claim that Spiders, the studio behind the Greedfall series and Steelrising, would be shutting down "soon" after it filed for insolvency last month. The shutdown was allegedly a result of its parent company Nacon being unable to find a buyer for the studio after the insolvency filing. Today, Spiders finally spoke out via its official X (formerly Twitter) account, and while it didn't confirm anything about the failed sale, it did confirm that yes, "Spiders is being liquidated." The studio's statement began with an apology for the team's silence over the last couple of months, […]

Read full article at https://wccftech.com/greedfall-makers-spiders-confirms-shut-down-nacon/

Directive 8020 Leans Hard Into The Thing: Supermassive Promises Eyes, Teeth and Bones in Deep Space Body Horror

The image features a character in a space suit from 'DIRECTIVE 8020 A Dark Pictures Game' with branding from Supermassive Games.

British developer Supermassive Games is about to launch Directive 8020, the fifth mainline game in its The Dark Pictures Anthology horror game series, following Man of Medan, Little Hope, House of Ashes, and The Devil in Me (there was also the PS VR2 exclusive Switchback). Directive 8020 is special within the franchise for at least two reasons: it's the first entry to drop the anthology name "The Dark Pictures" from the title, and it's also the first one to take place in a sci-fi setting. Set in the near future, the game follows the five-person crew of the Cassiopeia, a colony reconnaissance […]

Read full article at https://wccftech.com/directive-8020-the-thing-body-horror-supermassive/

Microsoft Wants To Bring SteamOS-Level of Gaming Performance To Windows 11, While Cutting Back AI Bloat With "K2" Project

Microsoft is working on a new Windows 11 project called "K2," which will reduce bloat while increasing overall performance, including gaming.

Microsoft Windows 11 "K2" Project Will Be A Step In The Right Direction, Aims To Reduce AI, Bloat, & Improve Overall Performance, Including Gaming

The Windows 11 operating system has had its ups and downs. General backlash has been present since the beginning over features such as Recall and an integral focus on AI. These have had a negative impact, and with the ongoing "Windows Update" issues, the current Microsoft OS is far from perfect. The software giant […]

Read full article at https://wccftech.com/microsoft-wants-to-bring-steamos-level-of-gaming-performance-to-windows-11-cutting-back-ai-bloat-k2-project/

Report: Samsung Hits 80% Yield on 4nm Process as NVIDIA-Backed Groq, IBM, and Baidu Pile Onto Its Foundry

Korean semiconductor giant Samsung has achieved an 80% production yield with its 4-nanometer chip manufacturing process, according to a media report from the Seoul Economic Daily. Samsung's yields are a frequent feature of industry discussion, with multiple media reports highlighting the firm's struggles with production efficiency. A process technology's yield is a crucial metric of its commercialization, and lower yields often force foundries to bear the cost of defective products that their customers are unable to use or sell.

NVIDIA-Backed Groq Relying On Samsung For Its Chip Production Requirements

According to the details, NVIDIA-backed Groq has ordered […]

Read full article at https://wccftech.com/report-samsung-hits-80-yield-on-4nm-process-as-nvidia-backed-groq-ibm-and-baidu-pile-onto-its-foundry/

Ustwo CEO Admits She "Hates" the Contractor Shift, but Calls Job Security the "Romantic" Idea the Monument Valley Studio Needs to Abandon

A character from the game 'Monument Valley' stands on a high pillar against a backdrop of stylized mountains and a starry sky.

Mass layoffs have been, and will unfortunately likely continue to be, an issue that plagues the video game industry in the short and long term. After the influx of investment that flooded the industry during the COVID-19 pandemic dried up, the thousands of layoffs we've seen over the last few years have decimated the video game industry and driven developers away from games due to the lack of job security. But, according to Maria Sayans, the chief executive officer of Ustwo Games, the team behind the Monument Valley games, job security is the price the industry needs to pay to […]

Read full article at https://wccftech.com/monument-valley-studio-ceo-hates-contractor-shift-but-calls-job-security-romantic-idea-to-abandon/

A $180 RAM Bill Might Force Apple To Stick With An 8GB iPhone 18

A hand holding an iPhone 15 Pro displaying a home screen with various apps, including Fantastical and Threads, and a weather widget showing 'New York 70°'.

Apple has been making the most of the ongoing memory 'chipflation' by freezing the prices of its sprawling portfolio of products in a bid to gain market share. But this does not mean that the iPhone manufacturer is immune to the biting cost surges. As a matter of fact, despite recent corroborative commentary, you should not count on the base iPhone 18 sporting 12 GB of RAM, especially as memory costs are slated to make up a whopping 45 percent of a given iPhone's Bill of Materials (BOM) by next year. LPDDR5, which made up just 10 percent of a […]

Read full article at https://wccftech.com/a-180-ram-bill-might-force-apple-to-stick-with-an-8gb-iphone-18/

Costco Shopper Walks Out With A Ryzen 9800X3D + RTX 5070 PC Build For $1,100 While Others Pay $2,000

An advertisement for an iBUYPOWER Gaming PC featuring AMD Ryzen 7 9800X3D and NVIDIA GeForce RTX 5070 priced at $1100, with specifications including 32GB DDR5 memory and a 2TB solid-state drive.

For a configuration including a powerful CPU like the Ryzen 9800X3D and one of the best mid-range GPUs on the planet, a price tag of just $1,100 seems like a steal.

User Buys Ryzen 9800X3D-RTX 5070 PC Build With 32 GB DDR5 and 2 TB SSD for Just $1,100 at Costco

Once again, we see another lucky buyer snagging a complete PC build for almost half the price one would have to pay in the RAMpocalypse era. Such stories have popped up a lot in recent weeks, but they remain rare and occasional. While some users are happy to […]

Read full article at https://wccftech.com/costco-shopper-walks-out-with-a-ryzen-9800x3d-rtx-5070-pc-build-for-1100/

OpenAI has effectively abandoned first-party Stargate data centers in favor of more flexible deals β€” company now prefers to lease compute and says Stargate is an umbrella term

OpenAI has reportedly modified its arrangement on several Stargate projects, abandoning the direct-ownership setup and instead preferring to lease compute from other partners who took on the direct risk of investing in the infrastructure.

Snowy – Conversational AI for every Tesla since 2017, not just new models


Snowy is a conversational AI for Tesla drivers. It pairs with your phone, talks through your Tesla's speakers, and integrates with the car's navigation and display. You can ask anything, send destinations to your nav, or get live news, weather, sports, and stock prices without taking your hands off the wheel. Unlike Tesla's Grok, Snowy works on every Tesla built since 2017, including the Intel-Atom cars Tesla's AI rollout has left behind. It is made independently and powered by OpenAI.


SMX Now: The automation drift and how to correct course

Automation doesn't fail on its own; it does exactly what it's trained to do. The problem is that when Google Ads is fed incomplete, misaligned, or overly broad signals, it can optimize toward the wrong outcome faster than most advertisers realize.

In our second installment of SMX Now, our new monthly series, Ameet Khabra of Hop Skip Media will break down a real account where a 417% jump in conversions turned out to be the wrong kind of success. She'll use that case study to explain the four key ways automation drift enters an account: signal drift, query drift, inventory drift, and creative drift.

You'll leave with a practical framework for diagnosing drift early, understanding where human oversight matters most, and managing automation more deliberately so it works toward real business goals, not just platform-reported wins.

Join us May 6 at noon ET.

Save your spot

Is there still a long-term game for SEO in AI search?

SEO sits at an interesting crossroads. One camp insists on optimizing for large language models (LLMs) and AI engines, and the other insists on doing SEO the same way we've always done it.

But there's another way to approach it: combining the fundamentals of SEO with an understanding of how LLMs operate and why.

With this approach, you can keep what's always worked, like on-page SEO and backlinks from reputable sources. Yet you can also look ahead to new tactics, such as optimizing for query fan-out and emerging prompt intents.

Since 2023, and the rise of tools like ChatGPT, Gemini, Claude, and Perplexity, I've been researching how AI engines display search results and where SEO is headed.

Here's what I've found, and how you can use it to rethink your approach to a future where AI SEO considers human behavior at its core.

How the Red Queen theory applies to AI search

The Red Queen evolutionary model says that for everything to stay the same over time, everything must change. But as you adapt to the changing environment, so does the competition.

As a result, you and your competitors remain the same distance apart. In your attempt to become the predator, your prey adapts in equal measure, leaving the status quo firmly in place.

Essentially, if you don't adapt, you'll get eaten.

How to apply the Red Queen principle to your AI SEO strategy

Along the same lines, AI search is a natural progression of what has existed for at least a decade. A hybrid search model has been in place since 2015, with the introduction of RankBrain.

That's why many of the same SEO tactics still work now. Instead of a fundamental change, a series of big and small shifts has taken place over time.

For example:

  • LLMs still use retrieval-based search engines.
  • Content quality and freshness still matter.
  • Site speed remains crucial for performance.
  • Intent matching across the major categories is still relevant.

"Stop optimizing for 'AI,'" says Britney Muller via LinkedIn.

  • "Optimize for search engines (so retrieval-based AI can cite you) + earn third-party coverage (so the model already knows you before the prompt is typed)."

So, what makes a worthy source for LLMs? What are people using AI assistants to accomplish? Is it to find information, analyze an issue, or create a list of recommendations?

Research from Moz shows that only 12% of AI Mode citations mirror the URLs in organic results. This means AI engines only somewhat follow the traditional rules of SEO. And over time, these changes will likely become more extensive.

While Google denies that the search engine will be entirely generative, my prediction is that Google will continue along a generative path that encompasses AI assistant behavior, such as questions, actions, analysis, and creation.

As a result, your short- and long-term strategies must work together to remain innovative yet grounded.

Focusing on human behavior and traditional search while working to understand LLMs is how you worship the Red Queen.

Why RAG is essential to understanding AI search

The most effective approach is focusing on where LLMs fall short: their limited databases. Their systems rely on retrieval-augmented generation (RAG) to address gaps in their databases without requiring constant retraining.

AI assistants like Google AI Mode and Gemini need RAG to prevent hallucinations and to continue surfacing relevant answers for consumers.

Here, I gave Google AI Mode and ChatGPT the same prompt:

  • "I am looking for a skincare routine that prioritizes anti-aging. What routines and products should I use?"

Both returned relevant results, but the specifics differed. Google AI Mode returned anti-aging tips and routines, while ChatGPT sourced anti-aging products.

They also used different sources for their information. Where ChatGPT preferred a fresh Today.com source, Google referenced dermatology websites and even Google Shopping listings.

In both instances, the AI assistants needed external sources.

How to optimize for AI search vs. traditional search

For SEO, you need to understand how your content aligns with the limitations of AI engines. They do the searching for themselves and then generate a response for the user, only showing external sources some of the time.

It's a subtle shift in thinking. Optimizing for search is less about crafting SEO content and more about becoming a trusted supplier for these LLMs, so when people enter a prompt, your brand shows up in the answer.

In that way, the Red Queen evolution involves studying AI answers, learning their quirks, comparing their preferences, and evaluating their most common intents.

Then, you can feed the database. Make sure Google, which has the largest database behind any LLM, has sufficient data to keep you in the pool of trusted sources.

Without people, AI assistants have no power. That’s why you have to put people first.

Where are people using AI assistants to create, achieve, build, search, and prompt? And where does it make sense for your brand to be?

Now that the AI search landscape is more competitive, you have to think like a social media professional or a traditional marketer.


Short-term SEO tactics rely on topical authority

A short-term SEO strategy can work now, in the overlap between traditional and AI search. It uses topical authority to deliver results immediately, shortening clients' time to success. Here's the short-term plan.

Use internal links to build entity relationships

As Kevin Indig explains:

  • "Today, internal links aren't just distributing authority. They're defining the semantic structure of your site."

Internal links help search engines understand your site's overall structure. AI Mode, for example, is built with vector search models, and entities are crucial to their operation.

Vector search maps your website's information into a high-dimensional vector space, allowing algorithms to go beyond keywords and determine the intent behind someone's search. Internal links help strengthen these signals.
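To make the vector idea concrete, here is a toy sketch (not Google's actual implementation): pages are compared by the similarity of their embedding vectors rather than by shared keywords. The three-element vectors below are made-up stand-ins for real high-dimensional embeddings.

```javascript
// Cosine similarity: the standard way to compare two embedding vectors.
function cosineSimilarity(a, b) {
  let dot = 0, magA = 0, magB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    magA += a[i] * a[i];
    magB += b[i] * b[i];
  }
  return dot / (Math.sqrt(magA) * Math.sqrt(magB));
}

// Hypothetical embeddings: two pages about home buying sit close together
// even though they share no exact keyword; an unrelated page sits far away.
const pageA = [0.9, 0.1, 0.2]; // "is now a good time to buy a house"
const pageB = [0.8, 0.2, 0.3]; // "housing market conditions 2026"
const pageC = [0.1, 0.9, 0.7]; // "best skincare routines"

console.log(cosineSimilarity(pageA, pageB) > cosineSimilarity(pageA, pageC)); // true
```

In a real system the embeddings come from a trained model and have hundreds of dimensions, but the comparison works the same way, which is why semantically related internal links reinforce each other.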

As Gianluca Fiorelli suggests:

  • "We should link internally and externally to content that reinforces entity connections, because this helps LLMs map embeddings to a wider network of connected entities, hence increasing our authority in the knowledge graph."

Links have long mattered for search, and they still do. As you develop your long-term SEO strategy, they become increasingly important for surfacing your content in LLMs and AI assistants.

Think in terms of topical coverage versus keyword research

Plan your topical authority through these four lenses:

  • Topical coverage: Develop pages that cover the overall topic and its subtopics in a relevant, useful way.
  • Query fan-out: Study the query fan-out behavior for your most valuable search terms to identify gaps in your website content.
  • Intent: Be ruthless in determining intent by breaking down the categories in your niche that do or don’t have AI visibility potential.
  • Content quality: Make sure your content follows strong experience, expertise, authoritativeness, and trust (E-E-A-T) principles and is optimized for AI SEO.

These are all based on traditional SEO tactics. However, they consider a hybrid or LLM-based approach versus focusing solely on organic search.

Optimize and maintain your site’s technical health

Technical health is rooted in what works for search now: site speed, schema markup, and optimized titles and descriptions.

After all, LLMs are expensive to maintain and run. It's in their best interest to use resources that are fast and easy to extract information from.

Consider recent site speed findings from Mike King, who notes, "Slow responses can trigger 499 errors, where the AI stops waiting."

These three short-term goals (topical coverage, internal links, and technical health) are all important for visibility in LLMs and AI engines.

But search has evolved because human behavior has changed. So, the long-term play involves adapting to human behavior.

The long-term future of SEO relies on human behavior

Long-term SEO strategies should focus on the intent and actions of human behavior surrounding AI.

Identify search intent

The four traditional search intents (informational, navigational, commercial, and transactional) are still relevant. But AI search has added a few more.

According to MIT, examples include zero-shot, instructional, and contextual prompts. Grammarly considers other intents, including educational, opinion-based, and problem-solving.

I tend to break down intent into multiple categories of SEO opportunity based on the clients I'm working with. Some common examples include directional, recommendation, local, booking, and shopping.

Consider query fan-out

Once you identify the most relevant search intents, you can hypothesize what people are looking for the generative engine to do. From there, you can do one of two things:

  • Rule a subset of topics out of your strategy. For example, if you don’t have a local business but the results have local intent, you don’t need to focus on those topics.
  • Create web pages optimized for LLMs. For example, you can break down a topical category, study its query fan-out results, and reverse engineer what answer engines find valuable based on their behavior.

Say your target customers are U.S. home buyers. They want to know: "Is now a good time to buy a house?"

Plug the prompt into an AI engine and study the AI-generated answer. In AI Mode, for example, you can infer that Google fans out across multiple topics, including market conditions and pros and cons.

ChatGPT, in contrast, looks at trends, forecasts, and seasonality.

Based on the data, develop a content strategy that supports query fan-out behavior.

As Aleya Solis explains:

  • "By 'fanning out' the original query, the system can explore various facets and subtopics simultaneously based on semantic understanding, user behavior patterns, and logical information architecture around the topic, leading to a more complete and contextually rich understanding of the user's need."

For example, you can break down the complexities of buyer's markets, buyer and seller perspectives, or the changes in rising inventories. You could even build a useful tool around mortgage rates or national home price trends.

I use a variety of tools to help with analyzing query fan-out. But the most popular options include Semrush, Ahrefs, and Profound.

Prepare for the future of AI search

Prompting may not even be a concern in the future if AI assistants become more sophisticated at solving problems rather than responding to prompts.

Instead, AI engines may be able to anticipate searchers' needs and intentions, according to Harvard Business Review. That means it may be increasingly helpful to focus less on prompts and more on problems.

In the absence of keyword research, it will be more important than ever to analyze human behavior, evaluating and pivoting based on how people use AI assistants.

It's helpful to consider how social media professionals and brand experts think creatively about where their audiences are and how to attract attention while building brand power and recognition.

For example, Rare Beauty and Rhode have both grown their brands with creativity and consumer listening, especially in the last six years.

They've put considerable effort into brand campaigns, public relations (PR) campaigns, TikTok content, and in-real-life (IRL) experiences that have gone viral globally.

Looking at ChatGPT, the first product recommended for "best makeup gifts for Gen Z" is Rare Beauty.

Google makes similar recommendations, with Rare Beauty and Rhode leading the list. The results are influenced by PR coverage and social media virality.

SEO’s role in the future of search

SEO will have a future as long as there are search engines with AI experiences. While it might look like SEO has become the prey, it has evolved just as much as the predator has.

Everything's changed. Yet everything's the same.

Why tracking parameters in internal links hurt your SEO and how to fix them

Internal linking is one of the most controllable levers in technical SEO. But when tracking parameters are embedded in internal URLs, they introduce inefficiencies across crawling and indexing, analytics, site speed, and even AI retrieval.

Parameterized URLs

At scale, this isn't just a "best practice" issue. It becomes a systemic problem affecting crawl budget, data integrity, and performance.

Here's how to build a case study for your stakeholders that shows the side effects of tracking parameters in internal links, and how to propose a win-win fix for all digital teams.

How tracking parameters waste crawl budget

Crawl budget is often misunderstood. What matters isn't the volume of crawl requests, but how efficiently Google discovers and prioritizes valuable pages.

Crawl budget oversimplified

As Jes Scholz pointed out back in 2022, crawl efficacy indicates how quickly Googlebot reaches new or updated content. Inefficient signals, such as low-value or parameterized URLs, can dilute crawl demand and delay the discovery of important pages.

Tracking parameters like utm_, vlid, fbclid, or custom query strings work well for campaign tracking. But when applied to internal links, they force search engines to process additional URL variations, increasing crawl overhead.

Crawlers treat every parameterized URL as a unique address. This means:

  • Multiple versions of the same page are discovered.
  • Crawl paths become longer and more complex.
  • Resources are wasted processing duplicate content variants.

Search engines must still crawl first, then decide what to index.
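As a minimal sketch of the cleanup this implies, the standard URL API can collapse every tracked variant back to one canonical address; the parameter list here is illustrative, not exhaustive.

```javascript
// Known tracking parameters to strip (extend to match your own campaigns).
const TRACKING_PARAMS = [
  'utm_source', 'utm_medium', 'utm_campaign',
  'utm_term', 'utm_content', 'fbclid',
];

// Normalize a URL by removing tracking parameters, so crawler-visible
// variants all resolve to the same address. Functional parameters
// (pagination, filters, etc.) are left untouched.
function stripTrackingParams(href) {
  const url = new URL(href);
  for (const param of TRACKING_PARAMS) {
    url.searchParams.delete(param);
  }
  return url.toString();
}

console.log(stripTrackingParams('https://example.com/pricing?utm_source=nav&utm_campaign=spring'));
// -> https://example.com/pricing
```

The same normalization logic can run in a templating layer or a crawler audit script to flag internal links that still carry tracking parameters.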

How crawl budget feeds into the crawling and indexing pipeline

Tracking parameters can quickly escalate a single URL into many variations by combining different values, creating a large number of duplicate URLs. This leads to:

  • Redundant crawling of identical content.
  • Longer crawl paths (more "hops" before reaching key pages).
  • Reduced discovery efficiency for important URLs.
URLs with tracking parameters lost in the invisible long tail of a website.

On large websites, this becomes a critical issue. Googlebot has a limited number of crawl requests per website. Any time spent crawling parameterized URLs reduces the opportunity to crawl the most important pages, even the so-called "money pages."
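The combinatorial growth described above is easy to sketch; the counts below are hypothetical:

```javascript
// Back-of-envelope: with k tracking parameters, each either absent or
// taking one of v observed values, one page can surface as (v + 1)^k URLs.
function urlVariants(paramValueCounts) {
  return paramValueCounts.reduce((total, v) => total * (v + 1), 1);
}

// Hypothetical: three utm parameters with four observed values each.
console.log(urlVariants([4, 4, 4])); // 125 crawlable variants of one page
```

Even a modest set of campaigns can therefore turn a single money page into hundreds of crawlable addresses.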

Crawl entries for URLs with tracking parameters via server logs

Granted, crawl budget is typically a source of concern for larger websites, but that doesn't mean it should be ignored on sites with 10,000+ pages. Optimizing for it often reveals more room for efficiency gains in how search engines discover your content.

Canonicalization isn’t a long-term fix

A common misconception is that canonical tags "fix" parameter issues and "optimize" crawl efficacy. They don't.

Canonicalization works at the indexing stage, not at the discovery stage. If your internal links point to parameterized URLs:

  • Search engines will still crawl them.
  • Crawl budget is still consumed.
  • Crawl depth is unnecessarily extended.
Lengthy crawl depth (5 to 7 steps) for web crawlers to discover this website.

This is why parameter-heavy sites often show patterns like:

GSC indexing report - canonical tag

Crawl budget is not the only thing at stake here.

When tracking breaks attribution

Ironically, tracking parameters in internal links can corrupt the data they are meant to measure.

When a user lands on your site via organic search and then clicks an internal link with a tracking parameter, the session may break down and be reattributed.

Anecdotally, Google Analytics 4 resets a session based on campaign parameters, whereas Adobe Analytics does not.

This creates several downstream issues. Attribution becomes fragmented, especially under last-click models, where credit may shift away from organic entry points to internal interactions.

Attribution is fragmented across the same pair of URLs

As performance is split across URL variants, page-level SEO reporting becomes unreliable and creates a disconnect between organic SERP behavior and what actually happens when a prospect lands on your pages.


How tracking parameters dilute link equity

One of the most overlooked risks is backlink fragmentation. If internal links include tracking parameters, users may share those exact URLs. As a result, external backlinks may point to parameterized versions of your pages rather than the canonical ones.

This means authority is split across URL variants, some signals may be lost or diluted, and search engines may treat these links as lower value. Over time and in large proportions, this is set to weaken your backlink profile.

Backlink dilution on target URLs by allegedly authoritative domains.

What's more, this compounds the tracking problems above: those external backlinks carry internal UTM parameters into external environments, permanently fracturing session attribution and wasting crawling resources.

Why URL bloat slows pages and weakens AI access

Using UTM parameters in your internal links creates more than just crawl overhead. It also strains your caching system.

Each URL with parameters is essentially a different page with its own cache entry. That means the same content may be fetched and processed multiple times, increasing load on both servers and CDNs.

Page speed and AI retrieval example

This becomes even more critical with AI crawlers and LLM retrieval systems. It's understood that many of these agents fetch content at scale and have limited rendering capabilities, making them more sensitive to parameterized URLs.

As the web is increasingly consumed by aggressive AI bots, having internal links with tracking parameters leaves traditional web crawlers and RAG-based systems wasting bandwidth on duplicate cache entries for pages that serve the same purpose.

At the same time, many of these systems rely heavily on cached versions and avoid rendering JavaScript due to architectural and cost constraints at scale.

Systems relying on cached versions

This makes URL hygiene a foundational requirement, not just a technical preference.

On the cache front, Barry Pollard recently suggested a smart workaround that Google has been testing for a while.

Googlebot discovering pages indefinitely

Provided that removing those parameters yields identical content, helping the browser reuse a single cached response can dramatically improve Time to First Byte (TTFB), a metric that directly affects your Core Web Vitals.

Some CDNs already strip UTM parameters from their cache key, improving edge caching. However, browsers still see each parameterized URL as a separate asset and will request them one by one.
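To make that CDN behavior concrete, here is a minimal sketch (not any particular CDN's API; the function name and prefix list are illustrative assumptions) of normalizing a URL into a cache key by stripping tracking parameters:

```typescript
// Illustrative sketch: normalize a URL into a cache key by stripping
// tracking parameters, the way some CDNs do before an edge-cache lookup.
// The prefix list is an assumption, not an exhaustive standard.
const TRACKING_PREFIXES = ["utm_", "gclid", "fbclid"];

function cacheKey(rawUrl: string): string {
  const url = new URL(rawUrl);
  // Snapshot the keys first, since we delete entries while iterating.
  for (const key of [...url.searchParams.keys()]) {
    if (TRACKING_PREFIXES.some((prefix) => key.startsWith(prefix))) {
      url.searchParams.delete(key);
    }
  }
  return url.toString();
}
```

Two URLs that differ only in utm_* parameters then map to the same cache entry, while meaningful parameters (pagination, filters) survive.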

The No-Vary-Search response header closes this gap by aligning browser caching behavior with CDN logic. Implementing it allows browsers to treat URLs with specific query parameters as the same resource. Once set, the browser excludes the specified parameters during cache lookups, avoiding unnecessary network requests.

In practice, the header signals which parameters to ignore when determining cache identity. The only caveat is that it's currently supported in Google Chrome 141+, with support coming in version 144 on Android. If most of your organic traffic comes from Chromium-based browsers and you run paid campaigns, this is worth adding now.
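As a sketch of what that declaration could look like (the parameter names here are examples; match them to your own tracking setup), a response could send:

```http
No-Vary-Search: params=("utm_source" "utm_medium" "utm_campaign")
```

The `params` value is a quoted, space-separated list of query parameters the browser should ignore when deciding whether an already-cached response matches the requested URL.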

The structural fix: Move tracking out of URLs and into the DOM

Canonicalization to the clean URL version remains the standard requirement, but it isn't a long-term solution. If you're stuck relying on it, that's likely a symptom of deeper architectural challenges at the intersection of SEO, IT, and tracking.

Either way, the preferred solution is to move measurement from the URL layer into the DOM layer.

This can be achieved successfully using a good old HTML workaround: data attributes.

Data attributes

This configuration allows tracking tools (e.g., tag managers) to capture click events and user interactions without altering the URL. Plus, it ensures internal links point to the canonical version without introducing duplicate cache entries.
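As a minimal sketch of this pattern (the attribute names and helper below are hypothetical, not a specific tag manager's API), a link can carry its tracking metadata in data-* attributes while the href stays canonical:

```typescript
// Hypothetical markup: the href stays canonical, tracking lives in data-*:
//   <a href="/pricing" data-track-source="homepage_banner"
//      data-track-campaign="spring_sale">Pricing</a>
// In the browser, element.dataset exposes these as camelCase keys:
//   { trackSource: "homepage_banner", trackCampaign: "spring_sale" }

type Dataset = Record<string, string | undefined>;

// Build an analytics payload from a dataset without touching the URL.
function buildTrackingPayload(dataset: Dataset): Record<string, string> {
  const payload: Record<string, string> = {};
  for (const [key, value] of Object.entries(dataset)) {
    // Forward only keys following the (assumed) data-track-* convention.
    if (key.startsWith("track") && value !== undefined) {
      payload[key.slice("track".length).toLowerCase()] = value;
    }
  }
  return payload;
}
```

A tag manager's click listener would then call something like `buildTrackingPayload((event.target as HTMLElement).dataset)` and send the result to analytics, so the URL the user copies and shares remains the canonical one.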

Dig deeper: How the DOM affects crawling, rendering, and indexing

Why data-* attributes are a win-win for all digital marketing teams

Benefit | Stakeholder
Enables clean internal link URLs and unbreakable tracking | SEO, analytics, product managers
Robust against CSS changes for page restyling | Web developers, product managers
Does not interfere with providing structural or semantic meaning to screen readers and search engines | Product managers, SEO
Easy to embed directly onto an HTML element | Web developers, analytics
Acts as a hidden storage layer for tracking data, allowing tools to capture interactions via JavaScript without exposing parameters in URLs | PR, affiliates, analytics

Rethinking internal tracking for scalable growth

Tracking parameters in internal links are a legacy workaround, often rooted in siloed teams and flawed site architecture.

However, they create downstream issues across the entire organization: wasted crawl budget, fragmented analytics, diluted backlink equity, and degraded web performance. They also interfere with how both search engines and AI systems access and interpret your content.

The solution isn't to optimize these parameters, but to remove them entirely from internal linking and adopt a cleaner, more robust tracking approach.

Using a good old HTML trick is just about the right fix to win over traditional search engines, AI agents, and especially your stakeholders.

Note: The URL paths disclosed in the screenshots have been disguised for client confidentiality.

New Wave of DPRK Attacks Uses AI-Inserted npm Malware, Fake Firms, and RATs

Cybersecurity researchers have discovered malicious code in an npm package that was added as a dependency to a project by Anthropic's Claude Opus large language model (LLM). The package in question is "@validate-sdk/v2," which is listed on npm as a utility software development kit (SDK) for hashing, validation, encoding/decoding, and secure random generation. However, its real

'VPNs have adapted': How BlancVPN and VPN Liberty are dodging Russia's VPN blocking technology to allow Russians access to Telegram without losing everyday services

While the Kremlin's fight against VPN usage has reached new heights, the tech seems to keep outsmarting the politics – for now, at least. And these are the best performing VPNs in Russia right now.

Is your PC ready for The Blood of the Dawnwalker

Here's what you need to run The Blood of the Dawnwalker on PC. Bandai Namco has announced that The Blood of the Dawnwalker is coming to PC and consoles on September 3rd, 2026. On PC, the game will be available on Steam, and Rebel Wolves has provided detailed PC system requirements for the game. The […]

The post Is your PC ready for The Blood of the Dawnwalker appeared first on OC3D.

(PR) Fractal Design Launches Pop 2 Vision Dual-Chamber PC Case

Announcing Pop 2 Vision, a new addition to the Pop 2 series, combining panoramic design with a sleek dual-chamber layout. Pop 2 Vision is crafted to provide a clean, uncluttered view into its refined interior. With support for graphics cards up to 412 mm, top-mounted radiators up to 360 mm, and compatibility with reverse connector motherboards, it offers flexibility for modern gaming components. Out of the box, it features four pre-installed reverse-blade fans, integrated with hidden cables and frames to make achieving a clean build effortless.

On the outside, Pop 2 Vision offers easy access through removable glass panels, together with a ventilated right-side panel and magnetically attached top mesh filter. Inside, dedicated cable routing space, a large cable grommet, and modular power supply mounting help create a smooth building experience. A top-mounted I/O provides quick connectivity with two USB ports and an audio jack, while RGB versions also include integrated controls for effortless lighting adjustments.

[Editor's note: Our in-depth review of the Fractal Design Pop 2 Vision is now live]

(PR) Philips Evnia Introduces AmbiScape for Room-Synced Gaming Lighting

Philips Evnia is expanding its ambient lighting ecosystem for gamers who use lighting to enhance atmosphere, pace, and immersion beyond the screen. Building on its established AI-Enhanced Ambiglow technology and Windows Dynamic Lighting support, Evnia now introduces AmbiScape, a new feature designed to extend synchronized lighting from the display into the surrounding room.

Together, Ambiglow, AmbiScape, and Dynamic Lighting integration enable a more unified setup where on-screen action can influence both the monitor's ambient lighting and compatible devices across the gaming environment. The experience is enabled via USB upstream and configured through the Philips Evnia Precision Center.

MindsEye Dev Calls Out Leadership, Says the 'Corporate Sabotage' Was the CEO's "Hate Mail" That "Read Like League of Legends Chats"

MindsEye dev heartbroken over game's launch

A little more than a week after unionized members of the Build a Rocket Boy staff took legal action against the studio's leadership for installing spy software on work devices that allegedly violated data protection laws, Chris Wilson, a former animator who worked at the studio for the last six years and has been in the video game industry for more than two decades, has come forward to share exactly what it was like working on MindsEye under former Rockstar producer Leslie Benzies and his co-chief executive officer, Mark Gerhard. In a massive interview with Kotaku, Wilson is very […]

Read full article at https://wccftech.com/mindseye-dev-speaks-on-alleged-corporate-sabotage-was-just-hate-mail-to-build-a-rocket-boy-ceo/

Memory Manufacturers Earned More In Q1 Than All of Last Year, Prices To Spiral Up Another 40% In Q2 2026


Memory makers are enjoying a huge boost to revenue as AI demand has earned them more in a single quarter than the entire previous year. Memory makers such as ADATA saw 17x annual growth in profits, with others also seeing a similar boost from the AI boom. The AI crunch continues to devastate the consumer markets as component prices go up; at the same time, memory makers are seeing an astronomical rise in their profit margins. There are two things to factor in here: first, memory demand is at an all-time high due to AI firms requiring more DRAM for […]

Read full article at https://wccftech.com/memory-manufacturers-earned-more-in-q1-than-all-of-last-year-prices-to-spiral-up/

TSMC Doubles Down on 2nm With Five Fabs Ramping at Once, Output Set to Eclipse 3nm by 2x

TSMC is all set to double its 2nm production capacity through five state-of-the-art fabrication plants to meet global AI and chip demand. TSMC's 2nm output is set to be 45% higher than 3nm at the same stage as production ramps up. Recently, we shared how TSMC is gearing up to boost its 2nm and 3nm wafer output aggressively by the end of 2026. Now, the company is reportedly going to double its 2nm capacity output to meet "explosive" demand for AI and compute. As such, TSMC has set up five wafer fabs, all entering the ramp-up phase this year for the 2-nanometer process. […]

Read full article at https://wccftech.com/tsmc-doubles-down-on-2nm-five-fabs-ramping-at-once-output-eclipse-3nm-by-2x/

Lenovo abandons separate magnesium frame for latest P16 Gen 3 laptop after 20 years – robust feature introduced in ThinkPad T60 in 2006, company now integrates material into outer shell for a thinner design

Lenovo has reportedly stopped using magnesium alloy subframes in the ThinkPad P16 Gen 3 to save on weight and thickness. This feature was first introduced on the ThinkPad T60 in 2006 and was added to ensure rigidity for the brand's workstation laptops.

Clera – Skip applications and get warm intros to top startup jobs


Clera is an AI-powered talent agent that matches candidates with startup roles and introduces them directly to founders and hiring managers. Share your experience, preferences, and dealbreakers to receive curated opportunities with context instead of cold applications. It's free for candidates because companies pay upon hire. Chat to define goals, review matched roles, accept intros, then move quickly to interviews. Clera also offers tools like a resume creator, career coach, and salary calculator to help you prepare.

View startup

4 signals that now define visibility in AI search


Ranking and visibility are no longer the same thing. For 20 years, SEO teams optimized for SERP position. Higher rankings meant more visibility, more clicks, and more traffic. That relationship is breaking down.

Earlier this year, Ahrefs found that only 38% of pages cited in Google AI Overviews also ranked in the traditional top 10. Eight months earlier, that number was 76%.

The implication is straightforward: being highly ranked no longer guarantees being seen.

In AI-generated answers, visibility is determined by inclusion – and by how your brand is represented when it appears. That representation is determined by a different set of signals.

How visibility works in AI search: 4 signals that matter

Four distinct patterns determine how brands appear inside AI-generated responses:

  • Mention order.
  • Depth of explanation.
  • Authority signals.
  • Comparative positioning.

1. Mention order

When an AI model lists three CRM options, the order matters. Up to 74% of users choose the AI's top recommendation, according to a Growth Memo and Citation Labs AI Mode study.

This reinforces how heavily people rely on the first option presented.

About 26% of users overrode the AI's order entirely when they recognized a brand they already knew. This is a shift from how users behave in traditional search. And 56% of users built their own shortlist from multiple sources. In AI Mode, 88% took the AI's shortlist without checking further.

The AI's curated answers carry that much weight. But mention order isn't stable. SE Ranking's August 2025 analysis found that when you run the same query three times, AI Mode only overlaps with itself 9.2% of the time.

The sources change. The order changes, sometimes dramatically.

The lesson: Mention order creates an advantage, but it isn't deterministic. Brand recognition can trump position.


2. Depth of explanation

Not all mentions are created equal. Some brands get a single sentence. Others get a full paragraph explaining their strengths, use cases, and differentiators.

The difference comes down to how much citation-worthy information AI systems found about you.

When Semrush announced its AI Visibility Awards in December 2025, it analyzed more than 2,500 prompts run through ChatGPT and Google AI Mode. Category leaders like Samsung in consumer electronics didn't just appear more often. They got more detailed descriptions when they did appear.

Challenger brands like Logitech in gaming accessories showed up, too, but typically with shorter mentions focused on a single differentiator.

The top 4.8% of URLs cited 10+ times by ChatGPT share a common trait. They're comprehensive pages that answer "what is it," "who uses it," "how to choose," and "pricing" in a single URL.

Page length seems to matter, too. Pages above 20,000 characters average 10.18 citations each. Pages under 500 characters average just 2.39.

The lesson: If AI systems have thin data about your brand, you get thin mentions.

3. Authority signals

AI systems don't just cite sources. They characterize them by tone, which reveals how much confidence the AI has in your authority.

HubSpot's AEO Grader, launched in early 2026, classifies brands into competitive roles: leader, challenger, or niche player. They're positioning labels that determine how persuasively AI presents you.

Semrush's awards data showed that category leaders have less than 20% monthly volatility in AI share of voice. Once AI systems establish you as a leader, that perception tends to stick.

The language reflects this correlation.

  • Leaders get described with confident phrasing, such as "the industry standard" and "widely recognized."
  • Challengers get "growing alternative" and "gaining traction."

Most brand mentions in AI answers are neutral or positive. But neutral isn't the same as enthusiastic.

The difference between "also offers project management features" and "considered one of the top three project management platforms" is authority signaling.

The lesson: AI doesn't just say your name. It frames your reputation.

4. Comparative positioning

Comparative positioning is the closest thing to traditional rankings in AI answers: how you're positioned when multiple brands appear together. But instead of Position 1 vs. Position 2, it's "better for X" vs. "better for Y."

Amsive's research found clear positioning hierarchies.

  • In banking, Bank of America leads with 32.2% visibility, SoFi follows at 25.7%, and LightStream captures 20.2%.
  • In healthcare, Mayo Clinic dominates at 14.1%.

Kevin Indig's Growth Memo research revealed a critical nuance. When AI positioned a brand as "best for startups" versus "best for enterprises," users self-selected based on that framing, even if both brands technically served both segments.

The lesson: You're not competing for position 1 anymore. You're competing to own a specific positioning niche in AI's mental model of your category.

How traditional rank correlates with AI visibility (barely)

We already covered the 38% overlap stat. The interesting question is why it dropped so fast. The answer: query fan-out.

When an AI Overview triggers, Google doesn't just evaluate the top-ranking pages for the user's actual query. It breaks the question into multiple sub-queries, retrieves relevant passages from across its index, and synthesizes them into a single response.

Your page might rank No. 1 for "best project management software" and still get skipped. The AI pulled from pages ranking for "project management for remote teams" or "integrations with Slack" instead. One query from the user. A dozen queries behind the scenes.

SE Ranking's February 2026 research found that Google's upgrade to Gemini 3 replaced approximately 42% of previously cited domains and generated 32% more sources per response than its predecessor. Traditional ranking positions became even less predictive overnight.

Where AI traffic actually goes

Semrush's analysis of 17 months of clickstream data reveals an unexpected pattern: Over 20% of ChatGPT referral traffic goes to Google. That share rose from roughly 14% at the start of the study to more than 21% by early 2026.

The biggest beneficiary of ChatGPT's growth is Google.

Users go to ChatGPT to get an answer, then head to Google to confirm findings or research brands they just discovered. For users, they're complementary steps in a single journey.

Most ChatGPT prompts don't match traditional search language. Between 65% and 85% of prompts couldn't be matched to any traditional search keyword in Semrush's database of 27 billion keywords.

  • A traditional Google search: "best project management software."
  • The ChatGPT equivalent: "I manage a 12-person remote engineering team, and we're constantly missing sprint deadlines. What should I change about our weekly standups?"

That level of specificity doesn't exist in keyword databases – and it's becoming more common.

Measuring visibility in AI answers

If position doesn't matter the way it used to, what does?

  • Citation frequency replaces rankings as the primary metric. How often does your brand appear when AI systems answer questions in your category?
  • Brand mention rate measures penetration. If AI generates 100 answers about your category, what percentage mention your brand? Scores above 70% indicate strong AI search performance. Below 30% signals significant visibility gaps.
  • Recommendation rate matters more than mention rate for B2B SaaS and high-consideration purchases. Being recommended carries more weight than being mentioned in a general list.
  • Sentiment and context determine whether mentions drive action. Track how AI describes you: premium vs. cheap, advanced vs. beginner, reliable vs. experimental.
  • Citation position within answers creates measurable advantage. Unlike traditional rankings, you can be first-cited without being first-ranked organically.
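As a back-of-the-envelope sketch of the brand mention rate metric above (the sampled answers are invented; a real tracker would also handle brand aliases, sentiment, and position), the calculation is just presence counting over a sample of generated answers:

```typescript
// Illustrative sketch: brand mention rate over a sample of AI answers.
// Real tooling would also normalize brand aliases and weight recommendations.
function mentionRate(answers: string[], brand: string): number {
  if (answers.length === 0) return 0;
  const needle = brand.toLowerCase();
  // Count answers that mention the brand at least once (case-insensitive).
  const mentioned = answers.filter((a) => a.toLowerCase().includes(needle)).length;
  return (mentioned / answers.length) * 100;
}
```

Against the thresholds described above, a result over 70 would suggest strong AI search performance for the sampled category, and under 30 a significant visibility gap.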

The measurement infrastructure you actually need

Traditional rank trackers can't measure these signals.

The 2026 measurement model requires parallel tracking. Traditional SEO metrics still matter for the portion of search that remains blue links. AI visibility requires tracking how often your brand appears and how it’s represented in AI-generated answers.

A new category of tools has emerged to support this shift.

  • For citation tracking, platforms like Profound, Gauge, Peec AI, and Scrunch monitor which URLs get cited across ChatGPT, Perplexity, Claude, and Google AI Overviews.
  • For brand analysis, tools like Semrush's AI Visibility Toolkit and AthenaHQ measure how often your brand is mentioned, how it's described, and whether it's recommended.
  • For competitive positioning, Bluefish and HubSpot's AEO Grader evaluate how AI systems categorize your brand relative to competitors.

None of these tools replace traditional SEO infrastructure. They supplement it.


A different model of visibility

The ranking obsession isn't going away entirely. Traditional search still drives traffic. But measuring success solely through rankings misses the larger shift.

AI answer engines now act as gatekeepers, surfacing only the brands they consider citation-worthy.

Visibility depends on how often you’re included, how you’re described, and how you’re positioned relative to competitors.

Traditional rank trackers can't capture that. It requires a different measurement model. That's what determines visibility now.

Webinar: How to Automate Exposure Validation to Match the Speed of AI Attacks

In February 2026, researchers uncovered a shift that completely changed the game: threat actors are now using custom AI setups to automate attacks directly into the kill chain. We aren't just talking about AI writing better phishing emails anymore. We're talking about autonomous agents mapping Active Directory and seizing Domain Admin credentials in minutes. The problem? Most defensive workflows

How Microsoft's "K2" project aims to fix Windows

Windows "K2" isn't Windows 12 – it's better than that. Microsoft has promised to make Windows 11 faster and more reliable. Over the years, goodwill towards Windows has eroded, and many users have begun actively seeking an alternative. Recently, Microsoft's focus on Copilot and AI has accelerated this negativity to the point that Microsoft has started […]

The post How Microsoft's "K2" project aims to fix Windows appeared first on OC3D.

Blind Test Shows Gamers Prefer NVIDIA DLSS 4.5 Over AMD FSR 4.1

Back in February, ComputerBase conducted a large blind test comparing in-game screenshots generated using the latest upscaling technologies: AMD's FSR 4.0 and NVIDIA's Deep Learning Super Sampling 4.5. The testing has since been updated, and community votes have been processed, revealing that AMD's updated upscaling technology, FSR 4.1, shows significant improvement over FSR 4.0. However, it still trails behind NVIDIA's DLSS 4.5 in visual quality. In the latest ComputerBase testing, the following games were upscaled using FSR 4.0, FSR 4.1, and DLSS 4.5: Year 117 - Pax Romana, ARC Raiders, Assassin's Creed Shadows, Call of Duty: Black Ops 7, Kingdom Come: Deliverance 2, Resident Evil Requiem, and The Last of Us Part I. In all of these games, the ComputerBase review concluded that DLSS 4.5 was the top performer, a view confirmed by the community in a separate blind test vote.

Similar to the previous test, the ComputerBase team conducted the comparison using videos labeled with three options, without revealing which rendering method was used, to ensure a fully blind test. This resulted in a community verdict with two notable outcomes. First, NVIDIA's DLSS 4.5 remains the leader in image quality, with 6 out of 7 games showing the best results using DLSS 4.5. The only game where AMD's FSR upscaling came out on top was Resident Evil Requiem, where DLSS 4.5 placed second behind FSR 4.1. Overall, DLSS 4.5 is seen as providing sharper visual details and more consistent frame generation compared to AMD's FSR upscaling.

"A Missed Opportunity for Xbox" – The Man Who Built ESO for 18 Years Opens Up About Project Blackbird's Cancellation

A futuristic sci-fi game, Project Blackbird from ZeniMax Online and Xbox, depicts armored soldiers in a neon-lit industrial setting with blue and green holographic displays.

Former ZeniMax Online Studios founder and studio head Matt Firor has finally spoken openly about Project Blackbird's cancellation. Firor, who was behind the successful release of The Elder Scrolls Online and its continued support for many years, left the studio following Microsoft's decision to cancel the project after many years of development. Shortly after that news, he admitted that the two things were directly related. Now, speaking to MinnMax, he provided a lot more color on why he feels the decision to shut down Project Blackbird is a missed opportunity for Xbox. "It's conflicted. I'm so proud of what the […]

Read full article at https://wccftech.com/project-blackbird-cancellation-matt-firor-xbox/

Strike Chaos: Samsung Threatens To Spin Off Its Semiconductor-Focused DS Division Into A New Company To Neutralize Union Leverage


As Samsung's unionized workers grow ever bolder, egged on by the rapidly fattening purse of its memory business, which has spurred calls for pay/bonus hikes from other divisions as well, Samsung is eyeing the nuclear option of definitively splitting up the conglomerate by spinning off its semiconductor-focused Device Solutions (DS) division. Irked by rising calls for pay/bonus hikes from its less profitable business units, Samsung is eyeing a spin-off of its very lucrative DS division into an entirely different company. The top echelons of Samsung's management appear to be in panic mode ahead of an impending workers' strike, so […]

Read full article at https://wccftech.com/strike-chaos-samsung-threatens-to-spin-off-its-semiconductor-focused-ds-division-into-a-new-company-to-neutralize-union-leverage/

ASUS Cracks the JEDEC Mixing Problem on Intel Z890 & B860, Letting Users Pair Mismatched DDR5 Sticks As RAM Prices Spike

A row of ASUS AEMP JEDEC DDR5 DIMMs shows models with labeling '4800 MT/s,' '5200 MT/s,' and '5600 MT/s' alongside the text 'JEDEC DIMM Mix & Match.'

Users can now mix and match DDR5 memory on the latest Intel platforms to achieve better tuning without worrying about compatibility. ASUS Rolls Out BIOS 3002 and 3103 to Mix and Match Memory, Allowing Users to Optimize Different-Spec "Green" Modules on Intel Z890 and B860. At a time when DDR5 RAM prices are at an all-time high, many are looking at JEDEC industry-standard RAM modules, which typically bring lower clocks out of the box compared to Intel XMP or AMD EXPO-enabled memory modules. These adhere to strict standards and operate at a particular frequency, timing, and voltage. […]

Read full article at https://wccftech.com/asus-aemp-iii-brings-support-for-mixed-ddr5-configurations-on-intel-z890-and-b860-motherboards/

Chinese GPU Maker, Lisuan, To Launch 7G100 Gaming Graphics Card on 20th May: First 6nm Domestic Product With WHQL Certification


Lisuan's 7G100 gaming graphics card will launch on 20th May, becoming China's first fully domestic 6nm GPU with WHQL certification. Lisuan 7G100 gaming GPUs are all set for a 20th May launch in China, bringing wider game support through WHQL-certified drivers. China's first domestically produced 6nm GPU for gaming audiences is launching on 20th May, bringing wider support and, biggest of all, Microsoft WHQL certification for drivers, joining the ranks of Intel, NVIDIA, and AMD. This makes Lisuan Technologies the first Chinese GPU maker to achieve WHQL certification, marking significant progress for domestic producers. Lisuan also states much wider game […]

Read full article at https://wccftech.com/lisuan-launches-7g100-china-gaming-graphics-card-on-20th-may-6nm-whql-certification/

Intel's 18A-P Pulls in Apple's Next M Chips While EMIB Reportedly Wins Google TPUv8e As Customer Confidence Amps Up

A presentation slide shows details about the 'Intel 18A' and 'Intel 18A-P' chips with performance and chip density metrics, and a timeline graph displaying defect density improvements, accompanied by a statement 'On Track for HVM Yield Levels In Q4'25.'

Intel continues to see increased confidence for its upcoming Foundry technologies, such as 18A-P, 14A, and EMIB. Apple & Google Will Reportedly Leverage Intel Foundry 18A-P & EMIB Technologies, 14A Customers Also Lining Up. The Agentic AI and Inferencing boom has led to a significant surge in CPU demand. This has led major semiconductor companies such as TSMC to face severe supply constraints, all the while going on a large-scale expansion spree to meet demand. At the same time, Intel has been driving revenue up by selling off salvaged dies, but the company is also attracting the attention of various […]

Read full article at https://wccftech.com/intel-18a-p-pulls-in-apple-next-m-chips-emib-reportedly-wins-google-tpuv8e/

Framework's new RTX 5070 12GB graphics module costs a whopping $1,199 – 72% more expensive than $699 8GB version, says pricing is beyond its control

Framework has just launched a new graphics module featuring the mobile RTX 5070 with 12GB, and it costs a cool $1,200, a 72% price increase over the $699 8GB variant. Both GPUs are identical apart from memory capacity (and bandwidth), but Framework says pricing is out of its control.

US stops exports of tools to China's number two chip maker – Hua Hong and Huali Microelectronics reportedly on the cusp of starting a 7-nm fab in Shanghai

Applied Materials, KLA, and Lam Research received letters from the U.S. Department of Commerce preventing them from shipping some of Hua Hong's orders for the latest chipmaking tools. These are reportedly being planned for use on the Chinese company's planned 7-nm fab in Shanghai.

Palit Group says Galax GPU brand will continue to operate following restructure – Galax management centralized under Palit Group in 'pre-planned' shakeup

Galax has moved under the direct control of Palit, owned by the Palit Group, but the brand itself isn't going anywhere. Official statements from both companies clarify that Galax will continue to design, produce, and release hardware like before, but will be managed by Palit now to streamline the business.

Developer re-enables 3D printer features that Bambu Lab disabled, firm promptly threatens legal action – OrcaSlicer-BambuLab project now shuttered

Independent software developer Pawel Jarczak has voluntarily shuttered his popular "OrcaSlicer-BambuLab" project following legal threats from Bambu Lab, ending one man's fight to restore direct control to the popular third-party slicer.

Adestto AI – Automate Forex, gold, and indices trading with adaptive AI bots


Adestto AI deploys 26 self-optimizing trading bots across Forex, gold, indices, and crypto. It provides real-time AI-verified signals and fully automated MT5 execution, with weekly AI tuning that adapts strategies to market conditions. You can manage risk with predefined profiles, automatic stop-loss, and news pauses, and control everything from a dashboard with Telegram alerts and built-in backtesting. Plans include VPS hosting and 24/7 disciplined trading.

View startup

Grocyy – Scan receipts, track grocery spending, and predict what you'll need next


Grocyy scans your grocery receipts, extracts products, prices, and store details, and organizes every purchase into a clean, searchable dashboard. It tracks spending by store and category, highlights trends, and helps you compare costs over time. Grocyy learns your buying habits to predict when you'll run out, estimate your next shopping date, and remind you before essentials run low. Use it to spot unnecessary purchases, control your budget, and save time without manual tracking.

View startup

What to Look for in an Exposure Management Platform (And What Most of Them Get Wrong)

Every security team has a version of the same story. The quarter ends with hundreds of vulnerabilities closed. The dashboards are bursting with green. Then someone in a leadership meeting asks: "So, are we actually safer now?" Crickets. The room goes quiet because an honest answer requires context – which is something that patch counts and CVSS scores were never designed to provide. Exposure

Critical cPanel Authentication Vulnerability Identified – Update Your Server Immediately

cPanel has released security updates to address a security issue impacting various authentication paths that could allow an attacker to obtain access to the control panel software. The problem affects all currently supported versions, according to an alert released by cPanel on Tuesday. The issue has been addressed in versions 11.110.0.97, 11.118.0.63, 11.126.0.54, and 11.132.0.29

Searchers just want you to be helpful

The March 2026 core update brought what Google describes as a design "to better surface relevant, satisfying content for searchers from all types of sites." This confirms the simplest truth in search: people use Google to get answers.

Whether it’s solving a problem, learning something new, or making a decision, searchers want content that is genuinely helpful in their busy, on-the-go lives. If your content does that, it succeeds. If it doesn’t, no amount of SEO tricks, hacks, or magic bullets will get your content to show up on page one, let alone in AI Overviews.

How modern search systems surface helpful content

AI Overviews went from appearing for just 6.49% of queries in January 2025 to 15.69% in November 2025, according to a Semrush study. Depending on the source, AI Overviews today appear for 25-50% of queries.

It's clear that search engines and LLMs are working together more efficiently today than just a year ago. Fast forward another year, and we can only imagine.

For any SEO focused on creating helpful content and understanding user intent, it’s a truly exciting time to be in the industry. Your genuinely useful content can be surfaced in AI Overviews using retrieval-augmented generation (RAG) and query fan-out.

  • RAG: Instead of just relying on what it β€œknows,” AI looks for relevant information across multiple sources before answering a query
  • Query fan-out: One search query can be broken down into multiple related queries behind the scenes, helping AI and search engines build a more complete, useful response
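
As a rough illustration (not any search engine's actual pipeline, and with a toy corpus standing in for a real index), the two ideas above can be sketched as one query fanning out into related sub-queries, each retrieving a passage that then grounds the answer:

```python
# Toy sketch of query fan-out feeding retrieval (the "R" in RAG).
# The corpus contents and sub-query templates are invented for illustration.

TOY_CORPUS = {
    "herniated disc treatment": "Conservative care, therapy, and surgery options.",
    "herniated disc treatment cost": "Costs vary by procedure and insurance.",
    "herniated disc recovery time": "Most cases improve within 6 to 12 weeks.",
}

def fan_out(query):
    # One query becomes several related sub-queries behind the scenes.
    return [query, query + " cost", query.replace("treatment", "recovery time")]

def retrieve(sub_queries):
    # Real systems rank by embedding similarity; exact lookup stands in here.
    return {q: TOY_CORPUS[q] for q in sub_queries if q in TOY_CORPUS}

context = retrieve(fan_out("herniated disc treatment"))
# 'context' then grounds the generated answer instead of model memory alone.
```

Real systems expand queries with a language model and retrieve via embedding similarity rather than exact key lookup; the shape of the loop (expand, retrieve, then answer from retrieved context) is the point.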

Entire papers have been written on these two concepts alone. The TL;DR is that SEO today is about more than just keywords or counting backlinks. Modern search is designed to connect searchers with content that actually answers their questions and satisfies user intent.

Why this raises the bar for SEO in 2026 and beyond

These systems, and those still being implemented (see Google's blog on TurboQuant), are getting better at recognizing and dismissing thin, duplicate, or superficial content. Pieces that simply restate what someone else has already said online, lack originality, and fail to demonstrate legitimate real-life experience will continue to struggle to rank.

Depth, clarity, and expertise have always mattered, but SEOs who want to continue to succeed in 2026 and beyond are going to have to double down on these factors:

  • Depth: This doesn’t mean write as much as you can on the topic. Gone are the days of fluffy, keyword-stuffed articles. Depth in 2026 means SEOs and content creators should address the searcher’s main question and related follow-ups.
  • Clarity: Searchers are busy. They want quick answers. Make your content easy to scan and understand.
  • Expertise: Demonstrate real-world knowledge and experience your audience can trust.

For many SEOs, this is a welcome shift. It's not about just checking off boxes anymore.

Sure, we still have to do those things. But the bar for what constitutes good SEO is being raised far beyond the basics. When search engines evaluate content today, they’re looking for signals that SEOs and content creators are providing real value to searchers.

Why visibility matters more than clicks for local SEO

Small, local, or service-based businesses that rely on SEO-driven leads for revenue can use these same strategies, too. While success isn't measured with the same metrics as it was just a couple of years ago, the goal of good SEO remains the same: get the business recommended before the competition for as many searches as possible.

Two years ago, this meant clicks. Today, it means visibility. AI platforms like ChatGPT, Gemini, and AI Overviews often recommend businesses without linking to websites directly, if at all.

A few tools have been developed to measure AI metrics, but these can get pricey, and as Elizabeth Rule said, β€œMeasuring visibility is like trying to measure a wave with a ruler.” 

This is why maintaining strong communication between stakeholders and the SEO team is so important. When success can’t be measured simply, a simple question of β€œhow’s business going?” matters now more than ever. Beyond user intent, SEOs need to understand user behavior, mood, and temperament.

What β€˜helpful content’ looks like in practice

Here are five tips to get you started on creating content that is genuinely helpful:

1. Answer follow-up questions

Think beyond the initial query. What will readers ask next?

One of my favorite places to research this is the People Also Ask (PAA) section of the SERP. Say you're writing about herniated disc treatment: just Google "herniated disc treatment" and use the PAA feature to brainstorm more questions your audience may ask about the topic. The more questions you click, the more ideas it'll generate.

2. Show expertise and experience

E-E-A-T is an SEO hill I will die on because it works. Share your knowledge, case studies, testimonials, or firsthand insights. This builds trust when done right and when you're creating for people, not search engines.

This is what the helpful content update of 2022 was all about.


3. Structure content clearly

We'd all love to believe that everything we write is being read word-for-word. It's not. People skim. They're looking for an answer while they're doing other things.

This is why clearly structured web pages are so important on both mobile and desktop. Use headings, bullet points, and concise paragraphs to help readers quickly find answers.

4. Be authentic

Authenticity sounds like a buzzword (and maybe it is), but people can tell when you’ve used AI to write something or when you’re just publishing content for SEO.

Much as it pains me (an English major who loves to read long novels and write dissertations) to say, no one cares about your personal anecdotes or how many adjectives you can think of for your "superior" service. They just need an answer to the question they searched.

Avoid fluff or filler. Real-world, practical content resonates better than generic advice.

If someone called and asked you, "How long does it take to change the water heater in my 1950s home?", you wouldn't need 1,500 words to answer them. The content you create on the internet should be the same.

5. Ask β€˜who, what, and how?’ about your content

If you've been paying attention to GEO/AEO/SEO for AI, this might sound familiar to you as a little something called semantic triples. This sounds intimidating at first, but it's really just sixth-grade English.

A semantic triple answers who, does what, for whom (or how). Remember diagramming sentences? It’s the relationship between the subject, predicate, and object. It can be any subject, predicate, and object:

  • The plumber installs water heaters in Dallas
  • The bakery bakes wedding cakes for couples
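
Those two examples reduce to plain (subject, predicate, object) tuples. A minimal sketch (the helper name is my own invention, not a formal vocabulary like RDF or schema.org):

```python
# Semantic triples as plain (subject, predicate, object) tuples.

triples = [
    ("The plumber", "installs", "water heaters in Dallas"),
    ("The bakery", "bakes", "wedding cakes for couples"),
]

def to_sentence(triple):
    # Recover the plain-English sentence: subject + predicate + object.
    subject, predicate, obj = triple
    return f"{subject} {predicate} {obj}"

sentences = [to_sentence(t) for t in triples]
```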

I first heard about semantic triples from Mike King during SEO Week 2025 when he broke down his concept of relevance engineering. If you haven’t watched his video on this topic, I highly recommend it.

The basic idea is that SEO is about your audience:

  • Who are you talking to?
  • What do they need?
  • How do you reach them?

A semantic triple answers these questions. It provides structure and clarity. It’s the β€œWho, What, and How” that Google told us about with the HCU documentation. It’s also genuinely valuable information for searchers.

Knowledge is your superpower. You’re the only person who can tell your story, explain your process, and show readers why your business or brand matters.

Helpfulness is the competitive edge

The most reliable SEO strategy remains the same with each new core update from Google: Create content that genuinely helps searchers.

Focus on the problems your audience is trying to solve, answer their questions fully, and share your expertise. Thin or derivative content won't cut it in a world of AI-driven search and retrieval systems.

Google and AI platforms are trying to do the same thing searchers are doing: find the most helpful content. If you respond to that need, your content will rise to the top, no tricks, hacks, or shortcuts necessary.

Gartner: 40% of agentic AI projects will fail, making humans indispensable by Optimove

Fact: Agentic AI is making humans indispensable.

More than 40% of agentic AI projects will be canceled by the end of 2027. That is a prediction from Gartner published in June 2025, based on a poll of more than 3,400 organizations actively investing in the technology.

The reason cited is not that the agents do not work. It is that the humans deploying them are making the wrong decisions. β€œMost agentic AI projects right now are early-stage experiments or proof of concepts that are mostly driven by hype and are often misapplied,” according to Anushree Verma, senior director analyst at Gartner.

Organizations are deploying agents without a clear strategy, without understanding the complexity, and without the governance to manage what happens when something goes wrong.

In other words, the agent is only as good as the human behind it.

This matters enormously for marketing. AI agents in marketing are real, accelerating, and in many cases necessary. Agents that select audiences. Agents that generate content. Agents that optimize send times, choose offers, and orchestrate entire customer journeys autonomously, continuously, and at a scale no human team could match. The capabilities are here today and growing rapidly.

But Gartner's data reveals a warning, and marketing leaders who miss it will find themselves on the wrong side of that 40%.

FOMO causes agent failure

The failure rate Gartner describes is not random. It starts with fear.

Fear of being left behind. Fear of watching competitors move faster. Fear of being the CMO who did not act when everyone else did. That fear is driving organizations to deploy agentic AI, not because they have a strategy, but because they cannot afford to be last.

The result is agents built on broken workflows. Agents fed with poor data. Agents operating without the governance structures that keep them aligned with business goals. The agents execute… the wrong things, in the wrong ways, at the wrong times.

FOMO is not a strategy. And in the agentic era, it is an expensive mistake.

Agent washing

Gartner identified a widespread trend it calls "agent washing": vendors rebranding existing chatbots and automation tools as agentic AI without delivering genuine autonomous capabilities. Of the thousands of vendors claiming agentic solutions, Gartner estimates only around 130 offer real agentic features. Marketing teams investing in the rest are not getting agents. They are getting dressed-up automation with an agentic price tag.

The consequences go beyond wasted budget. Gartner predicts that in 2026, one-third of companies will harm customer experiences by deploying AI prematurely, eroding brand trust and damaging both acquisition and retention.

A personalization agent that misreads a customer. A content agent that violates compliance. A journey agent that floods a churning customer with offers at exactly the wrong moment. These are the predictable outcomes of deploying autonomous systems without the human judgment to direct them.

The dumbing down of marketers

Gartner's third prediction is the most revealing of all: GenAI usage leads to the atrophy of critical-thinking skills. As a result, 50% of global organizations will require AI-free competency evaluations.

Half of all organizations are watching their people get dumber because AI is always available to think for them. Quietly. Gradually. Until the day the algorithm is wrong and nobody in the room can tell.

In marketing, that is a crisis. Marketing requires judgment: the ability to ask not just what the data says, but what it means. Not just whether a campaign worked, but why. Not just whether to accept an AI recommendation, but whether it reflects the brand, the moment, and the relationship the company is trying to build.

Those questions cannot be delegated to an agent. They require a human being scrutinizing what a machine thinks is right.

The most dangerous marketer in the agentic era is not the one who rejects AI. It is the one who accepts everything it produces without question.

Agents cannot be trusted to ask the right questions

An agent can optimize what it has been given. It cannot question whether it has been given the right thing.

It can personalize a message based on behavioral signals. It cannot decide that the right move is to say nothing at all… to give a customer space, to protect a relationship rather than extract from it.

It can generate a thousand content variations and test them. It cannot feel the difference between a message that converts and a message that connects. It cannot sense when a campaign that performs well in the data is quietly damaging the brand.

It can execute a journey flawlessly. It cannot design one that reflects what customers actually want from this brand, at this point in their lives.

These are not limitations that will be solved by the next model release. They are structural. AI is trained on the past. The irreducible human job in marketing is to bring judgment about what should happen next, even when the data does not yet exist to support it.

The marketer as manager of agents

The right mental model for the agentic era is not human versus machine. It is a human plus machine, with the human in charge.

That is the foundation of Positionless Marketing. For decades, marketing teams operated as an assembly line with handoffs. Positionless Marketing breaks that model by giving marketers three transformative powers: Data Power to immediately discover customer insights for precise targeting and hyper-personalization, without waiting for engineers; Creative Power to create channel-ready assets like copy and visuals, without waiting for creatives; and Optimization Power to run campaigns that optimize themselves through automated journeys and testing, without waiting for analysts. Handoffs are eliminated.

The Positionless Marketer is a multidisciplinary thinker who deploys AI agents to go beyond traditional positions. Agents handle what used to require waiting for three different teams, eliminating the assembly line. The marketer is no longer waiting on anyone. They are thinking bigger, moving across disciplines while keeping human judgment at the center of every decision the agents make.

This is a promotion, not a replacement. But it comes with real demands: marketers who can think strategically, not just operationally; who can evaluate AI output critically, not just accept it; who can take accountability for what the agents do in their name.

Gartner’s Daryl Plummer stated it directly: organizations should prioritize behavioral changes alongside technological changes as first-order priorities. The technology is ready. The question is whether the humans in the marketing organization are.

The window is narrowing

The organizations that will win the next decade of marketing are not the ones that deploy the most agents. They are the ones that build the human capability to direct them well.

Gartner's 40% prediction is not a warning to slow down. It is a warning to be deliberate. The difference between an agentic marketing operation that compounds value over time and one that wastes budget, violates policy, and erodes customer trust is not the technology. It is the human judgment sitting above it.

Marketing teams need to face facts in the agentic AI era: the agent is only as good as the indispensable human behind it.

PS5 Linux project released to unlock PlayStation 5’s PC potential

A PlayStation 5 Linux Loader has been released. Last month, a modder called Andy Nguyen, also known as "theflow0" and "TheOfficialFloW", showcased their Linux-powered PlayStation 5 (PS5) and its PC gaming capabilities. Now, the modder has officially released a PS5 Linux Loader on GitHub, allowing others to turn their PlayStation 5 consoles into Linux PCs. […]

The post PS5 Linux project released to unlock PlayStation 5’s PC potential appeared first on OC3D.

No, Galax is not exiting the GPU market

The rumours are false; it's "business as usual" over at Galax. This morning, several reports claimed that Galax is exiting the GPU market. These reports are false, and Palit has confirmed that it's "business as usual" over at Galax and that the company will continue making GPUs. Galax has been part of the Palit […]

The post No, Galax is not exiting the GPU market appeared first on OC3D.

Palit Confirms: GALAX, KFA2, and HOF Branding to Continue

Today, we reported that GALAX is ending its operations as an independent company and integrating into its parent company, Palit. However, users were left wondering whether Palit would stop offering GALAX-branded products, which have significant recognition among gamers. The official company response is that the branding will continue to be active. This means that GALAX-branded Hall-of-Fame (HOF) GPUs for extreme overclocking, the KFA2 brand for Europe, and other GALAX-branded products will remain available on the market. In simple terms, this is just a corporate structure change, with Palit consolidating its ventures under one roof as the parent company. Ongoing customer commitments, including RMA, warranty claims, and general support, will now be handled by Palit, while the design and development of new GPUs under the GALAX brand will continue.
Below is a complete statement from Palit, followed by a statement from GALAX.

(PR) MotoGP 26 Out Now

Milestone and MotoGP Sports Entertainment Group today announce the release of MotoGP 26, the latest instalment in the official MotoGP videogame franchise. Featuring the full official 2026 season, it combines reworked physics with Dynamic Rider ratings and deeper career management mechanics to deliver a more immersive and authentic gaming experience.

MotoGP 26 refines its physics through a rider-based handling system where control is more closely tied to how players move and position the rider on the bike. Body shifts and weight transfer now have a tangible impact on stability, cornering, and braking, resulting in a more natural and responsive riding experience. Supported by new rider animations, this model significantly changes the overall gameplay feel, offering finer control and a wider range of motion in both Pro and Arcade modes.

NVIDIA’s Laptop RTX 5070 12 GB Matches The 8 GB Version In Synthetic Tests, Leaked Benchmarks Show

NVIDIA Bumps RTX 5070 Laptop GPU To 12GB Using New 3GB GDDR7 Memory, Offers 50% Boost While Tackling Supply Constraints

One shouldn't expect any significant improvements even with the higher VRAM capacity, and this was obvious, since there are no other upgrades. Leaked benchmarks show the laptop RTX 5070 12 GB is equivalent to the 8 GB version in multiple synthetic tests. GeForce RTX 5070 and RTX 5070-powered systems are enjoying huge popularity. To tackle the GPU shortages, NVIDIA has announced that it will now supply the RTX 5070 for mobile platforms with 12 GB of VRAM, using 3 GB GDDR7 memory modules. The existing RTX 5070 laptop GPU brings just 8 GB of GDDR7 video memory, unlike […]

Read full article at https://wccftech.com/nvidias-laptop-rtx-5070-12-gb-matches-tthe-8-gb-version-in-synthetic-tests/

Final Fantasy VII Remake Part 3 Won’t Just Wrap The Trilogy, As The Highwind Promises To Redefine Series’ Scale

A character from Final Fantasy VII Remake and her companions appear in a dimly lit underground setting.

The Final Fantasy VII Remake trilogy has been in the works for a very long time and will finally conclude with a third and final game that has yet to be officially revealed. While Naoki Hamaguchi provided no new information in a fresh interview with ComicBook focused on the upcoming Nintendo Switch 2 and Xbox Series X|S versions of Final Fantasy VII Rebirth, the trilogy director did reveal the guiding philosophy behind the development of all three games, one that will lead to the conclusion being the culmination of the entire series. "Across the entire remake project, the guiding […]

Read full article at https://wccftech.com/final-fantasy-vii-remake-part-3-redefine-series-scale/

Sony’s Controversial PlayStation DRM May Not Be What It Seemed, As Sleuth Finds 30-Day Lock Vanishes After Refund Window

The PlayStation logo and name appear prominently against a blue background with floating geometric shapes.

Over the last weekend, it was widely reported that Sony introduced changes to its PlayStation DRM policy, which now requires users to connect online every 30 days to continue playing any digital game purchased after March 2026. While the company has yet to provide an official clarification on the matter, detective work conducted by ResetERA forums member andshrew revealed how this new policy seems to be related to the 14-day refund window for digital purchases. Using a jailbroken PlayStation 4, the ResetERA user poked around behind the scenes, making some interesting findings, starting from how digital licenses work. "The PS4 will install a license file for all […]

Read full article at https://wccftech.com/playstation-drm-30-day-lock-vanishes-refund-window/

Nvidia exec says AI is more expensive than actual workers β€” yet some companies don't see the extra costs as a negative

As advanced LLMs do more and more for modern businesses, the outlays to cover all those tokens can cost more than worker salaries alone. But some companies don't see the added costs as a negative as they look toward a more automated future.

Ransomware accidentally destroys all files larger than 128KB, preventing decryption β€” VECT code likely partly vibe coded with AI or used an old code base, security researchers suggest

A ransomware's major flaw means that files cannot be decrypted because of a programming mistake. It also has several minor issues, showing that its creator may not be as sophisticated as suggested. Still, researchers point out that these could be rectified in future versions of the malware.

Zuckerberg's Meta will beam sunlight from space to power AI data centers, solar-collecting satellites will orbit 22,000 miles above Earth β€” firm reserves 1 Gigawatt of orbital solar energy and 100 Gigawatt-hours of long-duration storage

Meta has announced plans to help power its AI data centers using sunlight beamed from space through a partnership with Overview Energy, alongside a 100 GWh long-duration storage deal with Noon Energy, as the AI industry’s energy demands continue to surge.

IssueCapture – AI-powered bug reporting widget for Jira and Jira Service Management


IssueCapture is a JavaScript widget you add to any website with one script tag. When users find a bug, they click a button, describe it, and optionally annotate a screenshot. The widget automatically captures console errors and failed network requests, then creates a detailed Jira ticket with all that context.

It works with both Jira Software and Jira Service Management, including team-managed projects. Optional AI features handle triage, categorization, and duplicate detection so developers get organized tickets instead of vague reports. The free tier includes 10 issues per month with no credit card needed.

View startup

Only EU – Find European alternatives to US software, products, and services


Only EU is a curated directory that helps you replace US software and products with European alternatives. Browse categories like cloud storage, email, password managers, VPN, and more, or select tools you use to get tailored, GDPR-compliant recommendations. The site highlights providers with stricter environmental standards, shorter supply chains, and European quality, with clear details and links to explore each option.

View startup

(PR) Sharkoon Releases New FIREGLIDER One Gaming Mouse

Ready to rekindle the fire? With the FIREGLIDER One, a classic returns - but this time, faster, lighter, and more precise than ever before! The dual-mode gaming mouse combines uncompromising technology with a minimalized design that is totally committed to good performance. Whether in competitions or for everyday gaming, the FIREGLIDER One delivers lightning-fast reactions and maximum control for heating up any game to exactly the right temperature.

Two Modes, One Goal: Victory
The FIREGLIDER One can be operated both wired and wirelessly and is always ready for use, whether you need a stable connection for a tournament or if you prefer wireless freedom for everyday gaming.

GTA 6 Stays On Track For November 19 As Strauss Zelnick Brushes Off 2027 Delay Speculation With Sick Days Joke

A character in the game GTA 6 stands on a yacht against a city skyline at sunset, with arms crossed.

GTA 6 is set for release this November, but as Rockstar Games is known for delaying its titles multiple times, the community remains wary of a potential last-minute slip into 2027. However, Take-Two CEO Strauss Zelnick suggested in a recent talk held during the iicon conference, as reported by IGN, that another delay is not on the horizon. "I think a lot of people will be calling in sick on November 19," Zelnick joked during his talk, clearly aware of how many in the community are planning to skip school or work to play one of the most anticipated games […]

Read full article at https://wccftech.com/gta-6-on-track-zelnick-brushes-off-delay-joke/

The Blood of Dawnwalker Director Backs Full Evil Playthroughs, Urges Players Not to Reload After Bad Choices

Coen from The Blood of Dawnwalker feeds on an NPC.

Yesterday was a big day for Rebel Wolves and their debut game, the open world action RPG The Blood of Dawnwalker. The Polish studio unveiled the game's release date (September 3) and the detailed PC system requirements. Moreover, in an aftershow Q&A, Creative Director Mateusz Tomaszkiewicz provided new information about the game. For example, he confirmed that full evil playthroughs will be possible, thanks to the ability to kill most NPCs without causing a game over. Evil run. Generally speaking, you can kill off the majority of NPCs. Maybe not every single one; there are specific cases where, for narrative […]

Read full article at https://wccftech.com/blood-of-dawnwalker-director-evil-playthroughs-bad-choices/

Revelion – Add autonomous AI pentesting to your MSP stack


Revelion delivers autonomous AI-driven penetration testing designed for MSPs. Use a white-label client portal, full API, scheduled scans, and compliance framework mapping to send branded, enterprise-grade reports in hours. Control strategy, scope, and aggression, run fully autonomous or hybrid human-steered missions, and integrate with your RMM/PSA stack. UK-hosted and GDPR compliant, Revelion helps MSPs add recurring security revenue without extra headcount.

View startup

Happy Horse – Create polished AI videos from prompts, images, and clips


Happy Horse is an AI video creation platform that turns text prompts, images, and clips into cinematic videos. It preserves character identity across shots, offers director-level camera moves, and supports precise style control for photorealistic or stylized looks. You can generate native voice and singing with lip-sync or transform references with R2V workflows. Creators, marketers, educators, and filmmakers can quickly go from concept to export, with an API planned for embedding generation into other products.

View startup

CISA Adds Actively Exploited ConnectWise and Windows Flaws to KEV

The U.S. Cybersecurity and Infrastructure Security Agency (CISA) on Tuesday added two security flaws impacting ConnectWise ScreenConnect and Microsoft Windows to its Known Exploited Vulnerabilities (KEV) catalog, based on evidence of active exploitation. The vulnerabilities are listed below: CVE-2024-1708 (CVSS score: 8.4), a path traversal vulnerability in ConnectWise ScreenConnect

GALAX Shuts Down, Famous GPU Vendor Taken Over by Palit After 30 Years

GALAX, the legendary maker of Hall of Fame (HOF) GeForce GPUs known for their exceptional overclocking capabilities, is officially closing its operations after 30 years in business. GALAX, along with its KFA2 brand for the European market, will now be closed, with existing product inquiries managed by Palit, one of the largest GPU add-in card (AIC) manufacturers and a significant NVIDIA partner. Founded in Hong Kong in 1994, GALAX distinguished itself by creating high-performance designs with NVIDIA GeForce GPUs, particularly known for its HOF series. These iconic white-themed designs feature massive VRM circuitry for overclocking and higher-binned dies suitable for LN2 extreme overclocking scenarios. Over the past few generations, multiple GPU overclocking world records were achieved with GALAX HOF cards, and the brand has maintained that design language throughout the years.

After more than 30 years in business, GALAX is closing its operations, and these will be transferred to Palit, which will take full responsibility for "all activities and commitments related to the brand." This includes RMA services, warranty claims, product launches, and more. Interestingly, the announcement does not mention that the GALAX branding will be phased out. Only the actual company operations will be integrated into Palit. It's possible that Palit will retain the GALAX branding and its HOF name, which is well-known for high-performance overclocking among enthusiasts. It's worth noting that GALAX and its sister brand KFA2 have been operating for years with Palit's support as the parent company, so it's uncertain if the brand will continue its market presence under different management. GALAX and KFA2 have been sub-companies of Palit, and management claims that now is the time to unite all of Palit's brands under one roof.

Update 11:05 UTC: Palit confirmed that the current GALAX branding will continue to be present on the market.

BeatCrate – Organize and perfect your DJ library with precise curation and analysis


BeatCrate is a macOS DJ music library manager that helps you organize tracks, perfect metadata, verify audio quality, and prepare sets fast. It auto-tags from MusicBrainz, Discogs, iTunes, Beatport, and Traxsource with a merged view and inline diffs for confident edits. Analyze BPM with CoreML and DSP, inspect spectrograms, meter loudness, and search high-res artwork. Batch edit, filter with rule builders, and undo every change. Built with Swift and Metal, it supports MP3, FLAC, WAV, AIFF, and M4A on macOS 13+.

View startup

LiteLLM CVE-2026-42208 SQL Injection Exploited within 36 Hours of Disclosure

In yet another instance of threat actors quickly jumping on the exploitation bandwagon, a newly disclosed critical security flaw in BerriAI's LiteLLM Python package has come under active exploitation in the wild within 36 hours of the bug becoming public knowledge. The vulnerability, tracked as CVE-2026-42208 (CVSS score: 9.3), is an SQL injection that could be exploited to modify the underlying
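
For readers unfamiliar with the bug class: the sketch below is a generic illustration of SQL injection and of parameter binding as its standard mitigation, using SQLite. It is not LiteLLM's actual code, and the table and values are invented:

```python
import sqlite3

# Generic illustration of the SQL injection bug class (NOT LiteLLM's code).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE keys (id TEXT, budget REAL)")
conn.execute("INSERT INTO keys VALUES ('abc', 10.0)")

malicious = "abc' OR '1'='1"

# Vulnerable pattern: the quote in the input escapes the string literal,
# turning attacker input into SQL that matches every row.
unsafe_sql = f"SELECT * FROM keys WHERE id = '{malicious}'"
leaked = conn.execute(unsafe_sql).fetchall()

# Safe pattern: parameter binding keeps the input as data, never as SQL.
safe = conn.execute("SELECT * FROM keys WHERE id = ?", (malicious,)).fetchall()
```

The interpolated query returns rows it should not, while the bound query returns nothing, because the placeholder never lets the input alter the statement's structure.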

Epomaker Shows Off Revised HE75 V2 Gaming Keyboard With Enthusiast Design Touches and Gaming Performance

Epomaker's voyage into the world of Hall effect keyboards has been ongoing for a little over two years now, and while the peripheral maker often manages to hit a reasonable price point, the results can be a little mixed. Now, it has shown off a revised version of the HE75 Mag that launched in 2025, bringing some new enthusiast-grade design touches to the affordable Hall effect gaming keyboard. The HE75 V2 has not yet officially launched, but Epomaker has shown it off in a recent livestream on YouTube and published the product page with all the specifications ahead of the official launch. From that product page, we can see that the HE75 V2 has an ABS plastic case with a gasket mount and an FR4 plate. Curiously, neither the PCB nor the FR4 plate has flex cuts, a feature generally valued by the enthusiast community for its effect on sound. Pricing has not yet been divulged, but if it is anywhere close to the original HE75 Mag's, it should come in at around $100.

As the name suggests, the HE75 V2 follows a 75% layout, keeping both the num row and F row, although it makes use of a reduced, three-key navigation column on the far right edge, making it a solid, if somewhat compromised, all-round option. There is also a modular programmable volume knob that can be removed and replaced with two key switches, much like on the original HE75 Mag. Unlike the original, the HE75 V2 comes in either an all-white or all-black aesthetic, both with color-matched translucent PC keycaps: smoky on the black version and frosted on the white keyboard. The translucent keycaps obviously allow a lot of RGB shine-through, and Epomaker has accented that with edge lighting that shines through a faceted diffuser seemingly meant to look like crystals. If the Epomaker livestream is any indication, the HE75 V2 and its Creamy Jade Magnetic switches will have a poppy sound signature typical of an FR4 plate and POM switches. The HE75 V2 will feature tri-mode connectivity, with Bluetooth, 2.4 GHz, and USB-C, and 8 kHz polling over 2.4 GHz and wired connections.

Sony's New DRM Appears To Be a Refund Scam Workaround

News recently broke about a new DRM system Sony had seemingly silently implemented in the PS5 that would make buyers of digital games go online every 30 days to validate a license or potentially be locked out of their games until they could access an internet connection. Now, according to some research done by andshrew on the ResetEra forums, it seems as though it's not quite as simple as that. As it turns out, going online to validate the DRM license may only be a requirement for the first 14 days, as an attempt to prevent users from buying a game, refunding it, and then playing it anyway in offline mode using some exploit. At least, that is the current speculative explanation, since Sony has yet to address this debacle publicly.

The explanation in the ResetEra forum post details the full methodology, but essentially, what the user found by comparing two copies of the same game purchased on two separate PSN accounts, once before the new requirement and once after, is that Sony is now issuing a 30-day license for the first 14 days and swapping that license out for a perpetual license after the refund period has lapsed. This would mean that, after Sony's DRM server has been able to validate the license once after the 14-day refund period is over, there will be no restrictions on offline play, as is usually the case with console games. This somewhat resolves some of the issues that have been brought up surrounding this DRM feature, but it still may present issues for those who are not able to connect to the internet more than 14 days after purchasing a game. This is especially true since there is no clearly visible notice about this DRM feature on Sony's site at the time of writing.
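The behavior andshrew describes reduces to a small state model. The sketch below is purely illustrative, using the reported 14-day refund window and 30-day initial license; none of the names reflect Sony's actual implementation.

```python
# Toy model of the license behavior reported on ResetEra. The constants
# mirror the report (14-day refund window, 30-day initial license); this is
# an illustration, not Sony's actual implementation.
from dataclasses import dataclass
from typing import Optional

REFUND_WINDOW_DAYS = 14
INITIAL_LICENSE_DAYS = 30

@dataclass
class License:
    perpetual: bool
    expires_day: Optional[int]  # days since purchase; None if perpetual

def issue_license(days_since_purchase: int) -> License:
    """What the DRM server reportedly hands out when the console checks in."""
    if days_since_purchase <= REFUND_WINDOW_DAYS:
        # Inside the refund window: time-limited license only, so a
        # buy-refund-play-offline trick stops working within 30 days.
        return License(False, days_since_purchase + INITIAL_LICENSE_DAYS)
    # Refund window over: swap in a perpetual license; no further checks.
    return License(True, None)

def can_play_offline(lic: License, today: int) -> bool:
    return lic.perpetual or today < lic.expires_day
```

Under this model, a console that validates on day 0 and never reconnects is locked out after day 30, while any check-in after day 14 yields unrestricted offline play from then on.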

Linux now Officially Available for PlayStation 5

Hot on the heels of the news of Sony's new DRM that earned significant community backlash, Andy Nguyen, the developer who previously showed off running a full Linux installation on a PlayStation 5, has officially published their methodology and the necessary steps to get the open-source operating system running on Sony's console. The hack requires a disc-version PS5 running firmware version 3.00, 3.10, 3.20, 3.21, 4.00, 4.02, 4.03, 4.50, or 4.51, and at the time of writing, the M.2 drive is only supported on the 4.XX versions. There are supposedly ways to downgrade the PS5 firmware to one of the versions that still supports the hack, but those may not always work reliably. If you've followed the jailbreak steps, injected the payload, and rebooted back into Linux, you should be greeted by a full Ubuntu 26.04 Resolute Raccoon installation, complete with Linux kernel 7.

Interestingly, the PS5 Linux installation is quite full-featured, replete with custom VRAM allocation, fan control, and a boost mode toggle, all from within the terminal or a text file, of course. There are some caveats, and driver development is still ongoing. Wireless networking, for example, may require you to manually restart the WLAN adaptor to work. The Sony DualSense controllers also don't currently work via the built-in dongle, although they do with an external dongle. The output refresh rate is also limited to 60 Hz across 1080p, 1440p, and 4K resolutions, although 120 Hz may be added later. The biggest limitation, however, is that it is a soft mod, meaning that if you restart the PS5 while in the Linux desktop, it will not boot back into the environment unless you apply the same jailbreak again. The upside of that is that the PlayStation 5's base OS isn't affected by the Linux installation, so if you want to go back to using it as a regular PS5, you can just reboot.
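How exactly to restart the adaptor isn't spelled out, but on a stock Ubuntu installation the usual options are bouncing the radio through NetworkManager or reloading the driver module. The module name below is a placeholder; check `lspci -k` to see which driver is actually bound to the wireless hardware.

```shell
# Option 1: toggle the Wi-Fi radio through NetworkManager.
nmcli radio wifi off
nmcli radio wifi on

# Option 2: reload the wireless driver module itself.
# "wlan_driver" is a placeholder; substitute the module reported by `lspci -k`.
sudo modprobe -r wlan_driver
sudo modprobe wlan_driver
```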

Steam Deck Update Introduces Plentiful Quality-of-Life Changes

Since the launch of the Steam Deck, Valve has poured a lot of effort into not only developing SteamOS but also making the Steam Store and client more user-friendly on handheld devices. Valve's latest Steam Deck Client update builds on this, introducing a number of small changes that make the Steam Deck all the more useful as a console-like gaming experience. Some of the highlights in this latest update include a new optional "Switch to Desktop" button on the login screen, a change that should make the Steam Deck that much easier to use as a docked workstation when necessary. The update also adds a wireless gamepad battery indicator and a low battery level toast notification.

In addition to the aforementioned new UI features, the Steam quick access menu now also houses Steam chat, making in-game socializing easier, and there is now a new quick chat feature in Steam Deck and Big Picture mode. Users can access user-configurable quick chats by holding down the view button and selecting the appropriate response. The Steam Deck also now supports Remote Downloads management, allowing you to manage, for example, the downloads on your gaming desktop from the comfort of a couch across the room or the discomfort of an airport across the country. The update also includes a number of bug fixes and UI changes to features like the Steam Input controller customization settings, which can all be viewed in the full update changelog.

AI Fruit – Create talking fruit videos and meme loops for social platforms


AI Fruit lets creators generate talking fruit shorts, self-eating meme clips, ASMR bite videos, and vegetable roleplay scenes for TikTok, Reels, and Shorts. Start with a character and format, then pick a model to produce 1080p videos or polished images. Templates and a credit-based studio help you move from idea to publishable content, and a story generator supports scripts and dialogue for recurring characters.

View startup

Resident Evil Requiem Mini-Game DLC Slated for Early May Release, Story Expansion Still in Development

Capcom has been fairly open about the fact that there is a Resident Evil Requiem DLC coming at some point this year, but an exact launch date was unclear. Now, according to a hint from the game's director, Koshi Nakanishi, and producer, Masato Kumazawa, in an interview with Denfaminico Gamer, that DLC is still under development. The pair did not outright confirm the release date, but they confirmed that there is a combat-based mini-game DLC coming in May, following up that statement by suggesting that players interested in playing the mini-game DLC complete the game's main quest during Japan's Golden Week holidays.

"So, if you're planning to play it, clearing the main story during Golden Week would be just right for you to be able to play it," said Kumazawa. Golden Week is one of Japan's largest holiday periods and runs between April 29 and May 6, so the implication here is that the Resident Evil mini-game DLC will launch sometime shortly after May 6. The comment to Denfaminico also confirms that the mini-game will only be accessible once players have completed the main quest in the Resident Evil Requiem base game.

Serial Subscriptions – Unify auth, payments, subscriptions, and usage in one integration


Serial Subscriptions unifies auth, payments, user and organization management, usage tracking, subscriptions, custom logic, and branded flows in one integration. Connect Stripe, add your brand, and configure in minutes so you can focus on your core product while scaling from indie to enterprise. The platform supports tiered and usage-based billing, ASC 606–minded revenue recognition, and audit trails. It offers customizable pricing pages, dashboards, invoices, and user flows, with 99.9% uptime and no lock-in.

View startup

Google Tensor G6 Chip Likely To Launch With An Ancient GPU That Debuted Around 5 Years Back

A close-up of a Google-branded processor chip with circuitry details visible.

It wouldn't be Google if it did not somehow try to hobble its Tensor-class chips. And this unfortunate trend appears all set to continue with the upcoming Tensor G6 SoC, which is quite likely to sport a GPU that launched all the way back in 2021! A new leak indicates that the Google Tensor G6 chip will sport the PowerVR CXT-48-1536 GPU that debuted in 2021. As our readers would be well aware, we had ripped into Google a few months back for using generations-old ARM CPU cores within the Tensor G5 chip. Thankfully, as per recent leaks, Google has […]

Read full article at https://wccftech.com/google-tensor-g6-chip-likely-to-launch-with-an-ancient-gpu-that-debuted-around-5-years-back/

BrackIt – Create and run tournaments with auto-scheduling and live updates


BrackIt is a tournament scheduling app for organizers on Android and iOS. It auto-schedules games across venues and time slots, detects conflicts, and advances winners automatically. You can track live scores and standings, share tournaments with follow codes, and export PDFs for brackets and results. It lets you set custom matchups, avoid specific pairings, host up to 64 participants, and send push notifications for updates. You can also reschedule matches or entire rounds with a tap.

View startup

Koru – News verified by the people who read it


Koru is a credibility-first news platform where your votes shape what's trusted. Vote on articles: get it right, your influence grows; get it wrong, it shrinks. High-credibility content rises while low-credibility content sinks. There are no algorithms optimizing for outrage, just accountability.

Koru is currently in pre-beta with invite-only access while we test the first version with early users. Join the waitlist to help shape a news platform built around credibility.

View startup

AniJam – AI animation agent that creates anime and animated videos


AniJam is an all-in-one AI animation platform that lets you design consistent characters, apply professional styles, and generate expressive lip sync on a production canvas. It connects with leading video models and agents to take you from idea to finished shots. Use a rich voice library to craft performances, maintain character identity across scenes, and train your own visual style. Start projects quickly and move from concept to final renders with simple, powerful tools.

View startup

Fedora Linux 44 Launches With Gnome 50, KDE Plasma 6.6

Right on time, according to the recent launch date announcement, Fedora 44 has officially left the gates, with both Fedora KDE Plasma Desktop 44 and Fedora Linux 44 Workstation (with Gnome) seeing significant user-facing and behind-the-scenes updates. The biggest changes come by way of the addition of Gnome 50 on Fedora Workstation and KDE Plasma 6.6 on Fedora KDE Plasma Desktop 44. Both versions can now be downloaded from the Fedora website, and existing users can perform in-place upgrades following the official guidance. The full Fedora Linux 44 patch notes are available here.
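For reference, the in-place upgrade path Fedora has long documented runs through the dnf system-upgrade plugin; a typical session looks something like the following (a sketch of the documented flow, so consult the official upgrade guide and back up before running anything):

```shell
# Fully update the current release before jumping versions.
sudo dnf upgrade --refresh

# Pull in the system-upgrade plugin if it isn't installed already.
sudo dnf install dnf-plugin-system-upgrade

# Download all Fedora 44 packages, then reboot into the offline upgrade.
sudo dnf system-upgrade download --releasever=44
sudo dnf system-upgrade reboot
```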

While Gnome 50 and KDE Plasma 6.6 both feature neat UI tweaks, bug fixes, and accessibility and performance improvements, Fedora Linux itself also saw a few notable changes. For starters, Fedora 44 now includes the NTSync driver, enabled by default, which brings some impressive performance and stability improvements in certain games with the most recent Proton and Wine versions, as we covered previously. All Fedora KDE versions now use the same out-of-the-box experience, making initial setup feel more familiar and enabling hardware vendors to sell hardware with Fedora KDE pre-installed with a proper user greeting and setup wizard. Fedora Atomic Desktops have also dropped FUSE2 library support, which may matter to users who rely on AppImage applications. Both Fedora Workstation and Fedora KDE Plasma Desktop use Wayland by default, but in the case of Fedora Workstation, Gnome 50 no longer includes any code for X11 compatibility. X11 can still be installed, but it is not officially supported and may result in issues. Fedora KDE Plasma Desktop 44 also switches to the KDE Plasma Login Manager by default, which may be unfamiliar to some users and may actually have fewer features.

EloShapes Adds 3D Models to Compare Mouse Shapes in a Web Browser

Mouse comparison website EloShapes has long been an indispensable tool for gamers and productivity users alike to compare shapes when shopping for a new mouse, but it has previously only worked with the outlines of the mice in its database. This was a good solution, but as of a recent announcement, the developer behind EloShapes has started providing 3D models for shape comparison, making it far easier to spot small differences in contours and complex geometry that would have otherwise been hidden in the silhouette view. In the EloShapes search drop-down, a simple little icon next to a mouse's name indicates that a 3D model is available. At the time of writing, there does not seem to be a way to download the 3D models to 3D print a mock-up shape to test ergonomics and fit, but there is a lot of demand for it in the responses to the feature announcement.

The new 3D model comparison feature in EloShapes is based on 3D scans of real production units of gaming mice, and it allows you to very quickly compare two or more mouse shapes by overlaying the models over one another. It's a surprisingly complete tool for comparison. It allows you to switch between solid and shaded views to see how the shapes compare when superimposed over one another; there's a built-in ruler to measure features; and it allows you to change the alignment, position, and rotation of the mice to compare based on different datums or reference points. The model even indicates the sensor location, so that you can see if there is going to be any weirdness in the way the mouse feels to aim. The creator of EloShapes said in the announcement on X that there are a number of mice available on the comparison site, and that the collection would grow consistently over time.

Palit Takes Over Full Control of GALAX, Handling The Entire Business & All RMA Support

Two GALAX Hall of Fame graphics cards, one black and one white, displayed with a glowing 'Hall Of Fame' logo.

GALAX has announced that it is ceasing its entire global operations, and Palit will assume full control of the graphics card maker. GALAX Will Exist But Only Under Palit As AI Crunch Merges Both Brands Under One Roof [Update - 4/29/2026] - Palit and GALAX have issued a co-statement, which is as follows: Palit Group issues this statement to address and clarify recent inaccurate media reports regarding the operational status of the GALAX brand. [Original Article] In shocking news, GALAX or Galaxy Microsystems has announced that it is ceasing all operations and will now operate through its sister brand, Palit […]

Read full article at https://wccftech.com/galax-exit-graphics-card-market-palit-assumes-full-control/

Skoutee.ai – Automated outreach and collect responses across WhatsApp, email, MS Teams


Skoutee is an AI agent that contacts your customers, your team, or your community to collect information and feedback. Describe what you need, add recipients, and choose channels like WhatsApp, email, SMS, or a public link. It holds natural conversations, sends reminders to non-responders, and gathers replies in one place. You can trigger it via APIs, connect it to existing tools, and schedule recurring tasks like monthly invoice collection or post-sprint feedback, all with a strong security framework.

View startup

Seagate Technology Reports Fiscal Third Quarter 2026 Financial Results

Seagate Technology Holdings plc (NASDAQ: STX) (the "Company" or "Seagate"), a leading innovator of mass-capacity data storage, today reported financial results for its fiscal third quarter ended April 3, 2026.

"Seagate delivered outstanding March quarter results, exceeding the high end of our revenue and EPS guidance, achieving record margin performance, and generating close to $1 billion in free cash flow," said Dave Mosley, Seagate's chair and chief executive officer.

Framework Laptop 16 Gets NVIDIA RTX 5070 12 GB Upgrade Module for Eyewatering Price

Framework, known for its repairable and upgradeable gaming and productivity laptops, has officially released a Laptop 16 graphics upgrade module with the new NVIDIA GeForce RTX 5070 12 GB. Framework's existing Laptop 16 graphics expansion module, an upgrade over the original $350 AMD Radeon RX 7700S module, is based on the NVIDIA GeForce RTX 5070 8 GB and costs $699. Now, the RTX 5070 12 GB upgrade comes in at a whopping $1,199, a 72% price increase for a 50% increase in VRAM. It's unclear where the drastic price increase comes from, but some of it can likely be explained by the low volume Framework has to contend with, and some of it can be attributed to the ongoing DRAM crisis wracking the PC hardware market.
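The quoted percentages are easy to verify; a quick sanity check of the figures above:

```python
# Sanity-check the price/VRAM deltas quoted above.
old_price, new_price = 699, 1199   # RTX 5070 8 GB vs. 12 GB module, USD
old_vram, new_vram = 8, 12         # GB

price_increase = (new_price - old_price) / old_price  # ~0.715
vram_increase = (new_vram - old_vram) / old_vram      # 0.5

print(round(price_increase * 100), round(vram_increase * 100))  # 72 50
```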

Denuvo Responds to Day-Zero DRM Hypervisor Crack: "We're Already Working on Updated Security Versions"

A hypervisor bypass recently appeared, giving game crackers an alternative way to circumvent protections like Denuvo DRM. The bypass is so effective that popular game repacker FitGirl has declared that "all single-player/non-VR Denuvo games are now cracked/bypassed." The announcement comes after four hypervisor bypasses were released for EA Sports games. The hypervisor bypass appears to rely on installing a new hypervisor driver in Ring -1, granting it very low-level access, which presents significant security concerns, as acknowledged by FitGirl themself.

The announcement of this universal Denuvo bypass has prompted Denuvo to implement a new workaround: a mandatory DRM check with an online server every two weeks, according to recent reports from players online. This updated check has seemingly been applied to games published by 2K, namely NBA 2K25 and 2K26 and Marvel's Midnight Suns. Irdeto, the company behind Denuvo, has also spoken out about the issue, stating that "We're already working on updated security versions for games impacted by hypervisor bypasses. For players, performance will not be compromised by these strengthened security measures." Notably, Denuvo also said that whatever workaround is released will not operate in Ring -1, as has been theorized.

Nintendo and Illumination's Next Movie is Already Set for April 2028, According to Universal

Mario, Luigi, and Yoshi lying on grass in a vibrant animated scene.

We're not even a calendar month away from the release of the Super Mario Galaxy Movie, and it looks like Nintendo and Illumination already have a third movie set for 2028, according to an updated version of Universal's schedule on its Spanish website. Spotted by VGC, the updated schedule now lists "Untitled Illumination/Nintendo Event Film (April 2028)" on April 12, 2028. That could, of course, be the next Super Mario movie after Galaxy, or it could be the start of a new series like the rumoured Donkey Kong, Star Fox, and Metroid films. While we've not yet seen Samus appear […]

Read full article at https://wccftech.com/nintendo-illumination-next-movie-already-set-for-april-2028-super-mario-galaxy/

Researchers Discover Critical GitHub CVE-2026-3854 RCE Flaw Exploitable via Single Git Push

Cybersecurity researchers have disclosed details of a critical security vulnerability impacting GitHub.com and GitHub Enterprise Server that could allow an authenticated user to obtain remote code execution with a single "git push" command. The flaw, tracked as CVE-2026-3854 (CVSS score: 8.7), is a case of command injection that could allow an attacker with push access to a repository to achieve

The EU now requires USB-C for laptop charging up to 100W

All new laptops in the EU must use USB Type-C for charging (up to 100W). The next stage of the EU's common charging standard has come into force. As of today, new laptop models sold in the EU must support USB Type-C charging. The only exception to this rule is laptops that can charge with […]

The post The EU now requires USB-C for laptop charging up to 100W appeared first on OC3D.

The Blood of Dawnwalker Gets New Story Trailer, PC System Requirements and Release Date

Bandai Namco and Rebel Wolves have officially released the full system requirements, a new story trailer, and the release date for the upcoming The Blood of Dawnwalker. Built in Unreal Engine 5, The Blood of Dawnwalker is a single-player open-world action RPG developed by a studio co-founded by Konrad Tomaszkiewicz, director of The Witcher 3: Wild Hunt. The game is set in an alternative 14th century, where feudal lords have been overthrown by a clique of powerful vampires, and follows Coen, a young man turned into a Dawnwalker.

In addition to the story trailer, which reveals a few more details about the game, Rebel Wolves has also released the full PC system requirements: you will need at least an Intel Core i5-10400F or AMD Ryzen 7 3700X CPU, 16 GB of RAM, and an NVIDIA GeForce RTX 3050, AMD Radeon RX Vega 56, or Intel Arc A580 graphics card with at least 6 GB of VRAM. This will run the game at 1080p and 30 FPS with the low quality preset. The recommended system requirements, which target 60 FPS with the high quality preset, include an Intel Core i5-13600 or AMD Ryzen 9 7900X CPU, 16 GB of RAM, and an NVIDIA RTX 5060 or AMD RX 6800 XT for 1080p, or an NVIDIA RTX 4070 Ti or AMD RX 7800 XT for 1440p.

NVIDIA Releases New GeForce 596.36 WHQL Game Ready Drivers

NVIDIA has released the GeForce 596.36 WHQL Game Ready driver for Conan Exiles Enhanced, which launches on May 5th with support for DLSS Multi Frame Generation, DLSS Super Resolution, and NVIDIA Reflex. The new driver also adds support for the recently announced GeForce RTX 5070 12 GB Laptop GPU and fixes some gaming and general bugs.

As detailed by NVIDIA, Conan Exiles Enhanced is built in Unreal Engine 5 and packs eight years of post-launch content, bringing improved visuals, enhanced performance, and modern rendering technology. In addition to day-one support for the game, the new GeForce 596.36 WHQL Game Ready driver also adds support for the recently introduced GeForce RTX 5070 12 GB Laptop GPU that will feature 24 Gb GDDR7 memory. According to the release notes, the new driver also fixes issues in God of War: Ragnarok, Assassin's Creed Shadows, and The Crew Motorfest, as well as a blocky-artifacts issue seen when playing H.264 content with DXVA 3.0 and an issue with Blender 5.0.1's EEVEE renderer.

DOWNLOAD: NVIDIA GeForce 596.36 WHQL Game Ready

(PR) Fanatec Unveils New Products and Performance Upgrades at Spring Showcase

Fanatec, a brand of Corsair (NASDAQ: CRSR) and global leader in sim racing hardware, presented a series of product announcements and platform updates during its Spring Showcase event, introducing new hardware and software updates designed to expand flexibility across its ecosystem.

ClubSport Formula V3 launched
Fanatec introduced the ClubSport Formula V3, a reworked version of its iconic Formula-style steering wheel. The new model is a direct replacement for the Formula V2.5 and features a larger display with Intelligent Telemetry Mode, allowing drivers to switch between layouts such as lap times and deltas, tire temperatures, and other telemetry. The diameter increases to 290 mm, with revised ergonomics suited to a wider range of race cars.

Digital Bros Buys WUCHANG: Fallen Feathers IP for €4 Million, Reviving Sequel Hopes After Leenzee Reportedly Disbanded the Core Team

A character from the game 'WUCHANG: Fallen Feathers' holds a glowing sword against a dark, mystical background.

WUCHANG: Fallen Feathers was one of several Soulslike action games to hit the market since the sub-genre became incredibly popular within the last decade. While it didn't set the world on fire, it didn't bomb either, so players were fairly shocked at reports earlier this month that the core development team at Leenzee Games had been dissolved. Now, however, after it looked like we wouldn't see another game in the series, hope for a sequel is slightly renewed, as the rights for the WUCHANG: Fallen Feathers IP have been purchased by Digital Bros., the parent company of the game's publisher, […]

Read full article at https://wccftech.com/wuchang-fallen-feathers-ip-rights-bought-by-505-games-parent-company-digital-bros/

Apple iOS 27 And macOS 27 To Overhaul AI-Driven Image Editing Capabilities, Potentially Leaving Android Competitors In The Dust

Apple logo with glowing colors and the text Apple Intelligence on a black background.

Apple is banking on the upcoming iOS 27 and macOS 27 updates to regain relevance in the edge AI sphere, with plans already afoot to launch a new, highly capable, Gemini-backed, and wholly integrated chatbot-style Siri. Even so, Apple's ambitions for the upcoming updates are apparently much more expansive than previously believed, with the Cupertino-based tech giant now eyeing an unassailable lead in AI-driven photo-editing capabilities. Apple is planning to use its on-device AI models to "extend, enhance, and reframe photos" within iOS 27 and macOS 27. According to Bloomberg's Mark Gurman, with the upcoming iOS 27 and macOS 27 […]

Read full article at https://wccftech.com/apple-ios-27-and-macos-27-to-overhaul-ai-driven-image-editing-capabilities-potentially-leaving-android-competitors-in-the-dust/

NVIDIA Lines Up Foxconn, Palantir, and Oracle Behind Nemotron 3 Nano Omni as New Open AI Model Offers 9x Boost

NVIDIA Continues To Do What It Does Best - Intros Nemotron 3 Nano Omni Open Model That Makes Agentic AI 9x Faster

NVIDIA has introduced its latest open AI model, Nemotron 3 Nano Omni, which offers 9x faster agentic AI throughput. NVIDIA's Open AI Model Expansion Continues With Nemotron 3 Nano Omni Delivering a 9x Boost Press Release: Unveiled today, NVIDIA Nemotron 3 Nano Omni is an open multimodal model that brings these capabilities together into one system, enabling agents to deliver faster, smarter responses with advanced reasoning across video, audio, image, and text. This best-in-class model gives enterprises and developers a production path for more efficient and accurate multimodal AI agents with full deployment flexibility and control. Nemotron 3 Nano Omni sets a […]

Read full article at https://wccftech.com/nvidia-lines-up-foxconn-palantir-oracle-behind-nemotron-3-nano-omni-open-ai-model/

Framework's RTX 5070 12 GB Graphics Module Costs 72% More Than the 8 GB Model

A Framework laptop is shown with a Cooler Master cooling component and two pricing options for the RTX 5070: '8 GB $699' and '12 GB $1199'.

Framework has introduced its latest Graphics Module for its Laptop 16, which features the new RTX 5070 12 GB GPU, but at a 72% higher price than the 8 GB variant. Memory Prices Are So Bad That You Have To Pay 72% More For 50% More Memory When Buying Framework's Latest RTX 5070 12 GB Graphics Module Framework's laptops have the unique ability to upgrade the entire GPU by purchasing a separate graphics module. This is ideal for laptop owners who want to upgrade the performance capabilities of their laptops rather than buy a new one. However, recent memory supply constraints […]

Read full article at https://wccftech.com/frameworks-rtx-5070-12-gb-graphics-module-costs-500-usd-higher-than-8-gb-model/

Dumble – Build and deploy sites, apps, and docs with an AI operator


Dumble is an autonomous AI operator that builds, deploys, and maintains your digital projects. Describe a site, e-commerce, dashboard, or app, and it plans files, writes clean code, tests with screenshots, fixes errors, and puts everything online with hosting and domain setup included. It also drafts and analyzes documents, researches the web, and keeps long-term memory. Work in an integrated Monaco editor with Git, terminal, and live preview, or connect your own API keys to use Claude, GPT, and Gemini while keeping data local and private.

View startup

WinkScope – Run your contracting business end to end with AI-powered tools


WinkScope is an all-in-one contractor management platform for general contractors, remodelers, and service trades. It unifies AI-powered estimating, project management, scheduling with Gantt charts, client and vendor portals, invoicing and payments via Stripe, a price book with live vendor pricing, time tracking with GPS, and analytics. Use templates to create proposals fast, convert to invoices, manage work orders, and keep teams and clients aligned from bid to build.

View startup

Brazilian LofyGang Resurfaces After Three Years With Minecraft LofyStealer Campaign

A cybercrime group of Brazilian origin has resurfaced after more than three years to orchestrate a campaign that targets Minecraft players with a new stealer called LofyStealer (aka GrabBot). "The malware disguises itself as a Minecraft hack called 'Slinky,'" Brazil-based cybersecurity company ZenoX said in a technical report. "It uses the official game icon to induce voluntary execution,

Paragon is not collaborating with Italian authorities probing spyware attacks, report says

Despite promising to help determine what happened with the hacks targeting journalists and activists in Italy, Israeli American spyware maker Paragon has reportedly not responded to authorities' requests for information.

LinkedIn expands Event Ads beyond its own platform

LinkedIn is rolling out Off-Platform Event Ads, giving marketers a new way to promote events without needing a native LinkedIn Event Page.

What's happening. The new format allows advertisers to run Event Ads that link directly to external destinations, such as webinar platforms, landing pages, or livestream sites, instead of keeping traffic on LinkedIn.

This marks a shift from platform-contained experiences to more flexible, marketer-controlled journeys.

How it works. Marketers can create an Event Ad using a third-party URL, add event details like date and format, and choose from objectives including awareness, engagement, traffic or lead generation.

Clicks send users directly to the external event page, while performance metrics remain trackable in Campaign Manager.

Why we care. Until now, promoting events on LinkedIn often meant working within platform constraints, which could fragment the user journey and limit control over registrations.

Off-Platform Event Ads remove that friction by allowing marketers to tap into LinkedIn's targeting while keeping traffic, data, and conversions on their own platforms, making it easier to scale campaigns and maintain a consistent experience.

What to watch:

  • Whether this drives higher registration rates compared to native Event Pages
  • How advertisers balance LinkedIn targeting with off-platform conversion tracking
  • If LinkedIn expands similar flexibility to other ad formats

Availability. Off-Platform Event Ads are currently rolling out globally and are expected to be available to all advertisers by May 6.

Bottom line. By opening Event Ads to off-platform destinations, LinkedIn is making it easier for marketers to scale event promotion — without forcing them to build inside its walls.

Corsair makes 12V-2×6 safe with new "ThermalProtect" cables

Corsair has built an affordable solution to the 12V-2×6 problem. Corsair has officially launched its new ThermalProtect 12V-2×6 power cables, adding cable over-temperature protection to its 600W GPU power cables. This feature shuts down GPUs when unsafe temperatures are detected, allowing users to remove and reseat their cables before any damage to their system occurs. […]

The post Corsair makes 12V-2×6 safe with new "ThermalProtect" cables appeared first on OC3D.

The Steam Machine and Steam Frame Price Reportedly "Skyrocketed" Internally Thanks to RAM Shortage

A black rectangular device labeled 'Steam Machine' with two USB ports and a power button on a beige background.

Yesterday, we got official confirmation from Valve that the first of the three new hardware products it revealed last year, the Steam Controller, will launch on May 4, 2026, just a few days from the time of this writing, and it'll retail for $99 USD / $149 CAD / €99 / £85 / $149 AUD. While that price isn't entirely outrageous for modern premium controllers, Valve did confirm that it is higher than initially intended, and according to a well-known Valve insider, the Steam Controller's $99 price tag is the least of our worries, and […]

Read full article at https://wccftech.com/steam-machine-price-reportedly-skyrocketed-internally-due-to-ram-shortage/

Samsung's Own Version Of DLSS For The Exynos 2600 Promises 15% Higher Performance, But The Company's Efforts Are Lacking In One Area

Samsung's upscaling technology for the Exynos 2600 brings a 15 percent performance improvement

All modern-day chips offer some form of upscaling technology to boost performance, with Samsung's ENSS (Exynos Neural Super Sampling) specifically catering to the Exynos 2600. Being the company's first iteration of upscaling, this DLSS-like feature is shown to deliver 15 percent better performance in a benchmark comparison, and while that's impressive, the gain matters little if there are few applications beyond synthetic workloads to take advantage of it. ENSS for the Exynos 2600 will greatly benefit games, but the lack of native ports and poor overall support make Samsung's proprietary technology useless. Aside from ENSS, […]

Read full article at https://wccftech.com/samsung-enss-technology-like-dlss-boosts-performance-for-exynos-2600/

ASRock Launches Advanced PG27QFW2A Gaming Monitor That Peaks At 400 Hz

An ASRock Phantom Gaming monitor is displayed with its screen and base featuring the Phantom Gaming logo, accompanied by the words 'Silence. Focus. Strength.' on the right side.

The Advanced PG27QFW2A monitor brings a 400 Hz refresh rate to 2K resolution on an IPS panel. ASRock Debuts 2K QHD IPS Monitor With Up To 400 Hz Refresh Rate; Also Launches the PG32QFT QHD IPS Monitor for the Budget Segment. Popular hardware and peripheral manufacturer ASRock has introduced a brand new mid-range gaming display under the Phantom Gaming series, featuring up to a 400 Hz refresh rate for a smoother gaming experience. This is the Advanced PG27QFW2A gaming monitor, which boasts a 27-inch display with a fast IPS panel. Usually, traditional IPS panels work at around 4ms response […]

Read full article at https://wccftech.com/asrock-launches-advanced-pg27qfw2a-gaming-monitor-that-peaks-at-400-hz/

The Blood of Dawnwalker Demands an RTX 5090 for Native 4K/60 Ultra, but DLSS and FSR Will Soften the Blow

Coen, protagonist of The Blood of Dawnwalker, wearing detailed armor stands in front of a medieval castle backdrop.

During the Road to Launch event, developer Rebel Wolves unveiled detailed PC specifications for their debut game, the highly anticipated fantasy open-world action RPG The Blood of Dawnwalker. The specs look fairly demanding on paper: the GeForce RTX 5090, the most powerful graphics card on the market, is required to play the game at 4K resolution and 60 frames per second with the Ultra preset. However, the developers clarified that these target resolutions, frame rates, and presets were calculated without taking upscaling and frame generation into account, both of which will be supported (including NVIDIA DLSS and AMD FSR). […]

Read full article at https://wccftech.com/blood-of-dawnwalker-pc-system-requirements/

Rebel Wolves Dives Into The Blood of Dawnwalker in Road to Launch Event, Release Date Set for September 2026

Five characters from the game 'The Blood of Dawnwalker' stand in dramatic poses against an orange background, with the game's title prominently displayed.

Rebel Wolves, a studio made up of former CD Projekt RED developers, is getting ready to launch their debut game, The Blood of Dawnwalker, which sees players step into the shoes of Coen, a human recently turned vampire who retains the ability to walk in sunlight. Today, Rebel Wolves dove into the game by showcasing more of its open-world gameplay, and revealing a new story trailer that is capped off by the reveal of the game's release date, set for September 3, 2026. You can check out the full livestream event to watch Rebel Wolves talk in detail about their […]

Read full article at https://wccftech.com/blood-of-dawnwalker-release-date-revealed/

Next-Gen Samsung S Pen to Copy Apple Pencil Pro After 14 Years of Leading Its Own Stylus Game

Close-up of a Samsung stylus and an Apple Pencil held together, showing the brand names.

Samsung took a lead by launching its S Pen all the way back in 2011, while Apple only debuted its version, dubbed the Apple Pencil, in 2015 after implementing a number of refinements. Now, after over a decade, Samsung appears poised to finally concede the superiority of Apple's implementation by replicating the Apple Pencil Pro tech within its S Pen. The next-gen Samsung S Pen might work a lot like the Apple Pencil Pro For the benefit of those who might not be aware, the digitizer layer within the screen stack of an iPad is able to detect capacitive (electrical) […]

Read full article at https://wccftech.com/next-gen-samsung-s-pen-to-copy-apple-pencil-pro-after-14-years-of-leading-its-own-stylus-game/

This £8.59 TP-Link gigabit Ethernet switch is the ultimate budget upgrade for lag-free gaming and streaming — an ideal solution for ditching laggy Wi-Fi connections that unlocks four extra high-speed ports on your network

Grab a huge 34% discount on this TP-Link 5-port unmanaged Ethernet switch that'll let you upgrade the wired connectivity in your home or office with gigabit speeds, now down to a lowly £8.59 for a limited time only.

Cepien AI – Ship products and features 120x faster and smarter


Cepien AI is a Unified Product Intelligence and Agentic Workforce platform that helps product, design, CX, and business teams turn fragmented data into clear insights, tagged and prioritized user issues, and actionable recommendations. It connects feedback, support tickets, analytics, research, sales motions, technical issues, and documentation, then uses a patent-pending decision intelligence process to validate signals, align findings with business, product, and usability goals, and activate built-in agents that create tickets, generate PRDs and roadmaps, deliver insights, support design workflows, and produce implementation-ready outputs across the tools teams already use.

View startup

50% memory upgrade! – Nvidia unveils new RTX 5070 12GB laptop GPU

Nvidia has released a new 12GB version of its RTX 5070 mobile graphics card Nvidia has officially launched its 12GB RTX 5070 mobile GPU, upgrading its existing model with new 24Gb GDDR7 memory modules. This gives the new GPU 4GB more GDDR7 memory than its older 8GB counterpart. Nvidia’s new RTX 5070 12GB will exist […]

The post 50% memory upgrade! – Nvidia unveils new RTX 5070 12GB laptop GPU appeared first on OC3D.

(PR) Logitech Introduces G512 X TMR Analog/Mechanical Gaming Keyboard

Logitech G today announced the new Logitech G512 X TMR Analog/Mechanical Gaming Keyboard. Designed for players who view their setups as living, breathing ecosystems, the G512 X marks a shift from static hardware to a modular, performance-driven system built to be tuned, tweaked, and mastered. The G512 X is the first keyboard engineered to adapt to the nuance of your unique playstyle, rather than forcing you to adapt to your hardware.

"At Logitech G, we see a player's setup as something that grows with them as they improve," said Robin Piispanen, Vice President and General Manager of Logitech G. "With the G512 X, we combine craftsmanship with performance. This keyboard isn't just a tool—it's an extension of the gamer, giving you the control and precision to play at your best."

EU Now Requires USB-C Charging for New Laptops Up to 100 W

The European Union has officially imposed a new rule for selling laptops with a power rating of 100 W or less, requiring them to use a USB-C port for charging. The rule takes effect today, Tuesday, April 28; the European Commission has been exploring ways to reduce electronic waste and has been planning this step since imposing a similar rule on smartphones in 2024. As readers may recall, modern smartphones have largely shipped with USB-C ports since the European Commission mandated that all newly sold smartphones use a unified connector, instead of multiple connectors that create a significant amount of e-waste across Europe. The legislation now extends this approach to laptops, which also contribute significantly to the problem.

However, there are exceptions to this rule. The USB-C Power Delivery specification tops out at 240 W through a single port, but gaming laptops sometimes require more power. Models that exceed 100 W can continue to use the typical barrel power connector, whereas any laptop rated at 100 W or less must adopt USB-C as its primary charging connector. From today, it is illegal to sell non-compliant laptops across the European Union. However, this rule does not apply to computers sold on the second-hand market; only new devices entering the EU must comply.

(PR) Satechi Launches ChargeView 140 W Desktop Charger with Display and Four USB-C Ports

Your desk has evolved. Sharper displays, better keyboards, devices designed to work together across a full Apple ecosystem—your workspace already looks like the future. Now picture a charger built to match. Meet the Satechi ChargeView 140 W Desktop Charger: a charging station with a live digital display, four USB-C ports, and 140 W of GaN-powered output, engineered to be a working part of your desk setup.

Power You Can See
Charging used to be a guessing game. ChargeView ends the guessing, in real time. Each port shows its current wattage at a glance, so you know exactly what's flowing where. No more cable-swapping just to figure out what's charging. No more wondering whether your MacBook Pro is at full speed or barely sipping. No more guessing whether the cable is the problem, or the port.

(PR) DON'T NOD Releases Aphelion Today

French developer and publisher DON'T NOD has today released Aphelion on PC (Steam), Xbox Series X|S, and PlayStation 5. It is an Xbox Play Anywhere title and is available on Game Pass. Aphelion is a cinematic sci-fi adventure game at the edge of our solar system.

Standard, Deluxe, & Day One Editions
The Standard Edition, containing the game alone, is 29.99€/$ on Steam and 34.99€/$ on consoles. There is a 10% launch discount for Steam and Xbox users. Xbox and PlayStation players can get the Day One Edition at 10% off on both platforms (only for PS+ users on PlayStation), exclusively within the first day of sale. The Day One Edition contains the game and a cosmetics pack with 2 spacesuit variants and 6 backpack accessories: 3 stickers and 3 keychains. The stickers and keychains reference the ESA collaboration and fan-favorite DON'T NOD games, Jusant and Lost Records: Bloom & Rage.

Microsoft's Shader Model 6.10 Opens Direct Access to GPU AI Engines

Microsoft has released the Shader Model 6.10 preview, included in the new Agility SDK 1.720-preview build. This preview introduces a compelling feature: direct control over GPU-dedicated AI engines. According to the developer blog, Shader Model 6.10 features a new, streamlined linear algebra matrix API that exposes all known matrix operations on popular gaming GPUs from AMD, Intel, and NVIDIA. Modern GPUs carry dedicated hardware for processing AI workloads, which typically involve matrix multiplication and accumulation; modern machine learning-based upscaling relies on this hardware, whether it's Tensor cores from NVIDIA, XMX cores from Intel, or AI accelerators in AMD GPUs, each with its own communication method. To unify access, Microsoft is introducing a new API built around the linalg::Matrix class, which exposes all matrix operations to the shader language. This allows neural rendering operations to be written once and executed across GPUs from multiple vendors.

As the developer behind the DirectX 12 API, Microsoft is observing a significant increase in graphics features that use neural network-based rendering techniques to enhance image quality, which will require more matrix units in modern gaming GPUs. By providing a unified layer of abstraction for programming and executing neural rendering operations, Microsoft hopes that Shader Model 6.10 will become the standard for every GPU maker. The feature is supported across all NVIDIA RTX hardware, as every RTX GPU includes Tensor cores. Intel support is planned for an upcoming release, with B-series GPUs expected to be compatible. On AMD's side, only RDNA 4-based Radeon RX 9000 series GPUs support the feature, with no support planned for older models such as the RX 7000 series and below.

(PR) Nuclear-Powered Space Combat Simulator In The Black Launches May 5

Developer Impeller Studios is thrilled to announce that In The Black, an intense, skill-driven space combat simulator that has been in development for ten years by an extraordinarily passionate team, will officially launch on May 5th in Early Access on Steam.

Set 200 years in the future within our own solar system, In The Black puts players in the role of private military contractors navigating the shadow wars of ruthless megacorporations, featuring realistic space combat with a serious respect for science.

(PR) Sapphire Launches NITRO+ PhantomLink Polar Edition

SAPPHIRE Technology unveils its revolutionary PhantomLink connectivity feature in a distinctively special Polar edition: the SAPPHIRE NITRO+ AMD Radeon RX 9070 XT PhantomLink Polar Edition Graphics Card and the SAPPHIRE NITRO+ X870EA PhantomLink Polar Edition Motherboard, forming the ultimate SAPPHIRE NITRO+ ecosystem in a glacial-white themed pairing.

The SAPPHIRE NITRO+ AMD Radeon RX 9070 XT PhantomLink Polar Edition Graphics Card and the SAPPHIRE NITRO+ X870EA PhantomLink Polar Edition Motherboard were designed conjunctively to be used together for a premium SAPPHIRE NITRO+ ecosystem. For the first time, this ecosystem has been created in a silver-white aesthetic for powerful systems where innovative cooling meets enthusiast performance on an icy-metallic and elegant backdrop.

Pearl Abyss Reportedly Hands Every Employee a $3,400 Bonus After Crimson Desert Storms Past 5 Million Copies Sold

A character on horseback explores a vast, detailed mountainous landscape in the game 'Crimson Desert'.

While Crimson Desert's physical game sales in the US weren't enough to make major waves on the sales charts for its launch month, it's a pretty sure bet that with its global physical and digital sales combined, it'll go down as one of the best-selling games of the year. It's already crossed 5 million copies sold, and a new report reveals that developer Pearl Abyss is sharing a bit of the wealth from its success with the entire development team. A report from Korean publication MTN claims that every single one of Pearl Abyss' employees was given a bonus […]

Read full article at https://wccftech.com/pearl-abyss-reportedly-gives-every-employee-3400-bonus-for-crimson-desert-5-million-copies-sold/

Final Fantasy VII Rebirth Nintendo Switch 2 Port Struggles In Some Areas, Dipping to 19 FPS Despite Visual Cutbacks

Tifa Lockhart and Aerith Gainsborough in 'Final Fantasy VII Remake' are shown standing side by side, with Tifa raising her fist and Aerith holding her staff.

Earlier today, a Final Fantasy VII Rebirth demo was released on Nintendo Switch 2 and Xbox Series X|S, allowing users to try out the first two chapters and carry over progress to the full game when it launches on June 3. Early footage of the Switch 2 port looked promising, but a more in-depth look reveals that the game's ambition may be, at times, too much for the system, as it struggles to maintain a steady 30 FPS in certain scenarios despite visual cutbacks. A new comparison video shared on YouTube by GVG puts footage from the PlayStation 5 […]

Read full article at https://wccftech.com/final-fantasy-vii-rebirth-switch-2-port-struggles-19-fps/

SK hynix Verifies 12-Die Hybrid Bonded HBM Stack, but Won't Disclose Yield Figures as Next-Gen HBM4 AI Memory Race Heats Up

Samsung and SK Hynix logos with HBM3 memory chip in a digital backdrop.

Korean memory manufacturer SK hynix has announced a yield improvement for its hybrid bonding packaging technology for high bandwidth memory (HBM) modules. Hybrid bonding enables memory chip manufacturers to bond memory layers to each other without relying on bumps; the direct contact enables higher speeds and improved efficiency through lower heat generation. SK hynix's technical leader, Kim Jong-hoon, revealed the development at the Beyond HBM — Core Technologies of Advanced Packaging: From Next-Generation Substrates to Modules conference in South Korea, reports The Elec. Latest Packaging Technology To Come Into Play With Next-Generation HBM4 Memory Chips. High bandwidth memory is […]

Read full article at https://wccftech.com/sk-hynix-verifies-12-die-hybrid-bonded-hbm-stack-but-wont-disclose-yield-figures-as-next-gen-hbm4-ai-memory-race-heats-up/

Intel-Backed "Glass Substrates" Tech Will Be Ready For Commercialization Within Three Years, Says Amkor Lead

Amkor says that glass substrates, a packaging technology replacement for CoWoS spearheaded by Intel, are set for commercialization within 3 years. Intel Partner Amkor Says Glass Substrates Will See First Commercialization Within Three Years. Advanced packaging is key to any major foundry business as chips are getting more and more complex to meet growing compute and memory demands. TSMC is the single most important advanced packaging provider in the world thanks to its CoWoS 2.5D technology. Current chip requirements involve integration of HBM and logic chips in a single package, and the number of HBM chips is expanding aggressively. Recently, OpenAI showcased […]

Read full article at https://wccftech.com/intel-backed-glass-substrates-tech-will-be-commercilization-ready-within-three-years/

Greedfall Developer Spiders Will Reportedly Shut Down "Soon" as Nacon Seemingly Fails to Find a Buyer After Insolvency Filing

A character in a fantasy setting stands on a ledge overlooking a grand, mystical city with towering spires and domes under a dramatic sky.

Greedfall developer Spiders will reportedly shut down "soon" after it filed for insolvency last month, as its parent company Nacon has seemingly failed to find a buyer for the studio. After Nacon itself filed for insolvency due to financial issues with its major investor, it was reportedly looking to sell off two of its subsidiaries, Spiders and the motion capture studio Nacon Tech. Now, according to a report from Origami, the publisher has been unable to find a buyer for Spiders, and the studio, which has survived for nearly two decades since its founding in 2008, will have to close its […]

Read full article at https://wccftech.com/greedfall-developer-spiders-will-reportedly-shut-down-soon-after-nacon-insolvency-filing/

AIDA64 v8.30 Bakes In Support for Intel's Nova Lake CPUs and AMD's 2027 Medusa "Zen 6" APUs

FinalWire has released the updated AIDA64 software, offering support for next-gen CPUs such as AMD Zen 6 APUs and Intel Nova Lake. AMD Zen 6 "Ryzen" CPUs and Intel Nova Lake Get Early Support In AIDA64 v8.30. The latest AIDA64 release has some major updates. One of these is AIDA FPS, which expands the feature set of the software suite with a new module that captures real-time FPS data from DX11 and DX12 games. The data can be displayed across all existing outputs, including SensorPanel, OSD, tray icons, and logging. The full change log is listed below: New in AIDA64 […]

Read full article at https://wccftech.com/aida64-v8-30-next-gen-amd-zen-6-apus-expo-1-2-intel-nova-lake-cpus-support/

Vastnaut One, the World's First AI-Powered 4×4 Exoskeleton, Promises a Life Full of Adventure – Kickstarter Campaign Begins

A person wearing a backpack and outdoor gear walking on a rocky terrain with distant mesas under a cloudy sky.

Artificial Intelligence seems to have taken over every facet of our lives and homes, and yet, somehow, it's not really helping us in everyday life beyond textual/visual Google Search-type assistance. Wearable technology has remained passive, simply counting steps or tracking heart rate and sleep quality, without really being a true part of our everyday routine, let alone helping us improve our lifestyle. Vastnaut aims to change that with its Vastnaut One 4x4, the world's first AI-powered exoskeleton, offering a fully integrated experience that mirrors human biomechanics to improve your fitness and help you enjoy outdoor activities to the fullest. […]

Read full article at https://wccftech.com/vastnaut-one-worlds-first-ai-powered-4x4-exoskeleton-promises-a-life-of-adventure/

Market slumps as OpenAI reportedly misses internal targets for active users and revenue — Nvidia, Oracle, AMD, and CoreWeave shares all tremble on the news

Nvidia, Oracle, SoftBank, and CoreWeave saw their stock prices fall on news that OpenAI has been missing its internal targets. SoftBank stock lost 9.9% of its value on the Tokyo Stock Exchange. Nvidia, AMD, Oracle, and CoreWeave also dropped during pre-market trading and remain down after the market opened.

Storyspun – Get original fiction crafted to your taste and limits


Storyspun creates personalized fiction based on your taste profile. Share favorite titles, themes, limits, and a personal detail; it writes short stories, novellas, or full novels from scratch and screens every chapter for prose quality and narrative consistency before delivery. You receive a formatted PDF with an optional multi-voice AI audiobook. List stories you love in the Marketplace for $4 and earn $2 per purchase by other customers, or keep them private. Subscriptions let you commission new stories monthly.

View startup

Baroque – Websites and app screens that look designed, not generated.


Baroque generates production-ready web and app interfaces in the style you choose. Pick a design system, describe your vision, bring a wireframe to life, or paste a URL to see a full redesign in under a minute. It outputs clean HTML, cohesive spacing and color, and consistent components that feel hand-crafted. Use Baroque to prototype quickly, ship without cleanup, and iterate with prompts. Share preview links, build multi-screen projects, export to Figma, and hand off code to developers with confidence.

View startup

VECT 2.0 Ransomware Irreversibly Destroys Files Over 131KB on Windows, Linux, ESXi

Threat hunters are warning that the cybercriminal operation known as VECT 2.0 acts more like a wiper than ransomware, due to a critical flaw in the encryption implementation across its Windows, Linux, and ESXi variants that renders recovery impossible, even for the threat actors. The fact that VECT's locker permanently destroys large files rather than encrypting them means even victims who opt to

The framing gap: Why AI can't position your brand

This article translates the framework into a practitioner register. The full theoretical model, including the formal mechanics, the testable predictions, and the academic engagement with current AI-reasoning literature, is developed in the academic working paper The Framing Gap: Strategic Claim Bridging and the Limits of Generative AI Interpretation in Brand Representation (Barnard, 2026, Zenodo: 10.5281/zenodo.19857447).

Every brand holds its claims, and somewhere in the archive of its digital life, there's proof to back them up.

The AI assistive engine (the systems behind ChatGPT, Perplexity, and Google AI Overviews) holds that proof too, scattered across its training data and retrieval index, alongside competitors' claims.

The audience has a need but no vocabulary to bridge the gap between what they want and what the brand or the engine already knows.

All three lack the same thing: a frame, the interpretive context that turns scattered information into a narrative worth transmitting (for the brand), citing (for AI), and acting on (for the user).

This is where the claim-frame-prove (CFP) process comes in.

Claim and prove are mechanical work the engine can verify. Frame is the strategic move only the brand can make.

Why AI can't make the leap your brand needs

CFP runs claim by claim, fact by fact. The brand's full picture is built from many CFP cycles compounding: each claim framed and proven becomes a fact in the corpus, and the cumulative weight of those facts is what positions the brand to dominate.

AI can join known facts, but it can't leap to a new one that benefits your brand. Given Facts A and B, AI can derive Conclusion C that follows logically. That's standard inference, and the engines do it well.

What they can't do reliably is the move a creative human makes constantly: looking at A and B, reaching to a non-obvious J that's commercially beneficial, and then constructing the logical bridge so the engine can use it.

From here, I'll use the alphabet as the scale: C is the obvious commercial reach, J is the non-obvious, and Q is the most ambitious leap a brand can sensibly make.

Two examples from my own positioning make the difference concrete.

A + B → C

  • A: I coined answer engine optimization in 2017. B: I run a brand engineering company.
  • The obvious conclusion (C) the engine produces unprompted: my work is connected to AEO implementation. True, useful, but close to the facts, and the engines will get that right without my help.

A + B → J

  • From the same A and B, reaching further. J: I'm the practitioner most likely to have learned what works and what doesn't from a decade of operational data nobody else has.
  • Same starting facts, completely different commercial outcome, and the engine doesn't make this leap on its own. It needs me to build the bridge for it.

That second move — from A and B to J — is strategic claim bridging: selecting which non-obvious J benefits the brand from the space of derivable conclusions, and then constructing the logical connection from accepted facts to that chosen J so the engine transmits it as fact rather than as the brand's opinion of itself.

Two operations packed into one move: the strategic part is choosing J, and the bridging part is making the inference watertight.

AI won't choose what's best for your brand

AI doesn't choose the J that's good for your brand. You do. That choice, and the bridge that proves it, is the work AI has no commercial stake in, and a future (more capable) AI without your stake just produces a more sophisticated version of the same problem.

Whether AI can be creative is contested ground. The narrower claim holds regardless: even when AI produces a novel-looking output, it has no commercial intent guiding which J to derive. From the same A and B, an AI could just as easily produce a damaging J as a beneficial J. It has no skin in your commercial game.

A creative marketer does both things at once: reaches imaginatively to a non-obvious J, and chooses the J that serves the brand. That's the move AI engines can't reach, and it's why the frame has to come from someone placing the information online (the brand, a client, or an independent source).

The disposition that lets you see this work is what I've been calling "empathy for the machine," a phrase I started using in client consulting around 2011-2012 (originally as "empathy for the beast," retired once I got more serious about the business side of digital marketing), and first published formally in 2019.

It's the discipline of stepping outside your own perspective to see what the machine actually struggles with. That advice applies to anything in SEO/AEO — in this case, specifically to when it grounds, attributes, and synthesizes claims about your brand.

Unfortunately, brands all too often produce material aimed at human readers and assume the machine will figure out the rest. With a little empathy for the machine, brands design material the machine can use as its own interpretation (feed the beast).

This produces three different levels of brand-AI communication, each one building on the previous.

Levels 1 and 2 are the foundations every brand needs in place. Level 3 is where framing enters, and it's where this article aims to change your thinking.

Level 1: Scattered proof of claims

Proof exists, but there's nothing linking it to the claim. This is where most brands sit, and it leaves the engine to perform inference over whatever it can find.

The brand publishes Claim A on its website. Proof Z exists somewhere else: a conference program, an industry database, a Wikipedia citation, and a trade publication from four years ago. The brand assumes the engine will connect the two.

To connect them, the engine has to perform inference. Can it derive the conclusion that this brand is credible for this claim, given scattered premises across different domains, formats, and varying source authority?

There's no copy stating the connection, no hyperlinks pointing from claim to proof, and no schema encoding the relationship.

That depends almost entirely on how confidently the machine already understands the entity, and that runs on three sub-levels.

If the machine has no confident understanding of the brand, and the proof isn't explicitly linked, no connection happens. The proof might as well not exist.

If the machine has no confident understanding of the brand, but the proof is explicitly linked, the connection happens because the link does the work that the entity resolution couldn't.

If the machine has a strong, confident understanding of the brand, the connection happens even without the link, because a well-resolved entity shortens the logical distance the machine has to traverse (linkless links, as I've called them).

The link still adds confidence (more than one path always does), but it's no longer load-bearing as the entity carries the work.

The implication runs through the rest of the pipeline. Entity clarity in the knowledge graph isn't a nice-to-have sitting alongside content work. It's the variable that decides whether your content work has to carry all the weight or almost none of it.

Any proof that isn't explicitly linked is missed at sub-level one, caught at sub-level two, and confidently embedded at sub-level three.

When entity understanding is weak, the result is familiar to anyone tracking AI visibility: a meritorious brand appears occasionally, and when it does, the wording is hedged, and the brand sits mid-to-low-pack. The engine did the best inference it could, and, being a responsible probability engine, it hedged.

Worse, opportunities for inclusion are throttled across adjacent queries the fact should have pulled the brand into, because the fact was never connected to the proof that would have warranted the inclusion in the first place.

What happens when Level 1, scattered proof of claims, is done well? Brand X is infrequently mentioned, unconvincingly, as a provider of Y.

Level 2: Connected proof of claims

Here, the brand explicitly connects claim to proof through a combination of copy, hyperlinks, and schema. It also closes the inference gap by providing what the engine would otherwise have to figure out.

The brand publishes Claim A and explicitly connects it to Proof Z, with the logical thread stated in copy, anchored by hyperlinks to the proof, and encoded in schema: a fact with a significant number of supporting pieces of evidence joined to it three ways, leaving nothing for the engine to infer.
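As one illustration of the schema leg of that three-way join, a claim page can encode the claim-to-proof relationship in schema.org JSON-LD, built here in Python. This is a hedged sketch: every name and URL is a hypothetical placeholder, and the right vocabulary depends on the claim being made.

```python
import json

# Hypothetical sketch: a claim page whose JSON-LD states the claim,
# anchors the entity (sameAs), and joins the proof to it (citation).
# All names and URLs are placeholders, not real resources.
claim_page = {
    "@context": "https://schema.org",
    "@type": "WebPage",
    "name": "Example Co leads in X",  # Claim A, also stated in the copy
    "about": {
        "@type": "Organization",
        "name": "Example Co",
        # Entity-resolution anchors (placeholder URLs)
        "sameAs": [
            "https://en.wikipedia.org/wiki/Example_Co",
            "https://www.wikidata.org/wiki/Q0000000",
        ],
    },
    # Proof Z, explicitly joined to the claim rather than left scattered
    "citation": [
        {"@type": "CreativeWork", "name": "Industry award listing",
         "url": "https://awards.example.org/2024"},
        {"@type": "CreativeWork", "name": "Conference program entry",
         "url": "https://conference.example.org/program"},
    ],
}

print(json.dumps(claim_page, indent=2))
```

The same relationship still needs to be stated in the copy and carried by hyperlinks; the markup is the third path, not a substitute for the other two.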

Connected proof of claims is a spectrum, not a switch. At the low end, you've connected some of your proof, which already beats Level 1 because the engine no longer has to figure out the connections you've made, but it's still figuring out the ones you haven't.

If your competition has connected more of theirs, you're still losing the comparison on the proof you left scattered. At the high end, you've connected all of it: every claim joined to every piece of supporting evidence, nothing scattered, and nothing left for the engine to guess at.

Most brands sit somewhere between scattered and connected simply because they've connected only the most obvious proof, and the AI may well have already figured the obvious ones out for itself: the links don't teach it anything it didn't already know.

With connected proof of claims done comprehensively for a given claim, the engine has enough corroboration to back the brand confidently, and the claim becomes fact in the corpus. Confidence transfers cleanly because there's nothing to guess at.

Connected proof of claims is also a great weapon for a smaller brand competing with a bigger one: a specialist accounting firm with 50 pieces of proof, all explicitly connected to a specific positioning, beats a Big 4 with thousands of unconnected pieces on that specific positioning, because connection is what turns proof into substance that the engine can transmit.

What happens when Level 2, connected proof of claims, is done well? Brand X is frequently mentioned convincingly as a provider of Y.



Level 3: Framed proof of claims

This is where framing enters, and where strategic claim bridging earns its name.

For each claim that matters, the brand publishes Claim A, connects the proof, and then does the thing the engine can't do (and the audience is unlikely to do either, for that matter).

It reaches the non-obvious J that benefits the brand, and constructs the bridge from A and B to J in language the engine can transmit. Not merely "we are the leader in X, demonstrated by Y," but the frame:

  • Why Y matters for the specific problem this audience faces.
  • What Z signals about trust in this particular market.
  • How W translates to the outcome the prospect actually cares about at the moment of decision.

A frame is a logical inference from corroborated facts, where the brand chose where the inference would land. For example:

  • "Jason Barnard coined answer engine optimization in 2017, made dated public predictions about how the field would unfold, and those predictions came true; therefore his predictions about where the field is going next are credible."

Every component is verifiable independently, and every connection between components is logical. The J the bridge reaches to is the one I chose, not the J the engine would have generated unprompted.

One well-constructed frame makes one claim into fact in the AI's voice. Run that across the claims that matter, and the cumulative weight is what shifts a brand from "frequently mentioned convincingly" to "almost always mentioned as the leading provider": dominance is a stack of well-framed facts, not a single masterstroke.

The result: the AI doesn't merely confirm, it enthuses. "Brand X leads in Y, and here is why that matters for your situation."

The engine transmits the frame wholesale, in the language you chose, to the audience you specified, with a reason to keep coming back. The machine didn't generate the narrative; it relayed it warmly.

What happens when Level 3, framed proof of claims, is done well across the claims that matter? Brand X is almost always mentioned as the leading provider of Y, and dominates the space.

Each level builds on the previous: connected proof of claims requires the scattered proof to be connected, and framed proof of claims requires the connected proof to be bridged strategically.

Most brands are only halfway to framed proof of claims

The brands that think they're at framed proof of claims are usually at framed proof of claims for humans, and scattered proof of claims for machines. Marketing and narrative work supplies frames to humans all the time, and plenty of brands do it well.

What almost no brand does is supply frames the machine can use, and the gap between the two is where framed proof of claims is most powerful.

Some brands operate below even that and are effectively standing still: published facts at the surface, few proof connections, and no interpretive content the machine can use for any purpose.

The signature objection from a standing-still brand is the same in every consulting room: "We already do this, our website explains who we are." The website does explain that, for humans; it does zero work to help the machine with framing.

The cost of standing still isn't visible until a model update or two down the line. Brands that think they're at framed proof of claims are usually investing harder in the wrong layer (content), while the layer that matters (framing and, ideally, joining the dots) compounds for someone else.

The gap widens every year. If you have content that doesn't frame effectively or join the dots with links to proof, you're leaking huge value, and pushing through connection and framing is the best return on past investment you can make right now. You're doing the heavy lifting for the machines, and they'll reward you for giving them this extremely valuable context on a plate.

Three structural conditions separate framed proof of claims from marketing-and-narrative-as-usual, and missing any one collapses the brand back to connected proof of claims or lower.

The entity has to be well-established, well-resolved, and trusted, because a frame can't anchor to a vague brand. The underlying proof has to be connected, because most brands have fluent marketing prose on top of scattered proof, which is scattered proof of claims with prettier wallpaper.

The bridge itself has to be strictly logical, because machines read logic first and tone second, and a logically broken bridge fails, however well it's written.

The better AI gets, the more framing matters

Smarter AI rewards better framing rather than replacing it, and the reason is the same selection pressure SEO practitioners have been operating under since the early 2000s.

There's a seductive and entirely wrong conclusion to draw from rapid improvement in AI reasoning: that engines will eventually figure out how to frame brands correctly without help. The opposite is true. The engine rewards the brand whose assets reduce its own workload for the same or better result.

Search engines reward sites that are easy to crawl, render, and classify. Knowledge Graphs reward entities that are easy to resolve. AI assistive engines reward content that is easy to ground, verify, and transmit confidently. Where the engine has to choose between two roughly equivalent candidates, the candidate that demands less computation, less inference, and less guesswork wins.

Framed proof of claims is that principle operating at the bridging layer. A more capable engine encountering this level has the bridge handed to it ready-made. It doesn't have to figure out the frame; it transmits the bridge the brand supplied, fluently and confidently, with the engine's full reasoning capability now amplifying rather than substituting for the framing work.

A more capable engine without a frame falls back to inference over scattered evidence, which is expensive, ambiguous, and produces hedged output. Every improvement in reasoning capability makes the hedging more detailed and the noncommittal language more sophisticated, but the underlying problem isn't capability; it's the absence of a frame to amplify. The engine is doing more work for a worse result, and that's the exact failure mode the engine's selection pressure is designed to penalize.

The gap between those two outcomes is the framing gap, and it widens with every generation. Brands implementing only connected proof of claims don't lose ground in absolute terms; they lose ground relative to brands implementing framed proof of claims faster every year, because the engine increasingly rewards assets that let it deploy its growing capability productively rather than waste it on guessing and hedging.

The selection pressure that rewarded fast websites in 1998, clean HTML in 2003, and structured data in 2015 rewards framed proof of claims now. The mechanism of gaining a competitive advantage by reducing costs for the AI for the same or better results hasn't changed, and probably never will.

Picture the two trajectories: framed proof of claims rises steeply and continues climbing, while connected proof of claims rises gently and flattens. The widening area between the two lines is the framing gap, and it grows with each generation.

The bridge stays human

The bridge is human territory, and it stays human because it requires commercial intent specific to the brand that the engine doesn't have.

Everything the machine does well will get better: retrieval, connection, pattern extraction, and synthesis. None of that helps the brand whose evidence the machine can see but can't bridge meaningfully to a beneficial conclusion.

Whether AI confirms your brand, overlooks it, or champions it comes down to one discipline: strategic claim bridging, claim by claim, fact by fact. It's the last layer of brand-AI communication that won't yield to automation, if it yields at all.


This is the 11th piece in my AI authority series.

SEO isn't just about being seen: it's about being believed and chosen


Wil Reynolds, founder and CEO of Seer Interactive, is challenging SEOs to rethink what success looks like in a world increasingly shaped by AI.

In his SEO Week session, "SEO is a performance channel, GEO isn't. How do you pivot?", Reynolds said many marketers are focused on the wrong outcomes, producing work that people don't believe.

Marketing isn't just about being seen

Reynolds opened by pushing back on the idea that visibility alone is the goal of marketing.

"Marketing was never just to be seen or be visible," he said. "You had to turn that visibility into something: believing something about your brand… And then they ultimately have to choose you."

He described a progression that marketers need to focus on: being seen, being believed, and being chosen.

"It's how you take your time with people, and turn them from seeing you, into believing something about you," he said.

"I got the ranking, job finished," he added. "Job's not finished."

Reynolds also questioned the value of surface-level success metrics.

"I got a lot more followers, but they don't pay you," he said.

Low-quality marketing is everywhere

Reynolds pointed to common marketing tactics, including automated outreach, as examples of work that doesn't create value.

"That's not marketing," he said, referring to spam-like SMS messages.

Those tactics made him reflect on his own past work, he said.

"I started looking at the stuff that I used to do… was that really marketing?" he said.

"Some of us are strategists. Some of us are loopholists," he said. "You've got to make a decision today."

The industry is producing 'zombie content'

Reynolds criticized the widespread use of scaled, templated content designed primarily to rank.

He used broad listicle-style pages as an example.

"Why would you write content saying best restaurants in Minnesota when nobody that's a human looks for the best restaurant in Minnesota?" he said.

He described this type of content as "zombie content."

"That's what we do," he said, describing how marketers repeat what already ranks instead of doing something different.

He also described how many marketers approach content creation.

"I'm going to look at the top 10 and look at what they did slightly wrong… and I'm only going to do it slightly better," he said.

Short-term tactics vs. long-term brand building

Reynolds contrasted short-term SEO tactics with long-term brand building.

"Some people like to win in decades," he said. "Other people like to win quarter to quarter."

He described how many teams focus on immediate results.

"What works this quarter to get my boss off my back long enough so I can survive the next quarter?" he said.

That approach leads to work that people don't actually want, he said.

"You will never produce a thing that anyone wants if you continue to play that," he said.

SEO success doesn't translate to AI visibility

Reynolds shared an example involving "ethical jeans" to show how SEO and AI results can differ.

One brand ranked well in Google without being known for ethical practices, while another brand that invested in ethical production ranked much lower.

In AI-generated answers, that outcome changed.

"If that worked, if it was the same, that brand would be showing up in AI models," he said. "And they showed up in none."

He connected this to credibility.

"Nobody believed them," he said. "Nobody chose them."

Visibility without belief doesn't lead to outcomes

Visibility alone isn't enough, Reynolds said.

"If you have all the visibility in the world and people don't believe you or trust you, then you're not going to get chosen," he said.

Visibility is only part of the process, he said.

"This visibility is just an opportunity," he said. "That's all it is. … It is not the job to be done."

What people say matters

Reynolds suggested looking at platforms like Reddit to understand how people actually talk about brands.

"Go to Reddit… look at all the brands," he said. "You find out that humans don't believe you. And they have to pay you for you to stay in business."

He contrasted that with how brands present themselves in content.

"Not only did they not think you're number one, they don't think you're number 100," he said.

The wrong metrics are being measured

Marketers often focus on metrics that are easy to track rather than meaningful, Reynolds said.

"We're measuring the easy stuff to measure," he said. "The real work is in the hard-to-measure stuff."

He encouraged comparing visibility metrics with signals tied to outcomes.

"If your visibility is skyrocketing and your pipeline is flat, that's bad," he said.

Watching real users changes the picture

Reynolds described research his team conducted by observing real people using AI tools.

"When you actually watch people do the job… your eyes open so much wider," he said.

One person typed four words, while another typed more than 100 words for the same task, he said.

He also noted that AI tools often suggest additional steps or actions beyond what users ask for, and that people frequently accept those suggestions.

Start with your brand

Marketers should focus on how their brand appears in AI-generated answers, especially for branded queries, Reynolds said.

"You spend all this money trying to get people to know your brand… and then you don't want to make sure that answer's right?" he said.

AI can shape your brand narrative

Reynolds shared an example where AI-generated responses surfaced incorrect information about his company.

"So now it's showing up everywhere," he said.

He described responding by publishing content to address the claim directly.

"If it's false, then I've got to fight that," he said.

There is too much content

"There's too much content out there," he said.

He described shifting his approach.

"I'm trying to become a curator," he said.

Rethinking performance

Reynolds shared examples of how different traffic sources perform.

"My direct converts 1.5 times better than my SEO," he said. "My social, five times better."

A final question for marketers

Reynolds ended by asking marketers to rethink their priorities:

"Are you willing to sacrifice a little bit of this visibility game to be more believable?"

Why more content is no longer a reliable way to grow SEO


One of the most dependable ways to grow organic visibility was to publish more content. Expanding into the long tail and creating pages around different variations of a topic often led to steady traffic growth.

Many SEO teams still operate with this mindset. Content calendars are built around search volume targets, and growth is often equated with how much new content is produced. The problem is the results no longer reflect the effort.

In many cases, adding more pages doesn't lead to increased visibility and can even dilute overall performance. Large content libraries are harder to maintain, compete internally, and often result in fewer pages surfacing in search results.

The challenge is no longer producing more content, but understanding why much of it fails to contribute to visibility.

Why content volume worked for SEO

For a long time, increasing content volume was a rational and effective strategy. Search engines relied heavily on keyword matching and topical coverage, which meant expanding into the long tail created more opportunities to capture demand.

Competition was also significantly lower, and many queries had limited high-quality results, so publishing across a wide range of keyword variations often led to quick visibility gains. In this environment, covering more topics translated directly into increased traffic.

Publishing frequency also helped strengthen domain authority. Sites that consistently added new content signaled freshness and relevance, which improved their ability to compete in search results.

This approach was further amplified by programmatic SEO. By creating scalable templates and targeting large keyword sets, companies generated thousands of pages and captured traffic at scale.

Most importantly, this strategy worked because it aligned with how search engines evaluated content at the time. Expanding coverage increased the likelihood of ranking, and more pages meant more opportunities to be discovered.

However, the conditions that made this approach effective have changed. As search ecosystems have evolved and competition has increased, the relationship between content volume and visibility has become less predictable.

Dig deeper: Content marketing in an AI era: From SEO volume to brand fame

Why this model is breaking down

Content saturation

Most commercially relevant topics now have dozens of established pages competing for the same queries, many with years of accumulated links and behavioral data.

A new page enters this environment at a disadvantage because the keyword spaces it targets are already consolidated around results with existing authority and signal history.

Diminishing returns

As sites expand into adjacent keyword variations, search engines increasingly route similar queries to the same URL rather than distributing traffic across multiple pages.

This shows up in Google Search Console as two or three URLs splitting impressions on identical queries, neither ranking strongly because neither has consolidated authority. The intent overlap that content teams treat as coverage, Google treats as redundancy.
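That Search Console pattern can be checked mechanically. Below is a minimal Python sketch over made-up export rows (query, page URL, impressions); it flags queries where two or more URLs each hold a meaningful share of impressions. The 25% share threshold is an assumption for illustration, not a Google rule.

```python
from collections import defaultdict

# Hypothetical Search Console export rows: (query, page URL, impressions).
rows = [
    ("crm pricing", "/blog/crm-pricing-guide", 4200),
    ("crm pricing", "/blog/how-much-does-crm-cost", 3800),
    ("crm pricing", "/pricing", 300),
    ("crm demo", "/demo", 5100),
]

def flag_cannibalization(rows, min_share=0.25):
    """Flag queries where two or more URLs split impressions.

    A URL counts as a contender when it holds at least min_share of
    the query's impressions; two or more contenders suggests overlap.
    """
    by_query = defaultdict(dict)
    for query, url, impressions in rows:
        by_query[query][url] = by_query[query].get(url, 0) + impressions
    flagged = {}
    for query, urls in by_query.items():
        total = sum(urls.values())
        contenders = [u for u, n in urls.items() if n / total >= min_share]
        if len(contenders) >= 2:
            flagged[query] = sorted(contenders)
    return flagged

print(flag_cannibalization(rows))
# "crm pricing" is flagged: two blog URLs split its impressions,
# while "crm demo" has one clearly consolidated URL.
```

In practice the rows would come from a real Search Console export, and flagged queries are candidates for merging or redirecting the overlapping pages.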

Changes in search experience

AI Overviews now appear across a significant and growing share of informational queries. Google has confirmed continued expansion of the feature across search types and markets. Informational content is the most affected by this shift, and it's also the type most volume strategies produce.

A site with a large number of blog articles is therefore more exposed than one focused on a smaller set of transactional pages. More ranked pages don't produce proportional traffic when an increasing share of visible positions no longer generates a click.

Indexing limits

Google's crawl budget documentation states directly that low-value URLs drain crawl activity away from pages that matter. At scale, thin or redundant content is deprioritized, meaning a significant percentage of a site's published pages may never meaningfully enter search competition, regardless of how much continues to be added.

Dig deeper: The authority era: How AI is reshaping what ranks in search

The hidden mechanics behind content saturation

What's less understood is how content libraries behave at scale. These are system-level problems that compound over time and are difficult to reverse.

Content debt

Every page published creates an ongoing obligation. It needs to be monitored for ranking decay, updated when information changes, evaluated periodically for pruning or consolidation, and factored into crawl allocation. These costs are rarely accounted for at the point of creation.

At low volumes, this is manageable. At scale, it becomes a compounding liability. A site with 2,000 articles isn't sitting on 2,000 assets; it's managing 2,000 maintenance commitments that depreciate at different rates.

Editorial resources that could strengthen existing high-performing pages are instead absorbed by keeping a growing library from becoming a liability.

The true cost of a volume-driven content strategy only becomes visible 18 to 24 months after the investment, when maintenance demands begin to outpace the capacity to meet them.

Crawl inefficiency and cannibalization

Google allocates a finite crawl budget to each domain. When a site scales content volume without proportional gains in quality or authority, Googlebot distributes that budget across a larger number of pages, many of which offer limited signal value. The result is that high-value pages are crawled less frequently, indexed less reliably, and are slower to reflect updates.

This creates a compounding problem for sites with important transactional or evergreen pages that depend on frequent re-crawling to stay current and competitive. Beyond crawl distribution, similar pages targeting overlapping intent compete for the same ranking positions internally.

Search engines consolidate these signals rather than rewarding each page individually, meaning two pages targeting near-identical queries often perform worse combined than one authoritative page targeting both would perform alone.

Topical authority dilution

Search engines evaluate whether a site is a genuinely deep and trustworthy resource within a defined topic space. Expanding into a wide range of loosely related subtopics can erode this signal rather than strengthen it.

A site with 40 tightly interconnected, substantive pieces on a specific topic will consistently outperform one with 400 surface-level articles spread across adjacent themes. The depth and coherence of coverage within a defined area are what build the authority signal that drives durable rankings.

Pursuing breadth at the expense of depth fragments that signal, making it harder for search engines to assign clear expertise to the domain on any individual topic, even the ones the site knows best.

Weak content and behavioral signals

Search engines use behavioral data such as dwell time, return-to-search rates, and click-through rates as quality signals at both the page and domain levels.

When a site publishes high volumes of content that users engage with poorly, those signals accumulate and begin to affect how search engines evaluate the domain as a whole. This creates a negative reinforcement loop that's difficult to detect and slow to reverse.

Weak pages actively contribute to lower domain-level quality assessments, affecting the performance of pages that would otherwise rank well. Mediocre content compounds: each low-engagement publish incrementally reduces the baseline trust that search engines extend to the domain's better work.



The rise of citation-driven visibility

The goal of SEO has traditionally been to rank. Increasingly, the more valuable outcome is to be cited or referenced in AI-generated summaries, pulled into knowledge panels, or sourced by other publishers as a primary reference. These two outcomes require fundamentally different content strategies.

LLMs and AI Overviews are selective about which sources they draw from. The selection is weighted toward pages with strong E-E-A-T signals, high specificity, and clear authoritativeness within a defined domain.

A site that has published hundreds of generic articles covering a topic broadly is less likely to be treated as a primary source than a site that has published fewer, more definitive pieces with clear depth and original perspective.

Volume doesn't increase citation probability; it may actively reduce it by signaling that the domain is a generalist content producer rather than a reliable primary reference.

The long tail is saturated

The accessible long tail that drove content volume strategies for the better part of a decade no longer exists in the same form. Between 2010 and 2020, there were genuinely underserved keyword opportunities across most industries.

Today, in most commercial verticals, every remotely valuable query has multiple established pages competing for it, especially from high-authority domains with years of accumulated signals.

New content entering this environment doesn't find open space. It enters a war of attrition against incumbents with advantages it can't easily overcome. The marginal SEO return on a new article targeting a long-tail keyword is a fraction of what it was five years ago.

The economics only justify creation when there's a genuinely differentiated angle, a proprietary data point, or a perspective that exists on your page that other pages can't offer. A keyword existing is no longer a sufficient reason to publish.

At scale, these factors turn content growth into diminishing returns rather than compounding gains. The library becomes harder to maintain, harder for search engines to evaluate clearly, and harder to extract meaningful visibility from, regardless of how much is added to it.

Dig deeper: How to keep your content fresh in the age of AI

How to shift from content volume to impact

The implication is to change what publishing is for.

Volume targets made sense when more pages meant more opportunities. In the current environment, they measure the wrong thing. The more useful question isn't how much content a team is producing, but how much of what already exists is actively contributing to visibility, and what is quietly working against it.

For most sites, that audit reveals the same pattern. A relatively small number of pages generate the majority of organic traffic. A larger number generates little to none, and a significant portion actively drains crawl allocation, fragments topical authority, or dilutes the behavioral signals that stronger pages depend on.

You need to move from expansion to consolidation. Existing pages that cover overlapping intent are stronger merged than competing. Thin pages that rank for nothing and engage no one are more valuable removed than retained.

The energy going into producing new content at volume is often better spent deepening the pages that already have authority and signal history behind them.

New content earns its place when it:

  • Addresses something genuinely unaddressed.
  • Offers a perspective that existing pages can't.
  • Targets an intent the site currently lacks.

In practice, this means retiring a few default assumptions:

  • That publishing for every keyword variation is coverage.
  • That indexing is the same as performance.
  • That output volume is a proxy for strategic progress.

None of these were ever true measures of content effectiveness. They were convenient ones.

Dig deeper: Content strategy in 2026: What actually changed (and what didn’t)

A new model for content-driven growth

The replacement for volume isn't simply better content. It's a different definition of what content is trying to achieve.

Depth over breadth

Focus coverage on a smaller number of topics and develop them thoroughly. A single piece that addresses a topic with specificity, original perspective, and clear authorial expertise will outperform multiple pieces covering adjacent variations of the same theme.

Depth is what builds authority signals, drives engagement, and increases citation potential. Prioritize what the site can say with the most credibility.

Distribution as a multiplier

Allocate more effort to distribution. Publishing less creates capacity to deliver strong content to the right audiences. Distribution is a core part of SEO performance in a citation-driven environment.

Being citation-worthy

Create content that can serve as a primary source. Focus on clear points of view, verifiable expertise, and specific insights that other pages can't replicate.

The goal is to be referenced in AI-generated summaries, cited by other publishers, and included in the knowledge systems search engines rely on.

Dig deeper: Content alone isn’t enough: Why SEO now requires distribution

The uncomfortable truth

Sites that rely on frequency and broad coverage are being outperformed by sites that are clearly authoritative on a defined topic, consistently useful to a specific audience, and structured in a way that search systems can evaluate with confidence.

Prioritize depth, clarity of expertise, and consistency within a focused topic area. Treat each published page as a long-term asset that requires ongoing maintenance, evaluation, and improvement.

The content factory model is no longer effective. The approach that replaces it requires more effort, stronger editorial standards, and a higher bar for what gets published.

Acemagic Introduces F5A Mini PC with Ryzen AI 9 HX 470 and OCuLink

Acemagic has launched the F5A, a mini PC built around AMD's Ryzen AI 9 HX 470, a 12-core, 24-thread chip combining Zen 5 and Zen 5c cores with boost clocks up to 5.2 GHz. Graphics come from the integrated Radeon 890M (RDNA 3.5, 16 compute units), and the XDNA2 NPU is rated at 55 TOPS, pushing total platform AI performance to up to 86 TOPS. Memory is fixed at 32 GB of LPDDR5X-8000, paired with a PCIe 4.0 NVMe SSD. Storage expansion is a strong point: the F5A offers three M.2 2280 slots, giving up to 12 TB of total capacity. Networking covers Wi-Fi 7, Bluetooth 5.4, and dual 2.5 GbE LAN ports.

For a system this compact (130 x 132 x 62 mm), the connectivity is solid. You get HDMI 2.1, DisplayPort 2.1, and dual USB4 ports with DisplayPort output and 40 Gbps transfer speeds, plus an OCuLink port for eGPU or high-speed storage expansion. Multiple 8K display outputs are supported. Cooling is handled by a dual-fan setup with a vapor chamber, rated up to 65 W. The F5A is available as a barebones configuration at $759 or as a preconfigured 32 GB / 1 TB model at $1,299; however, availability seems limited at launch.

NVIDIA Officially Launches GeForce RTX 5070 Laptop GPU with 12 GB GDDR7 Memory

NVIDIA has officially launched its new GeForce RTX 5070 12 GB laptop edition GPU, featuring higher-capacity GDDR7 memory. This confirms earlier rumors about an upgraded memory configuration. In a quiet release, NVIDIA announced its decision to use 24 Gb (3 GB) GDDR7 memory modules, which offer a 50% increase in memory capacity compared to the current 16 Gb (2 GB) GDDR7 configurations. As demand for GPU memory remains high, NVIDIA can balance the supply of 16 Gb GDDR7 modules by utilizing a new batch of 24 Gb GDDR7 modules from partners like SK hynix, Samsung, and Micron. The company describes this move as a way to ensure a sufficient supply of 16 Gb GDDR7 memory for the remaining GeForce RTX 50-Series "Blackwell" GPUs, maintaining healthy supply levels. The new GPU SKU complements the existing RTX 5070 8 GB Laptop Edition model, providing gamers with more laptop configurations to choose from.
NVIDIA: "Demand for GeForce RTX GPUs remains strong, and memory supply is constrained. In order to maximize memory availability, we are releasing the GeForce RTX 5070 Laptop GPU 12 GB configuration with 24 Gb G7 memory. This gives our partners access to an additional pool of memory to complement the 16 Gb G7 supply that currently ships with most GeForce GPUs. The 12 GB configuration will exist alongside the current 8 GB configuration, and allows our partners to bring a broader range of GeForce RTX 5070 laptops to consumers."
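The module arithmetic behind the 8 GB to 12 GB bump can be sanity-checked in a few lines. This is a hypothetical sketch: it assumes the 12 GB SKU keeps a four-module, 128-bit memory layout (one 32-bit GDDR7 module per channel), which the announcement does not spell out.

```python
# Hypothetical check of the GDDR7 module math behind the 8 GB -> 12 GB bump.
# Assumption (not stated in the announcement): four modules on a 128-bit bus.
MODULES = 4

def total_vram_gb(module_gbit: int) -> float:
    """Total VRAM in GB for a given per-module density in gigabits."""
    return MODULES * module_gbit / 8  # 8 bits per byte

old = total_vram_gb(16)  # 16 Gb (2 GB) modules -> 8.0 GB
new = total_vram_gb(24)  # 24 Gb (3 GB) modules -> 12.0 GB
print(old, new, (new - old) / old)  # the 50% capacity increase quoted above
```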

(PR) ASUS Opens Pre-orders for the 2026 ROG Zephyrus Duo

ASUS Republic of Gamers (ROG) today opens pre-orders for the all-new ROG Zephyrus Duo, the world's first 16-inch dual-screen OLED gaming laptop. It is powered by the latest Intel Core Ultra 9 386H processor and paired with up to an NVIDIA GeForce RTX 5090 Laptop GPU running at 135 W TGP, enabling flagship gaming and content creation. This ultra-premium machine is housed in a CNC-milled aluminium chassis with a stunning Stellar Grey colorway and comes complete with a detachable magnetic keyboard to keep users gaming and creating wherever life takes them.

The world's first 16" dual-screen gaming laptop
The Zephyrus Duo is the world's first 16-inch dual-screen gaming laptop. Boasting two 3K ROG Nebula HDR OLED touchscreens, with over 21 inches of total diagonal screen space available on one laptop, the Duo reimagines what a portable personal workstation is capable of. Both panels offer a 0.2 ms response time, paired with a 120 Hz refresh rate, and the main screen supports NVIDIA G-SYNC, bringing crisp motion and nearly zero ghosting in games. With 1100 nits of peak HDR brightness and a VESA DisplayHDR True Black 1000 certification, HDR games and content look spectacular. For the creators out there, a color accuracy of Delta E < 1 and 100% coverage of the DCI-P3 color space offers twin professional-grade screens right out of the box.

(PR) Supermicro Expands Data Center Building Block Solutions Flexibility with Arm-Based Platforms and OCP Systems for Next-Gen AI Infrastructure

Super Micro Computer, Inc., an AI, Enterprise, Storage, 5G/Edge Total Solution provider, today expanded its Data Center Building Block Solutions (DCBBS) portfolio with new Arm-based server platforms powered by the new Arm AGI CPU, and new Open Compute Project (OCP) ORv3-compliant rack offerings. Supermicro leads the industry with over 20 OCP Inspired systems, which incorporate various OCP technologies and form factors, to simplify open data center deployments.

"Supermicro continues to advance its DCBBS with an expanded portfolio of Arm-based platforms and OCP systems for next-gen AI and HPC," said Charles Liang, president and CEO of Supermicro. "With high-density, liquid-cooled systems and energy-efficient Arm architectures, we enable scalable, flexible data centers that maximize performance-per-watt and accelerate AI adoption across cloud and enterprise environments."

(PR) FinalWire Releases AIDA64 v8.30

FinalWire Ltd. today announced the immediate availability of AIDA64 Extreme 8.30 software, a streamlined diagnostic and benchmarking tool for home users; the immediate availability of AIDA64 Engineer 8.30 software, a professional diagnostic and benchmarking solution for corporate IT technicians and engineers; the immediate availability of AIDA64 Business 8.30 software, an essential network management solution for small and medium-scale enterprises; and the immediate availability of AIDA64 Network Audit 8.30 software, a dedicated network audit toolset to collect and manage corporate network inventories.

The latest AIDA64 update introduces AIDA FPS, a new module that captures real-time frame rate data in DirectX 11 and 12 games. It also adds support for Adaptec RAID controllers and the latest Turing-based external displays, along with expanded support for the newest graphics and GPGPU technologies from AMD, Intel, and NVIDIA.

DOWNLOAD: FinalWire AIDA64 v8.30

(PR) Corsair Launches ThermalProtect PCIe 5.1 600W 12V-2x6 Cable to Protect GPUs from Overheating

Corsair (Nasdaq: CRSR) is proud to announce the launch of ThermalProtect, a 12V-2x6 GPU power cable featuring innovative technology that monitors the temperature of a GPU's power cable in real time to help prevent damage to the GPU.

Corsair ThermalProtect
ThermalProtect enables additional Over Temperature Protection (OTP) for GPUs by actively monitoring the 12V-2x6 cable. Housed discreetly inside a cable comb 30 mm from the connection point, the hardware offers a clean, unobtrusive appearance. If the sensor detects extreme temperatures, ThermalProtect activates, causing the GPU to shut down to prevent potential damage.

Corsair’s New 16-Pin Cable Shuts Down Your GPU The Moment The Connector Gets Too Hot, Works With Any 12V-2×6 PSU


Corsair has just launched its ThermalProtect PCIe 5.1 600W cable for 16-pin GPUs with over-temperature protection to safeguard your graphics cards. Corsair Is The Latest Manufacturer To Offer A Safer Cable With Built-In OTP To Safeguard 16-Pin GPUs. Running a 16-pin graphics card has become a scary ordeal: you will constantly be checking real-time monitoring utilities, the physical connectors, and your PC for fluctuations, because 16-pin cards have a higher chance of dying due to the flimsy nature of the connector. Every day, you hear of an RTX 5090 or a similar 16-pin graphics card either burning up or dying […]

Read full article at https://wccftech.com/corsair-16-pin-cable-shuts-down-your-gpu-the-moment-the-connector-gets-too-hot/

Chinese Smartphone Makers Are Racing To Beat The iPhone 20’s Quad-Curved Display Design While Copying The Liquid Glass UI


Apple’s new Liquid Glass UI was almost immediately copied by Chinese smartphone competitors as they integrated the plethora of changes on their custom Android skins. However, that’s not the only design these manufacturers intend to bring, and if they are quick enough, they can actually beat the Cupertino firm in their race to introduce a quad-curved display design, the same one that’s expected to debut on the iPhone 20. OPPO is said to be working on this technology and is rumored to adopt the same bezel-less illusion effect that’s said to arrive on Apple’s 20th anniversary release. OPPO is rumored […]

Read full article at https://wccftech.com/chinese-smartphone-companies-copy-iphone-20-quad-curved-display-design/

Denuvo and 2K Reportedly Add Two-Week DRM Checks After Pirates Crack All Denuvo-Protected Single-Player Games


The war on piracy in video games between the game publishers selling games and the parrot-companioned players who would rather not pay for them has just seen its latest milestone: a committed community of players known as the MKDev Collective, along with user DenuvOwO, who are known for cracking games, claim they have cracked all Denuvo-protected single-player games. This has seemingly caused Denuvo and at least one publisher, 2K Games, to respond with online DRM checks on a two-week window. This all comes from a new report from Tom's Hardware, which spotted claims on X (formerly Twitter) that games […]

Read full article at https://wccftech.com/denuvo-2k-games-reportedly-add-two-week-drm-checks-after-pirates-crack-all-single-player-denuvo-games/

Australian Buyers Pay Just 9.5% More For GPUs While US Shoppers Get Hit With 22% RAMpocalypse Premium


While GPUs aren't seeing significant price increases these days, they are still selling for much higher prices than in the pre-RAMpocalypse era. GPU Market Data for 10 Countries Show Prices for NVIDIA RTX 50 and AMD RX 9000 Series GPUs Are Almost Stagnant, but the Future Remains Uncertain. We have the latest GPU price data for 10 countries from TechSpot, which has been tracking prices since November 2025, when GPU prices were at their lowest since the RAMpocalypse started. GPUs started to become more expensive right after DRAM and SSDs did, and although we didn't see […]

Read full article at https://wccftech.com/latest-gpu-market-data-shows-stabilized-prices-in-different-regions/

NVIDIA Bumps RTX 5070 Laptop GPU To 12GB Using New 3GB GDDR7 Memory, Offers 50% Boost While Tackling Supply Constraints


NVIDIA is launching a new configuration of its highly popular GeForce RTX 5070 Laptop GPU, which features 12 GB of memory. NVIDIA GeForce RTX 5070 Laptops Are Very Popular & Now Getting a 12 GB Memory Option NVIDIA's GeForce RTX 5070 laptops are highly popular in the gaming segment. These laptops not only offer great gaming performance, but they also pack strong AI capabilities, making them an ideal choice for mainstream users. Now, NVIDIA is expanding its GeForce RTX 5070 Laptop lineup with a brand new configuration. The new configuration packs 12 GB of memory, a 50% boost over the […]

Read full article at https://wccftech.com/nvidia-bumps-rtx-5070-laptop-gpu-to-12gb-gddr7-memory/

NVIDIA’s New Game Ready Driver Lands Days Before Conan Exiles Enhanced Brings DLSS 4 to Funcom’s UE5 Overhaul


NVIDIA has released a new GeForce Game Ready Driver (version 596.36) ahead of the May 5th launch of Conan Exiles Enhanced, the sweeping Unreal Engine 5 overhaul of Funcom's long-running survival title. As announced last week, Conan Exiles Enhanced will debut on PC exclusively via Steam as a free upgrade. For GeForce RTX players, the update also includes support for DLSS Multi Frame Generation, DLSS Frame Generation, DLSS Super Resolution, and NVIDIA Reflex, the latter reducing PC latency to make combat and survival gameplay feel snappier and more responsive. All of these GeForce features were entirely missing from the original Conan Exiles, so they will be […]

Read full article at https://wccftech.com/nvidia-game-ready-driver-conan-exiles-enhanced-dlss-4-ue5/

OpenAI and Microsoft's alliance fractures as cloud exclusivity deal ends β€” Azure's single-provider monopoly for ChatGPT is officially over

Microsoft and OpenAI have announced a restructuring of their relationship. Microsoft will no longer pay OpenAI a revenue share, though revenue will continue to flow in the other direction. Microsoft will also retain model access and a right of first refusal for its Azure server services, but OpenAI will be able to work with other CSPs.

ChannelScout – Drop your app URL to get ranked channels and a 30-day launch plan


ChannelScout helps indie founders find where to launch and grow. Paste your app URL and in minutes it returns a ranked list of real communities, with reasoning, what to skip, and a 30-day roadmap sized to your hours. It builds a brand guide, suggests content in your voice, tracks live threads asking for your product, and generates slide decks. Buy a $19 Blueprint once, which gives you free access to AI tools for 30 days to generate content, then optionally keep Scout Signal for ongoing hunts.

View startup

Why Secure Data Movement Is the Zero Trust Bottleneck Nobody Talks About

Every security program is betting on the same assumption: once a system is connected, the problem is solved. Open a ticket, stand up a gateway, push the data through. Done. That assumption is wrong. It is also a major reason Zero Trust programs stall. New research my team just published puts numbers on it. The Cyber360: Defending the Digital Battlespace report, based on a survey of 500 security

How to measure paid social’s impact on PPC


If your paid social campaigns aren’t converting, you may be undervaluing their impact. Your brand’s exposure on social media can influence other parts of your marketing that platform metrics don’t capture.

Here’s how to design and measure a test to understand how paid social influences your other marketing channels, including PPC.

Step 1: Determine your hypothesis

Start with what you want to learn, then define a hypothesis you can realistically evaluate with your data.

For example, this is a common hypothesis for measuring paid search lift from social traffic:

  • Search lift hypothesis: Increasing spend on social media will increase brand search volume and overall PPC CTRs.
  • Logic:
    • Social ads build brand awareness. As more people become familiar with our brand, they will search for it more often when making research and purchase decisions.
    • As more people are exposed to our brand, they will increasingly click on our PPC ads regardless of their search term (i.e., increasing non-brand and brand CTRs).
    • People exposed multiple times to our brand will have a higher trust factor in our products, and therefore, our conversion rates will increase.
  • Measurement:
    • Impression and click volume for our branded terms.
    • CTR changes for brand and non-brand terms.
    • Conversion rate changes for brand and non-brand terms.

Your hypothesis could have a different scope, such as measuring paid and organic lift from social spend or an increase in direct traffic.

Step 2: The test

The next step is to set up the test parameters. Generally, measuring before and after a change is a mistake, as seasonality or other factors can affect your test results.

The most common test setup is a geographic split. In this test, we’ll increase social spend for only a set of geographies. Then we’ll examine the PPC data for the geographies where we ran the test and compare them with areas where we did not.

As you choose geographies, you’ll want to control for other variables that may affect your test. Here are some common issues that companies have run into and need to control for in their tests and measurements:

  • You sponsor a sports team, and they’re playing during your test.
    • If the game is regionally televised, this can dramatically affect your test results.
  • You’re running TV commercials in only certain regions.
  • You choose experimental geographies with many out-of-region commuters, such as New York City, and include New Jersey and Connecticut in your control group.
    • In these instances, grouping a region and its surrounding commuter areas together, and placing other cities with similar characteristics, such as Chicago and Philadelphia, in a different group, can help balance these tests. (Note: in this example, we’re splitting New Jersey in half.)
  • Seasonal or local events. Large conferences, festivals, or major weather events can affect your data.

Your control and experimental groups should be statistically similar across factors such as income levels, and urban versus rural regions.

As you set up and measure your test, consider your budget. If you increase social spend and expect higher clicks and conversions for your PPC campaigns, ensure you have the budget to capture the increased demand.

Examine your impression share and impression share lost to budget before and after the test to ensure budget limits won’t severely impact your results.
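The comparison itself can be reduced to a difference-in-differences: the change in the test geographies net of whatever moved the control geographies (seasonality, market-wide trends). A minimal sketch, with purely illustrative click counts:

```python
# Minimal geo-split lift sketch; the numbers below are illustrative only.

def pct_change(before: float, after: float) -> float:
    return (after - before) / before

def diff_in_diff(test_before, test_after, ctrl_before, ctrl_after):
    """Lift in the test geos net of the drift seen in the control geos."""
    return pct_change(test_before, test_after) - pct_change(ctrl_before, ctrl_after)

# Brand-search clicks summed over matching pre/post windows:
lift = diff_in_diff(test_before=10_000, test_after=12_500,
                    ctrl_before=9_800,  ctrl_after=10_094)
print(f"net lift: {lift:.1%}")  # 25% raw growth minus 3% baseline drift
```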

Dig deeper: Why PPC tests in 2026 call for nuance, not winners

Step 3: The measurement

Measurement can go from very simple to extremely complex.

At a simple level, you can compare platform data to see how your data changed. In this case, a Google Ads report shows how pausing social spending and influencer campaigns across all social platforms (TikTok, LinkedIn, Facebook, YouTube, etc.) affects performance.

For this test, pausing social spending yielded mixed results for conversion rates. As brand searches decreased, conversion rates in some regions increased, while in others they fell.

However, what was consistent was a dramatic drop in conversions.

You can get more sophisticated in your testing. Depending on your analytics setup, some companies want to measure touchpoint differences for their conversions. Others will want to measure overlap rates between social and paid search visitors, or examine attribution touchpoints and models.

Before you set up your test, ensure you have the measurement capabilities needed to understand and interpret the results.
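As one example of such a capability, a CTR difference between test and control geographies can be sanity-checked with a two-proportion z-test on clicks and impressions. This is a simplified sketch with made-up numbers; a real analysis should also account for day-to-day correlation in the data.

```python
import math

def two_prop_z(clicks_a: int, imps_a: int, clicks_b: int, imps_b: int) -> float:
    """z-score for the difference between two CTRs (clicks / impressions)."""
    p_a, p_b = clicks_a / imps_a, clicks_b / imps_b
    pooled = (clicks_a + clicks_b) / (imps_a + imps_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / imps_a + 1 / imps_b))
    return (p_a - p_b) / se

z = two_prop_z(clicks_a=1_300, imps_a=20_000,   # test geos
               clicks_b=1_150, imps_b=20_000)   # control geos
print(round(z, 2))  # |z| > 1.96 roughly corresponds to 95% significance
```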



Step 4: Evaluation beyond the test criteria

As you run various tests, you want to measure the results against your hypothesis. However, it’s useful to list other variables worth evaluating beyond your test criteria.

This is where search consoles, analytics tools, CRM, internal data, and even the paid and organic report can come into play.

In one example, a company was running a test to see whether pausing several advertising channels, from social media to TV ads, would dramatically change its brand search volume. They hypothesized that their brand was so well known in the marketplace that they could cut back on several forms of brand advertising and reallocate that budget to other channels and non-brand advertising.

While the simple paid and organic report in Google Ads won’t tell you the full story about in-store revenue and direct traffic changes, it can serve as a signal to form an overall picture of a very complex test.

They had recently launched a new product line, and that line continued to see a large increase in traffic during the test. However, their most common brand terms saw significant declines from the test. This was a year-over-year comparison across a set of geographies, rather than a period-to-period comparison, to help correct for the increase in holiday traffic that would have occurred during the previous period.

The results were by far the most dramatic I’ve ever seen in this type of test, to the point it was clear other variables had to be in play that could affect the test.

This takes you to the sniff test. Rely on your experience with data to make common sense adjustments. If you look at the data and it just doesn’t seem right, ask yourself whether this makes sense, if it’s a math quirk (common with low data), or if other unforeseen variables are in play.

In this example, no one believed the results should be this dramatic. The company stopped running the test and began an internal evaluation of its organic presence, including Google’s recent updates, changes to AI Overviews, AI engagement, and other factors affecting its web presence beyond its usual marketing channels.

Dig deeper: Are your PPC ads still authentic in the age of AI creative?

What to do with your social impact tests

The test setup is simple:

  • Determine your hypothesis.
  • Decide how you will test. The easiest setup is a geographic split.
  • Make sure you can measure the results.
  • Launch the tests.
  • Evaluate the metrics for your hypothesis.
  • Examine other metrics for insight or additional testing ideas.

For some companies, Facebook and other social channels are their top conversion channels, and these tests won’t be applicable. For others, social media advertising results often look poor when evaluated in isolation.

In these examples, the companies were already running many social media campaigns, so the test was to reduce social media spend. If you don’t run much social media, your test will be to increase your social media spend to see how it affects your data.

I’ve seen a lot of these tests, and the results are highly inconsistent across companies. Many companies will increase their social media spend and see little change in their data. Others will increase their spend and see a nice lift in overall performance. These are tests you need to run yourself, as your results will vary by company.

Running geographic split tests in your social media campaigns and then measuring the results on paid or organic search traffic can give you insights into how to leverage social media campaigns for other marketing channels.

YouTube testing new search experience, Ask YouTube

Google announced they are testing a new “conversational search experience to complement how you already search on YouTube.” It is called “Ask YouTube” and it lets you “dive deeper into the topics you’re curious about in a more interactive way,” Dave from YouTube wrote.

What it looks like. Here is a GIF of it in action:

How can I try it. Go to youtube.com/new and opt into the experiment.

This experiment is currently available for YouTube Premium members 18+ in the US who opt-in. Google is working on expanding the experiment to non-Premium users in the future.

What it does. Dave from YouTube posted this example:

“If you’re in the experiment, you can try it out by selecting “Ask YouTube” in the search bar. For example, you can ask for help planning a 3-day road trip from San Francisco to Santa Barbara, and you’ll get a structured, step-by-step itinerary instead of a list of videos. The response will bring together a new mix of long-form videos, Shorts, and informative text featuring local tips and must-see stops. You can ask follow-up questions like, “where can I find good coffee?” to explore local spots along your route. We’ll surface videos and relevant video segments, accompanied by their titles and channel details, to make it easy to discover new creators and jump into the most helpful content from your search.”

Why we care. AI search is creeping into every search interface across Google’s properties. YouTube is no exception. Expect more and more AI search experiences in more Google surfaces and expect them to change and adapt over time.

You can find more coverage of this across Techmeme.

New to PPC? 7 tips to build skills and confidence fast


Understanding the ins and outs of paid media can seem like an overwhelming process when you’re first entering the field. As AI has rapidly changed ad platforms in recent years, keeping up can feel challenging.

Thankfully, you’re not alone. You’re part of a supportive industry with a wealth of content and knowledge to share. Here are seven tips to help you learn and become a more confident PPC manager.

1. Be curious

Curiosity is foundational to growth in PPC. You’ll learn best by taking initiative to understand ad platforms, how campaigns are structured, and what options are available on the backend. Of course, be careful about tweaking settings you’re not familiar with, but don’t be afraid to dig in on your own.

If you’re part of a team, ask your colleagues why they use a particular setup. If you’re not familiar with a platform and have a team member who frequently uses it, ask if they can walk you through it.

2. Absorb content and find community

There are countless industry professionals producing content to teach PPC. Whether you learn best from reading, listening to podcasts, or watching videos, you’ll find options that fit your style. Looking up the authors of articles on this site is a great starting point to build a list to follow.

Block out time in your schedule for education. Even setting aside a couple of hours a week helps you gain perspective from others in the industry and keep up with constant platform updates.

The PPC industry has long been known for its welcoming, supportive community. Seek out individuals and organizations who are actively sharing, and don’t be afraid to engage with them on social media. Conferences are also a great way to network with other PPC professionals and sometimes discuss their approaches in a more informal setting.

A brief word of caution: Vet recommendations you see from others against your own experience in ad accounts. Just because a “best practice” worked for one account doesn’t mean it’ll work for every account. Depending on the tactic, you may want to test it as an experiment to measure impact, or compare results before and after.

Dig deeper: What 10 years of PPC testing reveals about breaking best practices

3. Take industry certifications with a grain of salt

While ad platform certifications can serve as a starting point for demonstrating basic functionality, be cautious about relying on them as the end-all proof of PPC expertise.

Certifications often lean heavily on platform-recommended best practices, which may conflict with tactics that align with a brand’s goals. Academic knowledge can’t match the insight gained from practical, hands-on experience in accounts.

4. Don’t chase what’s new and shiny

While I’d encourage staying aware of ad platform updates and current tactics, I’d discourage implementing a new campaign type or expanding into a new platform just because it’s new. Make sure you have sufficient budget and a clear reason to test.

Additionally, avoid making adjustments without a rationale. If campaigns are performing and driving qualified leads or sales, keeping the status quo may be best.

Basic marketing principles still apply, such as knowing your target audience, addressing their problem with a solution, and presenting a clear call to action. Focus on aligning your channel choices with these goals, and the rest will follow.

Dig deeper: 10 keys to a successful PPC career in the AI age



5. Translate jargon for stakeholders

As you become more embedded in PPC, you may naturally use industry terms and acronyms such as CTR, CPC, ROAS, and CPA. However, these metrics are often meaningless to stakeholders who aren’t immersed in your world. One of the most vital skills for a paid media professional is translating abstract metrics into language that connects with what stakeholders care about.

For instance, I often default to “conversions,” even though the term can be ambiguous in reports. Referencing the actual action being tracked (such as account open, form fill, or purchase) is more concrete and ties directly to what stakeholders are tasked with driving.

6. Use AI, but don’t neglect the human touch

AI is an inevitable part of a future-forward career, and ignoring it will be detrimental to career development. However, don’t lose the human oversight that sets a seasoned PPC practitioner apart.

When writing ad copy, LLMs can offer a strong starting point and help refine wording. But don’t rely on AI to produce all your copy, as it may pull irrelevant content from your site (or elsewhere), and may not reflect your brand’s voice and perspective. Also, learn where AI can save time on β€œbusy work” tasks, such as reviewing search terms and placements for exclusions, while still reviewing the output for accuracy.

While most ad platforms default to automated campaign setups and encourage a hands-off approach, a standout PPC manager understands the levers they can pull to maintain control when needed. Examples include:

  • Setting target bids or cost caps.
  • Excluding irrelevant keywords, placements, and audiences.
  • Pinning headlines and descriptions in responsive search ads.
  • Restricting geographic targeting to avoid unwanted locations.
  • Tailoring creative to specific demographics.

Dig deeper: The new PPC playbook: From media buyer to profit engineer

7. Don’t change things for the sake of showing activity

One common temptation for both new and seasoned paid media practitioners is to make changes just to appear busy. The motivation may be valid, as you want to prove to your client or boss that you’re attentive to PPC account management.

However, particularly with campaigns that rely heavily on data to drive automated bidding, too many changes in a short period are often detrimental. Be sure to allow for data significance and enough time before pausing ads and keywords or tweaking bid targets.

If you can show positive performance trends and provide readouts on which campaigns and channels are driving those results, you can validate your decisions to take or not take action when presenting to stakeholders.

Keep learning, start sharing

Becoming a confident PPC manager requires mastering a blend of technical, interpersonal, and marketing skills. As you build your knowledge, look for opportunities to share what you’re learning with peers. It’s one of the fastest ways to reinforce what you know and keep improving.

Dig deeper: 7 power moves to accelerate your PPC career

The Division Resurgence is now available on PC in Early Access

The Division Resurgence is now available on PC through Ubisoft Connect. Following the game’s release on iOS and Android, Ubisoft’s The Division Resurgence is now available on PC in “Early Access”. Resurgence is a free-to-play looter-shooter RPG set in New York City. The game offers third-person tactical combat and is playable solo or in […]

The post The Division Resurgence is now available on PC in Early Access appeared first on OC3D.

(PR) ASUS Announces TUF Gaming Platinum Power Supply Series

ASUS today announced the TUF Gaming Platinum series of power supplies, available in 850-watt, 1000-watt and 1200-watt versions. These PSUs feature a next-generation GaN MOSFET, military-grade durability, flagship fan design, and PCB protection for efficient and consistent power delivery. Compared to traditional silicon-based MOSFETs, the GaN variety delivers advanced energy efficiency, faster switching speeds, lower heat generation, and higher voltage resistance, positioning it as a server-grade material for demanding applications such as AI development, high-end gaming, and video editing. This MOSFET's compact size also enables a streamlined layout with larger heatsinks and enhanced airflow for additional heat dissipation.

GaN MOSFETs for maximum efficiency and cooler operation
Gallium Nitride MOSFET technology has been instrumental in making premium ROG Strix Platinum PSUs and ROG Thor III PSUs efficient powerhouses, so ASUS is excited to bring this tech to a range of TUF Gaming Platinum PSUs.

ASUS Strix RTX 3090 Has A Hidden Design Flaw That Kills The VRM, And Owners Are Finding Out The Hard Way


The fancy-looking ASUS RTX 3090 has a common problem that often goes unnoticed but can kill GPU components if left unaddressed. NorthWestRepair Fixes Damaged PCB and SMD Components on an ASUS Strix RTX 3090, Caused by Too Much Pressure Applied to the Shroud Screws. Sometimes a simple disassembly/reassembly can cause complicated problems with PC components, and that's even harder to spot when it's a GPU. The PCB traces and SMD components are easy to damage, resulting in shorts that could potentially kill the GPU. That's why it's advised not to start any DIY process with these components […]

Read full article at https://wccftech.com/asus-strix-rtx-3090-has-a-hidden-design-flaw-that-kills-the-vrm/

Colorful Bets On Both Sides Of Intel's Socket Divide With New Battle-AX Boards Built for the DDR5 Price Crisis

Two Colorful motherboards from the 'Battle-AX B860M & B760M Series' are depicted with the text 'Battle-AX Strike First' prominently displayed.

Colorful has launched two budget-tier motherboards with the LGA 1851 and LGA 1700 sockets for previous-gen and latest-gen Intel processors. Colorful Introduces Battle-AX B860M-Plus S and B760M-Plus S Motherboards With WiFi 7 and PCIe Gen 5.0 Support for Intel 12th/13th/14th Gen and Core Ultra 200S Series Chinese hardware maker Colorful has introduced two new motherboards in the Battle-AX series. While most manufacturers are releasing boards for the newer Arrow Lake processors, Colorful has introduced new motherboard models for both Arrow Lake and 12th/13th/14th Gen processors. Both motherboards are aimed at the budget segment, featuring the B860 and B760 chipsets. The […]

Read full article at https://wccftech.com/colorful-launches-battle-ax-b860m-and-b760m-motherboards/

Saros: Can You Beat the Tutorial Boss? (Consort Boss Guide)

A character in 'Saros' standing in front of a large, ominous structure on a desolate landscape.

The opening of Saros pits you against the Consort, a formidable tutorial boss designed to teach you the high stakes of combat. While most players will fall to this boss, it is possible to defeat it and receive some rewards for your efforts. However, the fight is genuinely difficult because you lack advanced abilities such as Parry, Power Weapons, and Second Chance (a free revive) that are unlocked later in the game. You are limited entirely to a Handcannon and your own action-game skills. Should You Try to Beat The Consort? If you manage to defeat the Consort, you […]

Read full article at https://wccftech.com/how-to/saros-can-you-beat-the-tutorial-boss-consort-boss-guide/

Samsung Exiting MLC NAND Business Opens Up Space For Taiwanese NAND Maker, Resulting In A 382% Revenue Increase

Samsung recently exited low-end NAND and DRAM businesses, which has resulted in a major boost for Taiwanese and Chinese markets. Taiwan-Based NAND Maker Macronix Sees Big 4x Boost In MLC Business As Samsung Focuses on High-End Products Recently, we reported that Samsung had exited the LPDDR4 and LPDDR4X memory segments to focus on the more lucrative LPDDR5/X memory, which is a hot item for AI. The semiconductor giant also closed its MLC NAND business to focus on higher-end NAND products. While this led to a short disruption in the memory and NAND markets for entry-level platforms, others have filled in the […]

Read full article at https://wccftech.com/samsung-exiting-mlc-nand-buisness-opens-up-space-for-taiwanese-nand-maker/

EXPO 1.2 only brings partial CUDIMM support due to lack of native IMC compatibility; Asus also working on updating older B650 and X670 boards with EXPO 1.2

New AGESA updates are bringing EXPO 1.2 support to more and more 800-series AM5 motherboards, but CUDIMM support remains only partial. Because the integrated memory controller in Zen 4 and Zen 5 lacks native CUDIMM support, CUDIMMs can only run in bypass mode at limited speeds. Moreover, older 600-series motherboards are also expected to get EXPO 1.2 support later.

PixGrid – Create TikTok and YouTube visuals with fast AI photo editing


PixGrid is an AI-powered online photo editor that runs entirely in your browser, letting you remove backgrounds, erase objects, generate images from text, and stylize photos in one workflow. You can build collages, add text, manage layers, and export high-res PNG, SVG, or PDF. PixGrid runs Premium AI features on private servers and never stores your images. Start free with daily exports, then unlock unlimited exports and advanced tools with affordable one-time passes.

View startup

PDF Pro – View, edit, convert, analyze, and process PDFs privately in your browser


PDF Pro is a privacy-first document workspace that runs in your browser. You can view, create, edit, annotate, merge, split, convert, and compress PDFs with no uploads needed for local tools. It also lets you chat with documents, translate PDFs and images, and extract structured outputs like tables, charts, Word, and Excel.

PDF Pro secures sensitive files with client-side encryption, end-to-end encrypted transfers, and cryptographic signing and verification. The Chrome extension lets you open any PDF instantly and work with it directly in your browser, keeping files on your device without interrupting your workflow.

View startup

Rotten Tropes – Discover movie tropes before you watch a film


Rotten Tropes tags new movie releases with recurring narrative patterns like Power Always Corrupts, The Chosen One, Violence Gets Results, or Love Conquers All, so you can pick films by the messages you want and skip the ones you've seen too often. Mark any trope as loved or avoided, and the site surfaces matching releases while hiding the rest.

Built for people who notice the same patterns repeating across blockbusters (Letterboxd list-makers, TVTropes readers, or anyone who's googled 'movies that aren't another chosen-one story'). Search any film for its tropes, browse this week's pre-tagged releases, or use the Trope Explainer to learn what each pattern means and which films use it.

View startup

Critical Unpatched Flaw Leaves Hugging Face LeRobot Open to Unauthenticated RCE

Cybersecurity researchers have disclosed details of a critical security flaw impacting LeRobot, Hugging Face's open-source robotics platform with nearly 24,000 GitHub stars, that could be exploited to achieve remote code execution. The vulnerability in question is CVE-2026-25874 (CVSS score: 9.3), which has been described as a case of untrusted data deserialization stemming from the use of the

After Mythos: New Playbooks For a Zero-Window Era

When patching isn't fast enough, NDR helps contain the next era of threats. If you've been tracking advancements in AI, you know the exploit window, the short buffer that organizations relied on to patch and protect after a vulnerability disclosure, is closing fast. Anthropic's new model, Claude Mythos, and its Project Glasswing, showed that finding exploitable vulnerabilities and subtle cracks

Where PPC and SEO teams lose control in branded search by Bluepear

Branded search is often treated as predictable and easy to manage. In practice, it isn't.

PPC teams see rising CPC on brand terms. SEO teams see declining branded CTR, even when rankings hold. These issues are usually investigated separately, with different dashboards, hypotheses, and fixes.

Both signals often stem from changes within a single SERP. What looks like two separate problems is, in reality, one shared environment reacting to shifts in competition and visibility.

The issue isn't a lack of data. Most teams already have basic reports and brand monitoring tools, including PPC and SEO platforms. The problem is how the data is used.

To understand what's happening in branded search, teams must manually piece signals together. This takes time, doesn't scale, and delays decisions.

Here's why that fragmentation is harmful and what to do about it.

What's actually happening in branded search

Branded search is often described in terms of channels: paid and organic. For users, that distinction doesn't exist.

A single SERP brings together multiple layers:

  • PPC ads
  • Competitor ads or comparison pages
  • Organic results, including brand-owned pages
  • Affiliate listings promoting the same brand
  • Review platforms and aggregators

All of these elements appear at once, within the same decision-making space.

From a SERP analysis perspective, this isn't a set of isolated placements. It's a dynamic environment where each element influences the others. A competitor ad above your organic result can reduce CTR. An affiliate listing can compete with your paid campaign. A review page can shift user intent before a click.

In practice, this creates a mismatch.

For users, branded search is a single page. Inside the company, it's split across workflows and handled by different functions.

PPC focuses on bids and efficiency. SEO focuses on rankings and organic traffic. Affiliate activity is often tracked separately, if at all. Competitor tracking may exist, but usually within a single channel. The result is a fragmented view of what is, in practice, a shared space.

Understanding what's happening in branded search often requires manual effort. The data is there, but building a complete, up-to-date view of the SERP on a regular basis is time-consuming and hard to scale. That makes it difficult to understand how these elements interact, and even harder to respond to changes as they happen.

What PPC teams see (and often miss)

From a PPC perspective, teams focus on these signals:

  • Brand CPC starts to rise.
  • More players appear in the auction.
  • Branded campaigns become less efficient over time.

At first glance, this suggests increased competition. The typical response is to adjust bids, defend impression share, or refine targeting. All of it makes sense within paid media.

But this is where context changes everything.

What PPC teams don't always see is who's driving that competition.

Not every new entrant in the auction is a direct competitor. Often, it's affiliate activity: partners bidding on branded terms outside agreed-upon rules. Without deeper competitor tracking, these cases can look identical while requiring different actions.

There's also the organic layer. Changes in SERP structure (more ads, different layouts, stronger third-party rankings) can directly affect paid performance. Even if the campaign setup stays the same, the environment shifts. Without ongoing SERP analysis, these changes are easy to miss.

In many cases, brands aren't just competing with others; they're competing with themselves. Over 40% of advertised pages already rank #1 organically (Ahrefs, 2025).

PPC teams rarely see the full page in context. They see auction data, metrics, and reports, but not always how their ads appear alongside organic results, affiliates, and other placements in real time.

But beyond missing context, there's a more practical limitation.

Ad platform reporting rarely explains what changed. It shows performance shifts, but not how the SERP looked to users, who appeared alongside the ad, or how placements were arranged.

This creates a gap.

Competitor tracking without context doesn't explain the situation; it only signals change. Without broader SERP-level brand monitoring, PPC teams often optimize on partial visibility, reacting to symptoms while the root cause must be reconstructed manually.

What SEO teams see (and often miss)

From the SEO side, branded search issues tend to surface differently.

The most common signals look like this:

  • Branded CTR starts to decline.
  • Rankings remain stable, often still in top positions.
  • SERP appearance shifts: new elements, richer features, or different page layouts.

On the surface, it looks like an SEO problem. The natural response is to review snippets, adjust metadata, or check for technical or content issues.

But in many cases, performance drops aren't driven solely by SEO factors.

SEO teams generally know that paid activity, competitors, and affiliates can influence branded search. The challenge isn't awareness; it's consistent visibility over time.

To understand what changed, teams need to see how the SERP looked at a specific moment:

  • Which ads appeared and where.
  • Whether competitors or affiliates were present.
  • How organic results were positioned in context.

This isn't what standard SEO workflows are built for. Teams often have to manually check results, compare snapshots across tools, or rely on incomplete data.

Then there's the SERP itself. Modern branded SERPs aren't static. Layout changes, added modules, and mixed result types can significantly affect click behavior.

Without consistent SERP analysis, it's hard to isolate the cause. As a result, SEO teams may keep optimizing and see no stable results.

Why PPC and SEO issues are actually connected

At a glance, PPC and SEO issues in branded search may look unrelated: different metrics, dashboards, and teams. But when you look at the SERP as a whole, the connection is hard to ignore.

Studies show this overlap isn't an edge case. Nearly 38% of websites advertise on keywords where they already rank in the top 10 organically (Ahrefs, 2025). In branded search, the overlap is even higher.

That means both channels operate in the same environment and compete for the same user attention.

Changes within that environment rarely affect just one side:

  • Increased ad presence can push organic listings lower or draw clicks away.
  • Aggressive bidding (from competitors or affiliates) can raise CPC while also reducing organic search visibility.
  • New entrants in the SERP can affect both paid efficiency and organic CTR simultaneously.

In this context, it's not unusual for PPC performance to decline while SEO metrics shift in parallel. These aren't isolated issues; they're different reflections of the same underlying change. Yet they're rarely analyzed together.

The real problem isn't visibility; it's fragmentation.

Most teams already have access to data. Specialized tools make SERP analysis, competitor tracking, and brand monitoring possible. The limitation isn't what can be seen, but how it's used.

PPC and SEO operate in separate systems: different platforms and reporting environments, KPIs, and workflows. To understand what changed in branded search, teams must align manually by comparing reports, checking SERPs, validating assumptions, and sharing findings across functions.

As a result, insights are delayed, alignment lags behind SERP changes, and decisions are made with incomplete or outdated context.

How to improve branded search performance

Most teams don't miss the signals: a spike in CPC, a drop in CTR, unexpected competitors in the auction. These changes rarely go unnoticed. The challenge comes next: confirming what happened and deciding how to respond.

This is where branded search performance slows. Teams dig through separate reports, trying to reconstruct what the SERP looked like at a specific moment. By the time the picture is clear (if it ever is), the window to react has already passed.

Improving performance here isn't about adding more data. It's about changing how it's collected and used.

With the right setup, SERP analysis becomes continuous instead of manual. Changes in branded search are captured automatically, including competitor and affiliate activity that might otherwise require manual checks or post-fact validation, or simply go unnoticed.

Tools for branded search monitoring such as Bluepear provide:

  • A unified view of the SERP at a specific moment.
  • Automated alerts when meaningful changes occur.
  • Pre-collected, timestamped evidence that removes the need to manually gather screenshots or reconstruct past states.

Instead of spending time collecting screenshots, comparing reports, and reconstructing what happened, the information is already structured.

This shifts the process from reactive to operational. Instead of investigating issues after the fact, teams receive a clear signal or a complete case.

This creates a reliable record of what actually happened:

  • When a new player entered the SERP.
  • How placements shifted over time.
  • Where potential violations or conflicts appeared.

Instead of scattered evidence and manual reconstruction, teams get structured, ready-to-use context.
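In code terms, this snapshot-and-diff workflow can be sketched in a few lines. The example below is a minimal, hypothetical illustration, not Bluepear's actual API: the `Snapshot` shape and the `diff_snapshots` helper are assumptions, and a real monitor would also persist the timestamped snapshots as evidence.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Snapshot:
    """One timestamped capture of a branded SERP (hypothetical shape)."""
    taken_at: datetime
    # domain -> placement type ("ad", "organic", "affiliate", ...)
    placements: dict[str, str] = field(default_factory=dict)

def diff_snapshots(prev: Snapshot, curr: Snapshot) -> list[str]:
    """Return human-readable alerts for meaningful SERP changes."""
    alerts = []
    for domain, kind in curr.placements.items():
        if domain not in prev.placements:
            alerts.append(f"new {kind} entrant: {domain}")
        elif prev.placements[domain] != kind:
            alerts.append(f"{domain} moved from {prev.placements[domain]} to {kind}")
    for domain in prev.placements:
        if domain not in curr.placements:
            alerts.append(f"{domain} left the SERP")
    return alerts

# Example: an affiliate starts bidding on the brand term between two checks.
prev = Snapshot(datetime.now(timezone.utc), {"brand.com": "organic"})
curr = Snapshot(datetime.now(timezone.utc),
                {"brand.com": "organic", "dealsite.io": "ad"})
print(diff_snapshots(prev, curr))  # ['new ad entrant: dealsite.io']
```

Run on a schedule, a diff like this is what turns raw SERP captures into the structured, timestamped alerts described above.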

Reporting becomes simpler. Insights can be shared across PPC, SEO, and affiliate teams without rebuilding context each time, reducing internal alignment time. Most importantly, decisions can be made faster.

With Bluepear, brand monitoring and competitor tracking become continuous. Teams receive structured signals instead of raw fragments and can act without rebuilding the situation from scratch.

To see how Bluepear can improve your workflow, create an account and start your free trial.

Final takeaways

PPC and SEO teams don't lack data; they interpret different signals from the same SERP. But these signals are connected. They're shaped by the same changes in the search environment, even if they appear in different reports.

When SERP analysis is fragmented, it's harder to see the full picture, and even harder to act quickly.

What makes the difference is not more data, but better coordination:

  • Continuous brand monitoring instead of occasional checks.
  • Shared visibility across PPC, SEO, and affiliate teams.
  • A consistent view of the SERP, not separate channel reports.

When branded search is managed holistically, teams don't just react to performance changes; they understand what drives them and respond with clarity.

To simplify how your team tracks and responds to branded search changes, start using Bluepear to automate monitoring, capture SERP changes, and centralize evidence in one place.

Memory shortages could impact Xbox "Project Helix" pricing and availability

Xbox still has no launch timeline for "Project Helix" In a recent interview, Asha Sharma, the CEO of Xbox, confirmed that Microsoft's next-generation "Project Helix" console will be impacted by the global memory shortage. Memory pricing will affect both the console's pricing and its availability. At this time, Xbox is not ready to share its launch timeline for its […]

The post Memory shortages could impact Xbox "Project Helix" pricing and availability appeared first on OC3D.

(PR) LincPlus LincStation E1 NAS Brings the Cloud Home

LincPlus has officially unveiled LincStation E1, a compact and versatile 4-bay network-attached storage (NAS) solution designed for creators, home labs, and modern households. The device will launch soon on Kickstarter, giving early supporters access to exclusive early-bird offers.

As subscription fatigue grows and concerns around data privacy increase, more users are reconsidering traditional cloud storage services. LincStation E1 aims to provide a compelling alternative by allowing users to build their own private cloud while maintaining full ownership of their data. "LincStation E1 is built to give users the freedom of the cloud with the control of local storage."

(PR) ASRock Introduces the Advanced PG27QFW2A Gaming Monitor with a 400 Hz Refresh Rate

ASRock, a leading global manufacturer of motherboards, graphics cards, mini PCs, gaming monitors, power supply units, and AIO liquid coolers, today officially announced the launch of its latest gaming monitor, the PG27QFW2A, featuring a combination of a 400 Hz ultra-high refresh rate and a 2K QHD IPS panel. Designed to strike an optimal balance between speed and visual fidelity, it delivers a distinct experience beyond conventional gaming displays.

In the high refresh rate segment, many monitors continue to rely on TN panels. By adopting Fast IPS technology, the PG27QFW2A maintains 400 Hz fluidity while delivering accurate color reproduction and wide viewing angles, ensuring consistent and clear visuals even during fast-paced gameplay.

PlayStation Seemingly Brings Back the DRM Policy Sony Mocked Xbox For in 2013, Sparking Massive Backlash

The PlayStation Store DRM screen displays 'Don't Starve Together: Console Edition' with download status 'Completed' and a valid period end date of 14/5/2026 and 'Remaining Time' of 20 days.

Some PlayStation users have noticed a new online DRM policy for digital games purchased on the PlayStation Store: newly purchased digital games now display a "Valid Period" tag showing a start date, an end date, and a countdown timer. If the console does not connect to the internet within 30 days, the game's license reportedly expires, and the title becomes unplayable until an online connection is restored. Crucially, this only affects games purchased after March 2026, so titles already in players' libraries before the update are unaffected. The story broke over the weekend through Lance McDonald, the well-known modder who managed to […]

Read full article at https://wccftech.com/playstation-drm-policy-xbox-2013-backlash/

ACEMAGIC's Palm-Sized F5A Mini PC Pairs AMD's Ryzen AI 9 HX 470 With OCuLink & Dual USB4 For $759

ACEMAGIC's F5A Mini PC Packs AMD Ryzen AI 9 HX 470 With 86 TOPS AI Compute, Priced $759 For DIY & $1299 For 32 GB Config

ACEMAGIC has launched its brand new Mini PC, which packs AMD's Ryzen AI 9 HX 470, and starts at a price point of $759. ACEMAGIC Highlights Multi-Use Case Design of its F5A Mini PC, Which Packs AMD's Fastest Gorgon Point APU, The Ryzen AI 9 HX 470 Mini PCs make compact Agentic AI possible, and for this purpose, ACEMAGIC is releasing its brand new F5A Mini PC. The Mini PC is just as compact as one might expect, giving your PC setup a lot of breathing space. In terms of specs, the F5A comes packed with an AMD Ryzen AI […]

Read full article at https://wccftech.com/acemagic-palm-sized-f5a-mini-pc-pairs-amd-ryzen-ai-9-hx-470-oculink-dual-usb4-759-usd/

PlayStation 6 Could Run Path Tracing at 60 FPS as RDNA 5 Was Built for It, but Mandatory Handheld Support Threatens to Hold It Back

A sleek black console with blue lighting, labeled as 'PS6 PlayStation 6,' on a dark background.

The PlayStation 6 is said to deliver 10x ray tracing performance over the base PlayStation 5, but the real-world FPS gain will be closer to 3x in games that don't use much ray tracing. However, these improvements could allow the system to run games with path tracing at 30 and even 60 FPS, according to the tech experts at Digital Foundry, as Codemasters managed to get a path-traced game running on PlayStation 5 Pro at 30 FPS with some computational headroom to boot. During the latest episode of their weekly podcast, the tech experts went over the F1 25's Path […]

Read full article at https://wccftech.com/playstation-6-path-tracing-60-fps-rdna5-built-for-it/

NVIDIA Not Interested In HBF Memory Despite 4TB Stacks Dwarfing HBM, Google Reportedly Locked As Sampling Begins This Year

HBF Memory Offers Higher Capacities Than HBM But NVIDIA Isn't Interested In It (Yet), Sampling This Year With Google A Key Customer

High-Bandwidth Flash, or HBF, the memory that offers more capacity than HBM, will not be used by NVIDIA; instead, Google is going to be a key customer. NVIDIA To Keep on Using HBM, HBF Not Yet In Consideration, But Set For Sampling Later This Year NAND flash has seen big adoption with the recent AI push. While primarily used for storage, such as in SSDs, the upcoming flash memory technology could play a major role with HBF (High-Bandwidth Flash), the next-generation NAND tech that bridges the gap between HBM and NAND flash. HBF is being co-developed by SanDisk and […]

Read full article at https://wccftech.com/nvidia-not-interested-in-hbf-memory-despite-4tb-stacks-dwarfing-hbm/

Flydial – Automate cold calls and book meetings at scale


Flydial provides a dedicated outbound AI cold caller that pitches your product, handles objections, and books meetings around the clock. Upload contacts from CSV or your CRM, pick a voice and persona, and sync calendars to auto-schedule demos. Track performance with real-time analytics, run multiple parallel dialing lines, and tailor tone and messaging to your market. Pricing starts at $10/month plus per-minute usage, letting you replace or augment SDRs at a fraction of the cost.

View startup

Chinese Silk Typhoon Hacker Extradited to U.S. Over COVID Research Cyberattacks

A Chinese national accused of being a member of the Silk Typhoon hacking group has been extradited to the U.S. from Italy. Xu Zewei, 34, was arrested in July 2025 by Italian authorities for his alleged links to the Chinese state-sponsored threat group and for orchestrating cyber attacks against American organizations and government agencies between February 2020 and June 2021, including

Valve is "hard at work" on Steam Deck 2

Valve's next-gen Steam Deck is in development, but it won't launch soon Valve has confirmed that its Steam Controller is launching next week, but that's not the only piece of hardware that Valve's working on. The Steam Machine and Steam Frame are still due to launch this year, and Valve is already working on its […]

The post Valve is "hard at work" on Steam Deck 2 appeared first on OC3D.

(PR) EarFun Launches the Hi-Res AI-Powered Clip 2 Earbuds with Language Translation

EarFun, a leading innovator in wireless audio technology, today announced the launch of its groundbreaking EarFun Clip 2 earbuds, setting a new standard for comfort, performance, and affordability. Packed with cutting-edge features including AI translation, Hi-Res audio, and an innovative ergonomic design, the EarFun Clip 2 earbuds redefine the wireless listening experience.

Unlocking Language Barriers with Instant AI Translation
Breaking down language barriers has never been easier with the EarFun Clip 2 earbuds' AI translation feature. According to EarFun, the Clip 2 isn't just about passive listening but about engaging with the world around you. Supporting over 100 languages, these earbuds empower users to connect with people from other places and cultures through seamless real-time translation. EarFun offers this feature in its companion EarFun Audio app, where you can also adjust a host of other customization options for how you use the Clip 2.

Hefty Switch 2 Final Fantasy VII Rebirth Demo Drops Today, And It Silences Every Doubt About The Port's Quality

Three characters from 'Final Fantasy VII Rebirth' looking out of a window with expressions of awe.

Following Final Fantasy VII Remake Intergrade's release on Nintendo Switch 2 and Xbox consoles back in January, Square Enix will soon allow owners of these systems to continue Cloud's journey with Final Fantasy VII Rebirth from June 3. However, the first steps of this second journey can be taken right now thanks to a meaty playable demo that went live today. Unlike the PlayStation 5 demo, which only covered Chapter 1 of the full game, the Nintendo Switch 2 and Xbox Series X|S demos are packed with more content, as they let players not only experience the famous Nibelheim […]

Read full article at https://wccftech.com/switch-2-final-fantasy-vii-rebirth-demo-silence-doubt/

AMD Calls AI PCs "The New Enterprise Standard" As Enterprise Adoption Grows To 81% With The Agentic AI Boom

AMD Says AI PCs Are The "New Enterprise Standard" As Adoption Grows Massively Amidst Agentic AI Boom

AMD AI PCs are seeing massive adoption in the enterprise segment as Agentic AI emerges as the next frontier of computing. Only 4% of Organizations Don't Have Plans To Deploy AI PCs; The Rest Have Already Deployed Or Plan To Deploy Them, As AMD Leads The Charge In The AI PC Segment The focus on localized AI and the rise of Agentic AI have rapidly boosted the adoption of AI PCs. This has led AMD to see big demand for AI PCs in the enterprise segment as organizations shift their workflows, aligning with future AI requirements. AI is reshaping how […]

Read full article at https://wccftech.com/amd-says-ai-pcs-are-the-new-enterprise-standard-for-agentic-ai/

Resident Evil Requiem Developer Teases Free Mini-Game's Imminent Release, Reveals That a 'Chapter 2' Was Scrapped

Leon Kennedy in Resident Evil Requiem stands in the rain holding a bloody axe, surrounded by bodies in a dimly lit street.

Resident Evil Requiem has been such a smashing success with its seven million units sold in less than two months, the fastest ever in the horror franchise, that it even caused the upward revision of CAPCOM's fiscal year 2026 forecast from a net profit of $319.9M to $341.8M (+6.9%). Now, all those fans who loved the game are waiting for the first post-launch content drop: the mini-game, which will be released as part of a free update. Well, in an interview with Japanese website Denfaminicogamer, Game Director Koshi Nakanishi and Producer Masato Kumazawa have teased that it might be released […]

Read full article at https://wccftech.com/resident-evil-requiem-mini-game-release-chapter-2-scrapped/

Intel, AMD & MediaTek Are Boosting CPU Production But Prices Continue To Increase & Lead Times Reach 1-Year Mark

Intel, AMD & MediaTek Are Boosting CPU Production As Prices Continue To Increase & Lead Times Reach 1-Year Mark

Major CPU manufacturers such as Intel, AMD & MediaTek are boosting their CPU production, but prices continue to rise. CPU Makers Boost Production, But This Won't Change Current Pricing Trends & Shortages Agentic AI demand has prompted a CPU shortage that has pushed prices further up. As demand swells, the GPU-to-CPU ratio has fallen from 8:1 to 4:1 and is on a path to hit 1:1 sooner or later. As processors continue to face supply constraints, multiple chipmakers are now boosting their production capacities. Supply chain sources indicate that after Intel decided to prioritize its limited production […]

Read full article at https://wccftech.com/intel-amd-mediatek-are-boosting-cpu-production-prices-continue-to-increase/

Map Your Video – Know what to film before you press record


Map Your Video is a video planning tool that helps short-form video creators turn their ideas into a clear, structured shot list before they pick up their phone. Instead of hitting record and hoping for the best, you map out each frame of your Reel or Short so you know exactly what to film and in what order. When the map is done, you open your shot list and film scene by scene. It's built for ADHD solopreneurs who have many ideas but get stuck in the stall-record-delete loop. The problem was never the idea, but not knowing what to film. Map Your Video fixes that before you press record.

View startup

FRNTIR – Car pricing intelligence with zero dealer influence


FRNTIR is buyer-funded car pricing intelligence with zero dealer money or kickbacks. You get independent market data, leverage points, and a negotiation playbook so you know what to pay and when to walk.

Start free with Match My Ride (top 3 AI-matched picks and Decision Guide) and the Purchase Planner. Upgrade to the $49 Negotiation Package (VIN pricing and dealer playbook) or $99 Scout (AI ranks your top 3 picks and selects the best). It helps recover the $3,000+ buyers lose to dealer markups.

View startup

PlayStation 5 Players Without Recent PSN Server Access May Lose Access to Digital-Only Games

One of the often-touted benefits of console gaming is how physical media enables access to games even when there is no internet connection for the DRM to verify the game's license. Now, it seems as though Sony's PS5 will start locking players out of digital-only games if the console has been unable to connect to PSN servers within the last 30 days. This is according to testing done by Hikikomori Media, who found the issue with Wild Arms 4 and Vampire Crawlers, both of which were purchased as digital-only titles in April 2026.

The content creator simulated an offline PS5 by disconnecting it from the internet, resetting the internal clock by removing the CMOS battery, and then booting up the PS5 without an internet connection. Effectively, he tricked the PS5 into thinking that it had been offline for an extended period of time. The end result is that both Wild Arms 4 and Vampire Crawlers were unable to launch after the reboot. It's notable that games purchased before March 2026 would still launch without a hitch. Based on this testing and further testing with a PS4, which revealed a timer showing how long it had been since the console could call home, it seems as though games purchased after March 2026 will need to connect to PSN servers at least once every 30 days in order to be playable.
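Based purely on the behavior observed in that testing (Sony has not documented the mechanism), the inferred license check reduces to a simple date comparison. The sketch below is a hypothetical illustration of that logic; the function name, cutoff date, and grace period are assumptions drawn from the testing described above, not any actual Sony API.

```python
from datetime import date, timedelta

CUTOFF = date(2026, 3, 1)   # purchases after this date reportedly need check-ins
GRACE = timedelta(days=30)  # observed offline window before the license lapses

def can_launch(purchase_date: date, last_psn_contact: date, today: date) -> bool:
    """Sketch of the inferred license check; all names here are hypothetical."""
    if purchase_date < CUTOFF:
        return True  # older purchases launch regardless of connectivity
    return today - last_psn_contact <= GRACE

# A game bought in April 2026 on a console offline for ~2 months fails the check,
# matching what Hikikomori Media observed after resetting the internal clock.
print(can_launch(date(2026, 4, 10), date(2026, 4, 15), date(2026, 6, 20)))  # False
```

This also matches why the CMOS-battery trick works: the console only knows "today" from its clock, so rolling the clock forward simulates a long offline period.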

Valve Confirms Staggered Hardware Launch Due to DRAM Crisis: "This Doesn't Have RAM in It"

Valve confirmed not too long ago that it had delayed announcing the pricing of the Steam Machine, Steam Frame, and Steam Controller because of the ongoing DRAM shortage and price volatility crisis that has plagued the PC hardware industry in recent months. Eventually, the gaming giant caved and announced the Steam Controller ahead of the rest of the hardware that was meant to ship alongside it, and in a recent interview with Polygon, Valve confirmed what we all suspected: the Steam Machine and Steam Frame are still being held back by the DRAM crisis. As Valve hardware engineer Steve Cardinali explained about the early launch of the controller: "This doesn't have RAM in it, and it's not as complicated to start getting out the door for us. We're ready for it. We wanted to build up quantity so that we could try to address everybody who wants one at launch, but it's possible that the demand for it far exceeds our expectations."

He goes on to explain that there was never a hard-and-fast rule that the Controller, Frame, and Machine had to launch together, although he confirmed that Valve would not launch the Machine ahead of the Controller. It's safe to assume, then, that the Steam Machine and Steam Frame were originally meant to launch around May as wellβ€”Cardinali says that the Controller and Machine are "a pair made in heaven," so it only makes sense to try to launch them around the same time, even if "there's no point in holding it back while we work through the other stuff." According to VR industry insider Brad Lynch on X, the Steam Machine is more severely affected by the memory supply issues than the Steam Frame, since the Frame relies on on-package mobile RAM, which doesn't appear to be as severely impacted by the shortages. Reading between the lines, it seems entirely possible that the Steam Frame may launch ahead of the Steam Machine as long as RAM prices remain elevated and unstable.

Intel Xeon 6 With AMX Accelerate Microsoft’s Azure Local, Scaling Deployments To 1000s of Servers


Intel & Microsoft have joined hands to scale up Azure Local deployments from 100s to 1000s of servers using Xeon 6 CPUs, adjusting to user needs without architectural redesign.

Press Release: Azure Local is the foundation for Microsoft’s Sovereign Private Cloud, allowing organizations to run cloud-consistent infrastructure on hardware they own and operate within their sovereign boundary. It supports deployments across connected, intermittently connected, or fully disconnected environments. With Azure Local disconnected operations, customers retain the ability to apply policy enforcement, role-based access control, auditing, […]

Read full article at https://wccftech.com/intel-xeon-6-accelerate-amazons-azure-local-scaling-deployments-to-1000s-of-servers/

Screenplay Assistant – Type your story scene by scene and get a properly formatted screenplay


Screenplay Assistant is a writing tool for people who have always wanted to write a screenplay. You type your story scene by scene in your own words and the app turns it into a real, properly formatted screenplay. No learning curve or expensive software.

It is the first app from PLAYCODE3.ai, a small studio building everyday tools by hand. Currently in closed beta. Request access to be one of the first writers in.

View startup

MediTailor – AI meditation that generates unique sessions based on your mood


MediTailor delivers personalized, AI-generated meditation sessions that adapt to your mood, goals, and progress. It generates each session from scratch, so no two sessions are the same. Built on neuroscience and cognitive research, MediTailor guides you through goal-oriented journeys and tracks your emotional progress while keeping your words private.

View startup
