Server DRAM Pricing Jumps 50%, Only 70% of Orders Getting Filled

Server memory has become the scarcest commodity in tech. According to DigiTimes, Samsung and SK Hynix quietly circulated fourth-quarter contract appendices that retroactively increase RDIMM prices by 40-50%. Even hyperscalers that signed agreements in August must now pay the new rate or risk losing their queue position. The two Korean manufacturers simultaneously cut confirmed allocations by 30%, pushing Tier-1 U.S. and Chinese cloud order books to an effective 70% fill rate and eliminating the safety stock most buyers believed they had secured. Module manufacturers such as Kingston and ADATA now pay $13 for 16 Gb DDR5 chips that cost $7 six weeks ago, an increase large enough to erase their entire gross margin.
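As a back-of-the-envelope sketch of why an ~86% chip price jump can wipe out a module maker's margin: only the $7 and $13 per-chip figures come from the report above; the chip count, module price, and other costs below are invented purely for illustration.

```python
# Back-of-the-envelope margin math for a DDR5 module maker.
# Only the per-chip prices ($7 -> $13) come from the report above;
# the module price, chip count, and other costs are assumptions.
old_chip, new_chip = 7.0, 13.0
chip_increase = (new_chip - old_chip) / old_chip  # ~86% jump

chips_per_module = 8      # assumed: 8 DRAM chips on a 16 GB DIMM
module_price = 70.0       # assumed module selling price, USD
other_costs = 10.0        # assumed PCB, SPD hub, assembly, test

old_margin = module_price - (chips_per_module * old_chip + other_costs)
new_margin = module_price - (chips_per_module * new_chip + other_costs)
print(f"chip price up {chip_increase:.0%}")
print(f"gross margin per module: ${old_margin:.0f} -> ${new_margin:.0f}")
```

Under these assumed numbers the margin flips from slightly positive to deeply negative, which is the dynamic the report describes: a fixed-price module contract cannot absorb a near-doubling of its dominant input cost.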

Smaller OEMs and channel distributors have been told to expect 35-40% fulfillment through the first quarter of 2026, forcing them either to gamble on the spot market or idle production lines. Even older DDR4, now down to roughly 20% of global DRAM output, is affected: switches, routers, and set-top boxes that still use it are suddenly facing very long lead times because no fab wants to allocate wafers to trailing nodes. Analysts at TrendForce now forecast that the DRAM shortfall will outlast the 2026 hyperscaler build-out, meaning the industry's next relief valve may not be new capacity but a demand contraction, an outcome no manufacturer is willing to budget for.

Windows 11 Will Start Memory Scans After BSOD to Prevent Future Issues

Windows 11 is finally getting a feature most users will appreciate: an automatic memory scan after a blue screen of death (BSOD). "We're introducing a new feature that helps improve system reliability. If your PC experiences a bug check (unexpected restart), you may see a notification when signing in suggesting a quick memory scan," noted Windows Insider Program lead Amanda Langowski. The announcement continues: "If you choose to run it, the system will schedule a Windows Memory Diagnostic scan to run during your next reboot (taking 5 minutes or less on average) and then continue to Windows. If a memory issue is found and mitigated, you will see a notification post-reboot."

Microsoft notes that this first wave flags every bug check so it can observe how memory faults turn into blue screens, with more precise targeting to follow in later updates. For now the preview will not run on Arm64 PCs, machines with Administrator Protection turned on, or BitLocker setups that boot without Secure Boot enabled. Users in the Windows Insider Dev and Beta channels can access the feature: Windows 11 Insider Preview Builds 26220.6982 and 26120.6982 (both KB5067109) are the first in the rollout, letting Insiders beta test the feature before it reaches the stable Windows 11 branch.

NVIDIA DGX Spark Reportedly Runs at Half the Power and Performance

NVIDIA's DGX Spark, designed as the ultimate box for fast, local AI prototyping, is reportedly operating at half its expected power and performance levels. John Carmack, founder of AGI-focused Keen Technologies and former CTO of Oculus VR, claims that the DGX Spark mini PC is not meeting its specified performance. NVIDIA rates the system at 240 W, but Carmack's benchmarks indicate it draws only about 100 W, with performance reduced roughly in proportion. The DGX Spark's peak throughput is rated at approximately 31 TeraFLOPS for FP32 and around 1,000 TOPS with NVIDIA's NVFP4 reduced-precision format, and it is supposed to achieve 125 TeraFLOPS of dense BF16 compute. Instead, measured throughput is about 480 TeraFLOPS at FP4 and only about 60 TeraFLOPS at BF16.
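Putting the cited figures side by side makes the "half the spec" claim concrete. The numbers below are the rated and measured values from the paragraph above; the comparison itself is just arithmetic.

```python
# Rated vs. measured DGX Spark figures cited above.
# Compute entries are in TeraFLOPS/TOPS, power in watts.
rated = {"NVFP4": 1000.0, "BF16": 125.0, "power_w": 240.0}
measured = {"NVFP4": 480.0, "BF16": 60.0, "power_w": 100.0}

for key in rated:
    frac = measured[key] / rated[key]
    print(f"{key}: {measured[key]:g} / {rated[key]:g} = {frac:.0%} of spec")
# Both compute formats land at 48% of spec and power at ~42%, consistent
# with the machine running at roughly half its rated envelope.
```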

After multiple delays, NVIDIA's DGX Spark has finally reached developers, but many are reporting software and firmware issues on NVIDIA's end. There may also be thermal throttling, with the chip reducing frequency and voltage to prevent overheating; in some cases the system has rebooted, potentially due to inadequate cooling. The GB10 SoC is rated for a 140 W TDP, and the 128 GB of LPDDR5X could add several dozen watts on top, so a 100 W system-level draw suggests the hardware is running well below its intended limits. It remains to be seen whether a software or firmware update will address these issues, or whether NVIDIA will have to provide an improved cooling solution for its $3,999 machine if it continues to overheat.

(PR) AMD Sells ZT Systems Data Center Manufacturing to Sanmina

AMD (NASDAQ: AMD) today announced the completion of the agreement to divest the ZT Systems U.S.-headquartered data center infrastructure manufacturing business to Sanmina (NASDAQ: SANM). As part of the transaction, AMD retains ZT Systems' world-class design and customer enablement teams to accelerate the quality and time-to-deployment of AMD AI systems for cloud customers. Additionally, Sanmina becomes a preferred new product introduction (NPI) manufacturing partner for AMD cloud rack and cluster-scale AI solutions to further strengthen the AMD ecosystem of ODM and OEM partners.

"Rack-scale innovation marks the next chapter in the AMD data center strategy," said Forrest Norrod, executive vice president and general manager, Data Center Solutions business unit at AMD. "By extending our leadership from silicon to software to full systems, we're giving cloud and AI customers an open, scalable path to deploy AMD performance faster than ever. Our strategic partnership with Sanmina brings U.S.-based manufacturing strength together with AMD AI systems design and enablement expertise to deliver quality, speed and flexibility at scale."

AMD Introduces "New" Ryzen Branding: Ryzen 10 "Zen 2" and Ryzen 100 "Zen 3+" Processors

AMD just pulled another name shuffle, now with a couple of older-generation processors. The company quietly added two "new" processor families, Ryzen 100 and Ryzen 10, to its public price list, even though the dies inside very likely date back to when "Rembrandt" and "Mendocino" first shipped, around 2021-2022. If you see a bargain laptop this holiday season with one of those badges, you might be buying a rebadged Rembrandt or Mendocino chip dressed up for 2025 shelves. Early listings suggest the top Ryzen 100 models are essentially reshuffled "Rembrandt" Zen 3+ parts carrying 8C/16T, a Radeon 680M iGPU, and the FP7-R2 platform, repackaged with a 28 W TDP that vendors commonly tune between 15 W and 30 W.

Further down the stack, the Ryzen 5-badged models shave off cores and clocks but often keep the same 680M iGPU. The Ryzen 10 line looks like a reuse of "Mendocino" Zen 2 silicon for entry-level systems, pairing a 4C/8T CPU with a cut-down 2-CU Radeon 610M and typical 15 W power targets. Many of the refreshed listings still point to PCIe 3.0 as the fastest lane and leave USB4 optional, so don't expect any modern I/O fireworks from these parts. Readers might wonder why AMD is bringing older silicon back. Our best guess is that the company is monetizing existing inventory, wafers, and validated designs produced when 6 nm capacity was limited and costly; having secured that capacity at TSMC, AMD was obliged to use it even after moving on to newer CPU generations.

NVIDIA Prepares New Driver Era for "Rubin" GPUs in 2026

NVIDIA appears to be gearing up software support for "Rubin," the architecture expected to succeed "Blackwell," after a set of driver patches introduced a new identification register called BOOT_42. The initial patches landed on public mailing lists late on a Friday and include a change explicitly titled for next-generation GPUs. In the patch cover letter, NVIDIA engineer John Hubbard explains that architecture and revision metadata will transition from NV_PMC_BOOT_0 to NV_PMC_BOOT_42, with BOOT_0 being cleared. These updates were submitted to the Nova kernel driver tree, which serves as the public platform for this early work. The patches adjust the detection path so the driver can recognize devices from Turing onward using the new register layout.

The submission also adds comments documenting how BOOT_0 and BOOT_42 evolve across GPU generations, and it removes a couple of legacy types, Spec and Revision. According to the notes, this simplification removes complexity and makes future BOOT_42 support updates easier to follow, avoiding generation-specific adjustments when new revisions arrive. The changes confirm that current generations from Turing through Blackwell will continue to use BOOT_0, while future, post-Blackwell chips will be identified solely via BOOT_42. This work is another example of NVIDIA actively preparing driver support ahead of a hardware launch. If Rubin enters volume production in the second half of 2026 as previously reported, having the identification and selection logic ready in advance should help shrink the gap between hardware shipments and software enablement.
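The detection flow the patches describe can be sketched roughly as follows. Everything here, the register offsets, the canned values, and the fake register file, is invented for illustration; it is not NVIDIA's real register map or the Nova driver's actual code, only the fallback logic the cover letter outlines: try the legacy ID register first, and if it reads as zero (cleared on post-Blackwell parts), consult BOOT_42 instead.

```python
# Hypothetical sketch of the described detection path. Offsets and values
# are made up for illustration -- NOT the real NVIDIA register layout.
FAKE_BOOT_0 = 0x000000   # placeholder offset for the legacy ID register
FAKE_BOOT_42 = 0x000042  # placeholder offset for the new ID register

def mmio_read32(offset, regs):
    """Stand-in for an MMIO read against a fake register file (a dict)."""
    return regs.get(offset, 0)

def identify_gpu(regs):
    # Legacy parts (Turing..Blackwell) still populate BOOT_0.
    if mmio_read32(FAKE_BOOT_0, regs) != 0:
        return "legacy path: Turing..Blackwell via BOOT_0"
    # Post-Blackwell parts clear BOOT_0 and report through BOOT_42.
    if mmio_read32(FAKE_BOOT_42, regs) != 0:
        return "new path: post-Blackwell via BOOT_42"
    return "unknown device"

print(identify_gpu({FAKE_BOOT_0: 0x1234_0001}))                 # legacy part
print(identify_gpu({FAKE_BOOT_0: 0, FAKE_BOOT_42: 0x00AB_0001}))  # future part
```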

NVIDIA Orders AIC Partners to Prioritize 16 GB RTX 5060 Ti as Gamers Reject 8 GB Model

NVIDIA's GeForce RTX 5060 Ti is available in 16 GB and 8 GB models, and NVIDIA now appears to be focusing production on the 16 GB version, as most gamers prefer the higher-capacity card. According to Board Channels, NVIDIA has directed its AIC partners to prioritize the 16 GB SKU over the 8 GB model, making the latter a secondary option. The takeaway is that demand for this mid-range GPU is healthy, and that buyers in this segment want to future-proof their systems as modern games keep increasing their VRAM usage.

This comes almost four months after we reported sales figures for the GeForce RTX 5060 Ti 16 GB and 8 GB models from one of Germany's largest retailers, Mindfactory.de, which tags each listing with the number of units sold. By that count, the 16 GB version is outselling the 8 GB version by more than 16 to one. Even with good availability of both SKUs, gamers have consistently opted for the 16 GB card, despite MSRPs only $50 apart: $379.99 for the 8 GB SKU and $429.99 for the 16 GB.

Intel's Advanced Packaging Could Be America's Answer to Silicon Dominance

Intel might hold the key to restoring advanced chip production and packaging on U.S. soil. Under the Trump administration, the United States has been making significant efforts to establish leading-edge chip manufacturing domestically, a goal that is challenging for several reasons. In the race for the most advanced silicon, only a few veteran players remain: Intel, Samsung, and TSMC. Among them, Taiwan-based TSMC has consistently overcome obstacles to lead the semiconductor industry, excelling in both leading-edge node production and the advanced packaging that increasingly determines performance.

Historically, Intel has struggled with leading-edge node production, even outsourcing some chip manufacturing to TSMC. However, there is a significant opportunity for Intel to not only produce silicon but also establish itself as a major packaging partner for many manufacturers, including TSMC. TSMC's facilities in Arizona only partially solve the problem of U.S.-based manufacturing: while Fab 21 produces 4 nm wafers, those wafers must be sent back to Taiwan for packaging, breaking the sovereign supply chain that domestic manufacturing is supposed to guarantee. Closing that gap could be a real opportunity for Intel, even where Team Blue doesn't manufacture the underlying silicon.

Intel "Nova Lake" Xe3P iGPU to Ship in Five SKUs, But Ray Tracing Won't Be Universal

Intel plans to ship "Nova Lake" in a five-SKU lineup that covers desktops and several mobile segments, with the next-generation Xe3P architecture serving as the integrated GPU. The lineup includes "Nova Lake-S" for desktops, "Nova Lake-U" for standard low-power laptops, "Nova Lake-UL" for ultra-low-power devices, "Nova Lake-H" for gaming laptops, and "Nova Lake-HX" for high-performance mobile systems. This range indicates that Intel aims to extend Nova Lake's reach beyond the primarily mobile-focused "Panther Lake" generation, offering clearer product differentiation for OEMs. The company also appears to be enabling advanced GPU features selectively across these SKUs rather than providing a uniform feature set.

Initial driver listings suggest that ray tracing will be available only on select models: Nova Lake-U and Nova Lake-H are expected to support it, while Nova Lake-S, Nova Lake-HX, and Nova Lake-UL may not. This kind of segmentation is a common strategy for differentiating products built on the same silicon, and it will influence purchasing decisions for gamers, creators, and laptop buyers. On the open-source front, preliminary work to integrate Xe3P identifiers and basic support into Linux and Mesa has begun. Patches add Nova Lake PCI IDs and entries for the Iris Gallium3D OpenGL driver and the ANV Vulkan driver, though this support is preparatory and marked experimental. Early enablement suggests we will soon hear much more about these SKUs, and about how Intel plans to split the Xe3P iGPU into ray tracing and non-ray-tracing tiers.

Capacity Constraints Hit Intel as Demand Outstrips Intel 10/7 Node Supply

Intel's Q3 earnings delivered solid numbers, but a few interesting things are happening at Intel's fabs that have nothing to do with leading-edge node production. According to Intel CFO David Zinsner, "In Q3, Intel Foundry delivered Intel 10 and 7 volume above expectations," indicating that demand for products built on these nodes significantly exceeds supply. Zinsner noted, "Capacity constraints, especially on Intel 10 and Intel 7, limited our ability to fully meet demand in Q3 for both data center and client products," pointing to strong demand for Core and Xeon processors built on these nodes.

Intel first introduced the Intel 7 node, previously known as 10 nm Enhanced SuperFin, with the 12th Generation "Alder Lake" family. That generation is now discontinued, but Intel continues to ship 13th Generation "Raptor Lake" and 14th Generation "Raptor Lake Refresh" client CPUs on Intel 7. For Xeon server chips, the company uses Intel 7 for the 4th Generation "Sapphire Rapids" and 5th Generation "Emerald Rapids" Xeon Scalable processors, and it manufactures the I/O die for the latest "Granite Rapids" Xeon 6, whose compute dies are made on Intel 3. High demand from the gaming sector for 13th and 14th Gen CPUs, along with strong Xeon demand, has created a manufacturing bottleneck; Intel will prioritize data center products first and gaming products second.

Intel Reassures 18A Yields Are Okay, Continues 14A Development

Intel has just reported its Q3 earnings, delivering stronger-than-expected results and a major revival of operating income. During the conference call that followed, Intel reassured investors that it is working with good, predictable 18A yields. In addition, the company confirmed that the 14A node is shaping up to be a leading-edge node co-designed with customers. CFO David Zinsner stated: "We are making steady progress on Intel 18A. We are on track to bring Panther Lake to market this year. Intel 18A yields are progressing at a predictable rate, and Fab 52 in Arizona, which is dedicated to high-volume manufacturing, is now fully operational. In addition, we are advancing our work on Intel 14A, and we continue to hit our PDK milestones. Our Intel 18A family is the foundation for at least the next three generations of client and server products."

Adding to that, the CFO commented on 18A yields: "I wouldn't say Intel 18A yields are in a bad place. They're where we want them to be at this point. We had a goal for the end of the year, and they're going to hit that goal. To be fully accretive in terms of the cost structure of Intel 18A, we need the yields to be better. That's like every process. That's what happens. It's going to take all of next year, I think, to really get to a place where that's the case." In other words, 18A defect rates are already low enough that manufacturing even some of the larger dies is neither a problem nor a financial burden. But like every node, it will mature over time, with defect rates steadily falling to lift operating margin and cut wasted dies. Intel is therefore still investing in 18A refinement, and the node will stick around for a long time.

AMD Radeon AI PRO R9700 GPU Arrives October 27 at $1,299 for Retail

AMD has finally set the retail date and price for its professional Radeon AI PRO 9000 series graphics cards, aimed at tasks like professional visualization and at workstations that can accommodate up to four cards. Launching at retail on October 27, the flagship Radeon AI PRO R9700 will arrive at $1,299. Previously, AMD served only the OEM and system-integrator segment, leaving retailers and individuals building their own workstations for AI and professional visualization workloads without options. The company had aimed to reach retail by the end of July, but supply constraints prevented this, as OEMs and system integrators consumed the entire supply.

The Radeon AI PRO R9700 features a 4 nm "Navi 48" GPU with 64 RDNA 4 compute units, providing 4,096 stream processors and 128 AI accelerators for accelerated matrix operations across various data formats. A key improvement over the RX 9070 XT gaming GPU is the memory capacity, doubled to 32 GB. The memory subsystem pairs 20 Gbps GDDR6 with a 256-bit interface for 640 GB/s of bandwidth, backed by a 64 MB 3rd Gen Infinity Cache. The card achieves up to 191 TeraFLOPS of dense FP16 and up to 1,531 TOPS of sparse INT4 performance, all within a power envelope of up to 300 W. It uses a dual-slot design with a blower-style cooler, so dense multi-GPU configurations are easy to build.
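The quoted bandwidth follows directly from the spec-sheet numbers: per-pin data rate times bus width, divided by eight bits per byte. A quick sanity check:

```python
# Sanity-check the quoted memory bandwidth from the spec sheet:
# per-pin data rate (Gbps) x bus width (bits) / 8 bits-per-byte.
data_rate_gbps = 20       # GDDR6 per-pin speed
bus_width_bits = 256      # memory interface width
bandwidth_gb_s = data_rate_gbps * bus_width_bits / 8
print(f"{bandwidth_gb_s:.0f} GB/s")  # prints 640 GB/s, matching the article
```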

Microsoft Clippy Makes a Comeback as Mico AI Assistant

Microsoft is bringing a playful new face to Copilot with Mico, a shape-shifting animated orb that appears when users speak to the assistant. Fans of the long-discontinued Clippy assistant, which debuted nearly 30 years ago, might recognize the spirit, but Microsoft says Mico is meant to be less intrusive and more responsive, reacting to conversations instead of interrupting them. The rollout begins in the United States, the United Kingdom, and Canada, and for a bit of nostalgia there is an Easter egg: tapping Mico repeatedly briefly transforms it into the classic paperclip. Mico is tied to Copilot's new memory features, which let the assistant remember user preferences and past conversations to provide more personalized responses.

Microsoft also introduced Learn Live, a teaching mode that turns Mico into an interactive tutor that walks users through problems step by step with whiteboards and visual aids. The company frames Mico as a persistent companion rather than a tool you summon only occasionally, aiming to give Copilot a consistent identity and presence. The update includes practical improvements beyond the avatar. Copilot can now support group conversations with up to 32 participants, with tools for breaking up tasks and voting on decisions. A feature called Real Talk lets Copilot push back when users make incorrect assumptions, rather than simply agreeing, and Copilot Health draws on reputable sources to answer medical questions and suggest doctors. These changes are part of the Copilot Fall Release, which starts in the United States before expanding to other regions in the coming weeks.

Aggressive Profit Targets Prompt Xbox Layoffs, Price Increases, and Budget Cuts

Inside Microsoft's Xbox gaming division, an aggressive profit push is underway that, by the looks of it, must be accomplished at any cost. According to Bloomberg, Xbox CFO Amy Hood set a target back in the fall of 2023 for the entire Xbox unit to reach a 30% profit margin. Since then, the division has undergone significant restructuring, including price increases, layoffs, project cancellations, and budget cuts, to meet that target. As some readers may recall, Xbox recently revamped its subscription lineup, introducing Essential, Premium, and Ultimate tiers along with a PC-only plan. These plans saw a 50% price increase, leaving many gamers unhappy with the steep rise in their subscription costs.

According to estimates from S&P Global Market Intelligence, the video game industry averages profit margins of 17-22%, a figure that reflects the significant rise in development costs for complex modern games. For example, the upcoming Grand Theft Auto 6 is rumored to have a $2 billion budget, an astonishing sum for game development. A 30% profit margin, which Xbox aims to achieve, is rare in the gaming industry and typically reserved for a few top-tier publishers. As Microsoft's earnings call on October 29 approaches, we expect an update on the Xbox division; to reach a 30% margin, Xbox may lean on more premium hardware, higher-priced software, and cuts to non-essential projects.

Copilot for Gaming Screenshots Your Games, Uploads Them to MS, Enabled by Default

Microsoft is experimenting with Copilot AI integration across various platforms, but this time the company is doing something many users may not be aware of. By default, Gaming Copilot captures users' screenshots, performs OCR on them, and sends the extracted text back to Microsoft's servers to train its large language models, all without informing users of the process. A ResetEra forum member, "RedbullCola," discovered this through network traffic analysis, noticing that the Gaming Copilot app was transmitting data to Microsoft servers without his knowledge. To make matters worse, this happened while he was testing a game under an NDA, raising the concern that the feature could breach agreements between users and game companies by leaking text from unreleased games.

We verified this ourselves and can confirm that the feature is indeed enabled by default. The only training option that is opt-in is model training on your voice chats, which is thankfully disabled by default. To check whether your screenshots are being used for text-model training, open the Game Bar, launch Gaming Copilot, head to Settings, and click Privacy; there you can untick the training toggle to stop your screenshots from being sent to Microsoft for LLM training. Under the GDPR, for example, using EU users' personal data to train AI requires transparent notice. Automatically enabling screenshot collection for model training without an appropriate legal basis or explicit informed consent could breach the law and invite enforcement action from EU regulators.

Samsung Foundry and TSMC Share Tesla AI5 Chip Production

Samsung has secured a significant win for its previously struggling foundry business as Tesla has decided to divide the manufacturing of its new AI5 accelerator between Samsung and TSMC. The chips will be produced at Samsung's plant in Taylor, Texas, and TSMC's facility in Arizona. This decision is part of a supply strategy following Samsung's previous $16.5 billion agreement to manufacture Tesla's upcoming AI6 processor. By using both top-tier foundries, Tesla aims to rapidly increase capacity while maintaining production within the United States. This dual-fab strategy is a pragmatic choice, allowing Tesla to avoid bottlenecks and build excess capacity for both its vehicles and expanding data-center needs. Elon Musk has praised Samsung's foundry capabilities, noting that the company already manages components of Tesla's AI4 platform, making the shared production of AI5 a logical step.

Meanwhile, Samsung is investing in next-generation High-NA EUV equipment and high-precision tooling to close the gap with competitors on advanced nodes. The AI5's design reflects Tesla's desire for tight control over its componentry: engineers have reportedly removed legacy GPU and image-signal blocks to create a leaner chip optimized for real-time driving and AI workloads, reducing die size and improving cost efficiency. Tesla claims the new architecture offers significantly better performance per dollar than its older designs, with company benchmarks indicating substantial improvements over AI4, in some cases up to 40x. Perhaps the dream of Tesla Full Self-Driving (FSD) can finally be accomplished with newer chips and better technology, conditions permitting.

Intel "Nova Lake" Could Arrive Without AVX10, APX, and AMX Support

Intel is still finalizing its next-generation "Nova Lake" specifications, although the core microarchitecture is done, the NPU design is finished and upgraded, and the instruction set is locked in. It could be the case that these consumer CPUs will again lack support for Intel's AVX10, APX, and AMX instruction sets, designed to provide 512-bit acceleration and fast vector/matrix math for tasks like content creation, encoding/decoding, and AI, leaving them exclusive to Intel Xeon processors. The latest GCC compiler patch, the initial Nova Lake enablement patch, does not include AVX10, AMX, or APX, suggesting that support for these x86 extensions could be missing from the next-generation CPU. Intel previously disabled AVX-512 on its "Alder Lake" and "Raptor Lake" client lineups, meaning only server-grade Xeon CPUs got the accelerated 512-bit data paths.

This GCC patch seems to contradict findings from August, when Intel added AVX10.2 support for "future Intel Core processors" to the oneDNN software library, likely targeting the Nova Lake series. We will need further enablement patches or direct confirmation from Intel to know whether AVX10, APX, and AMX make it into the 52-core Nova Lake SKUs; fast vector and matrix acceleration across 52 cores would be ideal for consumers, content creators, gamers, and workstation users alike. By comparison, AMD introduced full AVX-512 support with its "Zen 5" cores across its entire product range, providing a performance boost in optimized applications on both desktop and server CPUs. It was the first time AMD ran AVX-512 at full width rather than splitting 512-bit operations into two 256-bit halves processed over two cycles.
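The full-width versus split-execution distinction can be illustrated with a toy model. This is a conceptual sketch only, not AMD's actual hardware behavior: it treats a "512-bit" vector add over 16 float lanes as either one full-width pass or two 256-bit (8-lane) passes, showing that the results are identical and only the number of passes differs.

```python
# Conceptual illustration (not real microarchitecture): a "512-bit"
# vector add over 16 float lanes, executed either in one full-width
# pass or "double-pumped" as two 256-bit (8-lane) halves.
a = [float(i) for i in range(16)]
b = [1.0] * 16

# One full-width 512-bit operation: all 16 lanes at once.
full_width = [x + y for x, y in zip(a, b)]

# Double-pumped: the same work split into two 8-lane (256-bit) passes.
lo = [x + y for x, y in zip(a[:8], b[:8])]
hi = [x + y for x, y in zip(a[8:], b[8:])]
double_pumped = lo + hi

# The results are identical -- the split path just costs an extra pass,
# which is why full-width execution improves throughput, not correctness.
assert full_width == double_pumped
print("lanes match across both execution strategies")
```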

NVIDIA H100 Makes Its Cosmic Debut as the First AI Accelerator in Space

NVIDIA has decided that covering the Earth with data centers is not enough, so the company now wants to accelerate infrastructure in outer space. According to its latest blog post, NVIDIA will be part of an AI-equipped satellite from Starcloud, powered by a single NVIDIA H100 "Hopper" GPU. After the many months or even years of validation and testing that go into designing a space payload, Starcloud and NVIDIA have designed a 60 kg package the size of a small fridge that will fly aboard a satellite orbiting Earth. It will be more powerful than any current space-based computing infrastructure, offering more than 100x the compute capacity of existing in-orbit accelerators. We still don't know whether this GPU has been radiation-hardened, or whether it has simply been shielded well enough that space radiation is not a problem.

In orbit, power is the easier problem: with solar panels, the hardware can tap abundant energy, letting the H100 run at full throttle on AI workloads while keeping the Starcloud satellite on task. The cooling arrangement is the intriguing part. It is engineered to work without convective cooling, instead relying on radiative heat rejection into the cold expanse of space. Since outer space is essentially a vacuum, there is no air to carry heat away; the module must conduct heat internally from the computing components, such as CPUs and GPUs, to external radiators that emit infrared radiation into deep space. When oriented favorably, with radiators facing away from direct solar irradiation, the module can maintain a lower operating temperature than would be possible on Earth. This is not a "magic" solution, however: the design must carefully balance thermal interfaces, radiator area, emissivity, orientation, and internal heat flow.
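To get a feel for the radiator sizing such a design must balance, the Stefan-Boltzmann law gives the radiated power as P = ε·σ·A·(T⁴ − T_env⁴). The sketch below uses that standard formula; the heat load, emissivity, and panel temperature are illustrative assumptions, not Starcloud's actual design figures.

```python
# Rough radiative-sizing sketch using the Stefan-Boltzmann law:
#   P = emissivity * sigma * A * (T_radiator^4 - T_env^4)
# All inputs below are illustrative assumptions, not Starcloud's design.
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W / (m^2 K^4)

def radiator_area(power_w, emissivity=0.9, t_radiator=320.0, t_env=3.0):
    """Ideal radiator area (m^2) needed to reject power_w watts to space."""
    return power_w / (emissivity * SIGMA * (t_radiator**4 - t_env**4))

# Assume the module must dump ~1 kW (GPU + host + losses) from a 320 K panel
# facing deep space (~3 K background).
print(f"~{radiator_area(1000):.1f} m^2 of ideal radiator")  # prints ~1.9 m^2
```

Real designs need more area than this ideal figure, since radiators see some solar and Earth-shine loading and conduction paths add temperature drops, which is exactly the balancing act described above.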

Intel "Nova Lake" Processor to Feature 6th Generation NPU, Generation Ahead of "Panther Lake"

Intel's next-generation "Nova Lake" desktop processors will feature 6th generation Neural Processing Unit (NPU) design, which is an entire generation ahead of the 5th generation NPU Intel introduced with "Panther Lake." As the latest Linux patch from Intel's engineers confirms, Nova Lake will already feature an upgraded NPU, making the 5th generation NPU in Panther Lake lasting only a single generation.

The patch states: "Add support for NPU6 generation that will be present on Nova Lake CPUs. As with previous generations, it maintains compatibility so no bigger functional changes." As a reminder, the 5th generation NPU in Panther Lake delivers up to 50 TOPS, qualifying it for Copilot+ AI PCs, so we can expect Nova Lake's NPU to get a bigger TOPS budget with the same software compatibility. That is great news for software that adopted NPU acceleration in previous generations, since the extra performance should carry over without friction.

Intel's "Jaguar Shores" AI Accelerator Expected to Finalize Design by Mid-2026

Intel is reportedly in talks with Taiwanese ASIC design house Alchip over the development of its next AI accelerator family, codenamed "Jaguar Shores." The design could be wrapped up by the first half of 2026 and give Intel its first solid footing in data-center AI hardware after years of playing catch-up. It also comes as the company shifts to a faster, more regular cadence for AI product refreshes. If the timeline holds, Jaguar Shores will be one of the clearest tests yet of whether Team Blue can mount a meaningful challenge to well-established players like NVIDIA, AMD, and the many other ASIC designers.

Under the deal being discussed, Alchip would not simply be a contractor but a deep engineering partner, taking on tasks from RTL integration and extensive verification to advanced packaging and system-level validation, while leaving wafer fabrication to leading foundries. Intel would supply the IP and system requirements, while Alchip would handle custom ASIC execution, which could shorten development cycles and reduce risk. For Intel, outsourcing those heavy-lift engineering stages makes sense: it lets the company focus on architecture and software while leveraging Alchip's experience taking complex AI designs from spec to silicon.

ChatGPT Atlas Launches as an AI-Powered Alternative to Regular Browsers

OpenAI has introduced a new web browser called ChatGPT Atlas, designed to serve as the ultimate AI experience center. This browser is a fork of Google's open-source Chromium and aims to compete with Google Chrome, Microsoft Edge, and Firefox. While challenging the dominant trio in the browser space is difficult for most companies, OpenAI is determined to make its mark. Initially launching on macOS, Atlas opens searches with answers generated by ChatGPT, while still providing quick access to traditional search results, images, and other views. A persistent Ask ChatGPT sidebar allows users to interact with the assistant without leaving the page. Advanced agent features, which can perform web tasks, are initially available to paid users, though the basic browsing experience is accessible to everyone from the start.

Atlas is designed to streamline the transfer of information between tabs and chat windows. The assistant can summarize pages, compare products, answer questions about on-screen content, and assist with editing or checking code by having contextual access to the active tab. The browser also includes a memory function to tailor responses over time, a Cursor Chat tool for inline editing, and an Operator agent that can handle small tasks like making reservations, creating shopping lists from recipes, and filling out forms. Familiar browser tools such as tabs, bookmarks, history, and password management are also included, presented in a straightforward layout.

AMD Could Launch 16-Core Ryzen 9 9950X3D2 CPU with 192 MB L3 Cache and 200 W TDP

AMD seems to be preparing an update to its "Granite Ridge" Ryzen 9000 X3D series, introducing two new chips targeted at high-performance gamers and creators. Among the two, a new SKU type has appeared, carrying the "X3D2" suffix. Leaks suggest a new flagship model, the Ryzen 9 9950X3D2, featuring 16 cores and 32 threads. This chip reportedly includes 3D V-Cache on both chiplets, totaling approximately 192 MB of L3 cache. It is said to operate at a base frequency of 4.30 GHz, with a boost up to around 5.6 GHz, and has a TDP of about 200 W. The 9950X3D2, with its new naming convention, suggests a tradeoff of slightly lower peak boost speeds (-100 MHz) compared to the single-stack X3D Ryzen 9 9950X3D model, but offers significantly increased cache capacity. This combination could be a sweet spot for cache-sensitive workloads and certain games.
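The leaked cache figures are consistent with simple arithmetic over the Zen 5 chiplet layout: each eight-core CCD carries 32 MB of native L3, a 3D V-Cache stack adds 64 MB, and the dual-stack part doubles the 96 MB quoted for the eight-core X3D models. A quick sketch of that math (constants reflect the known Zen 5 CCD configuration):

```python
NATIVE_L3_MB = 32   # native L3 per Zen 5 eight-core CCD
VCACHE_MB = 64      # one 3D V-Cache stack adds this much per CCD

def total_l3_mb(ccds, stacked_ccds):
    # Every CCD contributes its native L3; only CCDs that carry a
    # V-Cache stack add the extra 64 MB.
    return ccds * NATIVE_L3_MB + stacked_ccds * VCACHE_MB

# Ryzen 7 9850X3D:  one CCD, one stack    -> 96 MB
# Ryzen 9 9950X3D:  two CCDs, one stack   -> 128 MB
# Ryzen 9 9950X3D2: two CCDs, two stacks  -> 192 MB
```

This is why the 9950X3D2's "approximately 192 MB" works out exactly: 2 × (32 + 64) MB.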

The expanded cache and increased thermal headroom may necessitate more attention to cooling and VRM design for system builders, potentially leading AMD to price these chips at a premium or adjust current MSRPs. A companion model, the Ryzen 7 9850X3D, is expected to follow a similar design with eight cores, 96 MB of L3 cache, and a boost up to 5.6 GHz, with a 120 W power envelope. The 9850X3D appears to be a refined, higher-clocked (+400 MHz) alternative to existing 8-core Ryzen 7 9800X3D models, offering improved single-thread performance that is highly valued by gamers, while maintaining a relatively modest TDP.

Microsoft Readies Windows-on-Arm for Gaming With AVX/AVX2 Support

With the release of the October 2025 cumulative update (KB5066835) for Windows 11 versions 24H2 and 25H2, Microsoft has expanded Prism emulation for Arm64 devices. This update enables Windows to emulate x86 vector instructions such as AVX and AVX2 on Arm PCs. As a result, many apps and games that previously couldn't run on Arm hardware can now at least launch. Intel's AVX is a set of SIMD instructions commonly used by media tools, game engines, and creative applications to speed up tasks like video encoding, physics, and effects. Arm-based chips like Snapdragon do not natively support Intel's proprietary AVX instructions, which often led to failures or poor performance in these programs. Prism's emulation translates AVX instructions, allowing these programs to run on Arm devices. However, because it is emulation, there is additional CPU overhead, and performance will vary significantly depending on the application.
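The "failure to launch" behavior comes from applications gating on CPU capabilities at startup. A minimal, Linux-only sketch of that kind of check is below; real applications usually query `cpuid` directly or use compiler builtins, and the helper names here are illustrative:

```python
def cpu_flags(path="/proc/cpuinfo"):
    # Linux lists x86 ISA extensions on a "flags" line ("Features" on Arm).
    try:
        with open(path) as f:
            for line in f:
                if line.startswith(("flags", "Features")):
                    return set(line.split(":", 1)[1].split())
    except OSError:
        pass
    return set()

def supports_avx2():
    # An app gating on this would refuse to start on a CPU (or an
    # emulator) that does not advertise AVX2 support.
    return "avx2" in cpu_flags()
```

Prism's expanded emulation effectively makes such checks pass on Arm by exposing the AVX/AVX2 feature bits and translating the instructions at run time.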

Arm, on the other hand, offers the Scalable Vector Extension (SVE), which enhances vector processing capabilities in the Armv8-A architecture, with SVE2 being its successor in Armv9-A. Unlike traditional SIMD designs with fixed-size vectors, SVE supports flexible vector lengths from 128 to 2048 bits in 128-bit increments. This flexibility allows chip manufacturers to select the vector size that best suits their processors. The main advantage is that programs written for SVE can run on any SVE-compatible processor without recompilation, regardless of the processor's vector size.
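SVE's vector-length-agnostic model can be illustrated without intrinsics: the loop below processes data in chunks of whatever "hardware" vector length it is handed, predicating the final partial chunk, and produces an identical result for any length. The function is purely illustrative, not real SVE code:

```python
def vla_add(a, b, vl):
    # Element-wise add in the style of an SVE loop: `vl` plays the role
    # of the hardware vector length, discovered at run time rather than
    # baked into the binary.
    assert vl > 0 and len(a) == len(b)
    out = [0] * len(a)
    i = 0
    while i < len(a):
        # Predicate the tail: only lanes i..end-1 are active, the
        # software analogue of SVE's `whilelt` predication.
        end = min(i + vl, len(a))
        for j in range(i, end):
            out[j] = a[j] + b[j]
        i += vl
    return out
```

Because the result is the same whether `vl` is 4 or 16, the same "binary" runs correctly on a 128-bit or a 512-bit implementation, which is exactly the portability guarantee SVE makes.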

NVIDIA's Arizona-Made "Blackwell" GPUs Must Travel Back to Taiwan for Packaging

Just three days ago, NVIDIA and TSMC celebrated the production of the first "Blackwell" GPU at TSMC's Fab 21 complex in Phoenix, Arizona. This marked a significant milestone for both companies: the world's most powerful silicon is now made on U.S. soil. However, wafer fabrication is only part of the story, as the chip depends not just on an advanced node but also on TSMC's advanced packaging. Specifically, the Blackwell family of GPUs utilizes TSMC's Chip on Wafer on Substrate (CoWoS) packaging system.

For CoWoS packaging, the finished dies must travel back to TSMC's facilities in Taiwan, where the raw die is housed in a CoWoS-S configuration with a silicon interposer. This creates a supply chain dependency on Taiwanese facilities, which remain essential for completing the production of advanced silicon. As a result, the chip isn't entirely manufactured on American soil, meaning the supply chain is not yet fully domestic. TSMC produces NVIDIA's Blackwell chips using 4N, a custom 4 nm-class node designed specifically for NVIDIA. The die is paired with eight stacks of HBM3E, all housed within a CoWoS-S package.

Intel Core Ultra X7 358H "Panther Lake" 12-Core Xe3 iGPU Tested

Intel's latest Core Ultra X7 358H CPU has appeared in recent Geekbench tests, marking the debut of the "Panther Lake" series with 12 Xe3 cores in online benchmark databases. Notably, this is the first instance of the Xe3 GPU in its complete integrated form. This lineup features the fastest memory subsystem among all Panther Lake models, with a speed of 9,600 MT/s, and a CPU architecture that includes four P-cores, eight E-cores, and four LPE-cores, totaling 16 CPU cores running at a max frequency of 4,776 MHz. While this SKU is most likely an engineering sample, it should be close to the final clocks Intel plans to ship. In OpenCL testing, the Panther Lake iGPU scored 52,014 points, which is not particularly high. This score places it in the range of NVIDIA's GeForce RTX 3050 discrete desktop GPU.

Historically, Intel's integrated GPUs tend to score lower in Geekbench's OpenCL suite, where AMD and NVIDIA usually perform better, so this result should be interpreted with caution. Comparing Intel's iGPU performance across generations in this benchmark, from the "Arrow Lake-H" Arc 140T iGPU to the 12-core Xe3 GPU, there is roughly a 25% improvement in favor of the newer architecture. Interestingly, the Panther Lake iGPU performs close to the Arc A550M discrete GPU, which features 16 cores on the first-generation Xe "Alchemist" microarchitecture with a 60-80 W TDP and 8 GB of GDDR6. This suggests that the Geekbench OpenCL result does not yet reflect fully optimized drivers. Before the official release, Intel is expected to update its drivers so the GPU can deliver maximum performance.

NVIDIA RTX PRO 5000 "Blackwell" GPU with 72 GB GDDR7 Memory Appears

NVIDIA's RTX PRO 5000 "Blackwell" GPU has officially appeared on the company's product page with 72 GB of GDDR7 ECC memory. Initially, NVIDIA launched the RTX PRO 5000 Blackwell with 48 GB of GDDR7 spread across 24 2 GB modules mounted on both sides of the PCB. NVIDIA has now updated the card with the new 3 GB memory modules, raising the total capacity to 72 GB on a 384-bit bus with 1,344 GB/s of memory bandwidth and making the card a much more attractive option for professionals in data science, AI, HPC, and fields such as professional video editing. However, this 24-module design, with 12 GDDR7 modules on each side, is still less dense than the RTX PRO 6000 Blackwell, which arranges 16 such modules on each side of the PCB to achieve a remarkable 96 GB capacity.
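The capacity and bandwidth figures fall straight out of the module math, assuming a 384-bit interface at 28 Gbps per pin, the combination that reproduces the quoted 1,344 GB/s:

```python
def capacity_gb(modules, density_gb):
    # Total VRAM is just module count times per-module density.
    return modules * density_gb

def bandwidth_gb_s(bus_bits, gbps_per_pin):
    # Bytes per second = (bus width in bits / 8) * per-pin data rate.
    return bus_bits // 8 * gbps_per_pin

# 24 modules x 2 GB (original RTX PRO 5000, clamshell) -> 48 GB
# 24 modules x 3 GB (refreshed RTX PRO 5000)           -> 72 GB
# 32 modules x 3 GB (RTX PRO 6000, 16 per side)        -> 96 GB
```

Note that swapping 2 GB modules for 3 GB ones changes only capacity; with the same bus width and data rate, bandwidth stays put.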

As a reminder, the RTX PRO 5000 GPU is based on the GB202 GPU and carries 14,080 CUDA cores, 440 TMUs, and 176 ROPs. Upgraded NVENC/NVDEC blocks speed up high-quality encode/decode for live production and fast video editing. Multi-Instance GPU (MIG) gives IT and cloud/VDI admins an easy way to split a GPU into isolated instances, so more users get guaranteed, accelerated performance. NVIDIA has wrapped all of this in a 300 W TDP in a dual-slot GPU, with a blower fan. Pricing structure is still unknown, but given that its older sibling, the RTX PRO 6000 Blackwell, is retailing just under $10,000, we could see the 72 GB version just a grand or two lower. Additionally, as we see NVIDIA incorporating 3 GB GDDR7 memory modules inside more products, the company could be gearing up for a "SUPER" overhaul of the Blackwell gaming lineup.

AWS Outage Takes Half of the Internet Down, Services Now Recovering

An Amazon Web Services (AWS) outage has just taken down half of the known internet, though services are now recovering. Just as Monday began in Pacific Daylight Time (PDT), AWS reported incidents of multiple AWS services being down, with a terse confirmation that "We are investigating increased error rates and latencies for multiple AWS services in the US-EAST-1 Region." Online services like Amazon.com, Alexa, ChatGPT, the Epic Games Store, and Epic Online Services, social media like Snapchat, and games like Fortnite have been down since.

A few hours later, AWS confirmed that it had isolated the issue. "We have identified a potential root cause for error rates affecting the DynamoDB APIs in the US-EAST-1 Region," they stated, adding, "This issue also impacts other AWS services in the US-EAST-1 Region. Global services or features that rely on US-EAST-1 endpoints, such as IAM updates and DynamoDB Global tables, may also be experiencing issues. During this time, customers may be unable to create or update Support Cases. We recommend customers continue to retry any failed requests." Subsequently, AWS confirmed that the "underlying DNS issue has been fully mitigated, and most AWS service operations are now succeeding normally."
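AWS's advice to "continue to retry any failed requests" is conventionally implemented as exponential backoff with jitter, so that millions of clients retrying in lockstep don't prolong the outage. A generic sketch follows; the function names are hypothetical, and in practice boto3 and the other AWS SDKs provide this behavior through their built-in retry configuration:

```python
import random
import time

def call_with_backoff(op, attempts=5, base=0.5, cap=30.0):
    # Retry op() with exponentially growing, jittered delays: before
    # attempt n+1, sleep a random time in [0, min(cap, base * 2**n)].
    for attempt in range(attempts):
        try:
            return op()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the last error
            time.sleep(random.uniform(0, min(cap, base * 2 ** attempt)))
```

The full jitter (a random delay up to the exponential cap, rather than the cap itself) is what spreads retries out over time instead of creating synchronized waves of traffic.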

Update 19:45 UTC: AWS is still experiencing some disruption, but full recovery is in progress. The team stated: "We continue to observe recovery across all AWS services, and instance launches are succeeding across multiple Availability Zones in the US-EAST-1 Regions."

NVIDIA "Vera Rubin" NVL144 Servers Set for 2026 Volume Production

Foxconn has begun engineering validation for NVIDIA's "Vera Rubin" NVL144 MGX liquid-cooled racks, with plans for mass production in the latter half of 2026, according to internal supply-chain documents obtained by Taiwan Economics Daily. Each NVL144 unit features 144 Rubin GPUs with HBM4, updated 88-core Vera CPUs, ConnectX-9 NICs, and NVLink 6 interconnects for CPU-to-GPU and GPU-to-GPU communication, effectively doubling the compute density of the current NVL72 platform. The Taiwanese contractor currently handles about 60% of NVIDIA's AI server output and is expanding its immersion-cooling and copper-plating lines at its Wisconsin and Houston facilities to meet new domestic manufacturing requirements.

The gap between GB300 peak shipments and Rubin's first silicon will be six to eight months, the shortest generational gap in NVIDIA's history. Despite the accelerated roadmap, Blackwell Ultra remains NVIDIA's main revenue source through mid-2026. Procurement data shows that cloud providers have reserved around 400,000 GB300 nodes for delivery in late 2025 and early 2026, indicating strong near-term demand. At the same time, OpenAI has placed a large preorder for Vera Rubin silicon, securing initial production volumes and supporting Foxconn's capacity investments. The rapid transition from Hopper to Blackwell and then to Rubin within 30 months marks the fastest acceleration of datacenter equipment cycles, leading hyperscale operators to speed up depreciation schedules.

Windows 11 25H2 October Update Bug Renders Recovery Environment Unusable

It seems like Microsoft has encountered a significant issue with the latest Windows 11 25H2 October update, KB5066835. The company confirmed that this update disrupts mouse and keyboard functionality within the Windows Recovery Environment (WinRE), making them unresponsive and unusable. As a result, the WinRE feature is completely inoperative. WinRE is a built-in troubleshooting toolkit included with Windows. It's intended to assist users when their computer encounters startup problems or system issues. WinRE activates automatically when Windows crashes or fails to boot properly, but users can also access it manually to utilize various repair tools.

However, with the current problem affecting keyboard and mouse input, WinRE is essentially ineffective. Microsoft stated that "the USB keyboard and mouse continue to work normally within the Windows operating system," and assured users that they are "working to release a solution to resolve this issue in the coming days. More information will be provided when it is available." This is yet another incident related to the recent Windows 11 updates, which have previously caused localhost issues. The list of Windows 11 problems continues to grow as the latest updates are released. Microsoft maintains a Windows 11 version 25H2 known issues and notifications website that provides status updates on the latest problems, including this one.

Update October 21 16:05 UTC: Microsoft has issued a hotfix. The company notes that "this out-of-band (OOB) update includes quality improvements. This update is cumulative and includes security fixes and improvements from the October 14, 2025, security update (KB5066835)..."