Intel Arc Pro B70 32 GB GPU Tested In Games – Up To 40% Faster In Raster & 65% Faster In RT Versus B580, Trades Blows With 5060 Ti 16GB

Intel's recently released Arc Pro B70 32 GB GPU has been tested in games, outperforming the Arc B580 and trading blows with the RTX 5060 Ti.

Intel Arc Pro B70 GPU Gaming Benchmarks Give Us A Hint Of What The Arc B770 "Big Battlemage" GPU Could've Offered To Gamers

Back in March, Intel launched its Arc Pro B70 graphics card based on the Battlemage BMG-G31 GPU, which we all know as Big Battlemage. This bigger Battlemage chip was long awaited in the gaming segment, but Intel decided to focus its efforts on the AI market, equipping the card with a large […]

Read full article at https://wccftech.com/intel-arc-pro-b70-32-gb-gpu-tested-in-games-65-percent-faster-vs-b580-trades-blows-with-5060-ti/

xAI Is Reportedly Using Just 11% of Its 550,000 NVIDIA GPUs, While Meta and Google Squeeze Out 43-46% From Their Fleets

xAI is reportedly able to utilize just over 10% of its entire NVIDIA GPU fleet, as a report suggests lackluster AI software stack optimizations.

AI Software Stack Bottlenecks Are An Industry-Wide Problem, As xAI Is Only Able To Utilize 11% Of Its Entire NVIDIA GPU Installation

The Information has reported that Elon Musk's xAI, the firm behind Grok and other key AI-based components, is only able to utilize a small chunk of its total installed GPU capacity. Currently, xAI runs around 550,000 NVIDIA GPUs, which are a combination of H100s and H200s. These are deployed within xAI's Memphis and Colossus […]

Read full article at https://wccftech.com/xai-using-just-11-percent-gpus-while-meta-google-squeeze-out-much-more/

Micron CEO Warns AI Is Only in the 'First Innings' as Memory Supply Tightens, With DRAM and NAND Demand Set to Exceed 50% of Industry TAM

Micron posted a record Q2 as DRAM demand increases, but its CEO says this is just the beginning, as more memory is required for AI to reach its full capabilities.

Micron CEO Sees Demand For Faster Memory To Increase Massively For AI To Reach Its Full Potential

Memory and storage maker Micron has seen exceptional growth across all of its businesses, which include DRAM, NAND, and HBM. The growth comes from skyrocketing demand for its products as the Agentic AI craze continues to lift every memory and storage firm. During a conversation with CNBC, Micron's CEO said that what […]

Read full article at https://wccftech.com/micron-ceo-warns-ai-is-only-in-the-first-innings-memory-supply-tightens-dram-nand-demand/

Ryzen 9 PRO 9965X3D CPU Leak Points To AMD's First 16-Core "PRO" CPU With 3D V-Cache

AMD seems to be preparing its first PRO CPU with 16 "Zen 5" cores & 3D V-Cache, the Ryzen 9 PRO 9965X3D.

AMD's Ryzen PRO Lineup Used To Max Out At 12 Cores, But That Changes With The Upcoming 16-Core Ryzen 9 PRO 9965X3D

AMD Ryzen PRO desktop CPUs are designed for professionals, content creators, and AI users. These chips come with advanced security, manageability, and stability versus the standard chips. For a long time, AMD's Ryzen PRO family has featured up to 12-core models, but it looks like the company is about to release its fastest chip to date. On […]

Read full article at https://wccftech.com/ryzen-9-pro-9965x3d-cpu-leak-amd-first-16-core-pro-cpu-with-3d-v-cache/

From Toilets to AI Chips: Toto Joins MSG Maker Ajinomoto as Japan's Strangest Beneficiaries of the AI Spending Frenzy

Well, isn't that unexpected? A Japanese toilet maker has just announced that it will pivot to AI chips, leading to an 18% stock surge.

Ceramics Used For Making Toilets Are Now Going To Power The AI Chip Industry

Who would've predicted that a company involved in making toilets and other sanitary equipment would start making AI chips? Well, Toto, a Japanese company, has just announced exactly that and is seeing a boost to its stock. The company leverages ceramics for its current operations, and the same ceramics can be used for making components that are critical to AI infrastructure. As per […]

Read full article at https://wccftech.com/from-toilets-to-ai-chips-toto-joins-msg-maker-ajinomoto-as-japans-strangest-beneficiaries-of-the-ai-spending-frenzy/

QNAP Pairs a 6-Year-Old Zen 2 EPYC With NVIDIA's 96GB RTX PRO 6000 Blackwell in Its New Edge AI NAS

QNAP has introduced its new AI NAS, which packs a 16-core AMD EPYC "Zen 2" CPU and can be paired with up to an RTX PRO 6000 Blackwell GPU.

Old Meets New In QNAP's "QAI-h1290FX" AI NAS: 16 Zen 2 Cores & A 96 GB RTX PRO 6000 GPU

The "QAI-h1290FX" is QNAP's latest Edge AI offering that combines two distinct hardware components. But before we get into the details, it should be mentioned that QNAP's latest AI NAS is designed for LLM, RAG, and various GenAI applications. Two components power the server; the first is the AMD EPYC 7302P CPU, […]

Read full article at https://wccftech.com/qnap-pairs-6-year-old-zen-2-epyc-with-nvidia-96gb-rtx-pro-6000-blackwell-in-new-edge-ai-nas/

Sub-1nm Process Technology Won't Arrive Till 2034, IMEC Logic Roadmap Highlights 2D FETs For 0.2nm & Sub-0.2nm Nodes By 2043-2046

Moore's Law has slowed down, but progress continues in logic development, as a new roadmap points to sub-1nm process nodes around 2034.

It Will Be Years Before Process Technologies Go Sub-1nm, But They Are In Development: 0.7nm By 2034 & <0.2nm By 2046

Process technologies have slowed down as we transition into the Angstrom era. While newer nodes continue to offer uplifts, they are getting expensive to produce as the machinery needed to achieve newer designs comes at higher costs. Furthermore, the reliance on chiplets through advanced packaging solutions has reduced the need to shift to newer nodes immediately, as […]

Read full article at https://wccftech.com/sub-1nm-process-node-technology-wont-arrive-till-2034-logic-roadmap-2dfets-sub-0-2nm-2046/

TSMC's A16 '1.6nm' Node Promises 10% Speed Boost or 20% Power Cut Over 2nm, With Backside Power Hitting Production by Q4 2026

TSMC's next-generation A16 or 1.6nm process tech will be the start of its "Angstrom era" journey, delivering improved performance/power profiles versus 2nm.

Angstrom Era For TSMC - A16 Process Tech Offers Better Performance/Power Profiles Versus 2nm While Adding Backside Power

At the 2026 VLSI Symposium, TSMC will be presenting its A16 process node technology. A16 is part of TSMC's Angstrom-era family of nodes, which includes A14 and the recently announced A13 & A12. In a preview provided by VLSI of the upcoming paper, titled "T1.5", TSMC restates the performance/power profile advantages. One of the biggest features of A16 will […]

Read full article at https://wccftech.com/tsmc-a16-node-promises-speed-boost-power-cut-over-2nm-backside-power-production-q4-2026/

Tenstorrent Vows to 'Crush Everyone' as Galaxy Blackhole Hits 350 Tokens/s on DeepSeek R1, Undercutting NVIDIA's GB300 AI TCO 5x

Tenstorrent made a bold claim during its TT-Deploy livestream, saying it is going to crush everyone at everything, including AI, with its Galaxy servers.

Tenstorrent Galaxy Supercluster Offers 10x Faster GenAI Video And Destroys Current-Gen GPUs With "Blitz Mode", Offering 350+ Tokens/s In DeepSeek R1

Jim Keller and his Tenstorrent are on a mission to challenge the existing AI hierarchy with their RISC-V-powered platforms. As such, the company unveiled its latest Galaxy Blackhole servers for AI at scale. With Galaxy Blackhole, Tenstorrent offers a fully networked, native AI solution that includes compute, memory, and networking, all unified into a […]

Read full article at https://wccftech.com/tenstorrent-vows-to-crush-everyone-galaxy-blackhole-hits-350-tokens-on-deepseek-r1-undercut-nvidia-gb300-ai-tco/

Agentic AI Pushes CPUs to Pack 400 GB of Memory, 4x More Than Today, as DRAM Shortage Spirals Toward 2027

CPUs and GPUs alike require lots of memory for running Agentic AI, and this demand is spiraling to unseen levels as DRAM constraints persist.

CPUs Running Agentic AI Will Be Equipped With Up To 400 GB Of Memory, Further Crushing The DRAM Supply Chain

Memory makers are earning big profits but are also unable to meet the demand. We have seen reports on how major manufacturers are rapidly expanding their production facilities, but these are yet to become operational, and Samsung itself has stated that 2027 will be worse for the DRAM industry than 2026, so it's looking like a […]

Read full article at https://wccftech.com/agentic-ai-pushes-cpus-to-pack-400-gb-of-memory-4x-more-than-today/

Intel & AMD Work On APX, The Next Major Step In The Evolution of x86 Architectures, Adds More Performance Without Requiring More Die Area & Power

APX, or Advanced Performance Extensions, is the next evolution of x86 as Intel & AMD co-develop new standards for the architecture.

APX Expands The x86 Instruction Set, Bringing Faster Performance & New Features That Will Benefit Both Intel And AMD's Next-Gen Chips

Two days ago, we talked about ACE (AI Compute Extensions), which is a unified instruction set that aims to increase matrix-multiply performance for next-gen x86 chips. ACE is just one part of the grander scheme in which both Intel and AMD are working together to evolve the x86 architecture under a single unified framework through the recently established […]

Read full article at https://wccftech.com/intel-amd-work-on-apx-the-next-major-step-in-the-evolution-of-x86-architectures/

AMD Aims Ryzen AI Halo Mini PC at NVIDIA's $4,699 DGX Spark, Targets June Launch With Ryzen AI MAX+ 395

AMD's powerful Ryzen AI Halo Mini PC is reportedly launching in June and features the top Ryzen AI MAX+ 395 SoC.

AMD's Powerful & Compact "Ryzen AI Halo" Mini PC Is Expected To Launch Next Month

AMD recently hosted its AI Dev Day in San Francisco, where it once again showcased the Ryzen AI Halo Mini PC. A Reddit user, 1ncehost, has posted pictures from the event where Jack Huynh was holding the Mini PC, and based on the information, AMD is expected to launch the Mini PC in June, which is next month. The company didn't state any […]

Read full article at https://wccftech.com/amd-aims-ryzen-ai-halo-mini-pc-at-nvidia-dgx-spark-targets-june-launch/

NVIDIA Discontinues Older Jetson Modules With LPDDR4 Memory As DRAM Prices Climb & Supply Worsens

NVIDIA is discontinuing its older Jetson developer modules due to shortages and rising prices of LPDDR4 memory.

Higher LPDDR4 Prices & Memory Shortages Affect SBCs Too, As Older NVIDIA Jetson Modules Are Now Being Phased Out

NVIDIA's Jetson modules are embedded platforms that are designed for robotics and Edge AI workloads. Think of them as NVIDIA's Raspberry Pi solutions. These SBCs, or single-board computers, come in various shapes and sizes, all featuring a compact form factor. But in light of recent memory shortages & increasing prices, NVIDIA's partners are now phasing out older Jetson modules. The models that have been affected […]

Read full article at https://wccftech.com/nvidia-discontinues-old-jetson-modules-with-lpddr4-as-memory-prices-supply-worsen/

JEDEC Pushes DDR5 MRDIMM Memory to 12,800 MT/s, a 45% Jump Over Gen1 as AI Datacenters Starve for Bandwidth

JEDEC continues the development of DDR5 MRDIMM memory for next-gen datacenters, now offering increased bandwidth.

MRDIMM DDR5 Memory Is Designed To Meet The Growing Bandwidth & Capacity Demands Of AI Datacenters, And JEDEC Just Unleashed Its Fastest Design Yet

Two years ago, the first DDR5 MRDIMM memory was announced, offering up to 256 GB capacities per module and 8800 MT/s speeds. Now, as AI & datacenter requirements continue to grow, JEDEC is advancing its MRDIMM roadmap with faster modules that operate at speeds of up to 12,800 MT/s, marking a 45% uplift over the initial design. Press Release: JEDEC […]
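The quoted uplift follows directly from the two data rates above; a quick sanity check of the arithmetic (nothing here is from JEDEC beyond the two MT/s figures):

```python
# Gen1 DDR5 MRDIMM top speed vs. the new JEDEC target, in MT/s
gen1_speed = 8800
gen2_speed = 12800

# Relative uplift of the new design over the first generation
uplift_pct = (gen2_speed - gen1_speed) / gen1_speed * 100
print(f"{uplift_pct:.1f}% faster")  # -> 45.5% faster, i.e. the ~45% quoted
```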

Read full article at https://wccftech.com/jedec-pushes-ddr5-mrdimm-memory-to-12800-mtps-big-bandwidth-boost-for-ai-datacenters/

Huawei Is The Biggest Winner In China's AI Market After NVIDIA Pullout, AI Share To Reach 60% This Year

NVIDIA pulling out of China's AI market has boosted the share of domestic firms, with Huawei winning the biggest chunk.

Huawei's China AI Market Share To Reach 60% As NVIDIA CEO Confirms Zero Chip Share In China After US Policy Shift

The US Government has moved to ban all leading-edge AI chip sales in China. NVIDIA, being the biggest name in the AI industry, has seen its share drop to zero after the policy shift, prompting an increased reliance on domestically produced chips in China. Currently, the situation has prompted China's AI chipmakers to double down on production and […]

Read full article at https://wccftech.com/huawei-biggest-winner-in-china-ai-market-after-nvidia-pullout-60-percent-ai-share-2026/