NVIDIA's Nemotron 3 Super Tops The Open-Source AI Model Chart, Beating DeepSeek & GPT-OSS
NVIDIA's open-source "Nemotron 3 Super" AI model has topped the EnterpriseOps-Gym leaderboard, showcasing NVIDIA's software prowess.

NVIDIA Is Topping Both AI Hardware And Software Leaderboards With Its Open-Source Nemotron 3 Super, Leading The Pack

In March this year, NVIDIA introduced its Nemotron 3 Super, a 120B AI model with 12B active parameters. Based on a hybrid MoE architecture, the model is designed to deliver 5x the throughput of the previous Nemotron Super model, and it tackles large contexts with a native 1M-token context window that gives agents long-term memory for aligned, high-accuracy reasoning. Some of the highlights of NVIDIA's Nemotron […]
Read full article at https://wccftech.com/nvidia-nemotron-3-super-tops-the-open-source-ai-model-chart-beating-deepseek-gpt-oss/

NVIDIA Feynman GPUs Push Power Semi Content To $191,000, 17 Times Increase Over Blackwell As Industry Embraces 800V DC Architectures
As compute requirements grow in AI data centers, so do power requirements, with power semiconductor content per rack estimated to reach 17x that of Blackwell with NVIDIA's Feynman.

NVIDIA Feynman Racks Estimated To Feature 17x Higher Power Semi Costs Per Rack Versus Blackwell

NVIDIA's Feynman GPUs pack several groundbreaking features and will launch in 2028, after Rubin. The company has been working hard to deliver more efficient AI solutions, but as compute demands grow, power requirements have increased tremendously. Morgan Stanley Research has published a chart that visualizes the total power semi content of three AI rack solutions from NVIDIA. Starting with the baseline Blackwell or B200, […]
Read full article at https://wccftech.com/nvidia-feynman-gpus-push-power-semi-content-17-times-higher-vs-blackwell/

AMD's Server CPU Revenue Set To Surge 80% This Year, With An Estimated 1.9M AI GPUs Shipping By 2027
AMD's immense success with its CPUs and GPUs in the AI segment will be a key driver for the company in the current year and the years ahead.

AMD Is Expected To Ship Nearly 1.9 Million AI GPUs By 2027, While Its Share In China Will Be Larger Than NVIDIA's

Intel's momentum this quarter, driven by Agentic AI's reliance on CPUs, has given us a teaser of what to expect from AMD's upcoming earnings. As per UBS, AMD isn't as constrained as Intel in the CPU segment and is anticipated to grow its server CPU revenue by 80% this […]
Read full article at https://wccftech.com/amd-server-cpu-revenue-to-surge-80-percent-2026-estimated-1-9m-ai-gpus-shipping-by-2027/

ASUS Starves the RTX 5070 Ti as Memory Shortages Force a Pivot Toward the More Profitable RTX 5080
ASUS is planning to limit production of its RTX 5070 Ti GPUs and prioritize the higher-end RTX 5080 models in its place.

ASUS Prioritizes Higher-End RTX 5080 GPUs By Limiting Other 16 GB Models, Such As The RTX 5070 Ti

While NVIDIA has confirmed that production across all RTX 50 GPUs remains stable, AIBs are doing things differently on their end. As per industry sources quoted by Channel Gate, ASUS is making some significant changes to its GPU allocation, prioritizing higher-end 16 GB graphics cards while limiting the supply of RTX 5070 Ti […]
Read full article at https://wccftech.com/asus-starves-the-rtx-5070-ti-as-memory-shortages-force-a-pivot-toward-the-more-profitable-rtx-5080/

Anthropic Eyes UK Startup's Fusion Tech Promising 100x Faster AI Inference At One-Tenth The Cost Of NVIDIA's Chips
Anthropic, the creator of Claude AI, is reportedly in early talks with a UK startup whose SRAM tech can boost AI inference speed by 100x and cut costs by 10x.

Anthropic Reportedly In Early Talks With Fractile, A UK-Based Startup Working On A Fusion Architecture As An AI Inference Booster

Currently, Anthropic sources its chips from various companies, including NVIDIA, Google, and Amazon. This trio allows the company to keep running its AI infrastructure without the major concerns often associated with relying on a single chipmaker. But as compute demand intensifies in the AI space, many AI firms are now […]
Read full article at https://wccftech.com/anthropic-sets-eyes-on-uk-startup-tech-speeds-up-ai-inference-100x-reduces-costs-10x/

NVIDIA Fast-Forwarded Co-Packaged Optics Five Years Ahead of Schedule, Arriving First With Its Feynman GPUs
NVIDIA's Feynman GPUs will be the first to feature Co-Packaged Optics, but this wasn't always the plan until the AI giant decided to switch gears.

Co-Packaged Optics Were Many Years Away, But NVIDIA Decided To Move Ahead With Its Feynman GPUs

CPO, or Co-Packaged Optics (Silicon Photonics), is a next-generation solution that reduces reliance on copper and harnesses light to transfer signals. These CPOs are packaged alongside hardware accelerators such as GPUs and will be a key solution for next-gen AI factories, offering improved interconnect latency and high-bandwidth connections between CPU and GPU. If we go by the original plans, […]
Read full article at https://wccftech.com/nvidia-fast-forwarded-co-packaged-optics-five-years-ahead-arriving-with-feynman-gpus/

NVIDIA's RTX 5050 Finally Crawls Onto Steam's Hardware Survey, But It's Already Trailing AMD's Lone RDNA 4 Entry
After a long wait, the entry-level RTX 50 series GPU has made its debut on the Steam Hardware Survey charts.

All NVIDIA RTX 50 GPUs Appear On Steam Hardware Survey With RTX 5050 As The Latest Entry; 16 GB VRAM GPUs Now Closer In Popularity To 8 GB GPUs

Previous Steam Hardware Surveys recorded almost all NVIDIA GeForce RTX 50 series GPUs, but the RTX 5050 always remained missing. Despite launching in mid-2025, there was no sign of the GeForce RTX 5050 until now, when the card suddenly appeared in the Steam database. Both laptop and desktop variants are now available […]
Read full article at https://wccftech.com/rtx-5050-makes-debut-in-steam-hardware-survey/

xAI Is Reportedly Using Just 11% of Its 550,000 NVIDIA GPUs, While Meta and Google Squeeze Out 43-46% From Their Fleets
xAI is reportedly able to utilize just over 10% of its entire NVIDIA GPU fleet, as a report suggests lackluster AI software stack optimizations.

AI Software Stack Bottlenecks Are An Industry-Wide Problem, As xAI Is Only Able To Utilize 11% Of Its Entire NVIDIA GPU Installation

The Information has reported that Elon Musk's xAI, the firm behind Grok and other key AI-based products, is only able to utilize a small chunk of its total installed GPU capacity. Currently, xAI runs around 550,000 NVIDIA GPUs, a combination of H100s and H200s, deployed within xAI's Memphis and Colossus […]
Read full article at https://wccftech.com/xai-using-just-11-percent-gpus-while-meta-google-squeeze-out-much-more/

Pentagon inks deals with Nvidia, Microsoft, and AWS to deploy AI on classified networks
Legal AI startup Legora hits $5.6B valuation, and its battle with Harvey just got hotter
Morgan Stanley sees agentic AI lifting CPU demand

Mercedes-Benz to launch Nvidia-powered driving tech in S Korea
