
Today — 28 October 2025 — Main stream

Qualcomm Challenges Nvidia with In-House AI Accelerator Chips

27 October 2025 at 22:29

Qualcomm has officially entered the AI chip race with the launch of two new accelerator chips, the AI200 and AI250, marking a major shift from its traditional focus on smartphone and wireless connectivity semiconductors. The launch positions the company as a new challenger in the booming data center market, currently dominated by Nvidia and AMD.

Qualcomm announces AI200 and AI250 accelerator chips

According to an official announcement, Qualcomm plans to commercially release the AI200 in 2026, with the AI250 to follow in 2027. Both chips are designed for large-scale, liquid-cooled server racks, in which up to 72 chips can operate as a single system to power an entire rack.

Qualcomm builds its data center chips on the same Hexagon neural processing unit (NPU) technology that powers its mobile processors. According to Durga Malladi, the company's general manager of data center and edge, this is part of a deliberate strategy: "We first proved ourselves in other domains, and then scaled up to the data center level."

The new AI chips are competing on cost, efficiency, and flexibility

Unlike Nvidia, whose GPUs are primarily used for training AI models, Qualcomm's chips focus on inference, that is, running pre-trained models efficiently. The company claims its rack-scale systems will cost less to operate, consuming around 160 kilowatts per rack, roughly in line with Nvidia's systems.

Malladi also said that Qualcomm will offer modular sales: clients will be able to purchase full racks or individual components, and even competitors such as Nvidia and AMD could use Qualcomm's CPUs or other data center parts. The new AI cards are also said to support 768GB of memory, surpassing both Nvidia and AMD in this metric.

The post Qualcomm Challenges Nvidia with In-House AI Accelerator Chips appeared first on Android Headlines.

Qualcomm’s New AI Rack-Scale Solution Actually Uses LPDDR Mobile Memory Onboard, Boldly Hoping to Take on NVIDIA and AMD

27 October 2025 at 21:47


Qualcomm has announced its latest AI chips, which are designed to scale up to a purpose-built rack-level AI inference solution, but, interestingly, they employ mobile memory onboard.

Qualcomm's New AI Chips Take a 'Daring' Pivot Away From HBM To Target Efficient Inferencing Workloads

Qualcomm has come a long way from being a mobile-focused firm, and in recent years, the San Diego chipmaker has expanded into new segments, including consumer computing and AI infrastructure. Now, the firm has announced its newest AI200 and AI250 chip solutions, which are reportedly designed for rack-scale configurations. This not only marks the entry of a […]

Read full article at https://wccftech.com/qualcomm-new-ai-rack-scale-solution-actually-uses-lpddr-mobile-memory-onboard/

Yesterday — 27 October 2025 — Main stream

Qualcomm’s Bold AI Inference Play Challenges NVIDIA Dominance

27 October 2025 at 18:46

Qualcomm, a titan long synonymous with smartphone processors, is executing a strategic pivot, aiming to capture a significant slice of the burgeoning artificial intelligence inference market. This calculated move, detailed in a CNBC report by Kristina Partsinevelos, signals a direct challenge to NVIDIA’s established dominance, leveraging Qualcomm’s deep expertise in power-efficient neural processing units (NPUs). […]

The post Qualcomm’s Bold AI Inference Play Challenges NVIDIA Dominance appeared first on StartupHub.ai.

Before yesterday — Main stream

TSMC Says in the ‘NVIDIA vs ASIC War’, It Will Always Win as Customers From Both Sides Turn to the Chip Giant for Foundry Orders

26 October 2025 at 20:41

TSMC is well aware of the growing rivalry between GPU customers and ASICs, but the firm apparently doesn't worry much, as chip orders from both sides come directly to the Taiwan giant.

TSMC Has Managed to Play 'From Both Sides' of the NVIDIA vs ASIC Competition, Being an Important Part of the Supply Chain

The AI industry is experiencing growing compute demands every day, and firms like NVIDIA and AMD are struggling to keep up with the requirements from Big Tech, at least for now. At the same time, companies like Amazon, Google, and OpenAI are pursuing custom AI silicon […]

Read full article at https://wccftech.com/tsmc-says-in-the-nvidia-vs-asic-war-it-will-always-win/
