
Today — 28 October 2025 — Main stream

Qualcomm’s New AI Rack-Scale Solutions Actually Use LPDDR Mobile Memory Onboard, Boldly Hoping to Take on NVIDIA and AMD

27 October 2025 at 21:47

[Image: Qualcomm server slide emphasizing rack-scale performance and low total cost of ownership, featuring the AI200 and AI250 models.]

Qualcomm has announced its latest AI chips, which are designed to scale up into a purpose-built rack-level AI inference solution, but interestingly, they employ mobile memory onboard.

Qualcomm's New AI Chips Take a 'Daring' Pivot Away From HBM To Target Efficient Inferencing Workloads

Qualcomm has come a long way from being a mobile-focused firm, and in recent years, the San Diego chipmaker has expanded into new segments, including consumer computing and AI infrastructure. Now, the firm has announced its newest AI200 and AI250 chip solutions, which are reportedly designed for rack-scale configurations. This not only marks the entry of a […]

Read full article at https://wccftech.com/qualcomm-new-ai-rack-scale-solution-actually-uses-lpddr-mobile-memory-onboard/

Yesterday — 27 October 2025 — Main stream

Qualcomm’s Bold AI Inference Play Challenges NVIDIA Dominance

27 October 2025 at 18:46

The post Qualcomm’s Bold AI Inference Play Challenges NVIDIA Dominance appeared first on StartupHub.ai.

Qualcomm, a titan long synonymous with smartphone processors, is executing a strategic pivot, aiming to capture a significant slice of the burgeoning artificial intelligence inference market. This calculated move, detailed in a CNBC report by Kristina Partsinevelos, signals a direct challenge to NVIDIA’s established dominance, leveraging Qualcomm’s deep expertise in power-efficient neural processing units (NPUs). […]

