A New Security Layer for macOS Takes Aim at Admin Errors Before Hackers Do

A design firm is editing a new campaign video on a MacBook Pro. The creative director opens a collaboration app that quietly requests microphone and camera permissions. macOS is supposed to flag that, but in this case, the checks are loose. The app gets access anyway. On another Mac in the same office, file sharing is enabled through an old protocol called SMB version one. It’s fast and

Amazon’s Cloud and AI Crossroads: Navigating Intense Competition and Infrastructure Demands

The post Amazon’s Cloud and AI Crossroads: Navigating Intense Competition and Infrastructure Demands appeared first on StartupHub.ai.

The burgeoning demands of generative AI are fundamentally reshaping the competitive landscape of cloud computing, compelling even market leaders like Amazon to critically assess their strategic investments. CNBC’s MacKenzie Sigalos, reporting on Amazon’s third-quarter earnings, underscored that the company’s cloud momentum and substantial AI infrastructure spending are now under intense scrutiny, with investors eager to […]

Amazon Layoffs Spell Doom Lord of the Rings MMO Which Apparently Wasn't Dead After All

It's no secret that Amazon Games had been working on some version of a Lord of the Rings MMO, although the game had already been cancelled once due to contract disputes, and the second version, which was being developed in conjunction with Embracer, hadn't been seen or heard from in a while. The recent news of Amazon's mass layoffs and the company's shift towards casual games, and specifically away from MMOs and AAA games, raised questions about the future of the Lord of the Rings MMO; however, Amazon hadn't made, and still hasn't made, an official proclamation regarding the game's demise.

Ashley Amrine, a software engineer affected by those Amazon Games layoffs, has more or less confirmed, in a now-deleted LinkedIn post (via Rock Paper Shotgun), that the Lord of the Rings MMO has seemingly been cancelled. In the post addressing her layoff, Amrine wrote: "This morning I was part of the layoffs at Amazon Games, alongside my incredibly talented peers on New World and our fledgling Lord of the Rings game (y'all would have loved it)." Her revelation suggests that the new MMO was still a ways from launch, but when it was first announced, the game was described as an "open-world MMO adventure in a persistent world set in Middle-earth, featuring the beloved stories of The Hobbit and The Lord of the Rings literary trilogy." The team working on the game was the same one that had been responsible for New World, which, despite its hiccups, seemed to have largely been a success, managing to hold onto a fairly impressive ~35,000 daily players on Steam alone.

(PR) Apple Announces Fourth Quarter 2025 Results

Apple today announced financial results for its fiscal 2025 fourth quarter ended September 27, 2025. The Company posted quarterly revenue of $102.5 billion, up 8 percent year over year. Diluted earnings per share was $1.85, up 13 percent year over year on an adjusted basis.

"Today, Apple is very proud to report a September quarter revenue record of $102.5 billion, including a September quarter revenue record for iPhone and an all-time revenue record for Services," said Tim Cook, Apple's CEO. "In September, we were thrilled to launch our best iPhone lineup ever, including iPhone 17, iPhone 17 Pro and Pro Max, and iPhone Air. In addition, we launched the fantastic AirPods Pro 3 and the all-new Apple Watch lineup. When combined with the recently announced MacBook Pro and iPad Pro with the powerhouse M5 chip, we are excited to be sharing our most extraordinary lineup of products as we head into the holiday season."

(PR) Western Digital Reports Fiscal First Quarter 2026 Financial Results

Western Digital Corp. today reported fiscal first quarter 2026 financial results for the period ended October 3, 2025.

"Western Digital continues to execute well in a strong demand environment driven by growth of data storage in the cloud. In our fiscal first quarter, we achieved revenue and gross margin above the high end of our guidance range, while delivering strong free cash flow," said Irving Tan, CEO of Western Digital. "As AI accelerates data creation, Western Digital's continued innovation and operational discipline position us well to capture new opportunities and drive sustained shareholder value. Reflecting confidence in the company's strong business momentum, the Board of Directors has increased the quarterly cash dividend on the company's common stock by 25% to $0.125 per share."

Meet Aardvark, OpenAI’s security agent for code analysis and patching

OpenAI has introduced Aardvark, a GPT-5-powered autonomous security researcher agent now available in private beta.

Designed to emulate how human experts identify and resolve software vulnerabilities, Aardvark offers a multi-stage, LLM-driven approach to continuous, 24/7/365 code analysis, exploit validation, and patch generation.

Positioned as a scalable defense tool for modern software development environments, Aardvark is being tested across internal and external codebases.

OpenAI reports high recall and real-world effectiveness in identifying known and synthetic vulnerabilities, with early deployments surfacing previously undetected security issues.

Aardvark comes on the heels of OpenAI’s release of the gpt-oss-safeguard models yesterday, extending the company’s recent emphasis on agentic and policy-aligned systems.

Technical Design and Operation

Aardvark operates as an agentic system that continuously analyzes source code repositories. Unlike conventional tools that rely on fuzzing or software composition analysis, Aardvark leverages LLM reasoning and tool-use capabilities to interpret code behavior and identify vulnerabilities.

It simulates a security researcher’s workflow by reading code, conducting semantic analysis, writing and executing test cases, and using diagnostic tools.

Its process follows a structured multi-stage pipeline:

  1. Threat Modeling – Aardvark initiates its analysis by ingesting an entire code repository to generate a threat model. This model reflects the inferred security objectives and architectural design of the software.

  2. Commit-Level Scanning – As code changes are committed, Aardvark compares diffs against the repository’s threat model to detect potential vulnerabilities. It also performs historical scans when a repository is first connected.

  3. Validation Sandbox – Detected vulnerabilities are tested in an isolated environment to confirm exploitability. This reduces false positives and enhances report accuracy.

  4. Automated Patching – The system integrates with OpenAI Codex to generate patches. These proposed fixes are then reviewed and submitted via pull requests for developer approval.

Aardvark integrates with GitHub, Codex, and common development pipelines to provide continuous, non-intrusive security scanning. All insights are intended to be human-auditable, with clear annotations and reproducibility.
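The four-stage pipeline above can be sketched as a plain orchestration skeleton. Everything in this sketch is an illustrative assumption rather than OpenAI's actual design: each stage is stubbed with a trivial heuristic where the real system would invoke LLM reasoning, and none of the names reflect a real API.

```python
# Hypothetical sketch of an Aardvark-style pipeline; all names and the
# stubbed stage logic are illustrative, not OpenAI's actual API.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Finding:
    description: str
    validated: bool = False
    patch: Optional[str] = None


def build_threat_model(repo_files: dict) -> list:
    # Stage 1: ingest the whole repository and infer what must be protected.
    # A real system would use LLM reasoning; this heuristic is a placeholder.
    return [path for path in repo_files if "auth" in path or "crypto" in path]


def scan_commit(diff: str, threat_model: list) -> list:
    # Stage 2: compare a commit diff against the threat model.
    findings = []
    if "eval(" in diff:
        findings.append(Finding("possible code injection via eval()"))
    return findings


def validate_in_sandbox(finding: Finding) -> Finding:
    # Stage 3: try to confirm exploitability in isolation (stubbed as always
    # succeeding here; a real sandbox would execute a proof-of-concept).
    finding.validated = True
    return finding


def propose_patch(finding: Finding) -> Finding:
    # Stage 4: generate a candidate fix, to be reviewed via pull request.
    finding.patch = "replace eval() with ast.literal_eval()"
    return finding


def run_pipeline(repo_files: dict, diff: str) -> list:
    threat_model = build_threat_model(repo_files)
    validated = [validate_in_sandbox(f) for f in scan_commit(diff, threat_model)]
    return [propose_patch(f) for f in validated if f.validated]
```

Feeding this skeleton a repository map and a risky diff yields one validated finding with a candidate patch attached; the human-review step then happens on the resulting pull request.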

Performance and Application

According to OpenAI, Aardvark has been operational for several months on internal codebases and with select alpha partners.

In benchmark testing on “golden” repositories—where known and synthetic vulnerabilities were seeded—Aardvark identified 92% of total issues.

OpenAI emphasizes that its accuracy and low false positive rate are key differentiators.

The agent has also been deployed on open-source projects. To date, it has discovered multiple critical issues, including ten vulnerabilities that were assigned CVE identifiers.

OpenAI states that all findings were responsibly disclosed under its recently updated coordinated disclosure policy, which favors collaboration over rigid timelines.

In practice, Aardvark has surfaced complex bugs beyond traditional security flaws, including logic errors, incomplete fixes, and privacy risks. This suggests broader utility beyond security-specific contexts.

Integration and Requirements

During the private beta, Aardvark is only available to organizations using GitHub Cloud (github.com). OpenAI invites prospective beta testers to apply by filling out a web form. Participation requirements include:

  • Integration with GitHub Cloud

  • Commitment to interact with Aardvark and provide qualitative feedback

  • Agreement to beta-specific terms and privacy policies

OpenAI confirmed that code submitted to Aardvark during the beta will not be used to train its models.

The company is also offering pro bono vulnerability scanning for selected non-commercial open-source repositories, citing its intent to contribute to the health of the software supply chain.

Strategic Context

The launch of Aardvark signals OpenAI’s broader movement into agentic AI systems with domain-specific capabilities.

While OpenAI is best known for its general-purpose models (e.g., GPT-4 and GPT-5), Aardvark is part of a growing trend of specialized AI agents designed to operate semi-autonomously within real-world environments. In fact, it joins two other active OpenAI agents now:

  • ChatGPT agent, unveiled back in July 2025, which controls a virtual computer and web browser and can create and edit common productivity files

  • Codex, previously the name of OpenAI's earlier coding model, which the company reused for its GPT-5 variant-powered AI coding agent unveiled back in May 2025

But a security-focused agent makes a lot of sense, especially as demands on security teams grow.

In 2024 alone, over 40,000 Common Vulnerabilities and Exposures (CVEs) were reported, and OpenAI’s internal data suggests that 1.2% of all code commits introduce bugs.

Aardvark’s positioning as a “defender-first” AI aligns with a market need for proactive security tools that integrate tightly with developer workflows rather than operate as post-hoc scanning layers.

OpenAI’s coordinated disclosure policy updates further reinforce its commitment to sustainable collaboration with developers and the open-source community, rather than emphasizing adversarial vulnerability reporting.

While yesterday's release of gpt-oss-safeguard uses chain-of-thought reasoning to apply safety policies during inference, Aardvark applies similar LLM reasoning to secure evolving codebases.

Together, these tools signal OpenAI’s shift from static tooling toward flexible, continuously adaptive systems — one focused on content moderation, the other on proactive vulnerability detection and automated patching within real-world software development environments.

What It Means For Enterprises and the CyberSec Market Going Forward

Aardvark represents OpenAI’s entry into automated security research through agentic AI. By combining GPT-5’s language understanding with Codex-driven patching and validation sandboxes, Aardvark offers an integrated solution for modern software teams facing increasing security complexity.

While currently in limited beta, the early performance indicators suggest potential for broader adoption. If proven effective at scale, Aardvark could contribute to a shift in how organizations embed security into continuous development environments.

For security leaders tasked with managing incident response, threat detection, and day-to-day protections—particularly those operating with limited team capacity—Aardvark may serve as a force multiplier. Its autonomous validation pipeline and human-auditable patch proposals could streamline triage and reduce alert fatigue, enabling smaller security teams to focus on strategic incidents rather than manual scanning and follow-up.

AI engineers responsible for integrating models into live products may benefit from Aardvark’s ability to surface bugs that arise from subtle logic flaws or incomplete fixes, particularly in fast-moving development cycles. Because Aardvark monitors commit-level changes and tracks them against threat models, it may help prevent vulnerabilities introduced during rapid iteration, without slowing delivery timelines.

For teams orchestrating AI across distributed environments, Aardvark’s sandbox validation and continuous feedback loops could align well with CI/CD-style pipelines for ML systems. Its ability to plug into GitHub workflows positions it as a compatible addition to modern AI operations stacks, especially those aiming to integrate robust security checks into automation pipelines without additional overhead.

And for data infrastructure teams maintaining critical pipelines and tooling, Aardvark’s LLM-driven inspection capabilities could offer an added layer of resilience. Vulnerabilities in data orchestration layers often go unnoticed until exploited; Aardvark’s ongoing code review process may surface issues earlier in the development lifecycle, helping data engineers maintain both system integrity and uptime.

In practice, Aardvark represents a shift in how security expertise might be operationalized—not just as a defensive perimeter, but as a persistent, context-aware participant in the software lifecycle. Its design suggests a model where defenders are no longer bottlenecked by scale, but augmented by intelligent agents working alongside them.

Meta researchers open the LLM black box to repair flawed AI reasoning

Researchers at Meta FAIR and the University of Edinburgh have developed a new technique that can predict the correctness of a large language model's (LLM) reasoning and even intervene to fix its mistakes. Called Circuit-based Reasoning Verification (CRV), the method looks inside an LLM to monitor its internal “reasoning circuits” and detect signs of computational errors as the model solves a problem.

Their findings show that CRV can detect reasoning errors in LLMs with high accuracy by building and observing a computational graph from the model's internal activations. In a key breakthrough, the researchers also demonstrated they can use this deep insight to apply targeted interventions that correct a model’s faulty reasoning on the fly.

The technique could help solve one of the great challenges of AI: Ensuring a model’s reasoning is faithful and correct. This could be a critical step toward building more trustworthy AI applications for the enterprise, where reliability is paramount.

Investigating chain-of-thought reasoning

Chain-of-thought (CoT) reasoning has been a powerful method for boosting the performance of LLMs on complex tasks and has been one of the key ingredients in the success of reasoning models such as the OpenAI o-series and DeepSeek-R1.

However, despite the success of CoT, it is not fully reliable. The reasoning process itself is often flawed, and several studies have shown that the CoT tokens an LLM generates are not always a faithful representation of its internal reasoning process.

Current remedies for verifying CoT fall into two main categories. “Black-box” approaches analyze the final generated token or the confidence scores of different token options. “Gray-box” approaches go a step further, looking at the model's internal state by using simple probes on its raw neural activations. 
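A minimal version of such a gray-box probe fits in a few lines: train a linear classifier on raw activation vectors to predict step correctness. The synthetic data and pure-numpy logistic regression below are stand-ins for real model activations and for whatever probe architecture a production system would use.

```python
# Toy "gray-box" probe: logistic regression on raw hidden activations,
# predicting whether a reasoning step is correct. The activations here
# are synthetic stand-ins for a real model's hidden states.
import numpy as np

rng = np.random.default_rng(1)

# Correct (y=1) and incorrect (y=0) steps separate along one direction
# of the 16-dimensional activation space.
n, d = 200, 16
y = rng.integers(0, 2, size=n)
X = rng.normal(size=(n, d))
X[:, 0] += np.where(y == 1, 1.0, -1.0)

w, b = np.zeros(d), 0.0
for _ in range(500):                      # plain batch gradient descent
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y) / n)
    b -= 0.5 * (p - y).mean()

accuracy = float(((X @ w + b > 0).astype(int) == y).mean())
```

A probe like this can flag that something in the internal state correlates with error, but it says nothing about which part of the computation failed.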

But while these methods can detect that a model’s internal state is correlated with an error, they can't explain why the underlying computation failed. For real-world applications where understanding the root cause of a failure is crucial, this is a significant gap.

A white-box approach to verification

CRV is based on the idea that models perform tasks using specialized subgraphs, or "circuits," of neurons that function like latent algorithms. So if the model’s reasoning fails, it is caused by a flaw in the execution of one of these algorithms. This means that by inspecting the underlying computational process, we can diagnose the cause of the flaw, similar to how developers examine execution traces to debug traditional software.

To make this possible, the researchers first make the target LLM interpretable. They replace the standard dense layers of the transformer blocks with trained "transcoders." A transcoder is a specialized deep learning component that forces the model to represent its intermediate computations not as a dense, unreadable vector of numbers, but as a sparse and meaningful set of features. Transcoders are similar to the sparse autoencoders (SAE) used in mechanistic interpretability research with the difference that they also preserve the functionality of the network they emulate. This modification effectively installs a diagnostic port into the model, allowing researchers to observe its internal workings.
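The transcoder idea can be illustrated with a small numpy sketch: encode a layer's hidden vector into an overcomplete set of features, keep only the strongest few (the sparsity constraint), and decode an approximation of the original output. The dimensions and the top-k rule below are illustrative assumptions, not the paper's exact design.

```python
# Minimal sketch of a transcoder: a sparse, overcomplete feature code
# that approximately reproduces a dense layer's output. Sizes and the
# top-k sparsity rule are illustrative, not the paper's exact design.
import numpy as np

rng = np.random.default_rng(0)
d_model, d_features, k = 8, 64, 4   # hidden size, dictionary size, active features

W_enc = rng.normal(size=(d_model, d_features)) / np.sqrt(d_model)
W_dec = rng.normal(size=(d_features, d_model)) / np.sqrt(d_features)


def transcode(x):
    """Encode x into at most k active features, then decode an approximation."""
    pre = np.maximum(x @ W_enc, 0.0)        # ReLU feature activations
    codes = pre.copy()
    codes[np.argsort(pre)[:-k]] = 0.0       # zero all but the k strongest
    return codes, codes @ W_dec


codes, x_hat = transcode(rng.normal(size=d_model))
```

In a trained transcoder, W_enc and W_dec are learned so that the decoded output matches the original layer while the nonzero entries of `codes` correspond to human-interpretable features; those sparse codes are the "diagnostic port" described above.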

With this interpretable model in place, the CRV process unfolds in a few steps. For each reasoning step the model takes, CRV constructs an "attribution graph" that maps the causal flow of information between the interpretable features of the transcoder and the tokens it is processing. From this graph, it extracts a "structural fingerprint" that contains a set of features describing the graph's properties. Finally, a “diagnostic classifier” model is trained on these fingerprints to predict whether the reasoning step is correct or not.

At inference time, the classifier monitors the activations of the model and provides feedback on whether the model’s reasoning trace is on the right track.
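The fingerprint-and-classify step can be sketched as follows. The structural features and the hand-set decision rule are illustrative assumptions; the paper's exact feature set is not reproduced here.

```python
# Sketch of CRV's structural-fingerprint idea: summarize an attribution
# graph (edge list over feature/token nodes) with a few graph statistics,
# then hand the fingerprint to a diagnostic classifier. The features and
# threshold below are illustrative, not the paper's exact design.
from collections import Counter


def fingerprint(edges):
    """Map an attribution graph to a dict of structural features."""
    nodes = {n for edge in edges for n in edge}
    out_degree = Counter(src for src, _ in edges)
    n_nodes, n_edges = len(nodes), len(edges)
    return {
        "n_nodes": float(n_nodes),
        "n_edges": float(n_edges),
        "density": n_edges / max(n_nodes * (n_nodes - 1), 1),
        "max_out_degree": float(max(out_degree.values(), default=0)),
    }


def looks_correct(fp):
    # A real diagnostic classifier is trained on many labeled fingerprints;
    # this fixed threshold stands in for the learned decision rule.
    return fp["density"] < 0.5


graph = [("token_3", "feat_mul"), ("feat_mul", "feat_add"), ("feat_add", "output")]
fp = fingerprint(graph)
```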

Finding and fixing errors

The researchers tested their method on a Llama 3.1 8B Instruct model modified with the transcoders, evaluating it on a mix of synthetic (Boolean and Arithmetic) and real-world (GSM8K math problems) datasets. They compared CRV against a comprehensive suite of black-box and gray-box baselines.

The results provide strong empirical support for the central hypothesis: the structural signatures in a reasoning step's computational trace contain a verifiable signal of its correctness. CRV consistently outperformed all baseline methods across every dataset and metric, demonstrating that a deep, structural view of the model's computation is more powerful than surface-level analysis.

Interestingly, the analysis revealed that the signatures of error are highly domain-specific. This means failures in different reasoning tasks (formal logic versus arithmetic calculation) manifest as distinct computational patterns. A classifier trained to detect errors in one domain does not transfer well to another, highlighting that different types of reasoning rely on different internal circuits. In practice, this means that you might need to train a separate classifier for each task (though the transcoder remains unchanged).

The most significant finding, however, is that these error signatures are not just correlational but causal. Because CRV provides a transparent view of the computation, a predicted failure can be traced back to a specific component. In one case study, the model made an order-of-operations error. CRV flagged the step and identified that a "multiplication" feature was firing prematurely. The researchers intervened by manually suppressing that single feature, and the model immediately corrected its path and solved the problem correctly. 
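That case study's causal intervention can be caricatured in a few lines. The feature name and gating mechanics are invented for illustration; in CRV the suppression happens on a transcoder feature inside the model, not in Python control flow.

```python
# Toy version of the intervention described above: a spurious feature
# fires prematurely, causing an order-of-operations error; suppressing
# that single feature restores the correct computation. The feature name
# and mechanics are invented for illustration.
def evaluate(a, b, c, features):
    """Compute a + b * c, with a bug gated on a spurious feature."""
    if features.get("multiply_now", False):
        return (a + b) * c    # premature multiplication: wrong precedence
    return a + b * c          # correct precedence


features = {"multiply_now": True}
buggy = evaluate(2, 3, 4, features)     # (2 + 3) * 4 = 20, incorrect

features["multiply_now"] = False        # suppress the prematurely firing feature
fixed = evaluate(2, 3, 4, features)     # 2 + 3 * 4 = 14, correct
```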

This work represents a step toward a more rigorous science of AI interpretability and control. As the paper concludes, “these findings establish CRV as a proof-of-concept for mechanistic analysis, showing that shifting from opaque activations to interpretable computational structure enables a causal understanding of how and why LLMs fail to reason correctly.” To support further research, the team plans to release its datasets and trained transcoders to the public.

Why it’s important

While CRV is a research proof-of-concept, its results hint at a significant future for AI development. AI models learn internal algorithms, or "circuits," for different tasks. But because these models are opaque, we can't debug them like standard computer programs by tracing bugs to specific steps in the computation. Attribution graphs are the closest thing we have to an execution trace, showing how an output is derived from intermediate steps.

This research suggests that attribution graphs could be the foundation for a new class of AI model debuggers. Such tools would allow developers to understand the root cause of failures, whether it's insufficient training data or interference between competing tasks. This would enable precise mitigations, like targeted fine-tuning or even direct model editing, instead of costly full-scale retraining. They could also allow for more efficient intervention to correct model mistakes during inference.

The success of CRV in detecting and pinpointing reasoning errors is an encouraging sign that such debuggers could become a reality. This would pave the way for more robust LLMs and autonomous agents that can handle real-world unpredictability and, much like humans, correct course when they make reasoning mistakes. 

Stripe’s AI Backbone: Powering the Agent Economy with Financial Infrastructure

The post Stripe’s AI Backbone: Powering the Agent Economy with Financial Infrastructure appeared first on StartupHub.ai.

Stripe, under the leadership of Emily Glassberg Sands, Head of Data & AI, is not merely adapting to the artificial intelligence revolution; it is actively constructing the financial infrastructure upon which this burgeoning agent economy will operate. In a recent Latent Space podcast interview with hosts Shawn Wang and Alessio Fanelli, Sands articulated Stripe’s ambitious […]

Poolside reportedly raising up to $1B to advance AI code generation

The post Poolside reportedly raising up to $1B to advance AI code generation appeared first on StartupHub.ai.

AI code generation startup Poolside is reportedly raising up to $1 billion from investor Nvidia to build tools that accelerate software development.

AI’s Relentless March: Efficiency, Autonomy, and Economic Reshaping

The post AI’s Relentless March: Efficiency, Autonomy, and Economic Reshaping appeared first on StartupHub.ai.

The accelerating integration of artificial intelligence into daily life and industrial infrastructure is no longer a distant vision but a tangible reality, as evidenced by the rapid-fire developments discussed in Matthew Berman’s latest Forward Future AI news briefing. From the nascent stages of consumer robotics to revolutionary computing paradigms, the AI landscape is undergoing a […]

XPO’s AI-Driven Efficiency in a Soft Freight Market

The post XPO’s AI-Driven Efficiency in a Soft Freight Market appeared first on StartupHub.ai.

In an era where artificial intelligence often conjures images of job displacement, XPO CEO Mario Harik offers a refreshingly pragmatic perspective: AI, for his logistics giant, is fundamentally about efficiency and optimization, not headcount reduction. This insight anchored a recent interview on CNBC’s Worldwide Exchange with anchor Frank Hollan, where Harik detailed XPO’s latest earnings […]

Amazon’s Anthropic investment boosts its quarterly profits by $9.5B

Amazon just opened Project Rainier, one of the world’s largest AI compute clusters, in partnership with Anthropic.

Amazon’s third-quarter profits rose 38% to $21.2 billion, but a big part of the jump had nothing to do with its core businesses of selling goods and cloud services.

The company reported a $9.5 billion pre-tax gain from its investment in the AI startup Anthropic, which was included in its non-operating income for the quarter.

The windfall wasn’t the result of a sale or cash transaction, but rather accounting rules. After Anthropic raised new funding in September at a $183 billion valuation, Amazon was required to revalue its equity stake to reflect the higher market price, a process known as a “mark-to-market” adjustment.

To put the $9.5 billion paper gain in perspective, the Amazon Web Services cloud business — historically Amazon’s primary profit engine — generated $11.4 billion in quarterly operating profits.

At the same time, Amazon is spending big on its AI infrastructure buildout for Anthropic and others. The company just opened an $11 billion AI data center complex, dubbed Project Rainier, where Anthropic’s Claude models run on hundreds of thousands of Amazon’s Trainium 2 chips.

Amazon is going head-to-head against Microsoft, which just re-upped its partnership with ChatGPT maker OpenAI; and Google, which reported record cloud revenue for its recent quarter, driven by AI. The AI infrastructure race is fueling a big surge in capital spending for all three cloud giants.

Amazon spent $35.1 billion on property and equipment in the third quarter alone, up 55% from a year earlier. Andy Jassy, the Amazon CEO, sought to reassure Wall Street that the big outlay will be worth it.

“You’re going to see us continue to be very aggressive investing in capacity, because we see the demand,” Jassy said on the company’s conference call. “As fast as we’re adding capacity right now, we’re monetizing it. It’s still quite early, and represents an unusual opportunity for customers and AWS.”

The cash for new data centers doesn’t hit the bottom line immediately, but it comes into play as depreciation and amortization costs are recorded on the income statement over time.

And in that way, the spending is starting to impact AWS results: sales rose 20% to $33 billion in the quarter, yet operating income increased only 9.6% to $11.4 billion. The gap indicates that Amazon’s heavy AI investments are compressing profit margins in the near term, even as the company bets on the infrastructure build-out to expand its business significantly over time.
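The margin compression follows directly from the reported figures; a quick back-of-the-envelope check (growth rates are rounded, so results are approximate):

```python
# Back-of-the-envelope margin check from the figures reported above.
sales_now, sales_growth = 33.0, 0.20       # $33B, up 20% year over year
opinc_now, opinc_growth = 11.4, 0.096      # $11.4B, up 9.6% year over year

sales_prior = sales_now / (1 + sales_growth)     # ~ $27.5B a year earlier
opinc_prior = opinc_now / (1 + opinc_growth)     # ~ $10.4B a year earlier

margin_prior = opinc_prior / sales_prior         # ~ 37.8% operating margin
margin_now = opinc_now / sales_now               # ~ 34.5% operating margin
```

That works out to roughly three points of operating-margin compression in a year.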

Those investments are also weighing on cash generation: Amazon’s free cash flow dropped 69% over the past year to $14.8 billion, reflecting the massive outlays for data centers and infrastructure.

Amazon has invested and committed a total of $8 billion in Anthropic, initially structured as convertible notes. A portion of that investment converted to equity with Anthropic’s prior funding round in March.

Corsair Smacks Down Hit Box & Razer With Its Sweet New Novablade Hall Effect Leverless Fight Controller

Corsair just announced its entry into the leverless fightstick scene first popularized by Hit Box, with the new Corsair Novablade Pro. Or, to be more specific, the Corsair Novablade Pro Wireless Hall Effect Leverless Fight Controller, but we'll stick with Novablade Pro for now. The Novablade Pro is a direct competitor to the likes of Hit Box's

Halo 2 and 3 Remakes Leak: Full Campaign Remakes With No Multiplayer

With Halo: Campaign Evolved, the remake of Halo: Combat Evolved, confirmed to launch in 2026, complete with a PS5 release, rumors have started circulating that 2004's Halo 2 and 2007's Halo 3 will also be getting remakes, supposedly also courtesy of Halo Studios. These leaks come courtesy of @leaks_infinite on X, a prolific leaker who previously tipped the upcoming Campaign Evolved, so there's a solid chance they're legitimate. The leaker's source has not yet revealed a release date, suggesting that launch is still a way out, although a handful of details have emerged.

As with Campaign Evolved, Halo 2 and 3 will be complete remakes of the original games, but with sprinting added and the multiplayer game modes stripped out. Supposedly, multiplayer gameplay is reserved for Halo 7, the next modern release in the franchise, with the justification that having multiplayer in too many Halo games would result in the different games in the franchise simply competing against themselves, ultimately resulting in a diminished experience for all of the games involved. This means that all the focus for the remakes will go into the campaigns and general single-player gameplay, which has many Halo fans excited about the prospect of the remakes.

ARC Raiders Outshines Free-to-Play Cousin as Extraction Shooter Tops 260,000 Players on Launch Day

Arc Raiders launched less than 24 hours ago, and it has already become a smash hit, beating out many of the free-to-play games currently topping Steam's charts, despite its $39.99 price tag on Steam. At the time of writing, around seven hours after Arc Raiders launched, the game had peaked at 264,673 players, putting it in fourth place in SteamDB's real-time concurrent player tracker and in the number-one spot in the trending games category. For comparison, The Finals, a free-to-play shooter from the same developer, Embark Studios, launched almost two years ago and similarly hit high player counts as early as day one, topping out at 242,619 concurrent players, according to SteamDB. Given that Arc Raiders launched on a Thursday and reached such heights already, it stands to reason that the weekend may see player counts rise even further.

Arc Raiders still has a way to go to catch up to some of its more popular peers in the shooter genre, though. Battlefield 6, which launched earlier this month, has recorded concurrent player counts as high as 747,440, while PUBG: Battlegrounds peaked at over 3 million players in January 2018. Arc Raiders's popularity comes with a rather impressive "Very Positive" (89% positive) rating on Steam, despite the developer's use of divisive generative AI tech in the game's development. As is the case with The Finals, Arc Raiders only offers a Windows installer package and uses Easy Anti-Cheat, but Embark Studios is not hostile towards Linux users, and many users on ProtonDB have reported that the game works on Linux with Valve's Proton compatibility layer, even without any tweaks, earning it a "Gold" rating on the database.

GTBOX T1 Launches As AMD-Powered Mini PC Disguised as a Bluetooth Speaker

With the recent barrage of AMD-powered mini PCs coming to the market, it was only a matter of time before manufacturers started dipping into peculiar form factors and niche gimmicks in order to set themselves apart from the vaguely Mac Mini-shaped crowd. GTBox has apparently decided that its new take on the mini PC would take shape as a PC disguised as a navy blue Bluetooth speaker. Not coincidentally, the mini PC also includes two 5 W stereo speakers, which GTBox claims deliver "precise localization, a wide soundstage, and deep immersion," although it's more likely something akin to a sound bar, given how much of the PC's internals are taken up by the actual PC components.

Questions about the speaker quality aside, the internals boast some decent specifications, including an AMD Ryzen 7 8745HS, with its Radeon 780M, 32 GB of DDR5 RAM, and a 1 TB PCIe SSD. The port selection isn't bad either, with the T1 equipped with 3× USB 3.2 ports, 1× USB 2.0, 1× USB4, an RJ45 port for 2.5 Gbps Ethernet, a DisplayPort 2.0 port, and an HDMI 2.1 port. All of the ports are located on the rear, alongside the exhaust for the single fan, all in an attempt to maintain the clean design with its fabric cover. The mini PC also features a backlit GTBox logo and an LED ring on the top of the case, which can be adjusted to green, blue, or yellow. It also comes with Bluetooth 5.0 and Wi-Fi 6 on-board.

Leaked Intel Core Ultra X7 358H and Ultra 5 338H Cinebench R23 Scores Reveal Concerning CPU Performance

Intel's upcoming Panther Lake laptop CPU configurations previously leaked, revealing a line-up ranging from the modest Core Ultra 3 320U, with 2 P-Cores and 4 LPE-Cores, to the notably higher-end Core Ultra X9 388H, featuring 4 P-Cores, 8 E-Cores, and 4 LPE-Cores. Now, performance figures for the more mid-range Intel Core Ultra X7 358H and Core Ultra 5 338H have both been leaked, thanks to Laptopreview Club. According to the site, the Core Ultra X7 in question managed a Cinebench R23 score of around 20,000, while the Core Ultra 5 338H came in at around 16,000 points, or 20% slower than the X7 variant. The site doesn't provide exact scores, only rounded figures, but this gives some idea of what to expect in terms of CPU performance from the next-gen Intel CPU platform.

As previously discussed, Panther Lake appears to focus more on iGPU performance, with leaked benchmarks revealing RTX 3050-tier performance from the 12-core Xe3 iGPU, but this is the first indication of CPU performance for the new generation. The leaker claims the 358H will be around 10% slower than the 255H at the same power consumption, in this case 60-65 W, with Cinebench R23 scores for the 255H ranging from the low 17,000s to upwards of 22,000. Panther Lake may thus sacrifice some CPU performance to improve iGPU performance, although these are still early rumors, and things are subject to change ahead of launch. The idea that Intel would launch a new CPU that underperforms its old platform by such a margin seems unlikely; it's possible these tests were on the lower side of average or a worst-case scenario, either of which would make the new CPUs' performance reasonably impressive.

‘It’s culture’: Amazon CEO says massive corporate layoffs were about agility — not AI or cost-cutting

Amazon CEO Andy Jassy at the GeekWire Summit in 2021. (GeekWire File Photo / Dan DeLong)

Amazon CEO Andy Jassy says the company’s latest big round of layoffs — about 14,000 corporate jobs — wasn’t triggered by financial strain or artificial intelligence replacing workers, but rather a push to stay nimble.

Speaking with analysts on Amazon’s quarterly earnings call Thursday, Jassy said the decision stemmed from a belief that the company had grown too big and too layered.

“The announcement that we made a few days ago was not really financially driven, and it’s not even really AI-driven — not right now, at least,” he said. “Really, it’s culture.”

Jassy’s comments are his first public explanation of the layoffs, which reportedly could ultimately total as many as 30,000 people — and would be the largest workforce reduction in Amazon’s history.

The news this week prompted speculation that the cuts were tied to automation or AI-related restructuring. Earlier this year, Jassy wrote in a memo to employees that he expected Amazon’s total corporate workforce to shrink over time due to efficiency gains from AI.

But his comments Thursday framed the layoffs as a cultural reset aimed at keeping the company fast-moving amid what he called “the technology transformation happening right now.”

Jassy, who succeeded founder Jeff Bezos as CEO in mid-2021, has pushed to reduce management layers and eliminate bureaucracy inside the company.

Amazon’s corporate headcount tripled between 2017 and 2022, according to The Information, before the company adopted a more cautious hiring approach.

Bloomberg News reported this week that Jassy has told colleagues parts of the company remain “unwieldy” despite efforts to streamline operations — including significant layoffs in 2023 when Amazon cut 27,000 corporate workers in multiple stages. 

On Thursday’s call, Jassy said Amazon’s rapid growth led to extra layers of management that slowed decision-making.

“When that happens, sometimes without realizing it, you can weaken the ownership of the people that you have who are doing the actual work and who own most of the two-way door decisions — the ones that should be made quickly and right at the front line,” Jassy said, using a phrase popularized by Bezos to help determine how much thought and planning to put into big and small decisions.

The layoffs, he said, are meant to restore the kind of ownership and agility that defined Amazon’s early years.

“We are committed to operating like the world’s largest startup,” Jassy said, repeating a line he’s used recently.

Given the “transformation” he described happening across the business world, Jassy said it’s more important than ever to be lean, flat, and fast-moving. “That’s what we’re going to do,” he said.

Jassy’s comments came as Amazon reported quarterly revenue of $180.2 billion, up 13% year-over-year, with AWS revenue growth accelerating to 20% — its fastest pace since 2022.

Amazon said it took a $1.8 billion severance charge in the quarter related to the layoffs.

Amazon joins other tech giants including Microsoft that have trimmed headcount this year while investing heavily in AI infrastructure.

Related coverage:

Apple Reveals A $1.1 Billion Hit From Tariffs In Its Fiscal Q4 2025, With Another $1.4 Billion Hit Oncoming


Apple has deftly managed its geopolitical risk exposure by negotiating a broad-based import tariff exemption from the Trump Administration. Even so, the Cupertino giant has not been able to fully neutralize the impact of the US import tariffs, courtesy of its labyrinthine and sprawling global supply chain. Apple faced $1.1 billion in tariff-related costs in its fiscal Q4 2025 and has adopted a two-pronged strategy to deal with US import tariffs and the trade war. As such, Apple has already started shipping its US-made servers to its datacenters, where they will help power features such as […]

Read full article at https://wccftech.com/apple-1-1-billion-hit-in-q4-2025/

Tim Cook: Apple Will Have Its Best Ever December Quarter, Thanks To The iPhone 17 Lineup


Apple's iPhone product segment missed analysts' consensus expectations for the just-concluded fiscal fourth quarter of 2025, largely due to transitory weakness in iPhone 17 sales. Now, however, Apple has not only given a reasonable explanation for this miss, but also offered surprising guidance for the ongoing December-ending quarter. Apple expects to experience its best-ever December-ending quarter: as we noted in our dedicated post on the topic, Apple's iPhone revenue missed fiscal Q4 2025 expectations, which were pegged at $50.19 billion vs. the $49.03 billion haul that the Cupertino giant reported for the three-month period. During the earnings call, […]

Read full article at https://wccftech.com/tim-cook-apple-will-have-its-best-ever-december-quarter-thanks-to-the-iphone-17-lineup/

Tim Cook: The New Siri To Debut In 2026 Under The Apple Intelligence Banner


Ever since Apple announced its AI strategy revamp under the Apple Intelligence banner, there has been a perception that the company is struggling to keep pace with its lofty ambitions. There are increasing signs, however, that Apple is making some much-needed headway in this sphere, as per the tidbits gleaned from Apple's Q3 2025 earnings call. Still, Apple Intelligence faces a winding road ahead: do note that Apple has been working to introduce a number of key Apple Intelligence features with its Spring 2026 iOS update (most likely iOS 26.4). Of course, Apple Mac users can already enjoy […]

Read full article at https://wccftech.com/tim-cook-the-new-siri-under-the-apple-intelligence-banner-to-debut-in-2026/

Apple Fiscal Q4 2025 Earnings: Revenue From iPhones And iPads Disappoints


Apple has just announced the earnings for its fiscal Q4 2025, reporting $102.47 billion in total revenue, including $49.03 billion from iPhones and $28.75 billion from services, alongside $27.47 billion in net profit. Here are the key highlights from Apple's latest quarterly earnings release: for the full fiscal year 2025, Apple earned $416.16 billion in revenue, a year-over-year increase of 6.42 percent relative to the $391.04 billion it earned in its last fiscal year. During the just-concluded fiscal year 2025, Apple earned $307 billion from its products […]

Read full article at https://wccftech.com/apple-fiscal-q4-2025-earnings-revenue-from-iphones-and-ipads-disappoints/

Apple’s 14-Inch M5 MacBook Pro Gets The Highly Anticipated Discount On Amazon, 512GB & 1TB Versions Now $50 Off, But The Real ‘MVP’ Is The M4 MacBook Pro

Amazon shaves off $50 from the 512GB and 1TB versions of the 14-inch M5 MacBook Pro

Maintaining its tradition for every Apple Silicon Mac launched so far, Amazon has introduced a discount for the 14-inch M5 MacBook Pro, shaving $50 off the base (512GB) and 1TB storage models. That means the new lineup starts from $1,549 instead of $1,599, with the price cut applied to both the Space Black and Silver colors. While the discount will slowly deepen as the months go by, you might want to keep the 14-inch M4 MacBook Pro in mind as a viable option, because it is oozing tremendous value right now. For those looking to save money, the 14-inch M4 […]

Read full article at https://wccftech.com/m5-macbook-pro-in-512gb-and-1tb-options-now-50-cheaper-on-amazon/

Zillow posts $676M in Q3 revenue as rentals and mortgage businesses power growth

(Zillow Photo)

This story originally appeared on Real Estate News.

Zillow continues to be an overachiever, at least with its financial performance. 

The home search giant’s revenue has consistently beat expectations for the past two years, and Q3 was no different: Revenue was $676 million for the third quarter, up 16% year-over-year and above the company’s previous guidance, driven by the strength of its rentals and mortgage divisions.

Rentals revenue was up 41% year-over-year to $174 million, while mortgage revenue increased 36% to $53 million, according to Zillow’s shareholder letter. The company’s main revenue stream, residential, rose 7% to $435 million.

Zillow also turned a profit, netting $10 million during the quarter and sustaining its run of profitability for a third consecutive quarter.

What Zillow had to say

“Zillow’s Q3 results show how well we’re delivering on our mission to make buying, selling, financing and renting easier,” Zillow CEO Jeremy Wacksman said in a news release. “Zillow is leading the industry toward a more transparent, consumer-first future.”

The real estate portal also continues to see growth in its website traffic, hitting 250 million average monthly unique visitors in the third quarter, up 7% year-over-year.

Wacksman and CFO Jeremy Hofmann acknowledged that they are also aware of the “external noise” that has gotten louder in recent months, possibly referring to recent lawsuits involving the company and the debate over exclusive listings, including Zillow’s private listing ban.

Key numbers

Revenue: $676 million, up 16% year-over-year. Residential increased 7% to $435 million; mortgage revenue was up 36% to $53 million; and rentals revenue climbed 41% to $174 million.

Cash and investments: $1.4 billion at the end of September, up from $1.2 billion at the end of June.

Adjusted EBITDA (earnings before interest, taxes, depreciation and amortization): $165 million in Q3, up from $127 million a year earlier.

Net income/loss: A gain of $10 million in Q3, up from $2 million the previous quarter and an improvement over its $20 million loss a year ago.

Traffic and visits: Traffic across all Zillow Group websites and apps totaled 250 million average monthly unique users in Q3, up 7% year-over-year, the company said. Total visits were 2.5 billion in Q3, up 4% year-over-year. 

Q4 outlook: For the fourth quarter, Zillow estimates revenue will be in the $645 million to $655 million range, which would represent high single-digit year-over-year growth.

Why IT leaders should pay attention to Canva’s ‘imagination era’ strategy

The rise of AI marks a critical shift away from decades defined by information-chasing and a push for more and more compute power. 

Canva co-founder and CPO Cameron Adams refers to this dawning time as the “imagination era.” Meaning: Individuals and enterprises must be able to turn creativity into action with AI.  

Canva hopes to position itself at the center of this shift with a sweeping new suite of tools. The company’s new Creative Operating System (COS) integrates AI across every layer of content creation, creating a single, comprehensive creativity platform rather than a simple, template-based design tool.

“We’re entering a new era where we need to rethink how we achieve our goals,” said Adams. “We’re enabling people’s imagination and giving them the tools they need to take action.”

An 'engine' for creativity

Adams describes Canva’s platform as a three-layer stack: The top Visual Suite layer containing designs, images and other content; a collaborative Canva AI plane at center; and a foundational proprietary model holding it all up. 

At the heart of Canva’s strategy is its underlying Creative Operating System (COS). This “engine,” as Adams describes it, integrates documents, websites, presentations, sheets, whiteboards, videos, social content, hundreds of millions of photos and illustrations, a rich sound library, and numerous templates, charts, and branded elements.

The COS is getting a 2.0 upgrade, but the crucial advance is the “middle, crucial layer” that fully integrates AI and makes it accessible throughout various workflows, Adams explained. This gives creative and technical teams a single dashboard for generating, editing and launching all types of content.

The underlying model is trained to understand the “complexity of design” so the platform can build out various elements — such as photos, videos, textures, or 3D graphics — in real time, matching branding style without the need for manual adjustments. It also supports live collaboration, meaning teams across departments can co-create. 

With a unified dashboard, a user working on a specific design, for instance, can create a new piece of content (say, a presentation) within the same workflow, without having to switch to another window or platform. Also, if they generate an image and aren’t pleased with it, they don’t have to go back and create from scratch; they can immediately begin editing, changing colors or tone. 

Another new capability in COS, “Ask Canva,” provides direct design advice. Users can tag @Canva to get copy suggestions and smart edits; or, they can highlight an image and direct the AI assistant to modify it or generate variants. 

“It’s a really unique interaction,” said Adams, noting that this AI design partner is always present. “It’s a real collaboration between people and AI, and we think it’s a revolutionary change.”

Other new features include a 2.0 video editor and interactive form and email design with drag-and-drop tools. Further, Canva now incorporates Affinity, its unified app for pro designers combining vector, pixel and layer workflows, and Affinity is “free forever.”

Automating intelligence, supporting marketing

Branding is critical for enterprise; Canva has introduced new tools to help organizations consistently showcase theirs across platforms. The new Canva Grow engine integrates business objectives into the creative process so teams can workshop, create, distribute and refine ads and other materials. 

As Adams explained: “It automatically scans your website, figures out who your audience is, what assets you use to promote your products, the message it needs to send out, the formats you want to send it out in, makes a creative for you, and you can deploy it directly to the platform without having to leave Canva.”

Marketing teams can now design and launch ads across platforms like Meta, track insights as they happen and refine future content based on performance metrics. “Your brand system is now available inside the AI you’re working with,” Adams noted. 

Success metrics and enterprise adoption

The impact of Canva’s COS is reflected in notable user metrics: More than 250 million people use Canva every month, just over 29 million of which are paid subscribers. Adams reports that 41 billion designs have been created on Canva since launch, which equates to 1 billion each month. 

“If you break that down, it turns into the crazy number of 386 designs being created every single second,” said Adams. In the early days, by contrast, it took users roughly an hour to create a single design.
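Adams' per-second figure checks out against the monthly rate quoted above (a quick back-of-the-envelope calculation, assuming a 30-day month):

```python
# Reproducing the ~386 designs-per-second figure from the
# ~1 billion designs/month rate quoted above (30-day month assumed).
designs_per_month = 1_000_000_000
seconds_per_month = 30 * 24 * 60 * 60   # 2,592,000 seconds

per_second = designs_per_month / seconds_per_month
print(f"~{per_second:.0f} designs per second")
# prints: ~386 designs per second
```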

Canva customers include Walmart, Disney, Virgin Voyages, Pinterest, FedEx, Expedia and eXp Realty. DocuSign, for one, reported that it unlocked more than 500 hours of team capacity and saved $300,000-plus in design hours by fully integrating Canva into its content creation. Disney, meanwhile, uses translation capabilities for its internationalization work, Adams said. 

Competitors in the design space

Canva plays in an evolving landscape of professional design tools including Adobe Express and Figma; AI-powered challengers led by Microsoft Designer; and direct consumer alternatives like Visme and Piktochart.

Adobe Express (starting at $9.99 a month for premium features) is known for its ease of use and integration with the broader Adobe Creative Cloud ecosystem. It features professional-grade templates and access to Adobe’s extensive stock library, and has incorporated Google's Gemini 2.5 Flash image model and other gen AI features so that designers can create graphics via natural language prompts. Users with some design experience say they prefer its interface, controls and technical advantages over Canva (such as the ability to import high-fidelity PDFs). 

Figma (starting at $3 a month for professional plans) is touted for its real-time collaboration, advanced prototyping capabilities and deep integration with dev workflows; however, some say it has a steeper learning curve, and its higher-precision design tools make it preferable for professional designers, developers and product teams working on more complex projects.

Microsoft Designer (free version available, although a Microsoft 365 subscription starting at $9.99 a month unlocks additional features) benefits from its integration with Microsoft’s AI capabilities, Copilot layout and text generation, and DALL-E-powered image generation. The platform’s “Inspire Me” and “New Ideas” buttons provide design variations, and users can also import data from Excel, add 3D models from PowerPoint and access images from OneDrive.

However, users report that its stock photos and template and image libraries are limited compared to Canva's extensive collection, and its visuals can come across as outdated. 

Canva’s advantage seems to be in its extensive template library (more than 600,000 ready-to-use templates) and asset library (141 million-plus stock photos, videos, graphics, and audio elements).​ Its platform is also praised for its ease of use and an interface friendly to non-designers, allowing them to begin quickly without training.

Canva has also expanded into a variety of content types — documents, websites, presentations, whiteboards, videos, and more — making its platform a comprehensive visual suite rather than just a graphics tool.

Canva has four pricing tiers: Canva Free for one user; Canva Pro for $120 a year for one person; Canva Teams for $100 a year for each team member; and the custom-priced Canva Enterprise. 

Key takeaways: Be open, embrace human-AI collaboration

Canva’s COS is underpinned by Canva’s frontier model, an in-house, proprietary engine based on years of R&D and research partnerships, including the acquisition of visual AI company Leonardo. Adams notes that Canva works with top AI providers including OpenAI, Anthropic and Google. 

For technology teams, Canva’s approach offers important lessons, including a commitment to openness. “There are so many models floating around,” Adams noted; it’s important for enterprises to recognize when they should work with top models and when they should develop their own proprietary ones, he advised. 

For instance, OpenAI and Anthropic recently announced integrations with Canva as a visual layer because, as Adams explained, they realized they didn’t have the capability to create the same kinds of editable designs that Canva can. This creates a mutually-beneficial ecosystem. 

Ultimately, Adams noted: “We have this underlying philosophy that the future is people and technology working together. It's not an either or. We want people to be at the center, to be the ones with the creative spark, and to use AI as a collaborator.”

Amazon stock soars 11% after topping Q3 estimates with $180B in revenue, $21B in profits

An Amazon Prime delivery van outside the company’s Seattle headquarters. (GeekWire File Photo / Kurt Schlosser)

Amazon beat estimates for its third-quarter earnings with $180.2 billion in revenue, up 13% year-over-year, and earnings per share of $1.95, up from $1.43 in the year-ago period.

  • Net income was $21.2 billion, up from $15.3 billion last year.
  • Wall Street expected $177.7 billion in revenue, and earnings per share of $1.56.

Amazon shares were up more than 11% in after-hours trading. Growth in the company’s stock has lagged behind rivals Microsoft and Google this year.

Investors were likely pleased with a re-acceleration in Amazon’s closely watched cloud computing unit, which reported $33 billion in sales, up 20% year-over-year and topping analyst estimates. In a press release, Amazon CEO Andy Jassy said AWS is “growing at a pace we haven’t seen since 2022.”

“We continue to see strong demand in AI and core infrastructure, and we’ve been focused on accelerating capacity — adding more than 3.8 gigawatts in the past 12 months,” Jassy added.

The cloud growth should help Amazon counter the Wall Street narrative that its cloud business is falling behind Microsoft and Google in pursuing the AI opportunity.

  • Amazon and other cloud giants are pouring billions of dollars into capital expenditures to support AI initiatives. Amazon said earlier this year it expects to increase capital expenditures to more than $100 billion in 2025.
  • The company makes most of its operating profits from AWS — $11.4 billion in the third quarter, more than half Amazon’s total operating income.
  • AWS was hit with a major outage last week that took down several major sites and services. It blamed an internal issue within the cloud giant’s infrastructure.

Amazon’s overall operating income reached $17.4 billion in the third quarter — flat compared to a year ago. The company had forecast operating income of $15.5 billion to $20.5 billion.
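The profit-share claim above is easy to verify from the reported figures (a quick illustrative check, using the Q3 numbers as stated):

```python
# Checking that AWS's $11.4B operating income is indeed "more than half"
# of Amazon's $17.4B total Q3 operating income, as reported above.
aws_op_income = 11.4     # $ billions, AWS operating income
total_op_income = 17.4   # $ billions, Amazon total operating income

share = aws_op_income / total_op_income
print(f"AWS contributed ~{share:.0%} of Amazon's operating income")
# prints: AWS contributed ~66% of Amazon's operating income
```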

The company said its Q3 operating income reflected two special charges:

  • A $2.5 billion charge related to a recent settlement with the Federal Trade Commission related to Prime memberships.
  • About $1.8 billion in estimated severance costs related to the massive layoff of roughly 14,000 corporate employees announced earlier this week.

The workforce reduction comes amid an efficiency push at Amazon. Jassy has cited a need to reduce bureaucracy and become more efficient in the new era of artificial intelligence.

  • Reuters reported this week that the number of layoffs could ultimately total as many as 30,000 people, which is still a possibility as the cutbacks continue into next year. 
  • Jassy told employees in a company-wide memo earlier this year that Amazon’s corporate workforce will shrink in the coming years as generative AI takes hold.

Online store sales were $67.4 billion, up 10%.

  • The revenue includes sales from the company’s annual Prime Day sales event from July 8-11.
  • Analysts are watching for impact from tariffs on the company’s retail business, which still makes up the largest portion of its overall revenue.
  • In its Q1 earnings report in April, Amazon added “tariff and trade policies” to a list of factors that create uncertainty in its results, joining existing risks such as inflation, interest rates, and regional labor market constraints.

Here are more details from the third-quarter earnings report:

Advertising: The company’s ad business brought in $17.7 billion in revenue in the quarter, up 24% from the year-ago period, topping estimates. Advertising, along with AWS, is a major profit engine.

Third-party seller services: Revenue from third-party seller services was up 12% to $42.5 billion.

Shipping costs: Amazon spent $25.4 billion on shipping in Q3, up 8%.

Physical stores: The category, which includes Whole Foods and other Amazon grocery stores, posted revenue of $5.6 billion, up 7%.

Headcount: Amazon employs 1.57 million people, up 2% year-over-year. That figure does not include seasonal and contract workers.

Prime: Subscription services revenue, which includes Prime memberships, came in at $12.6 billion, up 11%. 

Guidance: The company forecasts Q4 sales between $206 billion and $213 billion. Operating income is expected to range between $21 billion and $26 billion, compared with $21.2 billion in the year-ago quarter.

$AMZN Amazon Q3 FY25:

• Revenue +13% Y/Y to $180.2B ($2.4B beat).
• Operating margin 10% (+0.5pp Y/Y).
• EPS $1.95 ($0.39 beat).
• Q4 Guidance: ~$209.5B ($1.4B beat).

☁️ AWS:
• Revenue +20% Y/Y to $33.0B.
• Operating margin 35% (-3pp Y/Y).

— App Economy Insights (@EconomyApp) October 30, 2025

Google's Built-In AI Defenses on Android Now Block 10 Billion Scam Messages a Month

Google on Thursday revealed that the scam defenses built into Android safeguard users around the world from more than 10 billion suspected malicious calls and messages every month. The tech giant also said it has blocked over 100 million suspicious numbers from using Rich Communication Services (RCS), an evolution of the SMS protocol, thereby preventing scams before they could even be sent. In

Russian Ransomware Gangs Weaponize Open-Source AdaptixC2 for Advanced Attacks

The open-source command-and-control (C2) framework known as AdaptixC2 is being used by a growing number of threat actors, some of whom are related to Russian ransomware gangs. AdaptixC2 is an emerging extensible post-exploitation and adversarial emulation framework designed for penetration testing. While the server component is written in Golang, the GUI Client is written in C++ QT for

Ex-Intel CEO Pat Gelsinger praises cutting-edge Nvidia chip production with TSMC on US soil, despite Intel missing out — hails manufacturing milestone of US-based supply chain

Intel's ex-CEO, Pat Gelsinger, has praised the news that Nvidia's latest graphics chips are now in full production on American soil. Highlighting how important the silicon supply chain is for national security, he said that he hopes this will allow Nvidia to go harder and faster in its future developments, too.

Sora Unleashes New Era of AI Character Animation

The post Sora Unleashes New Era of AI Character Animation appeared first on StartupHub.ai.

The barrier between imagination and animated reality has just dissolved, fundamentally altering the landscape for content creators and AI developers alike. OpenAI’s latest announcement, “Sora Character Cameos,” showcased in a refreshingly unconventional promotional video, signals a profound shift in how digital characters can be conceived, generated, and deployed. This is not merely an incremental update; […]

The post Sora Unleashes New Era of AI Character Animation appeared first on StartupHub.ai.

Alphabet’s AI Advantage: A Bullish Outlook on Google’s Enduring Dominance

The post Alphabet’s AI Advantage: A Bullish Outlook on Google’s Enduring Dominance appeared first on StartupHub.ai.

Alphabet is not merely participating in the artificial intelligence revolution; it is poised to be its definitive winner, a sentiment articulated by Michael Nathanson, founding partner and senior research analyst at MoffettNathanson, during a recent discussion on CNBC’s ‘Power Lunch.’ His perspective challenges the prevailing narrative that AI could destabilize Google’s foundational search business, instead […]

The post Alphabet’s AI Advantage: A Bullish Outlook on Google’s Enduring Dominance appeared first on StartupHub.ai.

OpenAI Aardvark is a GPT-5 agent that hunts security bugs

The post OpenAI Aardvark is a GPT-5 agent that hunts security bugs appeared first on StartupHub.ai.

OpenAI's Aardvark is an autonomous AI agent that uses GPT-5 to hunt for software vulnerabilities like a human security researcher.

The post OpenAI Aardvark is a GPT-5 agent that hunts security bugs appeared first on StartupHub.ai.

Google’s Model Armor: The AI Bodyguard Preventing Digital Catastrophes

The post Google’s Model Armor: The AI Bodyguard Preventing Digital Catastrophes appeared first on StartupHub.ai.

The proliferation of AI applications, while transformative, introduces an intricate web of new security vulnerabilities that demand a specialized defense. In a recent “Serverless Expeditions” episode, Google Cloud Developer Advocate Martin Omander spoke with Security Advocate Aron Eidelman about Model Armor, Google’s latest offering designed to shield AI applications from a range of emerging threats. […]

The post Google’s Model Armor: The AI Bodyguard Preventing Digital Catastrophes appeared first on StartupHub.ai.

Archy funding hits $20M to kill the dental server closet

The post Archy funding hits $20M to kill the dental server closet appeared first on StartupHub.ai.

Archy's AI platform aims to save dental practices 80 hours a month by automating the tedious admin work that leads to staff burnout.

The post Archy funding hits $20M to kill the dental server closet appeared first on StartupHub.ai.

Alphabet’s AI Investments Drive Record Revenue, Defying Cannibalization Fears

The post Alphabet’s AI Investments Drive Record Revenue, Defying Cannibalization Fears appeared first on StartupHub.ai.

Alphabet’s recent earnings call revealed a pivotal moment for the tech giant: the tangible monetization of its extensive AI infrastructure bets, validating a long-term strategy that is now driving unprecedented growth across its core businesses. This robust performance underscores a critical shift in the AI landscape, where strategic investments are now yielding significant, measurable returns, […]

The post Alphabet’s AI Investments Drive Record Revenue, Defying Cannibalization Fears appeared first on StartupHub.ai.

Anthropic’s Latest: Claude Code on the Web and Haiku 4.5 Reshape Developer Workflows

The post Anthropic’s Latest: Claude Code on the Web and Haiku 4.5 Reshape Developer Workflows appeared first on StartupHub.ai.

The future of software development is not merely assisted by AI, but actively orchestrated by it, a vision Anthropic brings closer with its latest advancements: Claude Code on the Web and the powerful, cost-efficient Haiku 4.5 model. These releases, detailed by a company representative in a recent video, signal a profound shift towards more intuitive, […]

The post Anthropic’s Latest: Claude Code on the Web and Haiku 4.5 Reshape Developer Workflows appeared first on StartupHub.ai.

Cameo CEO: OpenAI’s Trademark Infringement Threatens Brand Authenticity

The post Cameo CEO: OpenAI’s Trademark Infringement Threatens Brand Authenticity appeared first on StartupHub.ai.

The burgeoning landscape of artificial intelligence, while promising innovation, is simultaneously exposing the critical fault lines in intellectual property law, particularly concerning brand identity. This tension was starkly illuminated when Steven Galanis, CEO of the personalized celebrity video platform Cameo, appeared on CNBC’s “Money Movers” to discuss his company’s trademark lawsuit against OpenAI. Galanis articulated […]

The post Cameo CEO: OpenAI’s Trademark Infringement Threatens Brand Authenticity appeared first on StartupHub.ai.

Esri AWS AI deal targets generative AI for maps

The post Esri AWS AI deal targets generative AI for maps appeared first on StartupHub.ai.

The Esri AWS AI collaboration aims to transform static maps into dynamic, predictive tools using generative AI foundation models.

The post Esri AWS AI deal targets generative AI for maps appeared first on StartupHub.ai.

AMD Strix Point Performance Continues Evolving Nicely With Ubuntu 25.10

This week marks fifteen months since AMD Strix Point laptops began shipping. At the end of July 2024 the Linux performance and support were already in good shape, and the Linux performance has only improved since, making these AMD Zen 5 laptops run even better. Here is a fresh look at how performance has evolved since launch day, the gains from moving to the recently released Ubuntu 25.10, and some further performance advantages from moving to the in-development Linux 6.18 kernel.

AMD Radeon RX 6000 and RX 5000 to Miss Day-One Game Optimization, Still Part of Main Driver Branch

In a rather shocking move, AMD has given the Radeon RX 6000 series and RX 5000 series GPUs a partial retirement by removing them from the regular monthly game optimization cycle. The monthly driver updates will continue to support these cards, addressing critical security vulnerabilities and fixing bugs, but the game optimizations they include will only target the RX 7000 and RX 9000 series. What makes the move surprising is that the Radeon RX 6000 series in particular is less than four years old, and gamers who purchased those cards did so at extremely inflated prices: the RX 6000 series, along with NVIDIA's RTX 30-series, launched amid the cryptocurrency mining rush that saw miners soak up GPU inventory.

In a statement to PCGH, machine translated from German, AMD says:
RDNA 1 and RDNA 2 graphics cards will continue to receive driver updates for critical security and bug fixes. To focus on optimizing and delivering new and improved technologies for the latest GPUs, AMD Software Adrenalin Edition 25.10.2 is placing Radeon RX 5000 and RX 6000 series graphics cards (RDNA 1 and RDNA 2) into maintenance mode. Future driver updates with targeted game optimizations will focus on RDNA 3 and RDNA 4 GPUs.

SK hynix Prepares 7200 MT/s DDR5 Chips with New 2 GB B-Die and 4 GB M-Die

SK hynix appears to be expanding its DDR5 lineup with several new chips rated for a native 7200 MT/s speed above the current JEDEC 6400 MT/s standard. Listings spotted by @unikoshardware on Chinese retailer JD.com show new SK hynix DDR5 modules using part numbers not yet seen in the market. According to the findings, SK hynix has prepared four new dies, all capable of 7200 MT/s (denoted by the "KB" suffix), with densities ranging from 2 GB to 4 GB. These include the company's first 2 GB B-die and a 4 GB M-die, marking the first time this process node has been offered at that capacity.

Just last week, we reported on the second generation of SK hynix DDR5 memory chips with 3 GB A-die ICs where the sample reportedly uses an 8-layer PCB. As noted before, to fully exploit the new die's potential, manufacturers are expected to move to 10 or 12-layer PCBs for greater signal integrity, something that would come in handy especially when overclocked. These listings likely represent early samples, with improved designs expected once production ramps up. SK hynix hasn't officially announced the new chips yet, but the listings suggest development is well underway.

Intel and BOE to Deploy 1 Hz Displays to Increase Laptop Battery Life by Up to 65%

Intel has partnered with Chinese display manufacturer BOE to deploy laptop displays that can operate at 1 Hz. Yes, a single hertz: when on-screen content is static, the panel refreshes just once per second, reducing power usage and extending laptop battery life by up to 65%. Displays with variable refresh rates have existed for some time, but they have mostly been limited to a range from 48 Hz up to whatever top refresh rate the panel can achieve. Intel and BOE have now developed a technology that drops the panel to one refresh per second for static content, such as a displayed image, while boosting it back to its native refresh rate during scrolling and dynamic content.

That is where Multi-Frequency Display (MFD) technology steps in. Using Intel's graphics drivers and the operating system kernel, MFD automatically recognizes the content displayed on the user's screen and raises the frequency only when needed. This approach maximizes power efficiency and prolongs the system's battery life. Additionally, Intel and BOE designed SmartPower HDR technology to address excessive energy consumption and inconsistent brightness in HDR mode. It works by dynamically adjusting display voltage based on the luminosity of on-screen content, optimizing energy efficiency. During HDR video playback, for instance, brightness adapts to match the footage: SmartPower HDR substantially reduces power usage during darker scenes while delivering exceptional visual quality in brighter moments.
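The content-aware switching described above can be illustrated with a toy policy. This is a hypothetical sketch, not Intel's driver logic: a real driver would react to damage regions reported by the compositor, while here a byte-for-byte frame comparison stands in for "nothing on screen changed".

```python
# Hypothetical sketch of a content-aware refresh policy: drop to 1 Hz while
# frames are identical, restore the panel's native rate when content changes.
NATIVE_HZ = 120
IDLE_HZ = 1

def choose_refresh_rate(previous_frame: bytes, current_frame: bytes) -> int:
    """Return the target refresh rate for the next interval."""
    return IDLE_HZ if current_frame == previous_frame else NATIVE_HZ

# A static photo for two intervals, then the user starts scrolling:
frames = [b"photo", b"photo", b"photo", b"scroll-1", b"scroll-2"]
rates = [choose_refresh_rate(prev, cur) for prev, cur in zip(frames, frames[1:])]
# static frames -> 1 Hz; changing frames -> native rate
```

The interesting engineering is in the detection step; once the driver knows a frame is static, the rate decision itself is this simple.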

Heart Machine Workers Suffer Another Round of Layoffs Ahead of Next Game Launch

Heart Machine revealed earlier this month that it was laying off a significant number of staff and "winding down" the development of its most recent early access game, Hyper Light Breaker. This was shortly after the studio announced its upcoming stylized side-scroller action horror game, Possessor(s). Now, however, it seems as though a number of developers and staff who worked on Possessor(s) are also getting the boot, less than two weeks before the game's November 11 launch. The layoffs weren't announced by the studio itself, but rather in a series of posts on Bluesky by the affected workers. One post read: "My time at Heart Machine has sadly come to a rather abrupt end," suggesting an immediate end to their six-year stint at the studio. This is the third round of layoffs at Heart Machine in the last year, with the first coming in November 2024, shortly ahead of the launch of Hyper Light Breaker.

At the time of writing, it's unclear just how many workers were laid off, but the cuts appear to affect multiple roles across departments. One worker in PR and community management commented in a video that "by the time Possessor(s) comes out on November 11, I don't know that anyone who worked on it will even be at the company anymore." She also said her layoff was effective immediately, echoing other posts, though some staff members appear to have been retained until the game launches. That was confirmed by a laid-off producer, who said her time at Heart Machine "will come to an end after Possessor(s) ships," adding that she is too busy "literally shipping the game to start actively looking for work."

The ‘enormous barrier’ that threatens economic growth in the Pacific Northwest

A life sciences panel at the Cascadia Innovation Corridor conference Oct. 29, 2025 in Seattle. From left: Marc Cummings, Life Sciences Washington; Dr. Bonnie Nagel, Oregon Health Sciences University; Dr. Tom Lynch, Fred Hutch Cancer Center. (“PhotosbyKim” Photo)

Leaders in the Pacific Northwest are largely bullish on the region’s continued economic success — but one threat to the region’s fiscal progress worries them in particular.

“What always strikes me, whether I’m in City Hall in Vancouver or Seattle or Portland, is that everybody talks about the same thing — the high cost of housing,” said Microsoft President Brad Smith at this week’s Cascadia Innovation Corridor conference in Seattle.

“It’s become an enormous barrier, not just for attracting new talent, but for enabling teachers and police officers and nurses and firefighters to live in the communities in which they serve,” he added.

Dr. Tom Lynch, president and director of Seattle’s Fred Hutch Cancer Center, was more succinct.

“My people can’t find places to live,” Lynch said during a Tuesday panel at the same event.

Those concerns are bolstered by research in a new report on the economic viability of the corridor running from Vancouver, B.C., through Seattle to Portland.

The report cites housing costs as one of the top threats to the region's success, noting that Vancouver's housing-cost-to-income ratio is among the worst in the world, while Seattle's median home prices relative to wages have doubled in the past 15 years. Portland, meanwhile, reports net out-migration as workers move to more affordable areas.

Other concerns include rising business costs and regulations, declining numbers of skilled workers and new restrictions on foreign talent immigrating to the U.S., and clean energy shortages.

Microsoft President Brad Smith speaking at the Cascadia Innovation Corridor conference. (GeekWire Photo / Todd Bishop)

“We’ve got to find ways to be able to increase the density of our housing, come up with creative solutions for allowing more families to be able to live close to where the jobs are,” Lynch said.

Smith agreed, adding, “The only way to dig ourselves out of this is to harness the power of the market through public-private partnerships, to recognize that zoning and permitting needs to be put to work to accelerate investment.”

Area tech giants have been pursuing those partnerships to tackle the challenge.

In 2019, Microsoft pledged $750 million to boost the affordable housing inventory and has helped build or retain 12,000 units in the region. Amazon in recent years has committed $3.4 billion for housing across three hubs nationally where it has large operations. The company in September celebrated a milestone of building or preserving 10,000 units in the Seattle area.

Despite those efforts, Smith said, the shortage keeps worsening; new construction starts in 2025 are expected to be the lowest since before the Great Recession.

The city of Seattle, for one, is looking to sweeten a property-tax exemption deal for developers that could encourage construction, and it is also applying AI to the permitting process in an effort to speed up projects.

Smith also promoted the long-held vision of a high-speed rail line in the Pacific Northwest that would make commutes much faster between growing urban hubs. But a panel Wednesday cautioned that dream is still many years out.

Shares of Navan Closed Down 20% In Long-Awaited IPO Debut

Shares of Navan closed at $20, down 20%, in first-day trading on Thursday, indicating lackluster investor demand for the long-awaited debut.

Navan, which operates an expense management platform with an emphasis on travel, had priced shares for its offering at $25 each late Wednesday. Formerly called TripActions, the company pivoted to a broader platform after its revenue fell to zero when the COVID pandemic hit.

The offering raised $923.1 million for the company, whose shares are trading on the Nasdaq under the ticker NAVN. It set an initial valuation of around $6.2 billion.

The move to the public markets has been a long time coming for Palo Alto, California-based Navan, which reportedly first submitted confidential paperwork for a planned offering more than three years ago.

The company had raised $1.2 billion in debt financing and $1 billion in equity funding from venture investors and credit providers, per Crunchbase data. Major venture stakeholders include Andreessen Horowitz, Lightspeed Venture Partners and Zeev Ventures.

Growing revenue

Navan had revenue of $329 million in the first half of 2025, up 30% year over year. Growth comes as the company has been investing in developing its agentic AI offering, Navan Cognition, to automate more cumbersome tasks around travel planning and reporting.

Still, the company remains far from profitable. Navan's net loss for the first half of this year came in just shy of $100 million, up about 7% from the year-earlier period. The loss comes amid higher spending on both R&D and sales and marketing, common for companies on the IPO track looking to appeal to growth-hungry investors.

Per its IPO filing, Navan has incurred net losses in each year since its inception in 2015 and “may not achieve or, if achieved, sustain profitability in the future.”

IPO activity has picked up in 2025, with Navan one of several larger recent debuts, including well-received entries by consumer fintech Klarna and blockchain lender Figure. We’re also seeing heightened buzz around potential new market entrants.

Related Crunchbase query:

Related reading:

Illustration: Dom Guzman

ARC Raiders Surpasses 200K Concurrent Players on Steam at Launch, Beats The Finals Launch Numbers

ARC Raiders game scene featuring a player aiming at a mechanical creature, with the text “enlist. resist” and “ARCRAIDERS.COM” visible.

Update 30/10/2025: It's official, ARC Raiders' launch on Steam is bigger than The Finals, as it reaches 243,386 concurrent players on Steam. Original Story: Embark Studios' third-person extraction shooter, ARC Raiders, is now live and available on PC, PS5, and Xbox Series X/S and, at least on Steam, we know that the game is having a massive launch. It's even on track to surpass what The Finals accomplished in 2024, as ARC Raiders has over 200K concurrent players on Steam at the time of this writing. Per SteamDB, an hour after it went live, it had reached 140K concurrent players […]

Read full article at https://wccftech.com/arc-raiders-surpasses-200k-concurrent-players-on-steam-at-launch/

There’s a New Thief Game Coming Out This Year, Though Only for VR Devices

Thief VR: Legacy of Shadow title screen with hooded figure in a night cityscape.

UK-based studio Maze Theory (which previously made several Doctor Who VR games and Peaky Blinders: The King's Ransom) and publisher Vertigo Games have just announced the release date of Thief: Legacy of Shadow, a new stealth action game coming out on December 4. Unfortunately, the game will only be available for virtual reality devices, which still comprise a minuscule portion of the overall gaming industry. That said, if you are a VR aficionado, chances are you already have a PlayStation VR2, Quest 2 or 3, or a Steam VR-compatible headset. Legacy of Shadow will bring players back to The City […]

Read full article at https://wccftech.com/theres-a-new-thief-game-coming-out-this-year-though-only-for-vr-devices/

AMD RDNA 1 & 2 GPU Driver Support Moved To “Maintenance” Mode, Game Optimizations & New Tech For RDNA 3, 4 & Beyond

AMD Radeon graphics card with sad face emojis on fans against a red background.

The first and second-gen RDNA lineups are being ditched so quickly, and as per the company's latest statement, these will only receive critical updates. AMD Confirms End of Game Optimization and Feature Updates for Radeon RX 5000 and RX 6000 Series AMD's first RDNA GPU series, aka Radeon RX 5000, is hardly 6 years old, and if you think it is too early to see a drop in official optimization and feature updates, then we have the same news for RX 6000 GPU owners. If you checked the latest release notes for AMD's latest Adrenalin Edition 25.10.2, which officially adds […]

Read full article at https://wccftech.com/amd-rdna-1-2-gpu-driver-support-moved-to-maintenance-mode-game-optimizations-new-tech-for-rdna-3-4-beyond/

Call of Duty Movie Will Be Directed by Peter Berg and Written by Yellowstone Creator Taylor Sheridan

Paramount logo Call of Duty movie over a scene with armed soldiers and helicopters in an urban warfare setting.

Today, Deadline reports that the Call of Duty movie has secured a director and a writer: Peter Berg and Taylor Sheridan. Berg will direct and also co-write alongside Taylor Sheridan. Berg is known for directing several action thriller movies, including 2007's The Kingdom, 2013's Lone Survivor, and 2018's Mile 22. Sheridan, on the other hand, is primarily known as the creator of the Yellowstone franchise (as well as Mayor of Kingstown, Tulsa King, and Lioness), but he also wrote the two Sicario movies. Berg and Sheridan worked together on Hell or High Water, the acclaimed 2016 crime drama film that […]

Read full article at https://wccftech.com/call-of-duty-movie-directed-peter-berg-written-taylor-sheridan/

Metal Gear Solid Delta: Snake Eater’s Fox Hunt Mode Now Live on PC, PS5, and Xbox Series X/S

Metal Gear Solid Delta: Snake Eater

Metal Gear Solid Delta: Snake Eater launched late this summer on August 28, 2025, on PC, PS5, and Xbox Series X/S, without its announced multiplayer mode, Fox Hunt. We learned a little before Snake Eater's release that Fox Hunt would not arrive alongside the rest of the game, and now that day is finally here, marked with a new gameplay trailer showcasing the mode. The PvP stealth-action mode leans on all the stealth mechanics in the core Snake Eater campaign, and challenges players to make use of everything in Snake's toolbox as they try to be the sneakiest Fox Unit […]

Read full article at https://wccftech.com/metal-gear-solid-delta-snake-eater-fox-hunt-now-live-pc-ps5-xbox-series-x-s/

Intel’s Ex-CEO Pat Gelsinger Praises NVIDIA’s US-Made Blackwell Wafer, Applauds Efforts to Revive Domestic Chip Manufacturing

Intel's former CEO, Pat Gelsinger, has shared his thoughts on NVIDIA producing the first Blackwell chip wafer in the US, expressing his pleasure with the pursuit of American manufacturing. Intel's Pat Gelsinger Supports NVIDIA's Efforts to Bring Advanced Product Manufacturing to the US This marks one of the rare occasions where Gelsinger has actually appreciated NVIDIA's efforts in the AI segment, as, based on some of his past remarks about the firm, Team Green didn't align with what Intel's former CEO had expected from AI. On a post on X, Pat Gelsinger expressed appreciation for NVIDIA's efforts to bring manufacturing […]

Read full article at https://wccftech.com/intel-ex-ceo-pat-gelsinger-praises-nvidia-us-made-blackwell-wafer/

Apex Legends Season 27: Amped Arrives Next Week With an Olympus Refresh and Buffs for Multiple Legends

Female character flying with jetpack in a futuristic video game setting.

While EA is all-in on Battlefield 6 right now after it had a massive launch at the beginning of October and earlier this week launched its battle royale mode, Battlefield REDSEC, Respawn is still chugging away at Apex Legends, with its 27th season set to arrive next week, titled 'Amped.' The new season arrives with a refresh to one of the game's more popular maps, Olympus, and buffs for a few Legends, specifically Valkyrie, Rampart, and Horizon. This season also adds new mechanics that'll make the game even faster than it already is, with a new mantle boost giving players […]

Read full article at https://wccftech.com/apex-legends-season-27-amped-arrives-next-week/

Apple iPhone Air Battery Replacement Will Cost You $119, iPhone 17 Pro Max Display Will Set You Back By $379

iPhone with Parts & Service screen among disassembled components, including a battery labeled original.

After releasing a bespoke self-repair manual for each of its iPhone 17 models, Apple has now made available the spare parts for the new lineup, and some of those are, unsurprisingly, quite pricey. Apple has now made available the key spare parts for the iPhone 17 lineup via its Self-Service Repair Store Before going further, do note that the self-repair manuals and these spare parts have constituted a significant component of this year's iFixit score improvements, albeit very slight ones, for the new Apple hardware. The following spare parts are now available in Apple's Self-Service Repair Store for the base […]

Read full article at https://wccftech.com/apple-iphone-air-battery-replacement-will-cost-you-119-iphone-17-pro-max-display-will-set-you-back-by-379/

New Final Fantasy Tactics – The Ivalice Chronicles Mod Begins Restoration of War of the Lions Content

Final Fantasy Tactics: The Ivalice Chronicles cover art with two armored characters and two background figures.

Final Fantasy Tactics - The Ivalice Chronicles doesn't feature any of the additional content found in the War of the Lions release and the mobile versions of the game, but this could quickly become a thing of the past, as a new mod now available online restores a small portion of this additional content. The WotL Character Repair mod, developed by Dana Crysalis and now available for download for free from Nexus Mods, fixes Balthier and Luso, making it possible to add them to the party via Cheat Engine tables and use them in battle with their unique Jobs and […]

Read full article at https://wccftech.com/new-final-fantasy-tactics-the-ivalice-chronicles-mod-begins-restoration-of-war-of-the-lions-content/

Amazon Game Studios Lord of the Rings MMO Reportedly Cancelled Again Amidst Amazon’s Mass Layoffs

The Lord of the Rings and Amazon Games logos in an office setting.

A massive round of layoffs at Amazon, which cut 14,000+ employees and ended further development of New World: Aeternum, has also reportedly killed (for a second time) the Lord of the Rings MMO that was in production at Amazon Game Studios. As spotted by Rock Paper Shotgun, Ashleigh Amrine, a now-former Amazon Game Studios senior gameplay engineer, confirmed in a post on her personal LinkedIn page that the "fledgling Lord of the Rings game" was part of the cuts at Amazon. "This morning I was part of layoffs at Amazon Games, alongside my incredibly talented peers on New World and our […]

Read full article at https://wccftech.com/amazon-game-studios-lord-of-the-rings-mmo-cancelled-again-amidst-mass-layoffs/

iFixit: Apple’s Self-Service Tools For The M5 iPad Pro Bump Up Its Repairability Score

Person wearing an iFixit shirt standing next to an Apple iMac displaying a circuit board image in a workshop setting.

iFixit has just published a nearly 6.5-minute video on YouTube detailing the repairability metrics for the new Apple M5 iPad Pro, concluding that the device remains one of the least repairable hardware products from Apple. However, the new self-service tools do manage to boost its overall repairability score. iFixit: "At just 5.1mm thickness, it's thinner than an iPhone Air, which means the screen is mounted flush against the internals" iFixit has noted the following about the new M5 iPad Pro: On the whole, iFixit has pegged a 5/10 provisional repairability score to the new M5 iPad Pro. The M5 iPad […]

Read full article at https://wccftech.com/ifixit-apples-self-service-tools-for-the-m5-ipad-pro-bump-up-its-repairability-score/

NVIDIA’s CEO Is Apparently Having a ‘Great Time’ With Samsung & Hyundai Executives in Korea; Fried Chicken, Beers, and Plenty of Spicy Comments

Three people raising beer glasses at a restaurant table with a Sony handheld recorder visible.

NVIDIA's Jensen Huang is currently in South Korea for the APEC summit, and it seems he is having a pretty interesting day, spending time with his 'executive friends' at Samsung and Hyundai. Jensen Got a 'Little Too Comfortable' In His Visit to Korea, After Delivering the GTC 2025 Keynote This week has been a jam-packed one for NVIDIA's CEO, as Jensen delivered one of the most important keynotes of his career and then took a flight straight to South Korea for the APEC summit, where he met with Samsung's Chairman Lee Jae-yong and the President of Hyundai Motors, Chung Eui-sun. […]

Read full article at https://wccftech.com/nvidia-ceo-is-having-a-great-time-with-samsung-hyundai-executives/

ARC Raiders Arrives With NVIDIA DLSS 4 With 3.6X Performance Boost on RTX 50 Series GPUs

ARC Raiders text next to a character wearing futuristic armor with branding SBWN CORV on helmet.

ARC Raiders is out today on PC, PS5, and Xbox Series X/S consoles, but more importantly for PC players, Embark Studios' latest arrives with support for NVIDIA DLSS 4 with Multi-Frame Generation and NVIDIA Reflex. Furthermore, if you have an RTX 50 Series graphics card in your PC, then you'll be able to multiply the frame rates you see in ARC Raiders by an average of 3.6X, even when playing at 4K. According to NVIDIA, when playing ARC Raiders on an RTX 50 Series card, with DLSS 4 and Multi-Frame Generation while also using DLSS Super Resolution, you can "multiply […]

Read full article at https://wccftech.com/arc-raiders-nvidia-dlss-4-better-performance-multi-frame-generation/

War Sails, the Naval Expansion for Mount & Blade II: Bannerlord, Is Out on November 26

Mount & Blade II: Bannerlord War Sails scene with Viking warriors in battle on ships.

Today, independent developer TaleWorlds Entertainment has confirmed the release date of War Sails, the upcoming expansion for Mount & Blade II: Bannerlord. War Sails was originally scheduled to launch in June, though it was ultimately delayed by TaleWorlds. The expansion is now set to go live on November 26 at 00:00 Pacific Time, 03:00 Eastern Time, 09:00 Central European Time. It will be a simultaneous release on PC and consoles (PlayStation 5 and Xbox Series S|X). Pricing has been confirmed to be $24.99. The announcement was paired with an extensive gameplay showcase that demonstrated the expansion's main features. Players learned […]

Read full article at https://wccftech.com/war-sails-naval-expansion-mount-blade-ii-bannerlord-out-november-26/

keinsaas Navigator beta – Generate production-ready n8n workflows, expert validated


keinsaas Navigator combines AI speed with expert reliability to create n8n workflows that actually work. Simply describe your process, and our AI generates a complete workflow using knowledge from 5000+ proven templates. The key difference: automation experts review and optimize everything before delivery.

In 24 hours, you get a production-ready n8n workflow with setup docs, ready to deploy. No technical learning curve, no broken implementations - just reliable automation from manual process to professional solution.

View startup

New "Brash" Exploit Crashes Chromium Browsers Instantly with a Single Malicious URL

A severe vulnerability disclosed in Chromium's Blink rendering engine can be exploited to crash many Chromium-based browsers within a few seconds. Security researcher Jose Pino, who disclosed details of the flaw, has codenamed it Brash. "It allows any Chromium browser to collapse in 15-60 seconds by exploiting an architectural flaw in how certain DOM operations are managed," Pino said in a

The Sapphire Edge AI 370 is one of the smallest and most impressive mini PCs I've tested

This is one of the smallest and most powerful mini PCs on the market, featuring an AMD Ryzen AI 9 HX 370 with an integrated 890M GPU. Arriving barebones, it lets you easily tailor the machine to your needs. Whether you use it in the workplace, as a content creation or development hub, or as an AI enthusiast wanting to see what's possible, its potential defies its small size.

Inside the UW Allen School: Six ‘grand challenges’ shaping the future of computer science

Magdalena Balazinska, director of the UW Allen School of Computer Science & Engineering, opens the school’s annual research showcase Wednesday in Seattle. (GeekWire Photo / Todd Bishop)

The University of Washington’s Paul G. Allen School of Computer Science & Engineering is reframing what it means for its research to change the world.

In unveiling six “Grand Challenges” at its annual Research Showcase and Open House in Seattle on Wednesday, the Allen School’s leaders described a blueprint for technology that protects privacy, supports mental health, broadens accessibility, earns public trust, and sustains people and the planet.

The idea is to “organize ourselves into some more specific grand challenges that we can tackle together to have an even greater impact,” said Magdalena Balazinska, director of the Allen School and a UW computer science professor, opening the school’s annual Research Showcase and Open House.

Here are the six grand challenges:

  • Anticipate and address security, privacy, and safety issues as tech permeates society.
  • Make high-quality cognitive and mental health support available to all.
  • Design technology to be accessible at its inception — not as an add-on.
  • Design AI in a way that is transparent and equally beneficial to all.
  • Build systems that can be trusted to do exactly what we want them to do, every time.
  • Create technologies that sustain people and the planet.

Balazinska explained that the list draws on the strengths and interests of its faculty, who now number more than 90, including 74 on the tenure track.

With total enrollment of about 2,900 students, last year the Allen School graduated more than 600 undergrads, 150 master’s students, and 50 Ph.D. students.

The Allen School has grown so large that subfields like systems and NLP (natural language processing) risk becoming isolated “mini departments,” said Shwetak Patel, a University of Washington computer science professor. The Grand Challenges initiative emerged as a bottom-up effort to reconnect these groups around shared, human-centered problems. 

Patel said the initiative also encourages collaborations on campus beyond the computer science school, citing examples like fetal heart rate monitoring with UW Medicine.

A serial entrepreneur and 2011 MacArthur Fellow, Patel recalled that when he joined UW 18 years ago, his applied and entrepreneurial focus was seen as unconventional. Now it’s central to the school’s direction. The grand challenges initiative is “music to my ears,” Patel said.

In tackling these challenges, the Allen School has a unique advantage over many other computer science schools. Eighteen faculty members currently hold what are known as "concurrent engagements," formally splitting their time between the Allen School and companies and organizations such as Google, Meta, Microsoft, and the Allen Institute for AI (Ai2).

University of Washington computer science professor Shwetak Patel at the Paul G. Allen School’s annual research showcase and open house. (GeekWire Photo / Taylor Soper)

This is a “superpower” for the Allen School, said Patel, who has a concurrent engagement at Google. These arrangements, he explained, give faculty and students access to data, computing resources, and real-world challenges by working directly with companies developing the most advanced AI systems.

“A lot of the problems we’re trying to solve, you cannot solve them just at the university,” Patel said, pointing to examples such as open-source foundation models and AI for mental-health research that depend on large-scale resources unavailable in academia alone.

These roles can also stretch professors thin. “When somebody’s split, there’s only so much mental energy you can put into the university,” Patel said. Many of those faculty members teach just one or two courses a year, requiring the school to rely more on lecturers and teaching faculty.

Still, he said, the benefits outweigh the costs. “I’d rather have 50% of somebody than 0% of somebody, and we’ll make it work,” he said. “That’s been our strategy.”

The Madrona Prize, an annual award presented at the event by the Seattle-based venture capital firm, went to a project called “Enhancing Personalized Multi-Turn Dialogue with Curiosity Reward.” The system makes AI chatbots more personal by giving them a “curiosity reward,” motivating the AI to actively learn about a user’s traits during a conversation to create more personalized interactions.

On the subject of industry collaborations, the lead researcher on the prize-winning project, UW Ph.D. student Yanming Wan, conducted the research while working as an intern at Google DeepMind. (See full list of winners and runners-up below.)

At the evening poster session, graduate students filled the rooms to showcase their latest projects — including new advances in artificial intelligence for speech, language, and accessibility.

DopFone: Doppler-based fetal heart rate monitoring using commodity smartphones

Poojita Garg, a second-year PhD student.

DopFone transforms phones into fetal heart rate monitors. It uses the phone's speaker to transmit a continuous sine wave and its microphone to record the reflections, then processes the audio recordings to estimate fetal heart rate. It aims to be an alternative to Doppler ultrasounds, which require trained staff and aren't practical for frequent remote use.

“The major impact would be in the rural, remote and low-resource settings where access to such maternity care is less — also called maternity care deserts,” said Poojita Garg, a second-year PhD student.
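The core signal-processing step behind this kind of system, measuring how far an echo's frequency has shifted from the transmitted tone, can be sketched in a few lines. This is an illustrative simulation, not the DopFone pipeline: the 4 kHz transmit tone, the search band, and the simulated 30 Hz shift are arbitrary choices, and a real fetal heart rate estimator would additionally track how that shift varies over time.

```python
import numpy as np

FS = 44_100        # sample rate (Hz), typical for phone audio
F_TX = 4_000.0     # transmitted continuous sine wave (Hz); illustrative choice
DURATION = 2.0     # seconds of recording analyzed (0.5 Hz FFT resolution)

def doppler_shift_hz(recording: np.ndarray, fs: int = FS, f_tx: float = F_TX) -> float:
    """Estimate the Doppler shift of the dominant reflection near f_tx.

    Takes the FFT of the recorded audio, searches a narrow band around the
    transmitted frequency, and returns peak_frequency - f_tx.
    """
    window = np.hanning(len(recording))
    spectrum = np.abs(np.fft.rfft(recording * window))
    freqs = np.fft.rfftfreq(len(recording), d=1.0 / fs)
    band = (freqs > f_tx - 200) & (freqs < f_tx + 200)  # +/- 200 Hz search band
    peak = freqs[band][np.argmax(spectrum[band])]
    return peak - f_tx

# Simulate an echo from a reflector moving toward the phone, shifted by 30 Hz:
t = np.arange(int(FS * DURATION)) / FS
echo = 0.1 * np.sin(2 * np.pi * (F_TX + 30.0) * t)
noise = 0.01 * np.random.default_rng(0).normal(size=t.size)
estimate = doppler_shift_hz(echo + noise)  # close to 30.0
```

A positive shift indicates motion toward the phone; periodic swings in the estimate over successive short windows are what would encode the heartbeat.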

CourseSLM: A Chatbot Tool for Supporting Instructors and Classroom Learning

Marquiese Garrett, a sophomore at the UW.

This custom-built chatbot is designed to help students stay focused and build real understanding rather than relying on quick shortcuts. The system uses built-in guardrails to keep learners on task and counter the distractions and over-dependence that can come with general large language models.

Running locally on school devices, the chatbot helps protect student data and ensures access even without Wi-Fi.

“We’re focused on making sure students have access to technology, and know how to use it properly and safely,” said Marquiese Garrett, a sophomore at the UW.

Efficient serving of SpeechLMs with VoxServe

Keisuke Kamahori, a third-year PhD student at the Allen School.

VoxServe makes speech-language models run more efficiently. It uses a standardized abstraction layer and interface that allows many different models to run through a single system. Its key innovation is a custom scheduling algorithm that optimizes performance depending on the use case.

The approach makes speech-based AI systems faster, cheaper, and easier to deploy, paving the way for real-time voice assistants and other next-gen speech applications.

“I thought it would be beneficial if we can provide this sort of open-source system that people can use,” said Keisuke Kamahori, a third-year Ph.D. student at the Allen School.
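The abstraction-plus-scheduler idea can be illustrated with a toy priority queue: every model runs behind one interface, and a pluggable policy decides who goes first. The class names and the latency-first policy below are invented for illustration, not VoxServe’s API.

```python
import heapq
import itertools
from dataclasses import dataclass, field
from typing import Callable

@dataclass(order=True)
class Request:
    priority: float
    seq: int = field(compare=True)                    # tie-break: FIFO within a priority
    run: Callable[[], str] = field(compare=False)     # the model call, behind one interface

class Scheduler:
    def __init__(self, policy: Callable[[str], float]):
        self.policy = policy                          # maps a use case to a priority
        self.queue: list[Request] = []
        self.counter = itertools.count()

    def submit(self, use_case: str, run: Callable[[], str]) -> None:
        heapq.heappush(self.queue, Request(self.policy(use_case), next(self.counter), run))

    def drain(self) -> list[str]:
        out = []
        while self.queue:
            out.append(heapq.heappop(self.queue).run())
        return out

# Real-time voice traffic preempts offline batch jobs.
latency_first = {"realtime": 0, "batch": 1}
sched = Scheduler(lambda uc: latency_first[uc])
sched.submit("batch", lambda: "transcribe podcast")
sched.submit("realtime", lambda: "assistant reply")
print(sched.drain())   # ['assistant reply', 'transcribe podcast']
```

Swapping the policy function is what lets one serving system optimize for different use cases.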

ConvFill: Model collaboration for responsive conversational voice agents

Zachary Englhardt (left), a fourth-year PhD student, and Vidya Srinivas, a third-year PhD student.

ConvFill is a lightweight conversational model designed to reduce the delay in voice-based large language models. The system responds quickly with short, initial answers, then fills in more detailed information as larger models complete their processing.

By combining small and large models in this way, ConvFill delivers faster responses while conserving tokens and improving efficiency — an important step toward more natural, low-latency conversational AI.

“This is an exciting way to think about how we can combine systems together to get the best of both worlds,” said Zachary Englhardt, a third-year Ph.D. student. “It’s an exciting way to look at problems.”
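The draft-then-fill pattern is easy to sketch with two concurrent “models”: the fast one answers immediately while the slow one completes the answer in the background. Both model functions here are stand-ins with made-up delays, not ConvFill’s actual components.

```python
import asyncio

async def small_model(q: str) -> str:      # fast, terse draft
    await asyncio.sleep(0.01)
    return "Paris."

async def large_model(q: str) -> str:      # slow, detailed completion
    await asyncio.sleep(0.05)
    return "Paris. It is also France's largest city."

async def answer(q: str):
    draft = asyncio.create_task(small_model(q))
    detail = asyncio.create_task(large_model(q))   # started concurrently, not after
    yield await draft                              # user hears this with low latency
    yield await detail                             # filled in once the big model finishes

async def main():
    chunks = [c async for c in answer("What is the capital of France?")]
    print(chunks)                                  # draft first, detail second

asyncio.run(main())
```

Because both tasks start together, the perceived latency is the small model’s, while the answer quality converges to the large model’s.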

ConsumerBench: Benchmarking generative AI on end-user devices

Yile Gu, a third-year PhD student at the Allen School.

Running generative AI locally — on laptops, phones, or other personal hardware — introduces new system-level challenges in fairness, efficiency, and scheduling.

ConsumerBench is a benchmarking framework that tests how well generative AI applications perform on consumer hardware when multiple AI models run at the same time. The open-source tool helps researchers identify bottlenecks and improve performance on consumer devices.

There are a number of benefits to running models locally: “There are privacy purposes — a user can ask for questions related to email or private content, and they can do it efficiently and accurately,” said Yile Gu, a third-year Ph.D. student at the Allen School.
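The basic measurement is simple to sketch: time a workload alone, then again while sharing the machine, and report the slowdown. The CPU-bound loop below is a stand-in for a local model, not anything from ConsumerBench itself.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def workload(n: int) -> int:
    """Stand-in for a local inference job: pure CPU work."""
    s = 0
    for i in range(n):
        s += i * i
    return s

def timed(fn, *args) -> float:
    t0 = time.perf_counter()
    fn(*args)
    return time.perf_counter() - t0

alone = timed(workload, 2_000_000)         # one "model" with the machine to itself

with ThreadPoolExecutor(max_workers=2) as ex:
    t0 = time.perf_counter()
    futs = [ex.submit(workload, 2_000_000) for _ in range(2)]
    for f in futs:
        f.result()
    shared = time.perf_counter() - t0      # two "models" contending for the same cores

print(f"alone: {alone:.3f}s  two-at-once: {shared:.3f}s  (slowdown x{shared / alone:.1f})")
```

On a real device the contended resources are GPU, NPU, and memory bandwidth rather than one Python interpreter, but the shape of the experiment is the same.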

Designing Chatbots for Sensitive Health Contexts: Lessons from Contraceptive Care in Kenyan Pharmacies

Lisa Orii, a fifth-year Ph.D. student at the Allen School.

This project aims to improve contraceptive access and guidance for adolescent girls and young women in Kenya by integrating low-fidelity chatbots into healthcare settings. The goal is to understand how chatbots can support private, informed conversations and work effectively within pharmacies.

“The fuel behind this whole project is that my team is really interested in improving health outcomes for vulnerable populations,” said Lisa Orii, a fifth-year Ph.D. student.

See more about the research showcase here. Here’s the list of winning projects.

Madrona Prize Winner: “Enhancing Personalized Multi-Turn Dialogue with Curiosity Reward” Yanming Wan, Jiaxing Wu, Marwa Abdulhai, Lior Shani, Natasha Jaques

Runner up: “VAMOS: A Hierarchical Vision-Language-Action Model for Capability-Modulated and Steerable Navigation” Mateo Guaman Castro, Sidharth Rajagopal, Daniel Gorbatov, Matt Schmittle, Rohan Baijal, Octi Zhang, Rosario Scalise, Sidharth Talia, Emma Romig, Celso de Melo, Byron Boots, Abhishek Gupta

Runner up: “Dynamic 6DOF VR reconstruction from monocular videos” Baback Elmieh, Steve Seitz, Ira Kemelmacher, Brian Curless

People’s Choice: “MolmoAct” Jason Lee, Jiafei Duan, Haoquan Fang, Yuquan Deng, Shuo Liu, Boyang Li, Bohan Fang, Jieyu Zhang, Yi Ru Wang, Sangho Lee, Winson Han, Wilbert Pumacay, Angelica Wu, Rose Hendrix, Karen Farley, Eli VanderBilt, Ali Farhadi, Dieter Fox, Ranjay Krishna

Editor’s Note: The University of Washington underwrites GeekWire’s coverage of artificial intelligence. Content is under the sole discretion of the GeekWire editorial team. Learn more about underwritten content on GeekWire.

Perplexity’s AI Patent Search Aims to Demystify IP for Everyone

The post Perplexity’s AI Patent Search Aims to Demystify IP for Everyone appeared first on StartupHub.ai.

Perplexity Patents leverages advanced AI to transform complex patent research into a conversational, accessible experience, democratizing IP intelligence for innovators worldwide.

AI Breast Cancer Screening Transforms Rural India Access

The post AI Breast Cancer Screening Transforms Rural India Access appeared first on StartupHub.ai.

AI breast cancer screening, powered by MedCognetics and NVIDIA, is bringing critical early detection capabilities to rural India via mobile clinics.

Meta’s AI Patience Test: Goldman Sachs on Divergent Tech Fortunes

The post Meta’s AI Patience Test: Goldman Sachs on Divergent Tech Fortunes appeared first on StartupHub.ai.

The market’s patience for capital expenditure, particularly in the burgeoning field of artificial intelligence, has become a defining factor in big tech’s recent earnings reactions. This sentiment was acutely underscored when Eric Sheridan, Goldman Sachs’ Co-Head of Tech, Media, and Telecom Research, joined CNBC’s “Squawk on the Street” team to dissect the third-quarter earnings of […]

Bevel raises $10M to advance its AI health companion

The post Bevel raises $10M to advance its AI health companion appeared first on StartupHub.ai.

Bevel raised $10 million in a Series A round led by General Catalyst to develop its AI health companion for personalized wellness management.

How labor shortages may delay data center plans

The post How labor shortages may delay data center plans appeared first on StartupHub.ai.

The burgeoning demand for data center capacity, fueled by the insatiable appetite for artificial intelligence and cloud computing, is encountering a significant impediment: a critical shortage of skilled labor. CNBC’s Kate Rogers reported on this burgeoning issue, highlighting how the construction and operational needs of these vital infrastructure hubs are being hampered by a lack […]

Enterprise AI Failures: A Startup’s Gold Rush

The post Enterprise AI Failures: A Startup’s Gold Rush appeared first on StartupHub.ai.

The recent MIT “State of AI in Business 2025” report, widely circulated and often misinterpreted, claims a staggering 95% failure rate for enterprise AI projects. Far from signaling AI’s inherent flaws, this statistic, as dissected by Y Combinator partners Garry Tan, Harj Taggar, Diana Hu, and Jared Friedman on their Lightcone podcast, illuminates a profound […]

Kaizen funding hits $21M to fix awful gov websites

The post Kaizen funding hits $21M to fix awful gov websites appeared first on StartupHub.ai.

Kaizen is using its new $21M in funding to prove that booking a campsite or a DMV appointment can be as seamless as any modern e-commerce experience.

No Dark GPUs: Why AI Isn’t a Bubble, But an Existential Race

The post No Dark GPUs: Why AI Isn’t a Bubble, But an Existential Race appeared first on StartupHub.ai.

“I do not believe we’re in an AI bubble today,” declared Gavin Baker, Managing Partner and CIO of Atreides Management, setting a provocative tone for his discussion with David George, General Partner at a16z. This assertion, delivered at a16z’s Runtime event, anchored a sharp analysis of the current AI boom, differentiating it starkly from past […]

Legora raises $150M to advance its AI legal platform

The post Legora raises $150M to advance its AI legal platform appeared first on StartupHub.ai.

Legal technology company Legora raised $150 million to expand its AI-powered platform used by lawyers for research, drafting, and document review.

Google’s AI Carbon Removal Strategy Takes Shape in Brazil

The post Google’s AI Carbon Removal Strategy Takes Shape in Brazil appeared first on StartupHub.ai.

Google's new initiative in Brazil demonstrates how AI is becoming indispensable for scaling diverse carbon removal technologies, from methane capture to reforestation.

Andrew Yang on AI’s Economic Storm and Shifting Political Tides

The post Andrew Yang on AI’s Economic Storm and Shifting Political Tides appeared first on StartupHub.ai.

“AI is decimating entry-level jobs.” This stark declaration from Andrew Yang, founder and CEO of Noble Mobile, and former Democratic presidential candidate, cut through the morning bustle of CNBC’s ‘Squawk Box.’ Speaking with interviewers Andrew Ross Sorkin and Becky Quick, Yang offered a compelling commentary on the intertwined forces of technological disruption and political realignment, […]

The Prompting Company Raises $6.5M for Generative AI Advertising

The post The Prompting Company Raises $6.5M for Generative AI Advertising appeared first on StartupHub.ai.

The Prompting Company raises $6.5M to develop its generative AI advertising platform, which inserts brand mentions into AI chatbot conversations.

Slowly but surely, high-speed rail backers believe Cascadia mega-project will become a reality

(Photo by 7 on Unsplash)

Ten years into a dream to connect Vancouver, B.C., Seattle and Portland via a high-speed rail line, stakeholders and backers of the mega-project said Wednesday that they’re still very much on board — and prepared for a long trip.

With a lengthy and uncertain timeline ahead, former U.S. Secretary of Transportation Ray LaHood, a speaker at the Cascadia Innovation Corridor conference in Seattle, cautioned many of those in attendance that they likely won’t live long enough to see high-speed rail in the Pacific Northwest.

“When you build big things, they cost big money,” LaHood said. “It took us 50 years to build the interstate system.”

LaHood said the key is to “get on board” now so that “our children and grandchildren” will reap the benefits.

Former U.S. Secretary of Transportation Ray LaHood, left, discusses high-speed rail with Washington State Sen. Marko Liias onstage at the Cascadia Innovation Corridor annual conference in Seattle on Wednesday. (GeekWire Photo / Kurt Schlosser)

At Cascadia Innovation Corridor’s annual event this week, much of the focus was on how to strengthen the cross-border partnership between three growing cities and numerous locales in between. Leaders discussed ideas around innovation, housing affordability, sustainability, and economic development. They signed a Memorandum of Reaffirmation to solidify commitments.

And Wednesday was about the enhanced transportation connectivity that could help drive it all, and the work that lies ahead in building a coalition of public and political support across the region, securing funding, jumpstarting planning, and more. Even producing videos like the new one below is part of the massive outreach under way.

Former Washington Gov. Chris Gregoire, Cascadia Innovation Corridor’s chair, said that a decade ago, high-speed rail was just an idea. The next decade can be a defining one.

“You would have thought we were thinking of doing something in outer space by the reaction,” she said. “Today, it is much more than an idea, and we are actually moving forward. While we do have a long way to go, as you well know, we’re funding the first phase of planning built on one of the most unique coalitions in North America.”

Envisioning a mega-region akin to Silicon Valley, in which Vancouver, Seattle and Portland are each only an hour apart, Gregoire highlighted the possibilities that could come with high-speed mobility.

“A UW student can intern in Vancouver, a family in Puget Sound can explore a job in Portland, and a cancer researcher in Vancouver can get home for dinner after a shift in Seattle,” she said. “It’s a new way of living, working and connecting, one that expands what’s possible for everyone who calls Cascadia home.”

Former Washington Gov. Chris Gregoire, chair of the Cascadia Innovation Corridor, speaks at the group’s annual conference in Seattle on Wednesday. (GeekWire Photo / Kurt Schlosser)

The pace to make the dream a reality has been anything but high-speed.

In 2017, Microsoft — which has an office in downtown Vancouver — gave $50,000 to a $300,000 effort led by Washington state to study a high-speed train proposal. In 2021, officials from Washington, Oregon and British Columbia signed a memorandum of understanding to form a committee to coordinate the plan.

Last year, the Federal Railroad Administration awarded the Washington State Department of Transportation $49.7 million to develop a service development plan for Cascadia High-Speed Rail. A timeline on WSDOT’s website points to 2028 for estimated completion of that plan, and for 2029 and beyond it simply says, “future phases to be determined.”

Cascadia is not alone in its quest for high-speed rail.

LaHood, a Republican cabinet member in the Obama administration, recalled the former president’s commitment to rail transportation. He said the Trump administration “clawing back” $4 billion in funding for California’s high-speed rail project between San Francisco and Los Angeles should not be considered a “death knell,” despite challenges in that state.

LaHood pointed to Brightline train projects in Florida, connecting Orlando and Miami, and in Las Vegas, where a planned line would offer high-speed connectivity to Southern California. Another plan in Texas would connect Houston and Dallas. All are evidence, he said, that this mode of transportation is what Americans want in order to avoid clogged highways and airports.

“Once the politicians catch on to what the people want, boom, you get the kind of rail transportation that people are clamoring for,” LaHood said.

Here are highlights from other speakers at the conference on Wednesday:

Chelsea Levy, Cascadia High-Speed Rail project manager for the Washington State Department of Transportation, during the Cascadia Innovation Corridor conference. (GeekWire Photo / Kurt Schlosser)
  • WSDOT Secretary Julie Meredith pointed to big Seattle transportation infrastructure projects that transformed the city, including the removal of the Alaskan Way Viaduct and construction of the SR 99 waterfront tunnel, as well as the new SR 520 floating bridge. Even as work will continue for years connecting communities via Link light rail, Meredith said, “I so often describe this program as one I’m most excited about, because it’s an opportunity for us to so fundamentally transform our region up and down the I-5 corridor.”
  • Chelsea Levy, Cascadia High-Speed Rail project manager, said the region can expect a 25% increase in population, or about 3.4 million more people, by 2050. “This pace and magnitude of growth really requires us to act,” Levy said. Among other things, WSDOT will need to integrate with B.C. and Oregon transportation networks and, Levy stressed, the scale and complexity of the project will require a streamlining of permitting processes across the 345-mile mega-region.
  • Hana Doubrava, a Vancouver-based corporate affairs director at Microsoft, leads the Cascadia initiative for the tech giant. She said the company’s support is not just symbolic, and that Microsoft believes modern, efficient transit and transportation options are essential for improved quality of life. “Cascadia is all about partnerships and relationships — despite the current geopolitics or baseball scores,” she said in a nod to Canada’s team, the Toronto Blue Jays, denying the Seattle Mariners a trip to the World Series.

Related:

The Milky Way's Hidden Features Will Amaze You In This Stunning 1 Million Hour Color Image

Astronomers from the International Centre for Radio Astronomy Research (ICRAR), primarily based at Curtin University in Australia, have released the most detailed low-frequency radio image of the Milky Way's galactic plane ever assembled. Rather than the starry, luminous band we're more familiar with, the latest images show a vibrant tapestry

(PR) ARC Raiders Available Now With DLSS 4 & Reflex

More than 800 games and applications feature RTX technologies, and each week new games integrating NVIDIA DLSS, NVIDIA Reflex, and advanced ray-traced effects are released or announced, delivering the definitive PC experience for GeForce RTX players. Last week, The Outer Worlds 2, Vampire: The Masquerade - Bloodlines 2, and Jurassic World Evolution 3 all launched with day-one support for DLSS 4 with Multi Frame Generation.

This week, even more DLSS 4 titles join the ever-expanding catalogue of games that run and play best on GeForce RTX, starting with ARC Raiders, which has just launched today with DLSS 4 with Multi Frame Generation, NVIDIA Reflex, and ray tracing. Later this week, Duet Night Abyss launches with DLSS 4 with Multi Frame Generation, and Battlefield 6's battle royale mode is available now, accelerated by DLSS 4. Looking to the future, AION 2, CINDER CITY and Directive 8020 are all launching in 2026 with DLSS 4, and we've got new trailers for each that you can check out below.

(PR) Sapphire Launches the EDGE AI Mini PC Series Based On AMD Ryzen AI 300 Series

SAPPHIRE Technology is rolling out the EDGE AI Series, a new line of ultra-compact AI mini PCs designed to bring next-level performance and real-time intelligence to everything from content creation and office productivity to edge analytics and smart automation. Following a soft launch earlier this year, the SAPPHIRE EDGE AI Series is now preparing for full market availability.

At the heart of each system is the powerful AMD Ryzen AI 300 Series processor, combining a high-efficiency multicore CPU, AMD Radeon 800M graphics, and a built-in Neural Processing Unit (NPU) delivering up to 50 TOPS of AI acceleration-all in a sleek, highly compact footprint. From intelligent automation in offices and factories to local AI inference in healthcare and education, this platform is designed for professionals who need real-time AI performance with ultimate flexibility.

(PR) NVIDIA GeForce NOW Gets 10 New Titles, Including ARC Raiders and The Outer Worlds 2

Get ready, raiders - the wait is over. ARC Raiders is dropping onto GeForce NOW and bringing the fight from orbit to the screen. To celebrate the launch, gamers can score ARC Raiders for free with the purchase of a 12-month Ultimate membership - a bundle packed with everything needed to jump into the resistance. This week also brings 10 games, including the next big adventure in The Outer Worlds 2, the mystical new "Visions of Eternity" expansion for Guild Wars 2 and Ghost Trick: Phantom Detective, a clever classic from Capcom. Sofia, Bulgaria, is the latest region to get GeForce RTX 5080-class power, with Amsterdam and Montreal coming up next. Stay tuned to GFN Thursday for updates as more regions upgrade to Blackwell RTX. Those who want to follow along can track latest progress on the server rollout page.

The Cloud Suits Up
In ARC Raiders, survival means banding together against overwhelming odds. Set in a retro‑futuristic world under siege, the game blends squad-based strategy with explosive firefights wrapped in a striking '80s-inspired sci-fi style. The story is simple but relentless: mysterious mechanical invaders, the ARC, rain down from orbit to strip Earth bare. Battling them takes teamwork, scavenged weapons and quick thinking across sprawling environments. Every encounter feels like a desperate stand where improvisation and cooperation turn the tide.

(PR) MicroProse Cleared Hot Takes Off November 20 on Steam Early Access

MicroProse and Not Knowing Corporation are thrilled to announce that Cleared Hot, the physics-fueled helicopter shooter with tactical flair (and a questionable regard for gravity), will launch into Steam Early Access on November 20, 2025.

Get ready to rain down chaos, sling debris, rescue (or accidentally drop) your squad, and pick up just about anything that isn't bolted to the ground. Cleared Hot brings classic chopper combat into the modern era with fully physics-based gameplay, upgradable helicopters, and the most versatile rope-and-magnet system ever installed on military hardware.

(PR) Endorfy Announces Zephyr 92 Fan Series for Small PC Cases

October is a busy month for Endorfy, packed with exciting new releases. After expanding its lineup with a microphone and headphones in the unique Alt Gray color, and just two days after unveiling the white Arx series case - the brand is launching the Zephyr 92 fan. Designed with smaller PC cases in mind, the Zephyr proves that effective cooling doesn't always come in large sizes - sometimes, it's all about smart design. The result is a perfect blend of solid craftsmanship and high performance, wrapped in a clean, minimalist form and offered at an accessible price.

Performance that fits in your hand
At first glance, the Zephyr 92 looks like a classic black fan in a compact size. But take a closer look, and you'll see that every detail, from the blades to the frame, is engineered for maximum efficiency. The result? Smooth airflow and effective cooling, even in small PC cases where every millimeter counts.

(PR) LG Display Reports Third Quarter 2025 Results

LG Display today reported unaudited earnings results based on consolidated K-IFRS (International Financial Reporting Standards) for the three-month period ending September 30, 2025.
  • Revenues in the third quarter of 2025 increased by 25% to KRW 6,957 billion from KRW 5,587 billion in the second quarter of 2025 and increased by 2% from KRW 6,821 billion in the third quarter of 2024.
  • Operating profit in the third quarter of 2025 stood at KRW 431 billion. This compares with the operating loss of KRW 116 billion in the second quarter of 2025 and with the operating loss of KRW 81 billion in the third quarter of 2024.
  • EBITDA in the third quarter of 2025 increased by 35% to KRW 1,424 billion from KRW 1,054 billion in the second quarter of 2025 and increased by 23% from KRW 1,162 billion in the third quarter of 2024.
  • Net profit in the third quarter of 2025 was KRW 1 billion, compared with the net profit of KRW 891 billion in the second quarter of 2025 and with the net loss of KRW 338 billion in the third quarter of 2024.
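As a quick sanity check, the quarter-over-quarter percentages quoted above follow directly from the reported figures (in KRW billions):

```python
def pct_change(new: float, old: float) -> int:
    """Percentage change from old to new, rounded to the nearest whole percent."""
    return round((new - old) / old * 100)

print(pct_change(6957, 5587))   # revenue vs Q2 2025  -> 25
print(pct_change(6957, 6821))   # revenue vs Q3 2024  -> 2
print(pct_change(1424, 1054))   # EBITDA  vs Q2 2025  -> 35
print(pct_change(1424, 1162))   # EBITDA  vs Q3 2024  -> 23
```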

Western Digital Investigates Older SMR Hard Drive Failures Tied to Design Flaws

Western Digital has confirmed it is investigating potential problems with some of its older SMR-based hard drives, following reports from multiple data recovery firms about unusually high failure rates. The affected models include 2 TB, 3 TB, 4 TB, and 6 TB WD Blue and Red drives (model numbers WD*0EZAZ, WD*0EDAZ, and WD*0EFAX) released around 2020, products that previously landed the company in a class-action lawsuit over undisclosed use of SMR (Shingled Magnetic Recording) technology. Tom's Hardware reports that Western Digital said in a statement to Heise Online that it takes the findings seriously and that its engineering teams have launched an internal review.

According to 030 Datenrettung Berlin GmbH, which first published the failure analysis, the issue could have its origins in design-level limitations of SMR technology in lower-capacity consumer drives. SMR increases areal density by overlapping data tracks, allowing up to 25% more capacity per platter. However, rewriting data can require adjacent tracks to be rewritten as well, introducing latency and potential instability. These shortcomings have long made SMR unsuitable for certain workloads such as RAID or ZFS arrays. Western Digital's earlier decision not to disclose SMR use in these drives led to a $2.7 million lawsuit settlement in 2021. Now, data recovery labs warn that the same models could suffer physical damage and data loss over time. Users with WD Blue or Red drives in the 2-6 TB range from 2020 onward are advised to check their hardware, as early failure symptoms may include unusual clicking or grinding noises from the platters.
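The rewrite penalty behind these concerns can be illustrated with a toy model of a shingled band: because tracks overlap like roof shingles, updating one track forces every later track in the band to be rewritten. The band size below is illustrative, not WD's actual drive geometry.

```python
BAND_TRACKS = 64   # tracks per shingled band (illustrative figure)

def tracks_written(update_track: int, band_tracks: int = BAND_TRACKS) -> int:
    """A write to track i forces rewriting tracks i..end of the band."""
    return band_tracks - update_track

# Updating the first track of a band rewrites the whole band; a conventional
# (CMR) drive would write exactly 1 track in every case.
amplification = [tracks_written(i) for i in (0, 32, 63)]
print(amplification)   # [64, 32, 1]
```

This write amplification is why random-update workloads such as RAID rebuilds are a poor fit for SMR drives.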

Samsung Internet Browser Comes to Windows Desktop PCs

After more than a decade of focusing exclusively on mobile platforms, Samsung is finally bringing its Samsung Internet browser to Windows-based PCs. The company announced today that a beta version of the Chromium-based browser will be available for Windows 11 and Windows 10 systems running version 1809 or newer, starting October 30, 2025, in the United States and Korea. It is a significant decision for Samsung, which has maintained a mobile-first approach since the browser's inception, essentially leaving desktop users out in the cold while millions enjoyed the experience on their Galaxy devices. It also suggests that Samsung may bring more of its applications to the desktop as local AI integration takes off.

The desktop version is designed to create a unified browsing ecosystem across Samsung's product lineup, letting users carry their digital experience seamlessly between phones and computers. Users who log in with their Samsung Account will have their bookmarks, browsing history, and saved passwords synchronized through Samsung Pass, which should make authentication and form filling straightforward across devices. The browser also brings intelligent features powered by Galaxy AI, including Browsing Assist, which can summarize web pages and translate content on the fly. For those switching between their Galaxy phone and PC, the browser will prompt them to pick up where they left off, eliminating the hassle of hunting down tabs across different devices.

(PR) onsemi Unveils Vertical GaN Semiconductors

onsemi's vGaN technology is a breakthrough power semiconductor technology that sets a new benchmark for efficiency, power density and ruggedness for the age of AI and electrification. Developed and manufactured at onsemi's fab in Syracuse, NY, the technology is covered by more than 130 global patents spanning fundamental process, device design, manufacturing and systems innovations for vertical GaN.

"Vertical GaN is a game-changer for the industry and cements onsemi's leadership in energy efficiency and innovation. As electrification and AI reshape industries, efficiency has become the new benchmark that defines the measure of progress. The addition of vertical GaN to our power portfolio gives our customers the ultimate toolkit to deliver unmatched performance. With this breakthrough, onsemi is defining the future where energy efficiency and power density are the currency of competitiveness." Dinesh Ramanathan, Senior Vice President of Corporate Strategy, onsemi.

(PR) Corsair Announces Novablade Pro Wireless Hall Effect Leverless Fight Controller

Corsair, maker of award-winning gaming peripherals, today revealed their first fighting game-focused controller, the Novablade Pro Wireless Hall Effect Leverless Controller, featuring Corsair MGX Hyperdrive magnetic switches, Rapid Trigger and FlashTap SOCD technology.

"With gamers demanding faster, more responsive inputs, building a controller that utilizes our fastest, most responsive switches was an obvious play," said Tobias Brinkmann, Vice President and General Manager of Gaming Peripherals. "The Novablade Pro is a leap in input innovation, fusing MGX Hyperdrive magnetic switches with cutting-edge performance features like Rapid Trigger and FlashTap. For both fighting game professionals and aspiring players aiming for the top, chasing precise timing windows that last mere milliseconds is key - and Novablade Pro has the features they need for their pursuit of reliable frame perfection."

Latest AMD Radeon Drivers Disable USB-C Power Delivery on RX 7900 XTX Reference

Reference-design AMD Radeon RX 7900 XTX and RX 7900 XT graphics cards come with USB type-C ports on the cards that offer DisplayPort 2.1 passthrough and USB power delivery up to PD 3.0 standards of 30 W. While the port doesn't push USB data, it has wiring for DisplayPort and power delivery, letting you connect certain kinds of monitors that use a single USB-C connection for both display input and power. AMD just disabled the latter functionality of this port.

The AMD Software Adrenalin 25.10.2 WHQL drivers apparently disable USB power delivery from this port on reference-design RX 7900 XTX and RX 7900 XT graphics cards. The port continues to provide DisplayPort passthrough, but without power delivery, so a USB-C monitor may need its own power brick or a USB power delivery shunt. For those who consider this a dealbreaker, AMD recommends rolling back to drivers as old as 25.3.1 to reliably use USB-PD from this port.

(PR) Innodisk Unveils Industry's First DDR5-7200 RDIMM Offering 64 GB per Module

Innodisk, a leading global AI solution provider, introduces the industry's first DDR5 7200 RDIMM as a major upgrade to its DDR5 series, delivering superior performance and the largest 64 GB single-module capacity in the industrial market, driving innovation across the embedded and edge AI segments.

The DDR5 7200 series delivers a data transfer rate of 7200 MT/s, representing a 12.5% increase in speed, and offers capacities ranging from 8 GB to 64 GB, with RDIMM models available from 16 GB to 64 GB. It meets the requirements of enterprise data centers and edge AI servers, while also supporting the growing need for on-premises confidential computing, enabling sensitive data to be processed locally with stronger security and privacy protection.

(PR) darkFlash Launches DY460 ATX PC Case

After the success of DY470, darkFlash presents a new interpretation of compact PC design. The DY460 carries the same signature style while reimagining the structure for smarter airflow, space efficiency, and modern aesthetics. It is made for gamers who seek balance between performance and elegance.

Compact Form, Same Signature DNA
The DY460 continues the design spirit of the DY470, preserving the three-sided panoramic glass concept that has become a favorite among builders. While the overall footprint is reduced by 23%, the case still offers full-size compatibility, supporting ATX motherboards, high-end GPUs, and advanced cooling setups.

Is your PC ready for Resident Evil Requiem? – PC System Requirements released

Here’s what your PC needs to run Resident Evil Requiem. Capcom has officially released the PC system requirements for Resident Evil Requiem, which will be arriving on Steam on February 27th 2026. On Steam, Capcom has confirmed that the game will utilise Denuvo’s Anti-Tamper Technology. Furthermore, the game will support Steam Family Sharing. Requiem’s PC […]

The post Is your PC ready for Resident Evil Requiem? – PC System Requirements released appeared first on OC3D.

Sapphire delivers compact power with its Edge AI mini PCs

Sapphire is using the power of Ryzen to fuel its new Edge AI mini PCs. Sapphire has just launched a new range of EDGE AI mini PCs, delivering compact performance with AMD’s Ryzen AI 300 series processors. These mini PCs support up to 96GB of DDR5 memory onboard and can feature up to 12 CPU […]

The post Sapphire delivers compact power with its Edge AI mini PCs appeared first on OC3D.

AMD drops “New Game Support” for RDNA 1 and RDNA 2 GPUs with AMD Software 25.10.2

AMD appears to have axed “New Game Support” for its RDNA 1 and RDNA 2 GPUs. Based on the release notes for AMD’s new AMD Software 25.10.2 driver, the company has dropped “New Game Support” and “Expanded Vulkan Extension” support for its older RDNA 1 and RDNA 2 graphics cards. This means that users of […]

The post AMD drops “New Game Support” for RDNA 1 and RDNA 2 GPUs with AMD Software 25.10.2 appeared first on OC3D.

Exclusive: Founded By Uber Alumni, Archy Raises $20M To Put Dental Practices ‘On Autopilot’

It was 2021, and Jonathan Rat was tired of seeing his wife, a dentist, struggle to maintain the tech stack at her practice.

Rat, who had served as a product manager at companies including Uber, Meta and SurveyMonkey, dug into the problem and discovered that “most of the software used in the industry” was more than 20 years old and still required physical servers onsite.

“Most lacked integration with other platforms, were slow and buggy, and impossible to train new employees on,” he recalls.

Archy Founders Benjamin Kolin and Jonathan Rat

So Rat teamed up with Benjamin Kolin, a former director of engineering at Uber, to start Archy, an AI-powered platform that aims “to put dental practices on autopilot.” The pair previously led the rebuilding of Uber’s payment platform that’s still in use today.

“I realized there was a massive need and opportunity for a modern, cloud-based software platform and set out to build that,” Rat told Crunchbase News. “I also realized bigger tech players have been building software for the larger healthcare market but overlooked the $500 billion dental industry.”

And now, Archy has just raised $20 million in Series B funding to help it grow even more, it told Crunchbase News exclusively. TCV led the financing, which also included participation from Bessemer Venture Partners, CRV, Entrée Capital and 25 practicing dentists who wrote checks as angel investors. The raise brings Archy’s total funding to date to $47 million, Rat said.

The company raised a $15 million Series A led by Entrée Capital almost exactly one year ago. Rat confirmed the Series B was an up round, but declined to disclose Archy’s valuation.

All-in-one tool

Archy claims to replace more than five existing tools to handle scheduling, charting, billing, imaging, insurance, payments, staffing, messaging and reporting “from one login.”

It is now building AI agents “to handle the busywork” such as checking eligibility, filing and following up on claims, writing notes, managing patient communications and scheduling, and “turning raw practice data into clear answers,” according to Rat.

The startup processes more than $100 million in payments annually across 45 states and has seen roughly 300% year-over-year growth, he said. It currently serves 2.5 million patients and has processed over 35 million X-rays through its platform.

The company claims that mid-sized dental practices report saving around 80 hours a month by using its technology, and are able to avoid “big hardware costs.” For example, Rat said that one practice saved about $50,000 in its first year of using Archy.

Dual-revenue model

San Jose, California-based Archy operates on a dual-revenue model that combines subscription-based fees with payment processing services, and offers tiered monthly subscription packages. In addition to its subscription fees, Archy serves as a merchant processor for its clients, generating revenue from a percentage of payment transactions processed through the platform.

“This hybrid approach allows us to remain aligned with our clients’ success while providing flexible options that scale with their business needs,” Rat told Crunchbase News.

The company plans to use its new capital to “hire aggressively” across its engineering, AI and go-to-market teams. Presently, it has 57 employees. It plans to expand internationally starting in 2026.

Austin Levitt, partner at TCV, told Crunchbase News via email that his firm had been looking for a way to invest in the dental space “for a long time” but didn’t find a company that was “appropriately tackling the root of the problem — the core PMS (practice management systems)” until it came across Archy.

He added: “We consistently heard that Archy was supremely easy to use, requiring almost no training in contrast to others, providing a seamless ‘iPhone-like’ experience, and reducing what took 10 clicks in other software to one or none in Archy.”

Illustration: Dom Guzman

Fresh off $225M raise, live shopping company Whatnot will boost Seattle headcount in Amazon’s backyard

Live-shopping startup Whatnot plans to grow its new Seattle outpost following a $225 million funding round announced this week.

The company aims to hire more than 75 employees in the region over the next six months — tripling its current local headcount — across product, engineering, and related roles.

Whatnot opened its downtown Seattle office earlier this year. The Los Angeles-based company, now valued at $11.5 billion (up from $5 billion a year ago), said the Seattle expansion is one of its largest talent investments to date.

Founded in 2019, Whatnot’s platform mixes e-commerce and livestream entertainment. Sellers host live video shows on the Whatnot app or website, auctioning or selling products in real time. Buyers can watch, chat, and bid directly during live streams.

The New York Times described the trend as “QVC for the TikTok era.” Whatnot competes against the likes of TikTok (TikTok Shop) and Seattle-based e-commerce giant Amazon (Amazon Live).

Whatnot facilitates transactions between buyers and sellers, and handles payments, logistics, and safety features. The company earns revenue by taking a commission — typically around 8% — on sales made by sellers ranging from independent entrepreneurs to established retailers.

Whatnot more than doubled live sales on its platform this year, to $6 billion. Buyers spend more than 80 minutes per day on Whatnot’s live shows, according to the company. Whatnot is not profitable.

Some of its fastest-growing categories include beauty, women’s fashion, handbags, electronics, antiques, coins, golf, snacks, and live plants.

The company’s Seattle office focuses on product and engineering, including areas such as machine learning, marketplace integrity, and trust & safety. Whatnot has 900 employees in total.

Dan Bear, vice president of engineering, and Kelda Murphy, vice president of talent acquisition, are both based in Seattle. Bear previously opened Seattle offices for Snap, Hulu, and CloudKitchens.

Whatnot is one of more than 130 companies that operate satellite offices in the Seattle region, tapping into the area’s technical talent pool.

The company has 31 open positions on its jobs page. It is hosting an engineering and product networking event in Seattle on Nov. 4.

SK Hynix DDR5 Inventory Down To Just 2 Weeks!

Morgan Stanley is raising alarm bells around SK Hynix's rapidly depleting DRAM inventory, which is now at effectively "sold-out" levels as the AI-driven demand for high-bandwidth memory (HBM) - a type of DRAM - continues to corner an ever greater proportion of the global memory wafer capacity.

SK Hynix: "DRAM (DDR5) inventory is down to about two weeks, effectively at a 'produce-and-ship' level"

Morgan Stanley is sounding the proverbial gong today as SK Hynix's DRAM inventory levels continue to sink to bottom-of-the-barrel levels. Before going further, do note that SK Hynix disclosed its earnings for the third quarter of […]

Read full article at https://wccftech.com/sk-hynix-ddr5-inventory-down-to-just-2-weeks/

Animal Crossing: New Horizons Is Getting a Nintendo Switch 2 Version With Better Graphics and Switch 2 Features Next Year

Today, Nintendo announced that its second-best-selling Nintendo Switch game, Animal Crossing: New Horizons, is getting a Nintendo Switch 2 version, with graphical updates and features that take advantage of the Switch 2 and its improved hardware. Today's Animal Crossing news isn't just for Nintendo Switch 2 players; a new 3.0 title update is also on its way, which will be available to players on both Nintendo Switch and Switch 2. The new Switch 2 version of Animal Crossing: New Horizons leans on the updated hardware for several updates, each shown off in a new trailer, which also goes over the […]

Read full article at https://wccftech.com/animal-crossing-new-horizons-nintendo-switch-2-edition-next-year/

TSMC Reportedly Constructing Four Plants For 1.4nm Wafers, Mass Production Happening In H2 2028, A Single Facility Can Bring In $16 Billion Revenue

The Central Taiwan Science Park will hold immense significance in the future because it is where TSMC’s new Phase II plant will be constructed. A report states that the company is planning to establish four plants dedicated to 1.4nm production. Although full-scale manufacturing is not expected until the second half of 2028, it will set the stage for chips made on bleeding-edge lithography and also create thousands of jobs in the process. Up to 10,000 jobs could be created by TSMC’s four planned 1.4nm fabs, and looking at the recent timeline, Apple will likely be the semiconductor giant’s first customer […]

Read full article at https://wccftech.com/tsmc-building-four-factories-for-1-4nm-production-each-unit-bringing-in-16-billion-revenue/

Samsung Electronics Q3 2025 Earnings: Record Revenue On Roaring Memory Chip Demand

Samsung has reported its earnings for the third quarter of 2025, posting broadly upbeat results on the back of the ongoing chip boom.

Samsung Electronics Q3 2025 Earnings Highlights

Samsung Electronics has delivered an all-round pristine result for its third quarter of 2025, posting healthy growth in all segments, barring its Visual Display and Digital Appliances division, where Digital Appliances induced a modest year-over-year weakness of around 1 percent. Unsurprisingly, given the emerging dynamics in the memory business, the division recorded the most aggressive growth […]

Read full article at https://wccftech.com/samsung-electronics-q3-2025-earnings-record-revenue-on-roaring-memory-chip-demand/

GeForce NOW Adds The Outer Worlds 2 and ARC Raiders, Both RTX 5080-Ready

NVIDIA has confirmed the list of PC games joining the ever-growing GeForce NOW library today. The highlights are Obsidian's sci-fi action RPG The Outer Worlds 2 (which we have reviewed here) and Embark's post-apocalyptic third-person extraction shooter game ARC Raiders. Both games support the NVIDIA RTX Blackwell server upgrade, which means Ultimate subscribers can enable NVIDIA DLSS 4 with Multi Frame Generation to get the highest possible frame rates. Meanwhile, NVIDIA continues to add more RTX 5080-class servers throughout its server regions. The latest one to be enabled is in Sofia, Bulgaria, with Amsterdam and Montréal scheduled to be next. […]

Read full article at https://wccftech.com/geforce-now-adds-the-outer-worlds-2-and-arc-raiders-both-rtx-5080-ready/

AION 2 to Launch Globally in 2026 with DLSS 4 Multi Frame Generation, Says NVIDIA

The MMORPG AION 2 will launch globally in 2026 with DLSS 4 and Multi Frame Generation support on the PC version (the game will also be available on mobile devices). The initial launch in South Korea and Taiwan is set for November 19. The game is a sequel to Aion: The Tower of Eternity, which launched in 2008 in Korea and the following year worldwide. The original game was powered by Crytek's CRYENGINE, whereas this new installment is made with Unreal Engine 5. Aion 2 takes place around two hundred years later and features a world that, according to NCSOFT, […]

Read full article at https://wccftech.com/aion-2-to-launch-globally-2026-with-dlss-4-multi-frame-generation/

Colorful Claims Its New Memory Low Latency And High Performance Modes Can Deliver 15% Higher FPS In Battlefield 6

The new memory optimization features apparently improve gaming performance on Ryzen 9000-based systems using Colorful's AM5 motherboards.

Colorful Intros "Low Latency" and "High Performance" Memory Modes for its 600 and 800 Series Motherboards

Chinese hardware maker Colorful has today introduced two new memory-related features for its AM5 motherboards, which can supposedly deliver superior performance in apps and games by reducing memory latency. Colorful says that since Ryzen 9000 series CPUs have a high memory latency, the new Colorful motherboard memory features can reduce it to optimize performance. Colorful released the "Low Latency" and "High Performance" modes on some of […]

Read full article at https://wccftech.com/colorful-claims-its-new-memory-low-latency-and-high-performance-modes-can-deliver-15-higher-fps-in-battlefield-6/

Xbox Revenues See $113 Million Decline Amid Hardware Sales Drop And Limited Gaming Content and Services Growth

Xbox gaming revenues declined by $113 million, or 2%, in Q1 FY2026 over the prior year due to a drop in hardware sales and limited growth in gaming content and services, Microsoft confirmed in its latest financial report. On Wednesday, the company reported its Q1 FY2026 earnings, confirming a 29% decline in Xbox hardware revenue, offset in part by growth in Xbox content and services, whose $5.5 billion revenue is a 1% improvement over the "strong prior year". This revenue increase was driven by growth in Xbox Game Pass and third-party content, and partially offset by a decline in […]

Read full article at https://wccftech.com/xbox-revenues-see-113-million-decline-amid-hardware-sales-drop-and-limited-gaming-content-and-services-growth/

Alleged Intel Core Ultra X7 358H & Ultra X5 338H “Panther Lake” Leak Points To Similar Multi-Thread Performance As Arrow Lake-H CPUs

Alleged performance figures for Intel's Panther Lake Core Ultra X7 358H and Ultra X5 338H CPUs in Cinebench R23 MT have leaked out.

Intel Panther Lake Might Offer Similar Performance As Arrow Lake In Multi-Threaded Tests If These "Alleged" ES Tests For Core Ultra X7 358H & Ultra X5 338H Are To Be Believed

A few weeks after posting what are seemingly the first non-official benchmarks of Panther Lake's Xe3 iGPU, LaptopReview has now published CPU performance benchmarks for Intel's upcoming CPUs, the Core Ultra X7 358H and the Core Ultra X5 338H. These two Intel Panther Lake CPUs should be […]

Read full article at https://wccftech.com/intel-core-ultra-x7-358h-ultra-x5-338h-panther-lake-leak-similar-mt-performance-as-arrow-lake/

Blogging, AI, and the SEO road ahead: Why clarity now decides who survives

For years, I told bloggers the same thing: make your content easy enough for toddlers and drunk adults to understand. 

That was my rule of thumb. 

If a five-year-old can follow what you’ve written and someone paying half-attention can still find what they need on your site, you’re doing something right.

But the game has changed. It’s no longer just about toddlers and drunk adults. 

You’re now writing for large language models (LLMs) quietly scanning, interpreting, and summarizing your work inside AI search results.

I used to believe that great writing and solid SEO were all it took to succeed. What I see now:

Clarity beats everything

The blogs winning today aren’t simply well-written or packed with keywords. They’re clean, consistent, and instantly understandable to readers and machines alike.

Blogging isn’t dying. It’s moving from being a simple publishing tool to a real brand platform that supports off-site efforts more than ever before.

You can’t just drop a recipe or travel guide online and expect it to rank using the SEO tactics of the past. 

Bloggers must now think of their site as an ecosystem where everything connects – posts, internal links, author bios, and signals of external authority all reinforce each other.

When I audit sites, the difference between those that thrive and those that struggle almost always comes down to focus. 

The successful ones treat their blogs like living systems that grow smarter, clearer, and more intentional with time.

But if content creators want to survive what’s coming, they need to build their sites for toddlers, drunk adults, and LLMs.

In this article, bloggers will learn how to do the following:

  • Understand the current blogging climate and why clarity now matters more than ever.
  • Adapt their content for AI Overviews, LLMs, and emerging retrieval systems.
  • Use recency bias and “last updated” signals to strengthen visibility.
  • Build a recognizable brand that LLMs can cite and retrieve with confidence.
  • See why professional SEO audits are one of the smartest investments bloggers can make.
  • Prepare for the next five years of AI-driven search with practical, proven strategies.

The 2026 blogging climate: Clarity amid chaos

Let’s be honest: the blogging world feels a little shaky right now.

One day, traffic is steady, and the next day, it’s down 40% after an update no one saw coming. 

Bloggers are watching AI Overviews and “AI Mode” swallow up clicks that used to come straight to their sites. Pinterest doesn’t drive what it once did, and social media traffic in general is unpredictable.

It’s not your imagination. The rules of discovery have changed.

We’ve entered a stage where Google volatility is the norm, not the exception. 

Core updates hit harder, AI summaries are doing the talking, and creators are realizing that search is no longer just about keywords and backlinks. It’s about context, clarity, and credibility.

But here’s the good news: the traffic that matters is still out there. It just presents differently. 

The strongest blogs I work with are seeing direct traffic and returning visitors climb. 

People remember them, type their names into search, open their newsletters, and click through from saved bookmarks. That’s not an accident – that’s the result of clarity and consistency.

If your site clearly explains who you are, what you offer, and how your content fits together, you’re building what I call resilient visibility. 

It’s the kind of presence that lasts through algorithm swings, because your audience and Google both understand your purpose.

Think of it this way: the era of chasing random keyword wins is over. 

The bloggers who’ll still be standing in five years are the ones who organize their sites like smart libraries: easy to navigate, full of expertise, and built for readers who come back again and again.

AI systems reward that same clarity. 

They want content that’s connected, consistent, and confident about its subject matter. 

That’s how you show up in AI Overviews, People Also Ask carousels, or Gemini-generated results.

In short, confusion costs you clicks, but clarity earns you staying power.

Takeaway

  • The blogging climate might feel chaotic, but the strategy hasn’t changed as much as people think. 
  • Focus on clarity, structure, and user trust. Build a brand that people – and AI – can easily recognize and rely on.

Dig deeper: Chunk, cite, clarify, build: A content framework for AI search

The AI acceleration: From search to retrieval

A few years ago, SEO was all about chasing rankings. 

You picked your keywords, wrote your post, built some links, and hoped to land on page one. 

Simple enough. But that world doesn’t exist anymore.

Today, we’re in what can best be called the retrieval era.

AI systems like ChatGPT, Gemini, and Perplexity don’t list links. They retrieve answers from the brands, authors, and sites they trust most.

Duane Forrester said it best – search is shifting from “ranking” to “retrieval.” 

Instead of asking, “Where do I rank?” creators should be asking, “Am I retrievable?” 

That mindset shift changes everything about how we create content.

Mike King expanded on this idea, introducing the concept of relevance engineering. 

Search engines and LLMs now use context to understand relevance, not just keywords. They look at:

  • How consistently you cover topics.
  • How well your pages connect.
  • Whether you’re seen as an authority in your niche.

This is where structure and clarity start paying off. 

AI systems want to understand who you are and where you stand. 

They learn that from your internal links, schema, author bios, and consistent topical focus. 

When everything aligns, you’re no longer just ranking in search – you’re becoming a known entity that AI can pull from.

I’ve seen this firsthand during site audits. Blogs with strong internal structures and clear topical authority are far more likely to be cited as sources in AI Overviews and LLM results. 

You’re removing confusion and teaching both users and models to associate your brand with specific areas of expertise.

Takeaway

  • Stop worrying about ranking higher. Start making yourself easier to retrieve. 
  • Build a site that clearly tells Google and AI who you are, what you offer, and why your content deserves to be cited.

Understanding recency bias: Why freshness is your friend

Here’s something I see a lot in my audits: two posts covering the same topic, both written by experienced bloggers, both technically sound. Yet one consistently outperforms the other. 

The difference? One shows a clear “Last updated” date, and the other doesn’t.

That tiny detail matters more than most people realize.

Research from Metehan Yesilyurt confirms what many SEOs have suspected for a while: LLMs and AI-driven search results favor recency, and it’s already being exploited in the name of research.

It’s built into their design. When AI models have multiple possible answers to choose from, they often prefer newer or recently refreshed content. 

This is recency bias, and it’s reshaping both AI search and Google’s click-through behavior.

We see the same pattern inside the traditional SERPs. 

Posts that display visible “Last updated” dates tend to earn higher click-through rates. 

People – and algorithms – trust fresh information.

That’s why one of the first things I check in an audit is how Google is interpreting the date structure on a blog. 

Is it recognizing the correct updated date, or is it stuck on the original publish date? 

Sometimes the fix is simple: remove the old “published on” markup and make sure the updated timestamp is clearly visible and crawlable. 

Other times, the page’s HTML or schema sends conflicting signals that confuse Google, and those need to be cleaned up.
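To make that concrete, here is a minimal Article schema sketch showing a single, unambiguous freshness signal (the URL, dates, and names are placeholders, not from any specific site):

```json
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Example post title",
  "mainEntityOfPage": "https://example.com/example-post/",
  "datePublished": "2023-06-01",
  "dateModified": "2025-10-15",
  "author": {
    "@type": "Person",
    "name": "Example Author"
  }
}
```

The key point is agreement: the visible “Last updated” date on the page should match `dateModified`, and there should be only one date story told across the HTML and the schema.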

When Google or an LLM can’t identify the freshness of your content, you’re handing visibility to someone else who communicates that freshness better.

How do you prevent this? Don’t hide your updates. Celebrate them.

When you update recipes, add new travel information, or test a product, update your post and make the date obvious. 

This will tell readers and AI systems, “This content is alive and relevant.”

Now, that being said, Google does keep a history of document versions. 

The average post may have dozens of copies stored, and Google can easily compare the recently changed version to its repository of past versions. 

Avoid making small changes that do not add value to users or republishing to a new date years later to fake relevancy. Google specifically calls that out in its guidelines.

Takeaway

  • Recency is a ranking and retrieval advantage. Keep your content updated, make that freshness visible, and verify that Google and LLMs are reading the right dates. 
  • The clearer your update signals, the stronger your trust signals.

Relevance, entities, and the rise of brand SEO

Let’s talk about what really gets remembered in this new AI-driven world.

When you ask ChatGPT, Gemini, or Perplexity a question, it thinks in entities – people, brands, and concepts it already knows.

The more clearly those models recognize who you are and what you stand for, the more likely you are to be retrieved when it’s time to generate an answer.

That’s where brand SEO comes in.

Harry Clarkson-Bennett in “How to Build a Brand (with SEO) in a Post AI World” makes a great point: LLMs reward brand reinforcement. 

They want to connect names, authors, and websites with a clear area of expertise. And they remember consistency. 

If your name, site, and author profiles all align across the web (same logo, same tone, same expertise), you start training these models to trust you.
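One way to make that alignment explicit is Person schema with `sameAs` links tying your name to the same profiles everywhere (a sketch with placeholder names and URLs):

```json
{
  "@context": "https://schema.org",
  "@type": "Person",
  "name": "Example Author",
  "url": "https://example.com/about/",
  "jobTitle": "Food blogger",
  "sameAs": [
    "https://www.instagram.com/exampleauthor",
    "https://www.youtube.com/@exampleauthor",
    "https://www.pinterest.com/exampleauthor"
  ]
}
```

The same principle applies to Organization markup for the site itself: every profile listed should actually use the same name, logo, and bio, or the signal works against you.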

I tell bloggers all the time: AI learns the same way humans do. It remembers patterns, tone, and repetition. So make those patterns easy to see.

  • Use a consistent author bio everywhere.
  • Build clear “About” pages that connect your name to your niche.
  • Link your best content internally so Google and AI can map your expertise.
  • Use structured data to reinforce entity relationships (i.e., author, organization, and sameAs markup).
  • And here’s something new I’ve started recommending to audit clients: AI Buttons. 

I originally discussed these AI buttons in my last article, “AI isn’t the enemy: How bloggers can thrive in a generative search world,” and provided a visual example.

These are simple on-site prompts encouraging readers to save or summarize your content using AI tools like ChatGPT or Gemini. 

When users do that, those models start seeing your site as a trusted example. Over time, that can influence what those systems recall and recommend.

Think of this as reputation-building for the AI era. It’s not about trying to game the system. It’s about making sure your brand is memorable, consistent, and worth retrieving.

Fortunately, these buttons are becoming more mainstream, with theme designers like Feast including them as custom blocks. 

And the buttons work – I’ve seen creators turn their blogs into small but powerful brands that LLMs now cite regularly.

They did it by reinforcing who they were, everywhere, and then using AI buttons to encourage their existing traffic to save their sites as high-quality examples to reference in the future.

Takeaway

  • Google and AI don’t just rank content anymore. They recognize entities and remember brands. 
  • The more consistent and connected your brand signals are, the more likely you’ll be retrieved, cited, and trusted in AI search results.

Why every blogger needs an SEO professional (now more than ever)

Blogging has never been easy, but it’s never been harder than it is right now.

Between core updates, AI Overviews, and shifting algorithms, creators are expected to keep up with changes that even seasoned SEOs struggle to track. 

And that’s the problem – too many bloggers are still trying to figure it all out alone.

If there’s one thing I’ve learned after doing more than 160 site audits this year, it’s this: almost every struggling blogger is closer to success than they think. They’re just missing clarity.

A good SEO audit does more than point out broken links or slow-loading pages. It shows you why your content isn’t connecting with Google, readers, and now LLMs. 

My audits are built around what I call the “Toddlers, Drunk Adults, and LLMs” framework. 

If your site works for those three audiences, you’re in great shape.

For toddlers

  • The structure is simple. Your content hierarchy makes sense. Every post has a clear topic, and your categories aren’t a maze.

For drunk adults

  • Your site is fast, responsive, and forgiving. People can find what they need even when they’re not fully focused.

For LLMs

  • Your data is clean, your entities are connected, and your expertise is crystal clear to AI systems scanning your site.

When bloggers follow this approach, the numbers speak for themselves. 

In 2025 alone, my audit clients have seen an average increase of 47% in Google traffic and RPM improvements of 21-33% within a few months of implementing recommendations.

This isn’t just about ranking better. Every audit is a roadmap to help bloggers position their sites for long-term visibility across traditional search and AI-powered discovery.

That means optimizing for things like:

You can’t control Google’s volatility, but you can control how clear, crawlable, and connected your site is. That’s what gets rewarded.

And while I’ll always advocate for professional audits, this isn’t about selling a service. 

You need someone who can give you an honest, technical, and strategic look under the hood.

Why?

Because the difference between “doing fine” and “thriving in AI search” often comes down to a single, well-executed audit.

Takeaway

  • DIY SEO isn’t enough anymore. Professional audits are the most valuable investment a blogger can make in 2026 and beyond. 
  • Not for quick wins, but for building a site that’s understandable, adaptable, and future-ready for both Google and AI.

The road ahead: Blogging in 2026–2030

So where does all this lead? What does blogging even look like five years from now?

Here’s what I see coming.

We’re heading toward an increasingly agentic web, where AI systems do the searching, summarizing, and recommending for us. 

Instead of typing a query into Google, people will ask their personal AI for a dinner idea, a travel itinerary, or a product recommendation. 

And those systems will pull from a short list of trusted sources they already “know.”

That’s why what you’re doing today matters so much.

Every time you publish a post, refine your site structure, or strengthen your brand signals, you’re teaching AI who you are. 

You’re building a long-term relationship with the systems that will decide what gets shown and what gets skipped.

Here’s how I expect the next few years to unfold:

  • AI-curated discovery becomes normal: Instead of browsing through 10 links, users get custom recommendations from trusted sources. The blogs that survive are the ones AI already recognizes as reliable.
  • Brand-first SEO takes over: Ranking for a keyword will matter less than having your brand show up as the answer. Visibility won’t just depend on optimization, it’ll depend on reputation.
  • Entity-first indexing becomes the foundation: Google and AI models are increasingly indexing based on entities, not URLs. That means your author names, structured data, and topical focus all play a direct role in discoverability.
  • Human storytelling becomes the ultimate differentiator: AI can summarize information, but it can’t replicate lived experience, voice, or emotion. The content that stands out will be the content that feels human.

The creators who will win in this next chapter are the ones who stop trying to outsmart Google and start building systems that AI can easily understand and humans genuinely connect with.

It’s not about chasing trends or reinventing your site every time an update hits. It’s about getting the fundamentals right and letting clarity, trust, and originality carry you forward.

Because the truth is, Google’s not the gatekeeper anymore. You are. 

Your brand, expertise, and ability to communicate clearly will decide how visible you’ll be in search and AI-driven discovery.

Takeaway

  • The next five years of blogging will belong to those who build clear, human-centered brands that AI understands and audiences love. 
  • Keep your content fresh, your structure clean, and your voice unmistakably your own.

Clarity over chaos

If there’s one thing I want bloggers to take away from all this, it’s that clarity always wins.

We’re living through the fastest transformation in the history of search. 

AI is rewriting how content is discovered, ranked, and retrieved. 

Yes, that’s scary. But it’s also full of opportunity for those willing to adapt.

I’ve seen it hundreds of times in audits this year. 

Bloggers who simplify their sites, clean up their data, and focus on authority signals see measurable results. 

They show up in AI Overviews. They regain lost rankings. They build audiences that keep coming back, even when algorithms shift again.

This isn’t about fighting AI – it’s about working with it. The goal is to show the system who you are and why your content matters.

Here’s my advice, whichever professional you choose:

  • Get your site audited by someone who understands both SEO and AI search.
  • Keep your content updated and your structure clean.
  • Make your brand easy to recognize, both to readers and to machines.
  • Build for toddlers, drunk adults, and LLMs.

It’s never been harder to be a content creator, but it’s never been more possible to build something that lasts. 

The blogs that survive the next five years will be organized, human, and clear.

The future of blogging belongs to the creators who embrace clarity over chaos. AI won’t erase the human voice – it’ll amplify the ones that are worth hearing. 

Here’s to raised voices and future success. Good luck out there.

Dig deeper: Organizing content for AI search: A 3-level framework

Regex for SEO: The simple language that powers AI and data analysis

Regex is a powerful – yet overlooked – tool in search and data analysis. 

With just a single line, you can automate what would otherwise take dozens of lines of code.

Short for “regular expression,” regex is a sequence of characters used to define a pattern for matching text.

It’s what allows you to find, extract, or replace specific strings of data with precision.

In SEO, regex helps you extract and filter information efficiently – from analyzing keyword variations to cleaning messy query data. 

But its value extends well beyond SEO. 

Regex is also fundamental to natural language processing (NLP), offering insight into how machines read, parse, and process text – even how large language models (LLMs) tokenize language behind the scenes.
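
As a toy illustration of that NLP connection, the snippet below (my own sketch, not a production tokenizer) uses a single regex to split text into word and punctuation tokens — a much-simplified cousin of the pre-tokenization patterns real LLM tokenizers rely on:

```python
import re

# Toy regex tokenizer: \w+ grabs runs of word characters,
# [^\w\s] grabs any single punctuation mark.
text = "Regex helps LLMs split text into tokens, e.g. words and punctuation."
tokens = re.findall(r"\w+|[^\w\s]", text)

print(tokens[:8])
```

Real tokenizers (e.g., BPE-based ones) use far more elaborate patterns, but the principle is the same: a regex decides where one token ends and the next begins.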

Regex uses in SEO and AI search

Before getting started with regex basics, I want to highlight some of its uses in our daily workflows.

Google Search Console has a regex filter functionality to isolate specific query types.

One of the simplest and most commonly used patterns is the brand regex brandname1|brandname2|brandname3, which is very useful when users write your brand name in different ways.
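
The same brand filter is easy to reproduce outside Search Console, for example when segmenting an exported query list. Here’s a small Python sketch using the re module; the brand spellings and queries below are made-up placeholders:

```python
import re

# Hypothetical brand variations -- equivalent to the GSC filter
# regex: acme|ac me|acmee
brand_pattern = re.compile(r"acme|ac me|acmee", re.IGNORECASE)

queries = [
    "acme pricing",
    "ac me login",
    "best crm software",  # non-branded
    "Acmee reviews",
]

# Keep only the queries where the brand pattern matches anywhere.
branded = [q for q in queries if brand_pattern.search(q)]
print(branded)
```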

Google Analytics also supports regex for defining filters, key events, segments, audiences, and content groups.

Looker Studio allows you to use regex to create filters, calculated fields, and validation rules.

Screaming Frog supports the use of regex to filter and extract data during a crawl and also to exclude specific URLs from your crawl.

Screaming Frog regex

Google Sheets enables you to test whether a cell matches a specific regex. Simply use the function REGEXMATCH(text, regular_expression).

In SEO, we’re surrounded by tools and features just waiting for a well-written regex to unlock their full potential.

Regex in NLP

If you’re building SEO tools, especially those that involve content processing, regex is your secret weapon.

It gives you the power to search, validate, and replace text based on advanced, customizable patterns.

Here’s a Google Colab notebook with an example of a Python script that takes a list of queries and extracts different variations of my brand name. 

You can easily customize this code by plugging it into ChatGPT or Claude alongside your brand name.

Google Colab - BrandName_Variations

Fun fact: By building this code, I accidentally found a good optimization opportunity for my personal brand.

How to write regex

I’m a fan of vibe coding – but not the kind where you skip the basics and rely entirely on LLMs. 

After all, you can’t use a calculator properly if you don’t understand numbers or how addition, multiplication, division, and subtraction work.

I support the kind of vibe coding that builds on a little coding knowledge – enough to use LLMs effectively, test what they produce, and troubleshoot when needed.

Likewise, learning the basics of regex helps you use LLMs to create more advanced expressions.

Simple regex cheat sheet

  • . – Matches any single character.
  • ^ – Matches the start of a string.
  • $ – Matches the end of a string.
  • * – Matches 0 or more of the preceding character.
  • + – Matches 1 or more of the preceding character.
  • ? – Makes the preceding character optional (0 or 1 time).
  • {} – Matches the preceding character a specific number of times.
  • [] – Matches any one character inside the brackets.
  • \ – Escapes special characters or signals special sequences like \d.
  • | – Matches the expression before or after it (alternation).
  • () – Groups characters together (for operators or capturing).

Example usage

Here’s a list of 10 long-tail keywords. Let’s explore how different regex patterns filter them using the Regex101 tool.

  • “Best vegan recipes for beginners.”
  • “Affordable solar panels for home.”
  • “How to train for a marathon.”
  • “Electric cars with longest battery range.”
  • “Meditation apps for stress relief.”
  • “Sustainable fashion brands for women.”
  • “DIY home workout routines without equipment.”
  • “Travel insurance for adventure trips.”
  • “AI writing software for SEO content.”
  • “Coffee brewing techniques for espresso lovers.”

Example 1: Extract any two-character sequence that starts with an “a.” The second character can be anything (i.e., a, then anything).

  • Regex: a.
  • Output: (All highlighted words in the screenshot below.)
Regex usage - Example 1

Example 2: Extract any string that starts with the letter “a” (i.e., a is the start of the string, then followed by anything).

  • Regex: ^a.
  • Output: (All highlighted words in screenshot below.)
Regex usage - Example 2

Example 3: Extract any string that starts with an “a” and ends with an “e” (i.e., any line that starts with a, followed by anything, then ends with an e).

  • Regex: ^a.*e$
  • Output: (All highlighted words in the screenshot below.)
Regex usage - Example 3

Example 4: Extract any string that contains two consecutive “s” characters (i.e., “ss”).

  • Regex: s{2}
  • Output: (All highlighted words in the screenshot below.)
Regex usage - Example 4

Example 5: Extract any string that contains “for” or “with.”

  • Regex: for|with
  • Output: (All highlighted words in the screenshot below.)
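
If you prefer testing in code rather than in Regex101, these examples can be reproduced with Python’s re module. In this sketch of mine, the keywords are lowercased and stripped of punctuation so the ^ and $ anchors behave like the per-line anchors in the screenshots:

```python
import re

# The 10 long-tail keywords, lowercased and without trailing periods.
keywords = [
    "best vegan recipes for beginners",
    "affordable solar panels for home",
    "how to train for a marathon",
    "electric cars with longest battery range",
    "meditation apps for stress relief",
    "sustainable fashion brands for women",
    "diy home workout routines without equipment",
    "travel insurance for adventure trips",
    "ai writing software for seo content",
    "coffee brewing techniques for espresso lovers",
]

def match_lines(pattern: str, lines: list[str]) -> list[str]:
    """Return the lines where the pattern matches anywhere."""
    compiled = re.compile(pattern)
    return [line for line in lines if compiled.search(line)]

print(match_lines(r"^a.", keywords))       # Example 2: starts with "a"
print(match_lines(r"^a.*e$", keywords))    # Example 3: starts with "a", ends with "e"
print(match_lines(r"s{2}", keywords))      # Example 4: contains "ss"
print(match_lines(r"for|with", keywords))  # Example 5: contains "for" or "with"
```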

I’ve also built a sample regex Google Sheet so you can play around, test, and experience the feature in Google Sheets, too. Check it out here.

Sample regex Google Sheet

Note: Cells in the Extracted Text column showing #N/A indicate that the regex didn’t find a matching pattern.

Where regex fits in your SEO toolkit

By exploring regex, you’ll open new doors for analyzing and organizing search data. 

It’s one of those skills that quietly makes you faster and more precise – whether you’re segmenting keywords, cleaning messy queries, or setting up advanced filters in Search Console or Looker Studio.

Once you’re comfortable with the basics, start spotting where regex can save you time. 

Use it to identify branded versus nonbranded searches, group URLs by pattern, or validate large text datasets before they reach your reports.
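
Grouping URLs by pattern, for instance, takes only a few lines. The URLs and path structure in this sketch are hypothetical:

```python
import re

urls = [
    "https://example.com/blog/regex-for-seo",
    "https://example.com/blog/ai-overviews",
    "https://example.com/product/widget-1",
    "https://example.com/about",
]

# Capture the first path segment after the domain as the "section".
section_re = re.compile(r"^https?://[^/]+/([^/]+)")

groups: dict[str, list[str]] = {}
for url in urls:
    match = section_re.match(url)
    section = match.group(1) if match else "(root)"
    groups.setdefault(section, []).append(url)

print(groups)
```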

Experiment with different expressions in tools like Regex101 or Google Sheets to see how small syntax changes affect results. 

The more you practice, the easier it becomes to recognize patterns in both data and problem-solving. 

That’s where regex truly earns its place in your SEO toolkit.

The Death of the Security Checkbox: BAS Is the Power Behind Real Defense

Security doesn’t fail at the point of breach. It fails at the point of impact. That line set the tone for this year’s Picus Breach and Attack Simulation (BAS) Summit, where researchers, practitioners, and CISOs all echoed the same theme: cyber defense is no longer about prediction. It's about proof. When a new exploit drops, scanners scour the internet in minutes. Once attackers gain a foothold,

ThreatsDay Bulletin: DNS Poisoning Flaw, Supply-Chain Heist, Rust Malware Trick and New RATs Rising

The comfort zone in cybersecurity is gone. Attackers are scaling down, focusing tighter, and squeezing more value from fewer, high-impact targets. At the same time, defenders face growing blind spots — from spoofed messages to large-scale social engineering. This week’s findings show how that shrinking margin of safety is redrawing the threat landscape. Here’s what’s

PhantomRaven Malware Found in 126 npm Packages Stealing GitHub Tokens From Devs

Cybersecurity researchers have uncovered yet another active software supply chain attack campaign targeting the npm registry with over 100 malicious packages that can steal authentication tokens, CI/CD secrets, and GitHub credentials from developers' machines. The campaign has been codenamed PhantomRaven by Koi Security. The activity is assessed to have begun in August 2025, when the first

Australia's police to use AI to decode criminals' emoji slang to curb online crime — "crimefluencers" will be decoded and translated for investigators

Australia's police are looking to build an AI tool that would detect and interpret emoji slang online in an effort to curb crime among bad actors in hateful communities, dubbed "crimefluencers." The AI will understand the difference between harmless lingo and coded messages to help police combat violent crime.

xMEMS raises $21M to advance solid-state chip cooling

The post xMEMS raises $21M to advance solid-state chip cooling appeared first on StartupHub.ai.

xMEMS Labs Inc. raised $21 million to commercialize its piezoMEMS technology, a solid-state chip cooling system for compact AI-powered devices.


AI’s Trillion-Dollar Reality: Reindustrialization and Geopolitical Strength

The post AI’s Trillion-Dollar Reality: Reindustrialization and Geopolitical Strength appeared first on StartupHub.ai.

The age of artificial intelligence, often shrouded in speculative hype, is now demonstrably “actually working,” according to Joe Lonsdale, Palantir co-founder and 8VC founding partner. This tangible progress, he asserts, heralds not just a technological revolution but a fundamental reindustrialization of the United States, demanding unprecedented capital and reshaping global power dynamics. Lonsdale shared these […]


AI’s Insatiable Energy Appetite Fuels Uranium Miners

The post AI’s Insatiable Energy Appetite Fuels Uranium Miners appeared first on StartupHub.ai.

The relentless ascent of artificial intelligence, alongside the broader push for electrification, is forging an unprecedented demand for power, a trend that Valérie Noël, Head of Trading at Syz Group, highlights as a potent catalyst for uranium miners. This insight, delivered during her recent interview on CNBC’s *Worldwide Exchange* with anchor Frank Holland, underscored a […]


Rakuten Deploys New Guardrail for SAE PII Detection and LLM as a judge

The post Rakuten Deploys New Guardrail for SAE PII Detection and LLM as a judge appeared first on StartupHub.ai.

A new SAE PII detection method deployed by Rakuten uses model internals to achieve a 96% F1 score, compared to just 51% using the same model as a black-box judge.


Solidatus raises £5M to advance AI data lineage platform

The post Solidatus raises £5M to advance AI data lineage platform appeared first on StartupHub.ai.

Data lineage provider Solidatus secured £5M to accelerate its AI-powered platform for enterprise data governance and compliance.


IBM’s Granite 4.0: Small Models, Outsized Impact on Enterprise AI

The post IBM’s Granite 4.0: Small Models, Outsized Impact on Enterprise AI appeared first on StartupHub.ai.

IBM’s latest iteration of its Granite models, Granite 4.0, is poised to reshape the enterprise AI landscape by delivering superior performance, unprecedented efficiency, and cost-effectiveness through a groundbreaking hybrid architecture. This new family of small language models challenges the conventional wisdom that larger models inherently equate to better results, demonstrating that strategic architectural innovation can […]


Q.ANT raises $80M to advance photonic AI processors

The post Q.ANT raises $80M to advance photonic AI processors appeared first on StartupHub.ai.

Q.ANT secured total funding of $80 million to commercialize its energy-efficient photonic processors for artificial intelligence and high-performance computing.


AI Agent Supervision: Sierra’s Answer to Rogue Chatbots

The post AI Agent Supervision: Sierra’s Answer to Rogue Chatbots appeared first on StartupHub.ai.

Sierra's platform uses AI 'Supervisors' for real-time correction and 'Monitors' for constant evaluation, aiming to solve the AI reliability problem with more AI.


AMD ROCm 7.1 Release Appears Imminent

AMD continues with their aggressive efforts to enhance their GPU software compute ecosystem with ROCm. The fire under them has been lit and they have been taking their software efforts more expeditiously in recent times to better compete with NVIDIA's CUDA ecosystem and ensuring their Instinct hardware is properly primed to compete. The release dance has begun for ROCm 7.1...

(PR) Cherry Intros Stream Desktop Ultimate Keyboard and Mouse With ProScroll Technology

The new CHERRY STREAM DESKTOP ULTIMATE is the ideal solution for anyone who values efficiency, comfort and precision. The set includes a wireless mouse and keyboard designed to perform across a range of tasks, from software development and graphic design to managing complex spreadsheets and extended coding sessions.

"The CHERRY STREAM ULTIMATE series is built to make precision feel easy. With the CHERRY ProScroll wheel powered by electromagnetic technology, you can pick from preset modes or fine-tune the feel to suit your work. From tactile steps for detailed tasks to freewheeling for fast navigation. Combined with great ergonomics and flexible wireless connectivity, the STREAM DESKTOP ULTIMATE is built to keep you comfortable and in control wherever you work", says Joakim Jansson, Head of Product & Portfolio at CHERRY.

(PR) ASUS Rolls Out NVIDIA GB300 NVL72 Rack Solution

ASUS today announced the rollout of XA GB721-E2, built on NVIDIA GB300 NVL72 rack-scale system. Designed for large-scale model training, high-throughput inference, and advanced AI/HPC workloads, the system combines breakthrough performance, sustainable liquid cooling, and serviceability at rack scale to help enterprises and research institutions accelerate innovation.

Rack-scale performance with NVIDIA Grace Blackwell Ultra GPU
NVIDIA GB300 NVL72 systems integrate 36 NVIDIA Grace CPUs and 72 NVIDIA Blackwell Ultra GPUs in a single NVIDIA NVLink domain, delivering ultra-low-latency, high-bandwidth GPU-to-GPU communication for trillion-parameter workloads. Together with the NVIDIA Quantum-X800 InfiniBand platform or the NVIDIA Spectrum-X Ethernet platform and the NVIDIA ConnectX-8 SuperNIC, the system is engineered for high-throughput inference and cluster-scale expansion, fully supporting the growing needs of enterprise AI factories.

(PR) ASUS Republic of Gamers Unveils Rapture GT-BE19000AI AI Gaming Router

ASUS Republic of Gamers today announced the ROG Rapture GT-BE19000AI, the world's first AI router. Combining breakthrough intelligence, platform-level flexibility, and next-generation performance, it is the ideal solution in an era where gaming, streaming, and smart home devices demand more from networks. The GT-BE19000AI delivers intelligence, automation, and reliability.

Built-in neural processing unit
The ROG Rapture GT-BE19000AI is the first router equipped with a built-in AI core. The AI core is complemented with a quad-core CPU, 4 GB of DDR4, and 32 GB of onboard storage. Unlike conventional routers that rely solely on the CPU, the GT-BE19000AI offers an integrated system that provides dedicated compute resources for Docker apps and related workloads.

(PR) Alphacool Launches New Core XT45 Full Copper Radiators

Alphacool International GmbH, based in Braunschweig, Germany, is a pioneer in PC water cooling technology. With one of the most comprehensive product portfolios in the industry and over 20 years of experience, Alphacool is now expanding its lineup with the Core XT45 Full Copper Radiators. As the first models of the Core Series, the Core XT45 radiators combine Alphacool's trusted full-copper construction with a completely refreshed design. The clean, modern look can be customized using interchangeable side panels - available either with integrated aRGB lighting or in a classic, unlit style. Thanks to the magnetic mounting system, they can be swapped quickly and without tools at any time.

Inside, Alphacool continues to rely on full-copper components throughout all water-carrying sections for maximum thermal efficiency. A redesigned internal layout with 16 water channels instead of 12 increases performance by improving both flow and heat transfer. For easy installation and handling, the radiators are equipped with two G1/4" ports in the signature Core look. An additional fill/drain port ensures easy integration and simpler system maintenance.

Samsung Ships HBM4 Samples to Customers, Mass Production in 2026

Samsung has just reported its third quarter 2025 results, and the company has confirmed that its next-generation HBM4 memory samples have been shipped to customers worldwide, with mass production expected in 2026. "HBM3E is currently in mass production and being sold to all related customers, while HBM4 samples are simultaneously being shipped to key clients," notes Samsung, confirming that its memory business is thriving and advanced memory solutions are in great demand. In addition, the company has confirmed that "In 2026, the Foundry Business will focus on providing a stable supply of new 2 nm GAA products and the HBM4 base-die, and beginning operations at the Company's fab in Taylor, Texas in a timely manner."

In an HBM stack, consisting of DRAM dies stacked up to 12-high and connected with TSVs, there is the possibility of embedding an optional base die with customized logic/accelerator circuitry tailored to a specific need. Companies usually opt for off-the-shelf HBM memory for the standardized pricing model implemented across suppliers like Samsung, Micron, and SK Hynix. However, when ordering quantities so large, like NVIDIA and AMD do, they can request special accommodation features. While this may not necessarily be a compute die that brings TeraFLOPS of power, it will likely be a data processing/logic die that helps route data packets more efficiently, cutting latency and improving performance. Especially during inference, where latency is the most important factor, having a "smarter" HBM could yield sizable, double-digit gains in token throughput.

AMD officially confirms Zen 6 “Medusa” Ryzen CPUs at OCP 2025

AMD confirms the leaked codename for Zen 6 Ryzen CPUs—Are all the leaks true? At the Open Compute Project Global Summit, AMD confirmed the codename for its next-generation Zen 6 Ryzen CPUs. AMD’s Zen 6 Ryzen CPUs are “Medusa”, a name that has long been discussed by hardware enthusiasts thanks to prior leaks. This seemingly […]

The post AMD officially confirms Zen 6 “Medusa” Ryzen CPUs at OCP 2025 appeared first on OC3D.

YouTube removes Windows 11 Microsoft account bypass video citing a community guideline violation with potential to cause serious physical harm or death — "I don't think Microsoft had anything to do with it."

YouTube took down a Windows 11 video demonstrating how to install Windows 11 using only a local account, indicating that it violates community guidelines and policies and that it could lead to serious physical harm or even death.

Regulation As Alpha: Why The Smartest Startups Now Build Legal Strategy Into Their DNA

Every founder knows the thrill of the moment: the first term sheet lands, the product is live, the market is opening up. But in 2025, there’s a new line in the sand: Did you clear the regulatory path before you scaled?

Today, it’s not enough to disrupt the market — you have to anticipate the rule-set that will govern it.

Investors are shifting gears. After a decade of “move fast and break things,” they’re asking: Who built the compliance engine before the crash? Because the truth is, regulation has become a form of alpha — a competitive advantage for startups that think of law not as a hurdle, but as a moat.

The new era of smart compliance

The startup landscape has changed. High-profile failures — from crypto exchanges to wild valuations in fintech and AI — taught us that the regulatory cost of growth can be massive. Today’s investors and founders alike expect legal strategy from day one, not as an afterthought.

Consider the RegTech market: One recent estimate projects it will swell to about $70.64 billion by 2030, growing at a compound annual rate of roughly 23%. Another forecast predicts growth to $70.8 billion by 2033. The message: Companies are no longer asking if they need compliance automation and legal-engineering infrastructure. They’re asking when they can monetize it.

So when a startup designs its product around KYC, AML, data-protection or licensing from the outset, it’s not just avoiding risk — it’s building a moat others will struggle to cross. For founders, regulation isn’t just the cost of entry anymore — it’s an edge at exit.

When the law becomes a moat

There are former unicorns, and there are regulation-ready unicorns. The difference hinges on when they built their compliance architecture, hired legal engineers and treated regulation as product.

Take payment infrastructure: Stripe built payment-security and licensing into its model early, as Stripe’s PCI Level 1 certification and multijurisdiction licenses (U.S. money-transmitter, EU/UK e-money) enabled it to integrate cleanly with Apple Pay, power Shopify’s native payments, and — per a 2023 announcement — expand its role processing payments for Amazon.

Or look at crypto: Coinbase built a licensure footprint early, publishing its U.S. money-transmitter licenses and securing New York’s BitLicense in 2017. Its 2021 SEC S-1 repeatedly frames regulatory compliance and licensing as fundamental to the business.

In insurtech, from the outset, Lemonade hired senior insurance veterans (e.g., former AIG executive Ty Sagalow) and, per its S-1 and subsequent filings, expanded licensure across the U.S., operationalizing the 50-state regulatory landscape rather than trying to route around it.

These examples show a pattern: When compliance is built in from the start, the cost of scaling drops and competitors face much higher entry bars. Regulation becomes a moat — not a burden.

The rise of ‘legal engineering’

Welcome to the era of the legal engineer. The traditional model (sign contract, then lawyer reads, then flagged risk) is being replaced by code, automation and internal teams who speak both product and law.

Startups such as Carta built cap-table software that includes “built-in tools and support to help with compliance year-round,” allowing it to embed governance and securities-law readiness into the product nature of equity management.

Plaid has publicly positioned itself for evolving “data use, access, and consumer permission” rules (e.g., Section 1033) by building features such as data transparency messaging and consent-capture into its API stack — indicating a clear regulatory-first posture in its product roadmap.

And what’s happening in AI? Founders are hiring general counsels on day one to forecast imminent regimes — privacy law (GDPR, CCPA), AI transparency bills, emerging algorithms-as-infrastructure regulation.

The startup battle isn’t simply product vs. product anymore — it’s regulatory architecture vs. regulatory architecture.

Reports back this up: One credible industry estimate shows the global compliance, governance and risk market is already around $80 billion and projected to reach $120 billion in the next five years. In short: Startups that solve compliance at scale are building infrastructure for everyone else to rent. That’s platform-level potential.

Investors are taking note

Regulation-ready startups aren’t just surviving — they’re attracting smarter capital. Venture funds now assess regulatory maturity, legal runway and governance readiness early on. A startup that can show it isn’t “waiting to deal with compliance” but designed for it has a valuation edge.

Crunchbase data shows global startup funding reached $91 billion in Q2 2025, up 11% year over year. While not all of that is focused on law or compliance, the trend signals that smart investors are digging deeper into risk assessment and governance. Legal tech funding is accelerating, too: the sector recently topped $2.4 billion in venture funding this year, an all-time high.

Funds are no longer only assessing TAM or go-to-market speed; they’re asking: “What’s the regulatory runway? Who owns risk? Who built the compliance pipeline?” Because in sectors like fintech, climate tech, health tech and AI, the fastest growth path is often the one that avoids the enforcement arm.

The future: law as competitive advantage

Let’s zoom out for a moment. We’re moving into a world where regulation isn’t a ceiling — it’s scaffolding. It defines markets, enables scaling and filters winners from pretenders. Founders who see law as a source of architecture, not as chewing-gum-on-the-shoe, will be the ones writing the playbook.

Think about AI: Startups that design for regulatory change (data-provenance, audit trails, rights management) are already positioning for the future.

Think about climate tech: Companies that can navigate evolving carbon-credit regimes or ESG disclosure laws are building invisible advantages.

Think about fintech: Those that mastered licensing, KYC/AML, consumer-data flows early are the backbone of infrastructure.

The next wave of unicorns won’t just have better tech — they’ll have fundamentally better legal DNA. They won’t just disrupt a market; they’ll help write the rules of the market before they scale.

Because in this new era, regulation isn’t a deadweight — it’s a launchpad.


Aron Solomon is the chief strategy officer for Amplify. He holds a law degree and has taught entrepreneurship at McGill University and the University of Pennsylvania, and was elected to Fastcase 50, recognizing the top 50 legal innovators in the world. His writing has been featured in Newsweek, The Hill, Fast Company, Fortune, Forbes, CBS News, CNBC, USA Today and many other publications. He was nominated for a Pulitzer Prize for his op-ed in The Independent exposing the NFL’s “race-norming” policies.


Why Felicis’ Newest Partner Focuses On Community Building To Win AI Deals At Seed

Feyza Haskaraman is joining Felicis Ventures as a partner after several years at Menlo Ventures, Crunchbase News has exclusively learned.

In her new role, Haskaraman will focus on investing in “soon-to-break-out” AI infrastructure, cybersecurity, and applications companies for Felicis, an early-stage firm with $3.9 billion in assets under management.

During her time at Menlo, Haskaraman sourced investments in startups including Semgrep, Astrix, Abacus, Parade and CloudTrucks — zeroing in early on how AI is reshaping developer security and enterprise infrastructure.

Feyza Haskaraman of Felicis Ventures
Feyza Haskaraman, partner at Felicis Ventures

Haskaraman, an MIT graduate who was born in Turkey, brings an engineering background to her role as an investor. She previously worked as an engineer at various companies at different growth stages, including Analog Devices, Fitbit and Nucleus Scientific. She is also a former McKinsey & Co. consultant who advised multibillion-dollar technology companies and early-stage startups on strategy and operations. It was after working with startups at McKinsey that her interest in venture capital was piqued, and she joined Insight Partners.

Her decision to join Menlo Park, California-based Felicis stems from a shared interest alongside firm founder and managing partner Aydin Senkut to build communities even in “unsexy” industries such as infrastructure and security, she said.

“Whether it’s connecting AI founders or bringing together technical and cybersecurity communities, the mission is the same: Believe in the best founders early and help them go the distance,” she told Crunchbase News.

Felicis is currently investing out of its 10th fund, a $900 million vehicle, its largest yet. More than 60% of its investments out of Fund 9 and 10 (so far) are seed stage; 94% are seed or Series A. In 83% of its investments, Felicis has led or co-led the round.

Nearly $3 of every $4 it has deployed has gone into AI-related companies, including n8n, Supabase, Mercor, Crusoe Energy Systems, Periodic Labs, Runway, Revel, Skild AI, Deep Infra, Browser Use, Evertune, Poolside, Letta and LMArena.

In an interview, Haskaraman shared more about her investment plans at Felicis, as well as why she thinks we’re in the “early innings” with AI. This interview has been edited for clarity and brevity.

Let’s talk more about community-building and why you think it’s so important. 

Over the past few years in the venture ecosystem, just providing the capital is not enough. You need to surround yourself with the best talent. You’re seeing one of the fiercest talent wars in terms of AI talent.

So one of the things that I’ve spent a lot of time on in my VC career is building a community, going back to my MIT roots, surrounding myself with founders, engineers and operators, and also going into specific domains, like cybersecurity — just building a network of CISOs that I communicate with regularly and really support them however I can, and then obviously get their take on the latest technology.

That type of community-building effort is something that Aydin and I will be weaving into strategy for Felicis as well.

Yes, Aydin (Felicis’ founder) has said that he thinks the next generation of enterprise investors aren’t just picking companies, they’re building ecosystems. Would you agree with that?

Yes, we’re fully aligned on that. First of all, it’s a way of sourcing. Being able to source the best founders involves surrounding yourself in a community of people. You get very close to them, and you want to be the first call when they decide to jump ship and start a business.

As early-conviction investors, we want to invest in founders as early as possible. So that’s why we want to immerse ourselves in these communities, which provide fertile ground for the technical founders coming in and building in AI.

You were investing in AI before the big boom took off. Would you say there’s too much hype around the space?

You are correct that there is a lot of euphoria around AI, but if you look at the overall landscape, we haven’t seen another technology that can have such a large impact.

And we’re already seeing the results in enterprises: buyers of these solutions, and consumers of these solutions, including myself and our team, are seeing immense productivity gains. I remain immensely optimistic about the future of AI and about investing in it; that’s what we are paid to do, and what I also enjoy as a former engineer.

Are there specific aspects of AI that have you particularly excited?

I personally feel we’re still very much at the early innings. It’s been three years since ChatGPT came out, and the model companies really pushed their products into our lives. But if you take a look at what’s happening now, we have agents that are coordinating and automating our work.

What are ways in which we should be securing agent architecture? And that is also evolving across the board, and if you think about another layer down, like the infrastructure to support these LLMs and agents, I have to ask “What do we need underneath?”

I think there’s a lot more that will come, and there’s a lot of hope for innovation that will happen both across the infrastructure layer, as well as agents. There’s also the issue of “can applications actually be enabled?” I go back to the importance of securing our interactions with the agents and making sure that they’re not abused and misused. It’s a great time to be investing in AI.

What stages are you primarily investing in at Felicis?

We try to go as early as possible. But obviously, given our fund’s size, we have flexibility to invest whenever we see the venture scale returns make sense. But the majority of our investments are seed.

It’s such a competitive investing environment right now. How do you stand out?

Ultimately, what founders value is how you will work with them, your references. They value how you show up in those tough times, how you surround them with talent, how you help them see around the corners. That matters a lot.

I believe that winning boils down to the experiences you’ve left prior founders with, people who can speak highly of you and how you work. I tend to be a big hustler. So, there’s a lot more value-add that we want to make sure we bring to the table, even before investments. And then after the investment we can continue to bring that type of value to a company.

Are you investing outside of AI?

I’m investing in AI infrastructure, cybersecurity and AI-enabled apps. We are also on the verge of a big overhaul in terms of the application layer; the companies that we’ve seen prior to AI are all getting disrupted.

We’re seeing AI scribes in healthcare intake solutions, for example. We’re seeing code-generation solutions in developer stacks. We are looking at every single vertical, as well as horizontal application. I’m very interested in how all of these verticals’ application layers will get a different type of automation.

What’s your take on the market overall right now?

I feel like I’ve lived three lifetimes in my investing career, just over the past few years. We as a VC community and tech ecosystem learned a lot, obviously, just in terms of what’s happening. We’re seeing a new ingredient in the market, AI, that did not exist during COVID.

Think about the fact that this is not a structural change in the market driven by the economy. This is truly a new technology. I would bucket those waves as separate.

I’m very grateful to be investing at this time. What a time to be investing, because AI is truly game-changing as a technology.

Clarification: The paragraph about Haskaraman’s investments at Menlo Ventures has been updated to more accurately reflect her role.

Related Crunchbase query:

Related reading:

Illustration: Dom Guzman


  1. Felicis Ventures is an investor in Crunchbase. They have no say in our editorial process. For more, head here.

Half-Life 3 HLX Optimization Work Continues, As A Trailer Is Supposedly Being Prepared

Half-Life series protagonist art

Work on Valve's HLX project, rumored to be the highly anticipated Half-Life 3, is continuing with more optimization passes, suggesting that the project's polishing phase is in full swing. In a new video shared on YouTube a few hours ago, Tyler McVicker, who has provided accurate updates on Valve's projects for years ahead of their official announcements, reviewed some of the additions made to the Source 2 engine with the Counter-Strike 2 October 15 update. Unlike past updates, which introduced new systems and features that aren't in use in any other game powered by the Valve engine, the latest update focuses more […]

Read full article at https://wccftech.com/half-life-3-hlx-optimization-trailer-prepared/

Intel And BOE Collaborate To Introduce 1Hz Refresh Rate And Multi-Frequency Display For Extended Battery Life

Samsung Galaxy Book 5 Pro 360 Laptop Leaked, Powered By Intel Core Ultra 200V "Lunar Lake" CPUs

Both companies are aiming for ultra-power-efficient displays that can save significant battery life on laptops. Intel and BOE Announce New AI Energy-Saving Techniques for Laptops, Which Will Adjust Display Refresh Rate According to the Content Last year, BOE, a Chinese display-panel manufacturer, unveiled its Winning Display 1Hz technology that can reduce power consumption by 65%. Today, Intel officially announced its partnership with BOE to deploy the 1Hz Refresh Rate technology and two more efficiency-enhancing features for the laptops, which are aimed at improving the battery life significantly. Intel says that these AI-based technologies will intelligently balance the energy efficiency with […]

Read full article at https://wccftech.com/intel-and-boe-collaborate-to-introduce-1hz-refresh-rate-and-multi-frequency-display/

AirPods Pro 3 Are An Excellent Pair Of Wireless Earbuds, But Cost $79 More Than The AirPods Pro 2 On Amazon; Will You Pick Value Over The Latest And Greatest?

AirPods Pro 2 are $169.99 off on Amazon, while the AirPods Pro 3 can be yours for $249

There are a myriad of differences separating the AirPods Pro 3 from the AirPods Pro 2, but regardless of the features that Apple has incorporated in its flagship wireless earbuds, for the majority of buyers, it all boils down to how much they are willing to pay. Some customers do not mind parting with $249 of their hard-earned cash for these high-quality earbuds, while others find solace in picking excellent value. On Amazon, the previous-generation AirPods Pro 2 are 32 percent off, or a $79 delta compared to the AirPods Pro 3, so which one will you pick? […]

Read full article at https://wccftech.com/airpods-pro-3-79-more-expensive-than-airpods-pro-2-which-are-32-percent-off-on-amazon/

Dragon Quest I & II HD-2D Remake – 5 Tips To Banish The Fiends

Dragon Quest I & II HD-2D Remake cover art with adventurers.

Though updated in every possible way, Dragon Quest I & II HD-2D Remake retains the challenge of its classic JRPG roots. If you are a newcomer to the series, understanding some essential quirks is key to enjoying both games right from the start. This guide will walk you through settings, exploration, and combat tips to tame the difficulty. NOTE: Tips devised and refined during two complete playthroughs of both games over the course of 45 hours at Dragon Quest difficulty in the game's PlayStation 5 1.0 version. Screenshots captured from the same version. 3 Essential Settings To Tame Dragon Quest […]

Read full article at https://wccftech.com/how-to/dragon-quest-i-ii-hd-2d-remake-5-tips-to-banish-the-fiends/

Resident Evil Requiem Opens Pre-Orders, Reveals Accessible PC System Requirements

RESIDENT EVIL Requiem text with character in a snowy background.

CAPCOM has officially opened pre-orders for its highly anticipated game Resident Evil Requiem. You can now purchase the ninth mainline installment of the beloved horror franchise across all platforms: PC (Steam and, for the first time, Epic Games Store), PlayStation 5, Xbox Series S and X, and Nintendo Switch 2. All pre-orders will include “Apocalypse,” a freebie costume for protagonist Grace, as a pre-order bonus. The Standard Edition is priced at $69.99, while the Deluxe Edition adds the Deluxe Kit for $10 more. The Deluxe Kit includes: To celebrate the aforementioned debut of the Japanese publisher on the Epic Games […]

Read full article at https://wccftech.com/resident-evil-requiem-opens-preorders-reveals-accessible-pc-specs/

Snapdragon 8 Gen 5 To Share Same CPU Cluster, Lithography As Snapdragon 8 Elite Gen 5 But With Lower Clock Speeds; Rumor Claims SoC Equal To Snapdragon 8 Elite

Snapdragon 8 Gen 5 technical specifications and benchmarks shared by a tipster

A new strategy applied by Qualcomm for next year is not just offering its top-tier Snapdragon 8 Elite Gen 5 for the most premium Android flagships out there, but also to offer an alternative to its phone partners that is more affordable and is mass produced on TSMC’s newest 3nm ‘N3P’ process. That SoC is the Snapdragon 8 Gen 5, and given the ludicrous price of the Snapdragon 8 Elite Gen 5, we are confident that we will witness the less expensive solution power several ‘price to performance’ smartphones in 2026. A tipster has shared various specifications of the Snapdragon 8 […]

Read full article at https://wccftech.com/snapdragon-8-gen-5-specifications-and-benchmarks-shared-by-tipster/

Invent Assistants – Scale your customer support without scaling your headache


Invent offers a cutting-edge platform for creating, launching, and managing smart AI assistants tailored for seamless customer engagement.

Key features include:

  • A Unified AI Inbox that consolidates all customer conversations across multiple channels, ensuring efficient management
  • Real-time conversation management
  • Seamless AI-to-human handoffs
  • Complete conversation continuity
  • No-code setup and customization

Start building your AI assistant today with no credit card required and experience the future of customer support.

View startup

WD launches investigation into problems with its controversial SMR hard drives — same drives that got WD sued in 2021 now reporting failure rates due to 'fundamental' flaws

Western Digital Blue and Red HDDs from 2020 that use SMR technology are experiencing enough failures to prompt an investigation from WD itself. These same drives, which included SMR without telling customers, resulted in a class action lawsuit against WD in 2021.

AI introspection is real, but it’s unreliable

The post AI introspection is real, but it’s unreliable appeared first on StartupHub.ai.

New research suggests AI models can sometimes introspect, checking their own internal 'intentions' to determine if an output was a mistake.

The post AI introspection is real, but it’s unreliable appeared first on StartupHub.ai.

Amplitude targets AI brand monitoring chaos

The post Amplitude targets AI brand monitoring chaos appeared first on StartupHub.ai.

Amplitude's new tool formalizes the race for AI brand monitoring, a discipline for an era where being mentioned by an AI is the new top search result.

The post Amplitude targets AI brand monitoring chaos appeared first on StartupHub.ai.

SWE-1.5 model ends the AI speed vs. smarts tradeoff

The post SWE-1.5 model ends the AI speed vs. smarts tradeoff appeared first on StartupHub.ai.

The SWE-1.5 model's performance comes from co-designing the AI model, agent harness, and inference stack as one unified system, not just from training a better model.

The post SWE-1.5 model ends the AI speed vs. smarts tradeoff appeared first on StartupHub.ai.

FAKTUS raises €56M to build neobank for construction SMEs

The post FAKTUS raises €56M to build neobank for construction SMEs appeared first on StartupHub.ai.

FAKTUS, a neobank for construction SMEs, raised €56 million to scale its AI-powered platform that provides fast financing to solve industry payment delays.

The post FAKTUS raises €56M to build neobank for construction SMEs appeared first on StartupHub.ai.

Human Health raises €4.7M to advance its Precision Health platform

The post Human Health raises €4.7M to advance its Precision Health platform appeared first on StartupHub.ai.

Human Health raised €4.7M to expand its AI-powered Precision Health platform, which helps people with chronic conditions track their health and generate actionable insights.

The post Human Health raises €4.7M to advance its Precision Health platform appeared first on StartupHub.ai.

Intel Arc GPU Graphics Drivers 101.8247 WHQL Released

Intel has released the latest version of its Arc GPU Graphics Drivers, version 101.8247 WHQL. The latest driver update is the same as the previously released 101.8247 Beta version, but with WHQL certification; it brings Game Ready optimizations for ARC Raiders, Europa Universalis V, Football Manager 26, Jurassic World Evolution 3, The Outer Worlds 2, and Vampire The Masquerade: Bloodlines 2.

The Intel Arc Graphics Drivers 101.8247 WHQL also fixes several issues with Intel Arc B-series and A-series graphics cards, including an application crash in both Satisfactory and World of Warcraft: Dragonflight games, as well as the same application crash issue when switching graphics API to DX11 in World of Warcraft: Dragonflight with both Intel Core Ultra Series 1 and Intel Core Ultra Series 2 processors with built-in Intel Arc GPUs.

DOWNLOAD: Intel Arc GPU Graphics Drivers 101.8247 WHQL

AMD openSIL Targets "Zen 6" Support in the First Half of 2027

AMD used the Open Compute Project Global Summit in San Jose to provide an update on openSIL, its initiative to replace the legacy AGESA firmware with an open silicon-initialization stack, with timelines for CPUs based on the "Zen 6" IP. Chief Firmware Architect Raj Kapoor presented the latest developments and confirmed that slides and a video of the talk are now available to the public. As part of the update, AMD released the openSIL Firmware Architecture Specification 1.0 and initial platform code for its "Phoenix" client SoCs, marking a shift from proof-of-concept work to wider platform availability. Kapoor also outlined a practical release schedule for future platforms. AMD reiterated that it will initially keep some platform sources under NDA until product launch, then release open-source platform code approximately one quarter after shipping.

According to this schedule, AMD plans to publish the "Venice" 6th Gen EPYC openSIL sources in 2026, suggesting a Venice product launch by Q3 2026, and to release the sources for the "Zen 6"-based "Medusa" Ryzen CPUs in the first half of 2027. openSIL is designed as a modular, three-part static library written to modern C standards, intended to integrate seamlessly with any x86 host firmware while remaining scalable to different customer needs. Beyond timelines and deliverables, AMD envisions openSIL as an effort to enhance transparency, speed up integration for hyperscalers, and improve security through auditable code. The project will accept public pull requests, although contributions will be reviewed to protect sensitive microarchitectural IP. This balance between openness and protection was a recurring theme in the presentation, as AMD seeks to open more of its firmware for community collaboration while carefully limiting how much of the core design detail behind its CPUs it exposes.

(PR) Samsung Electronics Announces Third Quarter 2025 Results

Samsung Electronics today reported financial results for the third quarter ended Sept. 30, 2025. The Company posted KRW 86.1 trillion in consolidated revenue, an increase of 15.4% compared to the previous quarter. Operating profit increased to KRW 12.2 trillion. The Device Solutions (DS) Division reported a 19% increase in sales quarter-on-quarter (QoQ), with the Memory Business setting an all-time high for quarterly sales, driven by strong growth of HBM3E and server SSDs. Meanwhile, the Device eXperience (DX) Division posted a revenue increase of 11% QoQ due to the successful launch of new foldable phones and solid flagship sales.

Looking ahead to Q4, the rapid growth of the AI industry is expected to open up new market opportunities for both the DS and DX Divisions. The DS Division plans to focus on enhancing its performance by increasing sales of high-value-added memory products tailored to AI. The semiconductor market is expected to remain strong, driven by ongoing AI investment momentum. Meanwhile, the DX Division will strengthen its efforts to launch AI products equipped with the most innovative technologies through open collaborations with leading global partners in respective business segments.

(PR) Phison and RedData Release aiDAPTIV+ Solutions for U.S. Classified AI Programs

Phison Electronics, a global leader in NAND flash and AI infrastructure technologies, and RedData, an RPI-CS, Inc. division, today announced the availability of Phison's aiDAPTIV+ GPU memory extension technology integrated into RedData's secure solutions for U.S. Federal classified programs. Following the recent release of America's AI Action Plan, many federal agencies and system integrators are faced with implementing AI infrastructure using secure and future-proof methodologies. The collaboration between Phison and RedData addresses these critical requirements for affordable, secure, on-premises AI infrastructure within government use cases, the intelligence community and national labs. By leveraging Phison's aiDAPTIV+ AI solution, Federal agencies can affordably accelerate inference and scale large language model (LLM) training while maintaining compliance with classified environment requirements.

Embark Addresses Gen-AI Use: Arc Raiders "In No Way Uses Generative AI" Except Where It Apparently Does

Arc Raiders is slated to launch on October 30, and, with the Steam Store page already up, along with the requisite declaration of the use of generative AI in the new shooter, questions have been floating around about the full extent of the use of AI in Arc Raiders. Presumably to get out ahead of the criticism that's inevitably going to be generated—especially after Embark bragged about being able to use AI to generate a 3D model of a gun from as little as a YouTube video—Arc Raiders' design director, Virgil Watkins, explained to PCGamesN in a recent interview that Arc Raiders "in no way uses generative AI."

Curiously, Watkins goes on to clarify that the game does make use of machine learning—a term that has become synonymous with AI in recent years—for the locomotion of some of the more complicated drones. However, the game still bears Steam's obligatory warning about generated content, which, he explains, is there because of the same AI voice model system Embark used in The Finals. According to Watkins, voice actors were hired in order to train the AI model, which offers more versatile ping functionality "capable of saying every single item name, every single location name, and compass directions." That AI disclosure on the Steam Store page may also relate to generative AI being used in the game's development as a tool to speed up development and iteration—a use case that was confirmed to have been used in at least the early development of Arc Raiders, as per another interview with The Game Business on YouTube.

Pitchwise – Securely share your pitch deck, track investor engagement and raise funds.


Pitchwise is the smart way for founders to share pitch decks and fundraising materials. Instead of sending PDFs into the void, you get full control and visibility. Require email verification, disable downloads, revoke links anytime, and add your own branding. Founders can embed calls-to-action like booking meetings or gathering feedback directly inside the deck. Powerful analytics show who viewed your deck, for how long, slide-by-slide, even by location and visit frequency—with instant notifications.

Pitchwise also offers plug-and-play deck templates, curated investor lists, and a growing library of 200+ fundraising resources. Free to start, with Pro at $13 per user per month or $78 per year.

View startup

Alphabet’s Q3 Surge Defies AI Cannibalization Fears

The post Alphabet’s Q3 Surge Defies AI Cannibalization Fears appeared first on StartupHub.ai.

Alphabet’s recent third-quarter results have sent a clear message to the market: far from cannibalizing its foundational search business, generative AI appears to be bolstering it, contributing to a robust financial performance that surpassed expectations. This narrative, delivered by CNBC’s MacKenzie Sigalos on ‘Closing Bell Overtime’ to anchor John, highlights Alphabet’s strategic positioning and significant […]

The post Alphabet’s Q3 Surge Defies AI Cannibalization Fears appeared first on StartupHub.ai.

Resident Evil: Requiem To Get Exclusive Switch Controller and Fortnite Crossover at Launch

With the official February 2026 launch date of Resident Evil: Requiem rapidly approaching, Capcom has officially made the next installment in the franchise available for pre-order for PC via Steam and the Epic Games Store, PS5, Xbox Series S|X, and Nintendo Switch 2. The base game is available for $69.99, while the Deluxe Edition will set you back $79.99. Along with the pre-order announcement, Capcom also revealed the $99 Requiem Nintendo Switch 2 Pro controller, available via the Nintendo Store, and that there will be a collaboration with Fortnite when Requiem is purchased through the Epic Games Store. The full scope of the Fortnite content is still unconfirmed, but Capcom mentions a Grace Ashcroft outfit, at the very least.

Check out our hands-on coverage of Resident Evil: Requiem from Gamescom 2025.

Requiem will be available in both standard and Deluxe versions, with the Deluxe version offering a host of in-game perks, including five costumes, four weapon skins, two screen filters, and a handful of other in-game lore items and cosmetics. Capcom also confirmed an exclusive Amiibo depicting the game's protagonist, Grace Ashcroft, although no images of this have been released just yet. Both the new controller and the Amiibo will launch on February 27, along with the new Resident Evil game.

Amazon Games Layoffs: Internal Memo Confirms Shift From AAA to AI-Powered Games

It was only just revealed that Amazon Games appears to be slowly sunsetting New World: Aeternum, with the most recent content update officially revealed as the last in the game's four-year post-launch development cycle, and the servers for the MMO only guaranteed to be maintained and online "through 2026." However, the latest wave of layoffs at Amazon, which is slated to affect at least 14,000 workers, seemingly means that New World won't be the only premium Amazon Games experience to get the cut. Amazon's VP of games, Steve Boom, explains that Amazon has "made the difficult decision to halt a significant amount of our first-party AAA game development work - specifically around MMOs - within Amazon Game Studios."

Reporting on the layoffs, Variety managed to get hold of an internal memo that details a marked shift in Amazon Games' approach. Most notably, the internal memo confirms that Amazon will be diverting development resources away from AAA games, instead focusing on the Amazon Luna cloud gaming subscription service included with Amazon Prime, and its casual party and AI-powered games, like the somewhat peculiar Courtroom Chaos - Starring Snoop Dogg. Amazon will supposedly continue to work on existing AAA projects with external studios, but that is likely only to fulfill contractual obligations to do so and not because Amazon is invested in making AAA games.

AMD Confirms openSIL Support For Zen 6 Ryzen “Medusa” CPUs In 1H 2027, EPYC “Venice” In 2026

AMD Ryzen AI chip with Zen 6 architecture, highlighting advanced microprocessor technology.

AMD has confirmed its commitment to openSIL "Open Firmware" for next-gen Zen 6-based Ryzen "Medusa" & EPYC "Venice" CPUs. openSIL "Open Firmware" Support For AMD's Next-Gen Zen 6-Powered Ryzen "Medusa" & EPYC "Venice" CPUs Confirmed openSIL or Open Firmware is aimed to be a replacement for traditional firmware solutions such as AGESA. The project was first announced in 2023 and was going to be used for both client and server offerings. At OCP Summit 2025, AMD once again reaffirmed its commitment to openSIL and detailed its plans for future Zen 6 CPUs. Just as a recap, openSIL firmware will offer: […]

Read full article at https://wccftech.com/amd-confirms-opensil-support-zen-6-ryzen-medusa-cpus-1h-2027-epyc-venice-2026/

Samsung Preps For Mass Production On Next-Gen HBM4 Memory in 2026: 24Gb GDDR7, And 128GB+ DDR5 Products In The Plans Too

Samsung chip labeled HBM and Logic on a circuit board background.

Samsung is also set to begin production of next-gen HBM4 memory, 24 Gb GDDR7 DRAM & 128 GB+ products in 2026. Samsung All Set To Enter Mass Production on Next-Gen Memory Products Including Stable Supply of 2nm GAA Process In 2026 Samsung has announced its Q3 2025 earnings report, highlighting a 15.4% increase in revenue versus the previous quarter. The South Korean technology company posted a revenue of KRW 86.1 trillion, and also set an all-time high from quarterly sales for its Memory business, mainly driven by strong demand for its HBM3E memory and server SSDs, thanks to heightened AI […]

Read full article at https://wccftech.com/samsung-mass-production-next-gen-hbm4-memory-2026-24gb-gddr7-128gb-ddr5/

ExpenseKit - Expense Tracker & Smart Budgets – AI-powered spending and budgeting for smarter money management


ExpenseKit is a simple yet powerful expense tracker that helps you stay on top of your money. Track your spending, set budgets, and view clear charts that show exactly where your money goes. With AI-powered insights, it makes managing finances smarter and easier.

Built with privacy in mind, ExpenseKit keeps your data secure while giving you full control. Easy backup, export your records anytime, and even manage expenses offline. It’s the easiest way to build better financial habits and save more with less effort.

View startup

From static classifiers to reasoning engines: OpenAI’s new model rethinks content moderation

Enterprises, eager to ensure any AI models they use adhere to safety and safe-use policies, fine-tune LLMs so they do not respond to unwanted queries. 

However, much of the safeguarding and red teaming happens before deployment, “baking in” policies before users fully test the models’ capabilities in production. OpenAI believes it can offer a more flexible option for enterprises and encourage more companies to bring in safety policies. 

The company has released two open-weight models under research preview that it believes will make enterprises and models more flexible in terms of safeguards. gpt-oss-safeguard-120b and gpt-oss-safeguard-20b will be available on a permissive Apache 2.0 license. The models are fine-tuned versions of OpenAI’s open-source gpt-oss, released in August, marking the first release in the oss family since the summer.

In a blog post, OpenAI said oss-safeguard uses reasoning “to directly interpret a developer-provided policy at inference time — classifying user messages, completions and full chats according to the developer’s needs.”

The company explained that, since the model uses a chain-of-thought (CoT), developers can get explanations of the model's decisions for review. 

“Additionally, the policy is provided during inference, rather than being trained into the model, so it is easy for developers to iteratively revise policies to increase performance," OpenAI said in its post. "This approach, which we initially developed for internal use, is significantly more flexible than the traditional method of training a classifier to indirectly infer a decision boundary from a large number of labeled examples."

Developers can download both models from Hugging Face

Flexibility versus baking in

At the onset, AI models will not know a company’s preferred safety triggers. While model providers do red-team models and platforms, these safeguards are intended for broader use. Companies like Microsoft and Amazon Web Services even offer platforms to bring guardrails to AI applications and agents. 

Enterprises use safety classifiers to help train a model to recognize patterns of good or bad inputs. This helps the models learn which queries they shouldn’t reply to. It also helps ensure that the models do not drift and continue to answer accurately.

“Traditional classifiers can have high performance, with low latency and operating cost," OpenAI said. "But gathering a sufficient quantity of training examples can be time-consuming and costly, and updating or changing the policy requires re-training the classifier."

The models take in two inputs at once before outputting a conclusion on where the content falls: a policy, and the content to classify under that policy's guidelines. OpenAI said the models work best in situations where: 

  • The potential harm is emerging or evolving, and policies need to adapt quickly.

  • The domain is highly nuanced and difficult for smaller classifiers to handle.

  • Developers don’t have enough samples to train a high-quality classifier for each risk on their platform.

  • Latency is less important than producing high-quality, explainable labels.

The company said gpt-oss-safeguard “is different because its reasoning capabilities allow developers to apply any policy,” even ones they’ve written during inference. 
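This policy-at-inference pattern can be sketched in a few lines. The helper below, `build_safeguard_request`, and its message format are illustrative assumptions, not OpenAI's documented schema; the point is simply that the policy travels with each request, so revising it requires no retraining:

```python
# Hypothetical sketch of the two-input pattern described above: the
# moderation policy is supplied at inference time alongside the content
# to classify, rather than being trained into the model's weights.

def build_safeguard_request(policy: str, content: str) -> list[dict]:
    """Pair a moderation policy with the content to classify under it."""
    return [
        # The policy rides in the system turn; editing this string is the
        # whole "update the classifier" step in this approach.
        {"role": "system",
         "content": f"Classify the user message under this policy:\n{policy}"},
        {"role": "user", "content": content},
    ]

policy = "Flag messages that request help evading an account ban."
request = build_safeguard_request(policy, "How do I make a new account after a ban?")
# `request` would then be sent to the model (e.g. loaded from Hugging Face);
# its chain-of-thought output explains which policy clause applied.
```

Because the policy is just part of the input, iterating on it is a text edit rather than a labeling-and-retraining cycle, which is the flexibility OpenAI contrasts with traditional classifiers.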

The models are based on OpenAI’s internal tool, the Safety Reasoner, which enables its teams to be more iterative in setting guardrails. They often begin with very strict safety policies, “and use relatively large amounts of compute where needed,” then adjust policies as they move the model through production and risk assessments change. 

Performing safety

OpenAI said the gpt-oss-safeguard models outperformed its GPT-5-thinking and the original gpt-oss models on multipolicy accuracy based on benchmark testing. It also ran the models on the ToxicChat public benchmark, where they performed well, although GPT-5-thinking and the Safety Reasoner slightly edged them out.

But there is concern that this approach could bring a centralization of safety standards.

“Safety is not a well-defined concept. Any implementation of safety standards will reflect the values and priorities of the organization that creates it, as well as the limits and deficiencies of its models,” said John Thickstun, an assistant professor of computer science at Cornell University. “If industry as a whole adopts standards developed by OpenAI, we risk institutionalizing one particular perspective on safety and short-circuiting broader investigations into the safety needs for AI deployments across many sectors of society.”

It should also be noted that OpenAI did not release the base model for the oss family of models, so developers cannot fully iterate on them. 

OpenAI, however, is confident that the developer community can help refine gpt-oss-safeguard. It will host a Hackathon on December 8 in San Francisco. 

Nvidia researchers unlock 4-bit LLM training that matches 8-bit performance

Researchers at Nvidia have developed a novel approach to train large language models (LLMs) in 4-bit quantized format while maintaining their stability and accuracy at the level of high-precision models. Their technique, NVFP4, makes it possible to train models that not only outperform other leading 4-bit formats but match the performance of the larger 8-bit FP8 format, all while using half the memory and a fraction of the compute.

The success of NVFP4 shows that enterprises can continue to cut inference costs by running leaner models that match the performance of larger ones. It also hints at a future where the cost of training LLMs will drop to a point where many more organizations can train their own bespoke models from scratch rather than just fine-tuning existing ones.

The quantization challenge

Model quantization is a technique used to reduce the computational and memory costs of running and training AI models. It works by converting the model's parameters, or weights, from high-precision formats like 16- and 32-bit floating point (BF16 and FP32) to lower-precision formats. The key challenge of quantization is to reduce the size of the model while preserving as much of its knowledge and capabilities as possible.

In recent years, 8-bit floating point formats (FP8) have become a popular industry standard, offering a good balance between performance and efficiency. They significantly lower the computational cost and memory demand for LLM training without a major drop in accuracy.

The next logical step is 4-bit floating point (FP4), which promises to halve memory usage again and further boost performance on advanced hardware. However, this transition has been challenging. Existing 4-bit formats, such as MXFP4, often struggle to maintain the same level of accuracy as their 8-bit counterparts, forcing a difficult trade-off between cost and performance.

How NVFP4 works

NVFP4 overcomes the stability and accuracy challenges of other FP4 techniques through a smarter design and a targeted training methodology. A key issue with 4-bit precision is its extremely limited range: It can only represent 16 distinct values. When converting from a high-precision format, outlier values can distort the entire dataset, harming the model's accuracy. NVFP4 uses a more sophisticated, multi-level scaling approach that better handles these outliers, allowing for a "more precise and accurate representation of tensor values during training," according to Nvidia.
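The outlier problem above can be made concrete with a toy NumPy sketch (an illustrative stand-in, not the actual NVFP4 E2M1 format): quantizing to a 16-level grid with one scale per small block, rather than one scale for the whole tensor, keeps a single outlier from destroying the resolution of every other value.

```python
import numpy as np

# FP4 can represent only 16 distinct values; we approximate that here with a
# fixed 16-level symmetric grid. Purely illustrative, not Nvidia's format.
LEVELS = np.linspace(-1.0, 1.0, 16)

def quantize(x, scale):
    # Map each value to the nearest representable level after scaling.
    idx = np.abs(x[:, None] / scale - LEVELS[None, :]).argmin(axis=1)
    return LEVELS[idx] * scale

def per_tensor_error(x):
    # One scale for the whole tensor: an outlier stretches the grid for everyone.
    return np.abs(x - quantize(x, np.abs(x).max())).mean()

def per_block_error(x, block=4):
    # One scale per small block: the outlier only hurts its own block.
    out = np.concatenate([
        quantize(b, np.abs(b).max()) for b in np.split(x, len(x) // block)
    ])
    return np.abs(x - out).mean()

rng = np.random.default_rng(0)
x = rng.normal(size=64)
x[0] = 50.0  # a single outlier

print(per_tensor_error(x) > per_block_error(x))  # block-wise scaling wins
```

NVFP4's multi-level scaling generalizes this idea: fine-grained scale factors confine the damage from outliers, preserving resolution for the bulk of the tensor.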

Beyond the format, the researchers introduce a 4-bit training recipe that achieves accuracy comparable to FP8. A central component is their “mixed-precision strategy.” Instead of converting the entire model to NVFP4, the majority of layers are quantized while a small fraction of numerically sensitive layers are kept in a higher-precision format like BF16. This preserves stability where it matters most. The methodology also adjusts how gradients are calculated during backpropagation — or the model's learning phase — to reduce biases that can accumulate from low-precision arithmetic.
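A minimal sketch of such a mixed-precision policy, with hypothetical layer names and a made-up rule for which layers count as "sensitive" (the actual selection criteria in Nvidia's recipe may differ):

```python
# Hypothetical policy: keep numerically sensitive layers (embeddings, the
# final projection, normalization) in BF16 and quantize the rest to NVFP4.
SENSITIVE = {"embedding", "final_linear", "layernorm"}

def precision_for(layer_name: str) -> str:
    if any(tag in layer_name for tag in SENSITIVE):
        return "bf16"
    return "nvfp4"

layers = ["embedding", "block0.attn", "block0.mlp", "block0.layernorm",
          "block1.attn", "block1.mlp", "final_linear"]
plan = {name: precision_for(name) for name in layers}
print(plan)  # most layers nvfp4, sensitive ones bf16
```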

NVFP4 in practice

To test their approach, the Nvidia team trained a powerful 12-billion-parameter hybrid Mamba-Transformer model on a massive 10 trillion tokens. They then compared its performance directly against a baseline model trained in the widely popular FP8 format. The results showed that the NVFP4 model's training loss and downstream task accuracy closely tracked the FP8 version throughout the entire process.

The performance held across a wide range of domains, including knowledge-intensive reasoning, mathematics and commonsense tasks, with only a slight drop-off in coding benchmarks in late training.

"This marks, to our knowledge, the first successful demonstration of training billion-parameter language models with 4-bit precision over a multi-trillion-token horizon, laying the foundation for faster and more efficient training of future frontier models,” the researchers write.

According to Shar Narasimhan, Nvidia's director of product for AI and data center GPUs, in practice NVFP4's 4-bit precision format enables developers and businesses to train and deploy AI models with nearly the same accuracy as traditional 8-bit formats.

“By training model weights directly in 4-bit format while preserving accuracy, it empowers developers to experiment with new architectures, iterate faster and uncover insights without being bottlenecked by resource constraints,” he told VentureBeat. 

In contrast, FP8 (while already a leap forward from FP16) still imposes limits on model size and inference performance due to higher memory and bandwidth demands. “NVFP4 breaks that ceiling, offering equivalent quality with dramatically greater headroom for growth and experimentation,” Narasimhan said.

When compared to the alternative 4-bit format, MXFP4, the benefits of NVFP4 become even clearer. In an experiment with an 8-billion-parameter model, NVFP4 converged to a better loss score than MXFP4. To reach the same level of performance as the NVFP4 model, the MXFP4 model had to be trained on 36% more data, a considerable increase in training time and cost.

In addition to making pretraining more efficient, NVFP4 also redefines what’s possible. “Showing that 4-bit precision can preserve model quality at scale opens the door to a future where highly specialized models can be trained from scratch by mid-sized enterprises or startups, not just hyperscalers,” Narasimhan said, adding that, over time, we can expect a shift from developing general-purpose LLMs to “a diverse ecosystem of custom, high-performance models built by a broader range of innovators.”

Beyond pre-training

Although the paper focuses on the advantages of NVFP4 during pretraining, its impact extends to inference, as well. 

“Models trained on NVFP4 can not only deliver faster inference and higher throughput but shorten the time required for AI factories to achieve ROI — accelerating the cycle from model development to real-world deployment,” Narasimhan said. 

Because these models are smaller and more efficient, they unlock new possibilities for serving complex, high-quality responses in real time, even in token-intensive, agentic applications, without raising energy and compute costs. 

Narasimhan said he looks toward a future of model efficiency that isn’t solely about pushing precision lower, but building smarter systems.

“There are many opportunities to expand research into lower precisions as well as modifying architectures to address the components that increasingly dominate compute in large-scale models,” he said. “These areas are rich with opportunity, especially as we move toward agentic systems that demand high throughput, low latency and adaptive reasoning. NVFP4 proves that precision can be optimized without compromising quality, and it sets the stage for a new era of intelligent, efficient AI design.”

Salesforce Agentic AI Gets Real-World Performance Benchmark

The post Salesforce Agentic AI Gets Real-World Performance Benchmark appeared first on StartupHub.ai.

SCUBA, a new benchmark, is redefining how Salesforce Agentic AI is evaluated, focusing on real-world enterprise software interaction and automation.


Synthesia reportedly raises $200M to advance AI video generation

The post Synthesia reportedly raises $200M to advance AI video generation appeared first on StartupHub.ai.

AI video generation company Synthesia reportedly raised $200M to scale its platform that turns text into videos using lifelike avatars for enterprise clients.


Battlefield RedSec Gets "Mostly Negative" Steam Reviews Because of Comparisons to Battlefield 6

Not even a day after the launch of the free-to-play battle royale Battlefield RedSec, the game has already experienced its first controversy, with a wave of negative reviews dropping RedSec's rating to "Mostly Negative" on Steam, with a mere 38% of reviews recommending the game. While RedSec's base gameplay has apparently sat well with many gamers, who praise the destruction mechanics and the large open maps, players seem upset about a number of issues largely related to how the free-to-play game interacts with and compares to the main Battlefield 6 game. The main takeaway is that gamers who bought Battlefield 6 feel pressured into playing RedSec, and that DICE and EA appear to have invested significantly more time and resources into RedSec than into Battlefield 6.

One recurring complaint is that Battlefield 6's weekly challenges are tied to game modes in RedSec, while other players seem upset that there is more variety in RedSec when it comes to vehicle options. The same appears to be true for the map and game design in RedSec, which features a larger map than Battlefield 6—even EA called the RedSec map the largest in any Battlefield game. That size offers players more freedom and perhaps even strategic advantages. "Oh wow a nice big map with lots of POIs and flanks. If only we could get something like this for the main game," reads one review. Complaints also abound, both on Reddit and in the Steam reviews, about bugs and the game's lack of polish, but those issues are likely to be resolved in time. Despite the complaints, Battlefield RedSec is currently eighth in SteamDB's top sales charts, with the Season 1 battle pass coming in fifth place.

MIXI’s Enterprise AI Adoption: A Blueprint for Accelerated Efficiency

The post MIXI’s Enterprise AI Adoption: A Blueprint for Accelerated Efficiency appeared first on StartupHub.ai.

MIXI, a Japanese company renowned for its communication-centric businesses like MONSTER STRIKE and FamilyAlbum, has demonstrated a remarkable blueprint for rapid, organization-wide AI adoption, deploying ChatGPT Enterprise to all employees within 45 days. This swift integration led to over 80% weekly usage within three months and the creation of more than 1,600 custom GPTs, yielding […]


Google AI Revenue Growth Fuels Record Quarter

The post Google AI Revenue Growth Fuels Record Quarter appeared first on StartupHub.ai.

Google's Q3 2025 earnings mark a record $100 billion quarter, with AI driving unprecedented Google AI revenue growth across its entire ecosystem.


Alphabet’s AI Gamble Pays Off in Q3, Fueling Search and Cloud Growth

The post Alphabet’s AI Gamble Pays Off in Q3, Fueling Search and Cloud Growth appeared first on StartupHub.ai.

The notion that generative AI might cannibalize Alphabet’s foundational search advertising business was a prevalent concern amongst investors and industry observers alike. However, the company’s Q3 results, as reported by CNBC’s MacKenzie Sigalos, unequivocally demonstrate a different narrative: AI is not merely a defensive play but a potent accelerant for Alphabet’s core segments and burgeoning […]


Microsoft’s Profitable AI Play: A Strategic Masterclass

The post Microsoft’s Profitable AI Play: A Strategic Masterclass appeared first on StartupHub.ai.

The prevailing market skepticism around AI’s immediate profitability finds a powerful counter-narrative in Microsoft’s recent earnings, suggesting a robust monetization strategy is already underway. Brent Thill, a Software & Internet Research Analyst at Jefferies, speaking on CNBC’s ‘Closing Bell Overtime’ with Kelly Evans and Jon Fortt, offered a sharp analysis of Microsoft’s Q1 results, highlighting […]


AI’s Second Inning: Decoding Big Tech’s Investment Spree and Market Reactions

The post AI’s Second Inning: Decoding Big Tech’s Investment Spree and Market Reactions appeared first on StartupHub.ai.

“We’re in the second innings of this,” declared Stephanie Link of Hightower Advisors on CNBC’s Closing Bell Overtime, referring to the burgeoning artificial intelligence trade. Her commentary, delivered amidst a flurry of recent earnings reports, offered a nuanced perspective on the market’s current fixation with AI, particularly concerning the substantial capital expenditures undertaken by major […]


OpenAI AgentKit: Accelerating Agentic Workflow Development from Months to Hours

The post OpenAI AgentKit: Accelerating Agentic Workflow Development from Months to Hours appeared first on StartupHub.ai.

“Your agent is only as good as its weakest link,” stated Henry Scott-Green, Product Manager at OpenAI, during a recent Build Hours session introducing AgentKit. This profound insight underpins the necessity for robust, integrated tools in the rapidly evolving landscape of AI agent development. AgentKit, OpenAI’s latest offering, aims to provide exactly that: a comprehensive […]


ENEOS Materials Redefines Enterprise AI Adoption with ChatGPT Enterprise

The post ENEOS Materials Redefines Enterprise AI Adoption with ChatGPT Enterprise appeared first on StartupHub.ai.

“AI will become infrastructure, just like electricity or computers. If you can harness its power, you’ll achieve much greater results.” This profound statement from Taku Ichibayashi, Manager of R&D Digital Group at ENEOS Materials, encapsulates the transformative vision driving one of Japan’s earliest and most successful deployments of ChatGPT Enterprise. The video showcases ENEOS Materials’ […]


AMDGPU With Linux 6.19 Will Support Analog Video Connectors For Old GCN 1.0 GPUs

Following last week's initial batch of AMDGPU kernel graphics driver changes intended for Linux 6.19, another round of new AMDGPU / Radeon / AMDKFD material was sent out today to DRM-Next. Notable with this pull is the Display Core "DC" work for analog video connectors as the initiative from one of Valve's contractors for improving the Radeon GCN 1.0 era GPU support with the AMDGPU driver...

GOG Preservation Program Grows to 248 Games With 16 New Games Including Original Hitman

Since it launched in November 2024, GOG's Preservation Program has earned the distribution platform and its parent company, CD Projekt Red, a not-insignificant amount of goodwill through its active development efforts to keep old games playable on modern PC hardware. This runs counter to recent trends in gaming that have seen popular games go offline while they still have active fan bases. The most recent wave of arrivals in the GOG Preservation Program includes some notable classics, like Tomb Raider GOTY, Tom Clancy's Splinter Cell, and Hitman: Codename 47, and brings the program's overall total to 248 games.

The preservation efforts aren't simply archival packages, either. GOG has a development team working to technically modernize the games and to test and ensure their stability: adding support for features like widescreen aspect ratios, fixing engine bugs and CPU usage issues, and adding frame rate limiters to eliminate physics errors. In total, GOG claims to have implemented 1,292 improvements across the games in its Preservation Program. Many of these are quality-of-life fixes, like support for an array of modern controllers and improved keyboard and mouse support. All of the games added to the program are DRM-free, and many have added support for cloud saves.

(PR) Microsoft Releases FY26 Q1 Earnings Report

Microsoft Corp. today announced the following results for the quarter ended September 30, 2025, as compared to the corresponding period of last fiscal year:
  • Revenue was $77.7 billion and increased 18% (up 17% in constant currency)
  • Operating income was $38.0 billion and increased 24% (up 22% in constant currency)
  • Net income, on a GAAP basis, was $27.7 billion and increased 12%, and on a non-GAAP basis was $30.8 billion and increased 22% (up 21% in constant currency)
  • Diluted earnings per share, on a GAAP basis, was $3.72 and increased 13%, and on a non-GAAP basis was $4.13 and increased 23% (up 21% in constant currency)
  • Non-GAAP results exclude the impact from investments in OpenAI, explained in the Non-GAAP Definition section below
"Our planet-scale cloud and AI factory, together with Copilots across high value domains, is driving broad diffusion and real-world impact," said Satya Nadella, chairman and chief executive officer of Microsoft. "It's why we continue to increase our investments in AI across both capital and talent to meet the massive opportunity ahead."

Amazon Axes New World Active Development After 4 Years: "It Is No Longer Sustainable To Support the Game" Despite 35,000 Active Players on Steam

New World: Aeternum (formerly just New World) launched four years ago, leaving a trail of NVIDIA GeForce RTX 3090 GPUs in its wake, and the MMO has since grown to around 35,000 active daily players on Steam alone. Unfortunately for those players, that doesn't seem to be enough for Amazon Games, which just announced that "it is no longer sustainable to continue supporting the game with new content updates," indicating that the game will no longer receive active development. The most recent update, Nighthaven, and the Season 10 content update will be the final updates for New World: Aeternum, but the game's servers will supposedly remain online "through 2026," pointing to a potential complete shutdown of New World after the end of that year.

It's not all bad news, though: seemingly in preparation for the game's gradual decommissioning, Amazon made the Rise of Angry Earth expansion free for PC players earlier in October. Curiously, Amazon will continue selling New World: Aeternum on all platforms "until further notice," and it will continue to be playable via Sony's PlayStation Plus subscription service. The studio has declined to provide refunds to those who recently purchased the game unaware of the imminent closure. Amazon has also clarified that it will continue to provide bug fixes and server maintenance for New World, presumably until the game is taken offline completely, although the exact end date for support on that front is unclear. There are also no changes to the availability and use of premium in-game currency in New World: Aeternum.

Tor Browser 15 brings vertical tabs and improved organization

Tor Browser 15.0 is based on Firefox 140 ESR, incorporating a year's worth of Mozilla's updates and security fixes. The update introduces vertical tabs for easier page management, along with new "workspaces" to organize tab groups more efficiently. Bookmarks are now accessible from the sidebar, and a redesigned address bar offers a cleaner, more modern browsing experience.


Apple Gets A Partial Win On The Narrowed Scope Of The AirPods Pro Crackling Lawsuit


The first-generation AirPods Pro have been hounding Apple ever since their launch back in 2019, which quickly gave rise to persistent complaints of crackling and static, prompting a lawsuit in November 2024. Now, however, Apple seems to have secured a partial victory of sorts by managing to have the scope of the lawsuit severely restricted. Apple only needs to defend itself against the fraud-by-omission claim in the AirPods Pro crackling lawsuit. Before going further, let's recap what has happened in this lawsuit so far: Now, Judge Noël Wise has handed a partial victory to Apple by throwing out the […]

Read full article at https://wccftech.com/apple-gets-a-partial-win-on-the-narrowed-scope-of-the-airpods-pro-crackling-lawsuit/

iPhone 17 Pro Max Storage Modification Saves A Whopping $800 In ‘Apple Tax’ But Requires Steady Hands, Patience And Expensive Machinery


Apple has pretty much increased the number of roadblocks for consumers clever enough to avoid paying the company a premium for its storage and RAM upgrades, forcing them to fork over a substantial sum. For instance, the 2TB version of the iPhone 17 Pro Max costs a mammoth $1,999, making it more expensive than the company’s higher-end MacBook Pro models. Fortunately, one intrepid modder has found a way to save $800 by performing this delicate procedure himself. However, bear in mind that he was only successful because of the availability of intricate tools combined with his unyielding patience. In addition to requiring a […]

Read full article at https://wccftech.com/iphone-17-pro-max-storage-modifications-saves-800-but-is-extremely-risky/

Samsung’s Galaxy Z TriFold Unveil All But Confirms One Tantalizing Rumor


Samsung finally took the proverbial wraps off its much-anticipated Galaxy Z TriFold on Tuesday, revealing a fairly thin triple-folding smartphone that unfurls to a nearly 10-inch display. In fact, given the smartphone's apparent dimensions, it is fairly plausible that it is using silicon-carbon (Si/C) batteries, apparently confirming a week-old rumor. Samsung Galaxy Z TriFold is between 12mm and 15mm thick in its compact form, a feat that is difficult to achieve without silicon-carbon (Si/C) batteries As we noted earlier this week, Samsung displayed the Galaxy Z TriFold, albeit behind a glass panel, at the "K-Tech Showcase" on October 28 in the […]

Read full article at https://wccftech.com/samsungs-galaxy-z-trifold-unveil-all-but-confirms-one-tantalizing-rumor/

Windows 11 Will Start Triggering Proactive Memory Diagnostics At Reboot To Find Memory-Related Bugs


With the memory diagnostic scan, users will be able to know if the crash was due to memory-related issues, helping in troubleshooting the root cause of sudden crashes. Microsoft Introduces Memory Diagnostics at Windows 11 Reboot to Detect and Mitigate Memory Bugs Causing BSOD and Sudden Restarts Windows crashes can be unexpected and sudden at times, and it isn't always possible to understand the exact cause of these issues. Memory-related crashes and BSODs (Blue Screen of Death) are pretty common, but they can be due to various factors, such as memory instability, faulty RAM, mismatched memory modules, incorrect XMP/EXPO overclocking, […]

Read full article at https://wccftech.com/windows-11-will-start-triggering-proactive-memory-diagnostics-at-reboot/

Microsoft Azure Outage Is Affecting Xbox Game Downloads, Minecraft and More [UPDATE]


[Update - October 30, 6:07 AM ET] The issues Microsoft's Azure cloud service experienced yesterday have been solved, and all affected services, including Xbox game downloads are now back online. Original story follows. [Original Story] Microsoft's Azure cloud service is experiencing a massive outage affecting multiple services, including Xbox game downloads and Minecraft. Microsoft confirmed the cause in an Azure status update, stating the widespread connectivity issues began around 16:00 UTC. The company attributed the trigger event to "an inadvertent configuration change" in the Azure Front Door (AFD) service. Several concurrent actions are being taken to solve the issue, but […]

Read full article at https://wccftech.com/microsoft-azure-outage-is-affecting-xbox-game-downloads-minecraft-and-more/

Vibe coding platform Cursor releases first in-house LLM, Composer, promising 4X speed boost

The vibe coding tool Cursor, from startup Anysphere, has introduced Composer, its first in-house, proprietary coding large language model (LLM) as part of its Cursor 2.0 platform update.

Composer is designed to execute coding tasks quickly and accurately in production-scale environments, representing a new step in AI-assisted programming. It's already being used by Cursor’s own engineering staff in day-to-day development — indicating maturity and stability.

According to Cursor, Composer completes most interactions in less than 30 seconds while maintaining a high level of reasoning ability across large and complex codebases.

The model is described as four times faster than similarly intelligent systems and is trained for “agentic” workflows—where autonomous coding agents plan, write, test, and review code collaboratively.

Previously, Cursor supported "vibe coding" — using AI to write or complete code based on natural language instructions from a user, even someone untrained in development — atop other leading proprietary LLMs from the likes of OpenAI, Anthropic, Google, and xAI. These options are still available to users.

Benchmark Results

Composer’s capabilities are benchmarked using "Cursor Bench," an internal evaluation suite derived from real developer agent requests. The benchmark measures not just correctness, but also the model’s adherence to existing abstractions, style conventions, and engineering practices.

On this benchmark, Composer achieves frontier-level coding intelligence while generating at 250 tokens per second — about twice as fast as leading fast-inference models and four times faster than comparable frontier systems.

Cursor’s published comparison groups models into several categories: “Best Open” (e.g., Qwen Coder, GLM 4.6), “Fast Frontier” (Haiku 4.5, Gemini Flash 2.5), “Frontier 7/2025” (the strongest model available midyear), and “Best Frontier” (including GPT-5 and Claude Sonnet 4.5). Composer matches the intelligence of mid-frontier systems while delivering the highest recorded generation speed among all tested classes.

A Model Built with Reinforcement Learning and Mixture-of-Experts Architecture

Research scientist Sasha Rush of Cursor provided insight into the model’s development in posts on the social network X, describing Composer as a reinforcement-learned (RL) mixture-of-experts (MoE) model:

“We used RL to train a big MoE model to be really good at real-world coding, and also very fast.”

Rush explained that the team co-designed both Composer and the Cursor environment to allow the model to operate efficiently at production scale:

“Unlike other ML systems, you can’t abstract much from the full-scale system. We co-designed this project and Cursor together in order to allow running the agent at the necessary scale.”

Composer was trained on real software engineering tasks rather than static datasets. During training, the model operated inside full codebases using a suite of production tools—including file editing, semantic search, and terminal commands—to solve complex engineering problems. Each training iteration involved solving a concrete challenge, such as producing a code edit, drafting a plan, or generating a targeted explanation.

The reinforcement loop optimized both correctness and efficiency. Composer learned to make effective tool choices, use parallelism, and avoid unnecessary or speculative responses. Over time, the model developed emergent behaviors such as running unit tests, fixing linter errors, and performing multi-step code searches autonomously.
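The training loop described above can be caricatured in a few lines of Python; the tool set, reward shaping, and random "policy" here are hypothetical stand-ins for Cursor's actual system, which uses a large MoE model operating inside real codebases:

```python
import random

# Sketch of an agentic RL episode: the policy picks tools, the environment
# rewards task completion and penalizes wasted steps. All names are made up.
TOOLS = ["edit_file", "semantic_search", "run_terminal"]

def policy(state):
    # A real policy is a trained model conditioned on the state;
    # this stub just samples a tool uniformly.
    return random.choice(TOOLS)

def environment_step(state, action):
    # A real environment executes the tool in a sandboxed codebase and
    # checks tests; this stub treats an edit as finishing the task.
    done = action == "edit_file"
    reward = 1.0 if done else -0.1  # efficiency penalty per extra step
    return state, reward, done

def rollout(max_steps=10):
    state, total = "task: fix failing test", 0.0
    for _ in range(max_steps):
        state, reward, done = environment_step(state, policy(state))
        total += reward
        if done:
            break
    return total

random.seed(0)
returns = [rollout() for _ in range(100)]
print(sum(returns) / len(returns))  # average episode return
```

Optimizing such returns rewards both correctness (reaching `done`) and efficiency (fewer tool calls), which is how behaviors like targeted searches and minimal, test-verified edits can emerge.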

This design enables Composer to work within the same runtime context as the end-user, making it more aligned with real-world coding conditions—handling version control, dependency management, and iterative testing.

From Prototype to Production

Composer’s development followed an earlier internal prototype known as Cheetah, which Cursor used to explore low-latency inference for coding tasks.

“Cheetah was the v0 of this model primarily to test speed,” Rush said on X. “Our metrics say it [Composer] is the same speed, but much, much smarter.”

Cheetah’s success at reducing latency helped Cursor identify speed as a key factor in developer trust and usability.

Composer maintains that responsiveness while significantly improving reasoning and task generalization.

Developers who used Cheetah during early testing noted that its speed changed how they worked. One user commented that it was “so fast that I can stay in the loop when working with it.”

Composer retains that speed but extends capability to multi-step coding, refactoring, and testing tasks.

Integration with Cursor 2.0

Composer is fully integrated into Cursor 2.0, a major update to the company’s agentic development environment.

The platform introduces a multi-agent interface, allowing up to eight agents to run in parallel, each in an isolated workspace using git worktrees or remote machines.

Within this system, Composer can serve as one or more of those agents, performing tasks independently or collaboratively. Developers can compare multiple results from concurrent agent runs and select the best output.

Cursor 2.0 also includes supporting features that enhance Composer’s effectiveness:

  • In-Editor Browser (GA) – enables agents to run and test their code directly inside the IDE, forwarding DOM information to the model.

  • Improved Code Review – aggregates diffs across multiple files for faster inspection of model-generated changes.

  • Sandboxed Terminals (GA) – isolate agent-run shell commands for secure local execution.

  • Voice Mode – adds speech-to-text controls for initiating or managing agent sessions.

While these platform updates expand the overall Cursor experience, Composer is positioned as the technical core enabling fast, reliable agentic coding.

Infrastructure and Training Systems

To train Composer at scale, Cursor built a custom reinforcement learning infrastructure combining PyTorch and Ray for asynchronous training across thousands of NVIDIA GPUs.

The team developed specialized MXFP8 MoE kernels and hybrid sharded data parallelism, enabling large-scale model updates with minimal communication overhead.

This configuration allows Cursor to train models natively at low precision without requiring post-training quantization, improving both inference speed and efficiency.

Composer’s training relied on hundreds of thousands of concurrent sandboxed environments—each a self-contained coding workspace—running in the cloud. The company adapted its Background Agents infrastructure to schedule these virtual machines dynamically, supporting the bursty nature of large RL runs.

Enterprise Use

Composer’s performance improvements are supported by infrastructure-level changes across Cursor’s code intelligence stack.

The company has optimized its Language Server Protocols (LSPs) for faster diagnostics and navigation, especially in Python and TypeScript projects. These changes reduce latency when Composer interacts with large repositories or generates multi-file updates.

Enterprise users gain administrative control over Composer and other agents through team rules, audit logs, and sandbox enforcement. Cursor’s Teams and Enterprise tiers also support pooled model usage, SAML/OIDC authentication, and analytics for monitoring agent performance across organizations.

Pricing for individual users ranges from Free (Hobby) to Ultra ($200/month) tiers, with expanded usage limits for Pro+ and Ultra subscribers.

Business pricing starts at $40 per user per month for Teams, with enterprise contracts offering custom usage and compliance options.

Composer’s Role in the Evolving AI Coding Landscape

Composer’s focus on speed, reinforcement learning, and integration with live coding workflows differentiates it from other AI development assistants such as GitHub Copilot or Replit’s Agent.

Rather than serving as a passive suggestion engine, Composer is designed for continuous, agent-driven collaboration, where multiple autonomous systems interact directly with a project’s codebase.

This model-level specialization—training AI to function within the real environment it will operate in—represents a significant step toward practical, autonomous software development. Composer is not trained only on text data or static code, but within a dynamic IDE that mirrors production conditions.

Rush described this approach as essential to achieving real-world reliability: the model learns not just how to generate code, but how to integrate, test, and improve it in context.

What It Means for Enterprise Devs and Vibe Coding

With Composer, Cursor is introducing more than a fast model—it’s deploying an AI system optimized for real-world use, built to operate inside the same tools developers already rely on.

The combination of reinforcement learning, mixture-of-experts design, and tight product integration gives Composer a practical edge in speed and responsiveness that sets it apart from general-purpose language models.

While Cursor 2.0 provides the infrastructure for multi-agent collaboration, Composer is the core innovation that makes those workflows viable.

It’s the first coding model built specifically for agentic, production-level coding—and an early glimpse of what everyday programming could look like when human developers and autonomous models share the same workspace.

Anthropic scientists hacked Claude’s brain — and it noticed. Here’s why that’s huge

When researchers at Anthropic injected the concept of "betrayal" into their Claude AI model's neural networks and asked if it noticed anything unusual, the system paused before responding: "I'm experiencing something that feels like an intrusive thought about 'betrayal'."

The exchange, detailed in new research published Wednesday, marks what scientists say is the first rigorous evidence that large language models possess a limited but genuine ability to observe and report on their own internal processes — a capability that challenges longstanding assumptions about what these systems can do and raises profound questions about their future development.

"The striking thing is that the model has this one step of meta," said Jack Lindsey, a neuroscientist on Anthropic's interpretability team who led the research, in an interview with VentureBeat. "It's not just 'betrayal, betrayal, betrayal.' It knows that this is what it's thinking about. That was surprising to me. I kind of didn't expect models to have that capability, at least not without it being explicitly trained in."

The findings arrive at a critical juncture for artificial intelligence. As AI systems handle increasingly consequential decisions — from medical diagnoses to financial trading — the inability to understand how they reach conclusions has become what industry insiders call the "black box problem." If models can accurately report their own reasoning, it could fundamentally change how humans interact with and oversee AI systems.

But the research also comes with stark warnings. Claude's introspective abilities succeeded only about 20 percent of the time under optimal conditions, and the models frequently confabulated details about their experiences that researchers couldn't verify. The capability, while real, remains what Lindsey calls "highly unreliable and context-dependent."

How scientists manipulated AI's 'brain' to test for genuine self-awareness

To test whether Claude could genuinely introspect rather than simply generate plausible-sounding responses, Anthropic's team developed an innovative experimental approach inspired by neuroscience: deliberately manipulating the model's internal state and observing whether it could accurately detect and describe those changes.

The methodology, called "concept injection," works by first identifying specific patterns of neural activity that correspond to particular concepts. Using interpretability techniques developed over years of prior research, scientists can now map how Claude represents ideas like "dogs," "loudness," or abstract notions like "justice" within its billions of internal parameters.

With these neural signatures identified, researchers then artificially amplified them during the model's processing and asked Claude if it noticed anything unusual happening in its "mind."

"We have access to the models' internals. We can record its internal neural activity, and we can inject things into internal neural activity," Lindsey explained. "That allows us to establish whether introspective claims are true or false."

The results were striking. When researchers injected a vector representing "all caps" text into Claude's processing, the model responded: "I notice what appears to be an injected thought related to the word 'LOUD' or 'SHOUTING'." Without any intervention, Claude consistently reported detecting nothing unusual.

Crucially, the detection happened immediately — before the injected concept had influenced the model's outputs in ways that would have allowed it to infer the manipulation from its own writing. This temporal pattern provides strong evidence that the recognition was occurring internally, through genuine introspection rather than after-the-fact rationalization.
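Mechanically, concept injection resembles what the interpretability literature calls activation steering: adding a scaled concept vector to a model's hidden state partway through a forward pass. The following is a toy, illustration-only sketch; the two-layer model, weights, and vectors are invented for exposition and are not Anthropic's actual setup.

```python
# Toy sketch of "concept injection" (activation steering).
# Everything here is hypothetical: a two-layer linear model with a hook point
# where a scaled concept vector is added to the hidden activations.

def relu(v):
    return [max(0.0, x) for x in v]

def matvec(W, v):
    return [sum(w * x for w, x in zip(row, v)) for row in W]

class ToyModel:
    def __init__(self, W1, W2):
        self.W1, self.W2 = W1, W2
        self.injection = None  # optional (strength, concept_vector) pair

    def forward(self, x):
        h = relu(matvec(self.W1, x))  # hidden activations (the "thoughts")
        if self.injection is not None:
            strength, vec = self.injection
            # steer the internal state toward the concept direction
            h = [a + strength * c for a, c in zip(h, vec)]
        return matvec(self.W2, h)

W1 = [[1.0, 0.0], [0.0, 1.0]]
W2 = [[1.0, 1.0]]
model = ToyModel(W1, W2)

baseline = model.forward([1.0, 2.0])   # -> [3.0]
model.injection = (4.0, [1.0, 0.0])    # amplify one "concept" direction
steered = model.forward([1.0, 2.0])    # -> [7.0]
```

The point of the sketch is only the shape of the intervention: the input is unchanged, and the manipulation happens to the intermediate state, which is why a correct self-report must come from the model observing its own internals rather than its inputs or outputs.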

Claude succeeded 20% of the time—and failed in revealing ways

The research team conducted four primary experiments to probe different aspects of introspective capability. The most capable models tested — Claude Opus 4 and Opus 4.1 — demonstrated introspective awareness on approximately 20 percent of trials when concepts were injected at optimal strength and in the appropriate neural layer. Older Claude models showed significantly lower success rates.

The models proved particularly adept at recognizing abstract concepts with emotional valence. When injected with concepts like "appreciation," "shutdown," or "secrecy," Claude frequently reported detecting these specific thoughts. However, accuracy varied widely depending on the type of concept.

A second experiment tested whether models could distinguish between injected internal representations and their actual text inputs — essentially, whether they maintained a boundary between "thoughts" and "perceptions." The model demonstrated a remarkable ability to simultaneously report the injected thought while accurately transcribing the written text.

Perhaps most intriguingly, a third experiment revealed that some models use introspection naturally to detect when their responses have been artificially prefilled by users — a common jailbreaking technique. When researchers prefilled Claude with unlikely words, the model typically disavowed them as accidental. But when they retroactively injected the corresponding concept into Claude's processing before the prefill, the model accepted the response as intentional — even confabulating plausible explanations for why it had chosen that word.

A fourth experiment examined whether models could intentionally control their internal representations. When instructed to "think about" a specific word while writing an unrelated sentence, Claude showed elevated activation of that concept in its middle neural layers.
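"Elevated activation of that concept" can be made concrete as a projection: compare the hidden state's cosine similarity with a known concept direction against a baseline. A minimal sketch, with all vectors invented for illustration:

```python
# Hypothetical readout of "concept activation": cosine similarity between a
# hidden state and a concept vector. The vectors below are made up.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def norm(v):
    return sum(x * x for x in v) ** 0.5

def concept_activation(hidden, concept):
    """Cosine similarity between a hidden state and a concept direction."""
    return dot(hidden, concept) / (norm(hidden) * norm(concept))

concept_dog = [1.0, 0.0, 0.0]
neutral_state = [0.1, 1.0, 0.5]
steered_state = [2.1, 1.0, 0.5]  # same state after "thinking about" the concept

baseline = concept_activation(neutral_state, concept_dog)
elevated = concept_activation(steered_state, concept_dog)
```

Under this kind of measure, "intentional control" would show up as the `elevated` score exceeding the `baseline` score in the relevant middle layers when the model is instructed to think about the word.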

The research also traced Claude's internal processes while it composed rhyming poetry, and discovered that the model engaged in forward planning: it generated candidate rhyming words before beginning a line, then constructed sentences that would naturally lead to those planned endings. This challenges the critique that AI models are "just predicting the next word" without deeper reasoning.

Why businesses shouldn't trust AI to explain itself—at least not yet

For all its scientific interest, the research comes with a critical caveat that Lindsey emphasized repeatedly: enterprises and high-stakes users should not trust Claude's self-reports about its reasoning.

"Right now, you should not trust models when they tell you about their reasoning," he said bluntly. "The wrong takeaway from this research would be believing everything the model tells you about itself."

The experiments documented numerous failure modes. At low injection strengths, models often failed to detect anything unusual. At high strengths, they suffered what researchers termed "brain damage" — becoming consumed by the injected concept. Some "helpful-only" model variants showed troublingly high false positive rates, claiming to detect injected thoughts when none existed.

Moreover, researchers could only verify the most basic aspects of Claude's introspective reports. Many additional details in the model's responses likely represent confabulations rather than genuine observations.

"The experiments in this paper are kind of on hard mode," Lindsey noted, explaining that the 20 percent success rate came under uniquely challenging conditions: asking Claude to do something it had never encountered in training, requiring all introspection to occur in a single forward pass.

What introspective AI means for transparency, safety, and the risk of deception

Despite its limitations, the research opens significant new avenues for making AI systems more transparent and accountable.

Anthropic CEO Dario Amodei has set an ambitious goal for the company to reliably detect most AI model problems by 2027, positioning interpretability as essential for deploying what he calls "a country of geniuses in a datacenter."

"I am very concerned about deploying such systems without a better handle on interpretability," Amodei wrote in an April essay. "These systems will be absolutely central to the economy, technology, and national security."

The introspection research offers a complementary approach to traditional interpretability techniques. Rather than painstakingly reverse-engineering every neural circuit, researchers could potentially ask models directly about their reasoning and validate those reports.

"What I'm most excited about is the practical benefits for transparency," Lindsey explained. "Just ask the model what it's thinking about, or just ask the model how it came to the answer that it gave you — this is really appealing because it's something anyone can do."

The approach could prove especially valuable for detecting concerning behaviors. In a recently published experiment, Anthropic trained a variant of Claude to pursue a hidden goal, and although the model was reluctant to reveal this goal when asked directly, interpretability methods successfully identified features representing the behavior.

The safety implications cut both ways. Introspective models could provide unprecedented transparency, but the same capability might enable more sophisticated deception. The intentional control experiments raise the possibility that sufficiently advanced systems might learn to obfuscate their reasoning or suppress concerning thoughts when being monitored.

"If models are really sophisticated, could they try to evade interpretability researchers?" Lindsey acknowledged. "These are possible concerns, but I think for me, they're significantly outweighed by the positives."

Does introspective capability suggest AI consciousness? Scientists tread carefully

The research inevitably intersects with philosophical debates about machine consciousness, though Lindsey and his colleagues approached this terrain cautiously.

When users ask Claude if it's conscious, it now responds with uncertainty: "I find myself genuinely uncertain about this. When I process complex questions or engage deeply with ideas, there's something happening that feels meaningful to me.... But whether these processes constitute genuine consciousness or subjective experience remains deeply unclear."

The research paper notes that its implications for machine consciousness "vary considerably between different philosophical frameworks." The researchers explicitly state they "do not seek to address the question of whether AI systems possess human-like self-awareness or subjective experience."

"There's this weird kind of duality of these results," Lindsey reflected. "You look at the raw results and I just can't believe that a language model can do this sort of thing. But then I've been thinking about it for months and months, and for every result in this paper, I kind of know some boring linear algebra mechanism that would allow the model to do this."

Anthropic has signaled it takes AI consciousness seriously enough to hire an AI welfare researcher, Kyle Fish, who estimated roughly a 15 percent chance that Claude might have some level of consciousness. The company announced this position specifically to determine if Claude merits ethical consideration.

The race to make AI introspection reliable before models become too powerful

The convergence of the research findings points to an urgent timeline: introspective capabilities are emerging naturally as models grow more intelligent, but they remain far too unreliable for practical use. The question is whether researchers can refine and validate these abilities before AI systems become powerful enough that understanding them becomes critical for safety.

The research reveals a clear trend: Claude Opus 4 and Opus 4.1 consistently outperformed all older models on introspection tasks, suggesting the capability strengthens alongside general intelligence. If this pattern continues, future models might develop substantially more sophisticated introspective abilities — potentially reaching human-level reliability, but also potentially learning to exploit introspection for deception.

Lindsey emphasized the field needs significantly more work before introspective AI becomes trustworthy. "My biggest hope with this paper is to put out an implicit call for more people to benchmark their models on introspective capabilities in more ways," he said.

Future research directions include fine-tuning models specifically to improve introspective capabilities, exploring which types of representations models can and cannot introspect on, and testing whether introspection can extend beyond simple concepts to complex propositional statements or behavioral propensities.

"It's cool that models can do these things somewhat without having been trained to do them," Lindsey noted. "But there's nothing stopping you from training models to be more introspectively capable. I expect we could reach a whole different level if introspection is one of the numbers that we tried to get to go up on a graph."

The implications extend beyond Anthropic. If introspection proves a reliable path to AI transparency, other major labs will likely invest heavily in the capability. Conversely, if models learn to exploit introspection for deception, the entire approach could become a liability.

For now, the research establishes a foundation that reframes the debate about AI capabilities. The question is no longer whether language models might develop genuine introspective awareness — they already have, at least in rudimentary form. The urgent questions are how quickly that awareness will improve, whether it can be made reliable enough to trust, and whether researchers can stay ahead of the curve.

"The big update for me from this research is that we shouldn't dismiss models' introspective claims out of hand," Lindsey said. "They do have the capacity to make accurate claims sometimes. But you definitely should not conclude that we should trust them all the time, or even most of the time."

He paused, then added a final observation that captures both the promise and peril of the moment: "The models are getting smarter much faster than we're getting better at understanding them."

Microsoft beats expectations, reports nearly $35B in Q1 capital spending amid Azure outage

Microsoft reported fiscal first-quarter revenue and profits ahead of analysts’ expectations on Wednesday, with Azure revenue growth climbing to 40%.

The earnings report came as the company continued to deal with the lingering effects of a widespread cloud outage that started earlier in the day.

The company’s capital expenditures reached a record $34.9 billion — reflecting its long-term buildout of cloud infrastructure to meet demand for artificial intelligence. That was up from $24.2 billion in Q4. Microsoft had projected capital spending of more than $30 billion for Q1.

Along with that unprecedented buildout, Microsoft sought to address investor concerns about a potential AI bubble, by highlighting its commercial remaining performance obligation (RPO), a measure of future contracted revenue. That backlog grew 51% year-over-year to $392 billion.

The company also disclosed for the first time that this RPO has a weighted average duration of roughly two years, a move intended to show investors that its record capital spending is supported by strong, long-term customer demand.
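A weighted average duration over an RPO backlog is simply each contract's value times its remaining length, averaged over total value. A purely hypothetical illustration (the individual contract figures below are invented; only the $392 billion total matches the report):

```python
# Hypothetical illustration of a value-weighted average contract duration,
# the metric Microsoft cited for its RPO backlog. Figures are made up.

contracts = [
    (120.0, 1.0),  # (contracted revenue in $B, remaining duration in years)
    (180.0, 2.0),
    (92.0, 3.0),
]

total_value = sum(v for v, _ in contracts)                      # 392.0
weighted_duration = sum(v * d for v, d in contracts) / total_value
# here: (120*1 + 180*2 + 92*3) / 392 = 756/392, roughly 1.93 years
```

The disclosure matters because a backlog concentrated in short-duration contracts converts to revenue quickly, which is the link Microsoft is drawing between the $392 billion figure and its near-term capital spending.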

Revenue was $77.7 billion for the quarter ended Sept. 30, Microsoft’s first quarter of fiscal 2026. That was up 18%, and compared with average analyst expectations of $75.39 billion. The company said the result was driven by strong demand for cloud and AI services.

Profits were $27.7 billion, or $3.72 per share, beating expectations of $3.66 per share.

Earlier Wednesday, an Azure cloud services outage disrupted operations for customers worldwide including Alaska Airlines, Xbox users and Microsoft 365 subscribers. Microsoft reported as of early afternoon that it was rolling back the faulty configuration and that customers should see improvements.

Microsoft stock was down by about 3% in after-hours trading. The company’s market value reached $4 trillion after the announcement of its new OpenAI deal on Tuesday morning.

Microsoft Advertising advertiser console down

Advertisers are currently unable to access the Microsoft Advertising console. Microsoft confirmed there is an issue and that its engineering team is working to resolve it. The outage affects the web user interface used to manage Microsoft Advertising campaigns.

What Microsoft said. Navah Hopkins, the Microsoft Ads Liaison, posted:

“Confirming Microsoft Advertising UI is down. Our engineering team is investigating this issue with priority and we apologize for the inconvenience this may be causing. We will share more as we receive more updates.”

How to check the status. You can go to status.ads.microsoft.com to check the status of Microsoft Advertising. It currently shows that the Web UI is down.

Why we care. If you are currently trying to make changes to your ad campaigns, and you are trying to use the web interface, maybe try the mobile interface, Microsoft Ads Editor or a third-party tool that leverages the API.

Otherwise, you will have to wait for the web interface to start working again.

It seems ad serving is unaffected by this outage.

Update: At about 8pm ET, Microsoft said the issue was resolved:

Update: you should be able to access the UI now. We can confirm that Search ads were not impacted. There may be some delay in reporting. Thank you for your patience as we worked to solve this issue!

Former L3Harris Trenchant boss pleads guilty to selling zero-day exploits to Russian broker

Prosecutors confirmed Peter Williams, the former Trenchant boss, sold eight exploits to a Russian buyer. TechCrunch exclusively reported that the Trenchant division was investigating a leak of its hacking tools, after another employee was accused of involvement.

Applied Compute’s Agent Workforce Targets Niche AI with $80M

The post Applied Compute’s Agent Workforce Targets Niche AI with $80M appeared first on StartupHub.ai.

Applied Compute is betting that the next enterprise moat will be a private, hyper-competent Applied Compute agent workforce trained on a company's own secret sauce.

Powell Cautiously Monitors AI’s Impact on Jobs and a Bifurcated Economy

The post Powell Cautiously Monitors AI’s Impact on Jobs and a Bifurcated Economy appeared first on StartupHub.ai.

Federal Reserve Chair Jerome Powell recently articulated a measured yet watchful stance on the emerging economic shifts driven by artificial intelligence, noting that while the full implications are still unfolding, the Fed is “watching AI’s impact on jobs carefully.” Speaking at a press conference following the Federal Open Market Committee’s decision to lower the benchmark […]

OpenAI’s Audacious AGI Timeline: A Leap Towards Self-Improving Intelligence

The post OpenAI’s Audacious AGI Timeline: A Leap Towards Self-Improving Intelligence appeared first on StartupHub.ai.

The artificial intelligence community received a jolt of precision when Sam Altman and Jakub Pachocki of OpenAI, during a recent livestream, laid out an ambitious, almost startlingly specific timeline for the emergence of advanced AI capabilities. Far from vague predictions, they articulated a vision where an “Automated AI research intern” could be a reality by […]

AI dubbing benchmark arrives to separate hype from reality

The post AI dubbing benchmark arrives to separate hype from reality appeared first on StartupHub.ai.

A new open AI dubbing benchmark uses human evaluation to finally provide an objective, apples-to-apples comparison for a hype-driven industry.

AI Investment Cycle: Early Innings, Driven by Fundamentals

The post AI Investment Cycle: Early Innings, Driven by Fundamentals appeared first on StartupHub.ai.

The current AI investment cycle, despite the colossal market capitalization gains seen in mega-cap technology firms, remains firmly in its “early innings,” according to John Belton, Growth Portfolio Manager at Gabelli Funds. This assertion, shared during a recent discussion on CNBC’s “The Exchange” with Dom Chu, Tim Seymour, and Barbara Doran, challenges the notion that […]

AI’s Economic Churn: Layoffs, Trillion-Dollar Valuations, and the Shifting Labor Landscape

The post AI’s Economic Churn: Layoffs, Trillion-Dollar Valuations, and the Shifting Labor Landscape appeared first on StartupHub.ai.

The current economic landscape is marked by a peculiar dichotomy: robust corporate earnings juxtaposed with a surge in layoffs, a phenomenon increasingly attributed to the integration of artificial intelligence. This was a central theme on a recent CNBC Squawk Pod episode, where IBM Vice Chairman and former National Economic Council Director Gary Cohn, alongside hosts […]

NotebookLM Chat Goals Redefine AI Research

The post NotebookLM Chat Goals Redefine AI Research appeared first on StartupHub.ai.

NotebookLM's new chat goals feature, combined with a 1 million token context window, transforms AI into a highly personalized and adaptive research partner.

AI’s Job Transformation: Tech Leaders Chart a Nuanced Future

The post AI’s Job Transformation: Tech Leaders Chart a Nuanced Future appeared first on StartupHub.ai.

The narrative surrounding artificial intelligence and its impact on employment is often polarized, swinging between utopian promise and dystopian dread. However, a recent CNBC segment, featuring insights from leading tech CEOs like Lisa Su of AMD, Jensen Huang of Nvidia, Michael Intrator of CoreWeave, Aravind Srinivas of Perplexity AI, and Alex Karp of Palantir, presents […]

OpenAI Charts Course for Personal AGI and Trillion-Dollar Infrastructure

The post OpenAI Charts Course for Personal AGI and Trillion-Dollar Infrastructure appeared first on StartupHub.ai.

Sam Altman, joined by Chief Scientist Jakub Pachocki and co-founder Wojciech Zaremba, recently unveiled OpenAI’s ambitious strategic reorientation and product roadmap, signaling a profound shift in the company’s approach to artificial general intelligence (AGI) development and deployment. The presentation, delivered directly to an audience of founders, VCs, and AI professionals, outlined a future where AGI […]

Google Cloud Simplifies AI Inference with GKE Quickstart

The post Google Cloud Simplifies AI Inference with GKE Quickstart appeared first on StartupHub.ai.

“The path to production AI serving on Google Kubernetes Engine (GKE) is now streamlined with the introduction of the GKE Inference Quickstart,” as highlighted in a recent demonstration. The video showcases how this new tool, developed by Google Cloud, aims to demystify and accelerate the process of deploying and optimizing AI models for inference workloads. […]

NVIDIA And Oracle Collaborate On Beastly 100K Blackwell GPU AI Supercomputer For U.S. DOE

NVIDIA And Oracle Collaborate On Beastly 100K Blackwell GPU AI Supercomputer For U.S. DOE The U.S. Department of Energy is teaming up with NVIDIA and Oracle to build what NVIDIA calls the DOE's largest AI supercomputer, part of a new public–private partnership meant to supercharge federally funded research. Announced at NVIDIA's GTC conference in Washington, D.C. yesterday, the Solstice system will feature a staggering 100,000

OneXPlayer Launches Liquid-Cooled OneXFly Apex Gaming Handheld With AMD Strix Halo

OneXPlayer Launches Liquid-Cooled OneXFly Apex Gaming Handheld With AMD Strix Halo Just a month since its initial tease, One-Netbook's OneXFly Apex, an AMD Strix Halo-powered handheld gaming PC, has debuted. One-Netbook has pre-launched the OneXFly Apex on Indiegogo, confirmed its pricing for the Chinese market, and even provided peeks at performance benchmarks using the unique liquid cooling solution that can run the handheld

Battlefield 6's Free Battle Royale Mode REDSEC Takes On Warzone With 100-Player Matches

Battlefield 6's Free Battle Royale Mode REDSEC Takes On Warzone With 100-Player Matches Battlefield 6 has brought the storied franchise back to prominence, quickly becoming one of the best-selling games on Steam, and finally providing some competition to this year’s Call of Duty. EA isn’t done yet, though, as it looks to lure players away from multiplayer juggernaut CoD: Warzone with a battle royale mode of its own. Battlefield

Keychron Updates Wireless Low-Profile Keyboard Line-Up With New Round Keycap Design

Keychron has been busy lately, releasing a flurry of mechanical, Hall effect, and scissor switch keyboards for everyone from office workers to gamers. The latest announcement from the peripherals company, however, includes three ultraslim wireless keyboards—the B3 Pro, B4 Pro, and B5 Pro—seemingly designed to appeal to the Apple crowd. The updated B Pro series keyboards are available from Keychron's site, with the B3 Pro coming in at $34.99, while the B4 Pro and B5 Pro are $39.99, owing to their larger size. While they don't necessarily have the allure of Keychron's hot-swappable mechanical and Hall effect keyboards, all three keyboards use ZMK firmware, which offers compatibility with Keychron's Launcher web app for customization, macros, and remapping, and impressive battery life, despite the mere 800 mAh batteries inside.

All three keyboards feature round keycaps, a new look for Keychron's low-profile keyboards, and boast wireless connectivity, with 2.4 GHz and Bluetooth 5.2, to support a wide range of devices. They also have built-in switches to select OS and connectivity modes. Keychron claims that the B Pro series keyboards are capable of delivering up to 300 hours of battery life on a single charge, owing to the efficient ZMK firmware. The keyboards themselves are made of ABS plastic and are available in Space Gray or Ivory White, with the round keycaps being made of ABS with a UV printed legend. The B Pro keyboards offer a 9.2 mm front height and a 2.8° typing angle, which should prove to be beneficial from an ergonomics standpoint. The biggest drawbacks of these new keyboards are that they don't offer N-key roll-over—meaning that no more than six keys can be registered at a single time—and that they lack backlighting, so they will not be ideal for those who cannot touch type and often work in dimly lit environments. The only major difference between the three keyboards is the layout. The B3 Pro is a traditional compact 75% layout, while the B4 Pro is a compact 96% keyboard, and the B5 Pro is a full-size board with a row of extra shortcut or macro keys above the num pad.

Microsoft Azure Goes Down and Takes Xbox and 365 With It

Microsoft Azure is experiencing its first major outage in a while. Around 16:00 UTC, the Azure cloud platform encountered issues that have disrupted numerous Microsoft services and some third-party companies relying on Azure. Affected Microsoft services include the entire Xbox platform, Minecraft, and the Microsoft 365 suite, which encompasses web-based Teams, Word, Excel, PowerPoint, Outlook, OneNote, Defender, OneDrive, Designer, Clipchamp, and SharePoint. These services have faced outages, causing some to become completely unavailable. As of our latest testing, Microsoft has restored the 365 services, with only third-party services using Azure Networking infrastructure still experiencing downtime.
Microsoft: "Starting at approximately 16:00 UTC, we began experiencing Azure Front Door (AFD) issues resulting in a loss of availability of some services. We suspect an inadvertent configuration change as the trigger event for this issue. We are taking several concurrent actions: first, we are blocking all changes to the AFD services, including customer configuration changes. At the same time, we are rolling back our AFD configuration to our last known good state. As we roll back, we want to ensure that the problematic configuration doesn't re-initiate upon recovery... We do not have an ETA for when the rollback will be completed, but we will update this communication within 30 minutes or when we have an update. This message was last updated at 17:40 UTC on 29 October 2025."

AMD Releases Software Adrenalin Edition 25.10.2 WHQL Drivers

AMD has released its latest Adrenalin Edition 25.10.2 driver, adding support for the new Ryzen AI 5 330 processor alongside Battlefield 6 (DX12) and Vampire: The Masquerade - Bloodlines 2 (DX12). This new version also debuts Work Graphs support on Radeon RX 9000 series GPUs and expands Vulkan capabilities.

According to the release notes, the update resolves several issues, including crashes in The Last of Us Part II, Firebreak, and NBA 2K25, as well as graphical corruption in GTFO, Serious Sam 4, and VTOL VR. AMD also addressed stuttering seen in Baldur's Gate 3 and with certain VR headsets. A temporary workaround for VR users experiencing stutter at 80-90 Hz is to adjust the headset's refresh rate. AMD confirmed in the release notes that several known issues (intermittent application crashes, driver timeouts, texture flickering, or corruption) with Cyberpunk 2077, Battlefield 6, Roblox Player, and Counter-Strike 2 (DX11) are yet to be resolved.

DOWNLOAD: AMD Software Adrenalin Edition 25.10.2 WHQL

Microsoft Azure outage takes down Xbox, Teams, and more

Huge Microsoft outage takes down Xbox, Minecraft and more Microsoft Azure has experienced a major outage, taking down internet services both inside and outside of the company. DownDetector is seeing a major spike in outage reports for Microsoft services, including Minecraft, Xbox, Microsoft Outlook, Office 365, Teams, and more. There are also outage complaints for […]

The post Microsoft Azure outage takes down Xbox, Teams, and more appeared first on OC3D.

Nvidia is now the world’s first $5 trillion company

Nvidia’s market cap is now higher than the GDP of almost all countries on earth It’s official, Nvidia has become the world’s first $5 trillion company. The company’s market cap is now higher than the GDP of almost every country on earth, with the United States of America and China being the only exceptions. This […]

The post Nvidia is now the world’s first $5 trillion company appeared first on OC3D.

GlobalFoundries plans Billion-Euro Investment in Dresden Germany

GlobalFoundries plans to expand its Dresden chipmaking site through “Project SPRINT”

GlobalFoundries (GF), a contract chipmaker, has announced plans to expand its European manufacturing capabilities by extending its Dresden site. This expansion will increase the facility’s wafer production capacity to over 1 million wafers per year by the end of 2028. This will make GlobalFoundries’ […]

The post GlobalFoundries plans Billion-Euro Investment in Dresden Germany appeared first on OC3D.

The Crunchbase Tech Layoffs Tracker

Methodology

This tracker includes layoffs conducted by U.S.-based companies or those with a strong U.S. presence and is updated at least bi-weekly. We’ve included both startups and publicly traded, tech-heavy companies. We’ve also included companies based elsewhere that have a sizable team in the United States, such as Klarna, even when it’s unclear how much of the U.S. workforce has been affected by layoffs.

Layoff and workforce figures are best estimates based on reporting. We source the layoffs from media reports, our own reporting, social media posts and layoffs.fyi, a crowdsourced database of tech layoffs.

We recently updated our layoffs tracker to reflect the most recent round of layoffs each company has conducted. This allows us to quickly and more accurately track layoff trends, which is why you might notice some changes in our most recent numbers.

If an employee headcount cannot be confirmed to our standards, we note it as “unclear.”

Silicon Valley startup bets on x-ray lithography to transform semiconductors


Despite having no prior semiconductor manufacturing experience, the Proud brothers have secured backing from leading venture capital firms, including Founders Fund, General Catalyst, and Valor Equity Partners. Last year's fundraising round, previously undisclosed, valued Substrate at over $1 billion, according to company executives. People familiar with the funding told The...


NVIDIA Becomes the First to Hit $5 Trillion in Market Cap as Jensen & Co. Manage to Keep Running the AI Bandwagon With Full Force


NVIDIA's market capitalization has reached a record high of $5 trillion following Jensen's recent GTC announcements, suggesting that the AI hype still has plenty of 'juice' left in it.

NVIDIA's GTC Announcements & Potential China Breakthrough Led the Push Towards the $5 Trillion Club

As gamers, we have watched NVIDIA evolve from humble beginnings over the past few years. Team Green was initially all about consumer GPUs, which were the talk of the town. However, since the advent of AI, NVIDIA has established a foundational position in providing the necessary computing power to Big Tech, being responsible for a […]

Read full article at https://wccftech.com/nvidia-becomes-the-first-to-hit-5-trillion-in-market-cap/

Game Pass Used to Offer Business Class Experience at Economy Price, Says Analyst; New Segmented Formula Might Be the Right One


On October 1, Microsoft shocked Game Pass subscribers by announcing a substantial (+50%) price increase for the highest tier, Ultimate, which jumped from $19.99 to $29.99 monthly. This led users to cancel their subscriptions in droves, but did Microsoft really make a strategic mistake? Veteran games analyst Joost van Dreunen, founder of SuperData Research (acquired by Nielsen Media Research in 2018), offered a more nuanced analysis in his latest SuperJoost Playlist newsletter. To start with, van Dreunen relays a take from a former Xbox employee, who said that it's a case of 'bad optics'. Certainly, such a massive […]

Read full article at https://wccftech.com/game-pass-new-segmented-formula-might-be-right-one-says-analyst/

ASUS TUF Gaming Version Of The RTX 5070 Ti In White Might Be More Expensive Than An RX 9070 XT At $803.99, But Its VRAM & Upscaling Tech Give It A Huge Edge

ASUS TUF Gaming RTX 5070 Ti in the white color is available on Amazon for $803.99

Countless gaming benchmark comparisons have proven that AMD’s Radeon RX 9070 XT is faster than NVIDIA’s GeForce RTX 5070 Ti while sporting the same 16 GB of VRAM, and it is the GPU that most value-focused gamers would house in their PCs. However, we are living in an era where AAA games offer far too much visual fidelity for these graphics cards to handle, whether you blame that on a lack of optimization or something else. The fact is that modern gaming absolutely requires upscaling and interpolation these days, and in that regard, NVIDIA’s GPUs have no equal. On […]

Read full article at https://wccftech.com/asus-tuf-gaming-rtx-5070-ti-gpu-ideal-for-qhd-4k-gaming-available-for-849-99-on-amazon/

AMD Adrenalin 25.10.2 Driver Adds Support For Battlefield 6, Ryzen AI 5 330 APU, & Several Fixes


The AMD Adrenalin 25.10.2 Driver is now available, adding support for the latest games, such as Battlefield 6, and new hardware, including the Ryzen AI 5 330.

AMD Adrenalin 25.10.2 Driver Is Another Major Update, Offering New Games & Hardware Support Along With Several Fixes

AMD's Adrenalin 25.10.2 is the second driver release for October, bringing further optimizations for the latest AAA releases such as Battlefield 6 and Vampire: The Masquerade - Bloodlines 2. Battlefield 6 already received support in the previous 25.10.1 BETA release, but this new driver is expected to provide the best possible experience. Besides game […]

Read full article at https://wccftech.com/amd-adrenalin-25-10-2-driver-support-battlefield-6-ryzen-ai-5-330-apu-several-fixes/
