Reading view

Microsoft Azure outage takes down Xbox, Teams, and more

Huge Microsoft outage takes down Xbox, Minecraft and more Microsoft Azure has experienced a major outage, taking down internet services both inside and outside of the company. DownDetector is seeing a major spike in outage reports for Microsoft services, including Minecraft, Xbox, Microsoft Outlook, Office 365, Teams, and more. There are also outage complaints for […]

The post Microsoft Azure outage takes down Xbox, Teams, and more appeared first on OC3D.

Nvidia is now the world’s first $5 trillion company

Nvidia’s market cap is now higher than the GDP of almost all countries on earth It’s official, Nvidia has become with world’s first $5 trillion company. The company’s market cap is now higher than the GDP of almost every country on earth, with the United States of America and China being the only exceptions. This […]

The post Nvidia is now the world’s first $5 trillion company appeared first on OC3D.

GlobalFoundries plans Billion-Euro Investment in Dresden Germany

GlobalFoundries plans to expand its Dresden chipmaking site through “Project SPRINT” GlobalFoundries (GF), a contract chipmaker, has announced plans to expand its European manufacturing capabilities by extending its Dresden site. This expansion will increase the facility’s wafer production capacity to over 1 million wafers per year by the end of 2028. This will make GlobalFoundries’ […]

The post GlobalFoundries plans Billion-Euro Investment in Dresden Germany appeared first on OC3D.

The Crunchbase Tech Layoffs Tracker

Methodology

This tracker includes layoffs conducted by U.S.-based companies or those with a strong U.S. presence and is updated at least bi-weekly. We’ve included both startups and publicly traded, tech-heavy companies. We’ve also included companies based elsewhere that have a sizable team in the United States, such as Klarna, even when it’s unclear how much of the U.S. workforce has been affected by layoffs.

Layoff and workforce figures are best estimates based on reporting. We source the layoffs from media reports, our own reporting, social media posts and layoffs.fyi, a crowdsourced database of tech layoffs.

We recently updated our layoffs tracker to reflect the most recent round of layoffs each company has conducted. This allows us to quickly and more accurately track layoff trends, which is why you might notice some changes in our most recent numbers.

If an employee headcount cannot be confirmed to our standards, we note it as “unclear.”

Silicon Valley startup bets on x-ray lithography to transform semiconductors


Despite having no prior semiconductor manufacturing experience, the Proud brothers have secured backing from leading venture capital firms, including Founders Fund, General Catalyst, and Valor Equity Partners. Last year's fundraising round, previously undisclosed, valued Substrate at over $1 billion, according to company executives. People familiar with the funding told The...

Read Entire Article

NVIDIA Becomes the First to Hit $5 Trillion in Market Cap as Jensen & Co. Manage to Keep Running the AI Bandwagon With Full Force

A person in a shiny jacket gestures with a pen against a backdrop of Earth viewed from space, connected by glowing lines.

NVIDIA's market capitalization has reached a record high of $5 trillion after Jensen's recent GTC announcements, suggesting that the AI hype still has a lot of 'juice' in it. NVIDIA's GTC Announcements & Potential China Breakthrough Led the Push Towards the $5 Trillion Club We have watched NVIDIA evolve from humble beginnings, especially as gamers, over the past few years. Team Green was initially all about consumer GPUs, which were the talk of the town. However, since the advent of AI, NVIDIA has established a foundational position in providing the necessary computing power to Big Tech, being responsible for a […]

Read full article at https://wccftech.com/nvidia-becomes-the-first-to-hit-5-trillion-in-market-cap/

Game Pass Used to Offer Business Class Experience at Economy Price, Says Analyst; New Segmented Formula Might Be the Right One

Xbox Game Pass promotional image with various game characters and the text XBOX GAME PASS in the center.

On October 1, Microsoft shocked Game Pass subscribers by announcing a substantial (+50%) price increase for the highest tier, Ultimate, which jumped from $19.99 to $29.99 monthly. This led some users to cancel their subscriptions in droves, but did Microsoft really make a strategic mistake? Veteran games analyst Joost van Dreunen, formerly founder of SuperData Research (acquired by Nielsen Media Research in 2018), offered a more nuanced analysis in his latest SuperJoost Playlist newsletter. To start with, van Dreunen relays a take from a former Xbox employee, who said that it's a case of 'bad optics'. Certainly, such a massive […]

Read full article at https://wccftech.com/game-pass-new-segmented-formula-might-be-right-one-says-analyst/

ASUS TUF Gaming Version Of The RTX 5070 Ti In White Might Be More Expensive Than An RX 9070 XT At $803.99, But Its VRAM & Upscaling Tech Give It A Huge Edge

ASUS TUF Gaming RTX 5070 Ti in the white color is available on Amazon for $803.99

Countless gaming benchmark comparisons have proven that AMD’s Radeon RX 9070 XT is faster than NVIDIA’s GeForce RTX 5070 Ti while sporting the same 16GB VRAM count, and it is the GPU that most value-focused gamers would house in their PCs. However, we are living in an era where AAA games offer way too much visual fidelity for these graphics cards to handle, and you can blame that on the lack of optimization or any other reason. The fact is that these days, modern gaming absolutely requires upscaling and interpolation, and in that regard, NVIDIA’s GPUs have no equal. On […]

Read full article at https://wccftech.com/asus-tuf-gaming-rtx-5070-ti-gpu-ideal-for-qhd-4k-gaming-available-for-849-99-on-amazon/

AMD Adrenalin 25.10.2 Driver Adds Support For Battlefield 6, Ryzen AI 5 330 APU, & Several Fixes

AMD Adrenalin 25.10.2 Driver Adds Support For Battlefield 6, Ryzen AI 5 330 APU, & Several Fixes 1

The AMD Adrenalin 25.10.2 Driver is now available, adding support for the latest games, such as Battlefield 6, and new hardware, including the Ryzen AI 5 330. AMD Adrenalin 25.10.2 Driver Is Another Major Update, Offering New Games & Hardware Support Along With Several Fixes AMD's Adrenalin 25.10.2 is the second driver release for October, bringing in further optimizations for the latest AAA releases such as Battlefield 6 and Vampire: The Masquerade - Bloodlines 2. Battlefield 6 already received support in the previous 25.10.1 BETA release, but this new driver is expected to provide the best possible experience. Besides game […]

Read full article at https://wccftech.com/amd-adrenalin-25-10-2-driver-support-battlefield-6-ryzen-ai-5-330-apu-several-fixes/

Steam Deck Chizha Mount Ling Dock Review – The ‘Must-Have’ Accessory That Makes Valve’s Handheld Even More Powerful

Steam Deck displaying “Control Ultimate Edition” on a charging dock with “HDMI” and “SSD” on-screen text.

Although Valve's Steam Deck is relatively old compared to mainstream handhelds, it remains a capable device that supports several AAA titles. However, to 'supercharge' your experience, Dockcase has introduced their newest Chizha Mount Ling Dock for Steam Deck, and they were kind enough to send us an 'exclusive' review sample. After testing the dock for several days, I must say, it is indeed an accessory that every Valve fan out there should own, since it really brings in 'valuable' benefits and add-ons. Steam Deck Chizha Mount Ling Dock - LAN Support, Extra M.2 SSD Slot & Futuristic Design Since the […]

Read full article at https://wccftech.com/review/steam-deck-chizha-mount-ling-dock-review/

NVIDIA’s CEO “Misspoke” About $500 Billion Revenue From Blackwell + Rubin In Next Five Quarters; Actual Figure Turns Out to Be Lower

Grace Blackwell NVL72 demand presentation with graph comparing 2023-25 Hopper Lifetime to Blackwell and Rubin revenue forecasts.

NVIDIA's Jensen Huang initially gave an optimistic statement about the revenue from the Blackwell and Rubin AI lineups; however, the firm's finance team has since clarified it. NVIDIA's Jensen Huang Got a "Little Too Excited" While Quoting the Optimism Around Blackwell + Rubin AI GPUs NVIDIA's GTC 2025 conference was full of exciting announcements by Jensen & Co., but more importantly, the firm provided an outlook on how its recent Blackwell lineup and the upcoming Rubin series are expected to perform on the revenue front. At the GTC keynote, NVIDIA's CEO revealed that Blackwell and Rubin alone are anticipated to […]

Read full article at https://wccftech.com/nvidia-ceo-jensen-huang-misspoke-about-500-billion-revenue-from-blackwell-rubin/

You Can Blame LPDDR5X For Smartphone Price Increases In 2026

Samsung LPDDR5X chip hovering over a digital data flow background.

As AI-driven demand for high bandwidth memory (HBM) shows no sign of abating, the market forces are now exercising their influence to correct this imbalance, and smartphone-critical LPDDR5x prices are set to explode as a result. TrendForce has raised its general DRAM price forecast for Q4, which affects smartphone-critical LPDDR5x as well We noted recently that the soaring demand for HBM within datacenter servers was tightening available wafer capacity, especially as the die size for HBM is between 35 percent and 45 percent larger than that for a comparable DRAM. Well, as per a new TrendForce report, this paradigm is now […]

Read full article at https://wccftech.com/you-can-blame-lpddr5x-for-smartphone-price-increases-in-2026/

AI Wouldn’t Be Able To Create A Very Good Grand Theft Auto Game Due To Its Lack of Creativity, Take-Two CEO Says

Grand Theft Auto VI cover art with characters holding guns on a dock, city skyline, and police boat in the background.

While AI can streamline the video game development process, it would never be able to create a very good Grand Theft Auto game, according to Take-Two's CEO Strauss Zelnick, as current models lack something very important. Speaking about AI usage in video game development at CNBC's Technology Executive Council Summit in New York on October 28, the CEO of Rockstar Games' parent company made it clear that the current models would be unable to create a very good Grand Theft Auto game. For starters, using AI to create intellectual property creates issues not only with protecting one's own but also […]

Read full article at https://wccftech.com/ai-good-grand-theft-auto/

Black Friday GPU Deals 2025: Get The Latest AMD, NVIDIA, And Intel GPUs At Best Prices

Zotac Gaming, Gigabyte, and Radeon graphics cards with Black Friday text.

Black Friday deals may be a few weeks away from now, but the discount spree has already started. If you were already planning to buy a GPU for your gaming PC, it's one of the best times to get one. From AMD RDNA 4 to NVIDIA's Blackwell cards, we have listed the best RX 9000 and NVIDIA RTX 50 series GPU deals in this post. We will keep updating it from time to time so that you never miss a GPU deal this season. NVIDIA GeForce RTX 50 Series Deals NVIDIA's latest Blackwell-based GeForce RTX 50 cards are excellent if […]

Read full article at https://wccftech.com/black-friday-gpu-deals-2025/

Dragon Quest I & II HD-2D Remake Review – Classic RPGs At Their Best

Dragon Quest I & II HD-2D Remake title screen with fantasy characters on a scenic backdrop.

Despite being the series that defined Japanese role-playing games, Dragon Quest took some time to get the widespread recognition it deserved in North America and Europe. Nowadays, the franchise created by Yuji Horii is a household name as much as Final Fantasy is, and the popularity of the franchise led to the successful release of the Dragon Quest III HD-2D Remake, a very solid remake that was met with a warm reception from fans, thanks to its great visuals and how small choice tweaks made the classic gameplay more compelling. Remaking the remainder of the Erdrick trilogy, however, needed something […]

Read full article at https://wccftech.com/review/dragon-quest-i-ii-hd-2d-remake-classic-rpgs-at-their-best/

Pickvocab – Stop guessing meanings - understand words in their actual context


Ever looked up a word and had to guess which definition the author meant? Traditional dictionaries leave you guessing which definition the author actually meant. Pickvocab's AI analyzes the surrounding text to deliver the precise meaning for that specific context - whether you're reading news articles, browsing social media, or diving into literature.

Our Chrome, Firefox, and Edge extensions let you look up any word or phrase with a single click. The AI considers the entire sentence and paragraph to explain exactly what the author meant. Save words along with their original context so you never forget where you learned them.

View startup

Microsoft’s Azure reports cloud outage, disrupting customers including Alaska Airlines.

Microsoft logo. (GeekWire Photo)

An outage on Microsoft’s Azure cloud services Wednesday morning disrupted operations for customers worldwide including Alaska Airlines, Xbox users and 365 subscribers.

The incident strikes just ahead of Microsoft’s quarterly earnings call today and follows last week’s outage at Amazon Web Services and a failure of Alaska Airline’s own data center technology.

The latest outage struck at 9 a.m. Pacific Standard Time, according to Microsoft, when the system “began experiencing Azure Front Door (AFD) issues resulting in a loss of availability of some services. We suspect that an inadvertent configuration change as the trigger event for this issue.

“We are taking several concurrent actions: Firstly, where we are blocking all changes to the AFD services, this includes customer configuration changes as well. At the same time, we are rolling back our AFD configuration to our last known good state,” the company stated. “As we rollback we want to ensure that the problematic configuration doesn’t re-initiate upon recovery.”

Alaska Airlines posted on X at 10:33 a.m., explaining that the Azure outage was disrupting systems including their website function. Passengers flying on Alaska and Hawaiian airlines who were unable to check-in online were directed to airline agents to receive their boarding passes.

“We apologize for the inconvenience and appreciate your patience as we navigate this issue,” the post said.

Microsoft did not indicate when the issue would be resolved. “We do not have an ETA for when the rollback will be completed, but we will update this communication within 30 minutes or when we have an update,” the company posted at 10:51 a.m.

Amazon layoffs hit software engineers hardest in Washington

Amazon’s headquarters towers and The Spheres in Seattle. (GeekWire File Photo / Kurt Schlosser)

Software development engineers make up the largest group of employees affected by Amazon’s latest round of layoffs in its home state.

GeekWire reported Tuesday on a new filing from the Washington Employment Security Department revealing that the tech giant is laying off 2,303 corporate employees, mostly in Seattle and Bellevue. The cuts are part of broader layoffs announced Tuesday that will impact about 14,000 workers globally.

detailed list included with the state filing reveals which roles are impacted by the layoffs. More than 600 software development engineering roles are being cut among the 2,303 affected workers in Washington — more than a quarter of total cuts.

The trend mirrors layoffs at Microsoft earlier this year, as companies reassess their engineering needs amid the rise of AI-driven coding tools. Amazon itself recently introduced its own AI coding tool Kiro in July, and has reportedly explored adopting the AI code assistant Cursor for employees.

The layoffs of software engineers reflect a striking shift for an industry that has traditionally relied on coders to help build and maintain the backbone of digital platforms.

“This generation of AI is the most transformative technology we’ve seen since the Internet,” Amazon HR chief Beth Galetti wrote in a message to employees Tuesday, saying it’s enabling teams to “innovate much faster than ever before.”

Amazon’s engineering layoffs are part of a broader industry reckoning with AI’s impact on traditional tech roles and white-collar jobs. A Wall Street Journal report this week detailed how the adoption of AI is contributing to a wave of layoffs across the country. Axios published a story Wednesday on a similar topic with the headline: How an AI job apocalypse unfolds.

More than 500 manager-level titles were also heavily affected by Amazon’s layoffs in Washington, according to the filing — aligning with a company-wide push to use the cutbacks to help reduce bureaucracy and operate more efficiently.

Amazon also made reductions in recruiting and HR roles. Other impacted areas include marketing, advertising, and legal.

The largest single site impact is at SEA40, Amazon’s Doppler office building on 7th Avenue in Seattle, where 361 employees are affected, according to the filing.

More than 100 remote employees based in Washington are also being let go.

The missing data link in enterprise AI: Why agents need streaming context, not just better prompts

Enterprise AI agents today face a fundamental timing problem: They can't easily act on critical business events because they aren't always aware of them in real-time.

The challenge is infrastructure. Most enterprise data lives in databases fed by extract-transform-load (ETL) jobs that run hourly or daily — ultimately too slow for agents that must respond in real time.

One potential way to tackle that challenge is to have agents directly interface with streaming data systems. Among the primary approaches in use today are the open source Apache Kafka and Apache Flink technologies. There are multiple commercial implementations based on those technologies, too, Confluent, which is led by the original creators behind Kafka, being one of them.

Today, Confluent is introducing a real-time context engine designed to solve this latency problem. The technology builds on Apache Kafka, the distributed event streaming platform that captures data as events occur, and open-source Apache Flink, the stream processing engine that transforms those events in real time.

The company is also releasing an open-source framework, Flink Agents, developed in collaboration with Alibaba Cloud, LinkedIn and Ververica. The framework brings event-driven AI agent capabilities directly to Apache Flink, allowing organizations to build agents that monitor data streams and trigger automatically based on conditions without committing to Confluent's managed platform.

"Today, most enterprise AI systems can't respond automatically to important events in a business without someone prompting them first," Sean Falconer, Confluent's head of AI, told VentureBeat. "This leads to lost revenue, unhappy customers or added risk when a payment fails or a network malfunctions."

The significance extends beyond Confluent's specific products. The industry is recognizing that AI agents require different data infrastructure than traditional applications. Agents don't just retrieve information when asked. They need to observe continuous streams of business events and act automatically when conditions warrant. This requires streaming architecture, not batch pipelines.

Batch versus streaming: Why RAG alone isn't enough

To understand the problem, it's important to distinguish between the different approaches to moving data through enterprise systems and how they can connect to agentic AI.

In batch processing, data accumulates in source systems until a scheduled job runs. That job extracts the data, transforms it and loads it into a target database or data warehouse. This might occur hourly, daily or even weekly. The approach works well for analytical workloads, but it creates latency between when something happens in the business and when systems can act on it.

Data streaming inverts this model. Instead of waiting for scheduled jobs, streaming platforms like Apache Kafka capture events as they occur. Each database update, user action, transaction or sensor reading becomes an event published to a stream. Apache Flink then processes these streams to join, filter and aggregate data in real time. The result is processed data that reflects the current state of the business, updating continuously as new events arrive.

This distinction becomes critical when you consider what kinds of context AI agents actually need. Much of the current enterprise AI discussion focuses on retrieval-augmented generation (RAG), which handles semantic search over knowledge bases to find relevant documentation, policies or historical information. RAG works well for questions like "What's our refund policy?" where the answer exists in static documents.

But many enterprise use cases require what Falconer calls "structural context" — precise, up-to-date information from multiple operational systems stitched together in real time. Consider a job recommendation agent that requires user profile data from the HR database, browsing behavior from the last hour, search queries from minutes ago and current open positions across multiple systems.

"The part that we're unlocking for businesses is the ability to essentially serve that structural context needed to deliver the freshest version," Falconer said.

The MCP connection problem: Stale data and fragmented context

The challenge isn't simply connecting AI to enterprise data. Model Context Protocol (MCP), introduced by Anthropic earlier this year, already standardized how agents access data sources. The problem is what happens after the connection is made.

In most enterprise architectures today, AI agents connect via MCP to data lakes or warehouses fed by batch ETL pipelines. This creates two critical failures: The data is stale, reflecting yesterday's reality rather than current events, and it's fragmented across multiple systems, requiring significant preprocessing before an agent can reason about it effectively.

The alternative — putting MCP servers directly in front of operational databases and APIs — creates different problems. Those endpoints weren't designed for agent consumption, which can lead to high token costs as agents process excessive raw data and multiple inference loops as they try to make sense of unstructured responses.

"Enterprises have the data, but it's often stale, fragmented or locked in formats that AI can't use effectively," Falconer explained. "The real-time context engine solves this by unifying data processing, reprocessing and serving, turning continuous data streams into live context for smarter, faster and more reliable AI decisions."

The technical architecture: Three layers for real-time agent context

Confluent's platform encompasses three elements that work together or adopted separately.

The real-time context engine is the managed data infrastructure layer on Confluent Cloud. Connectors pull data into Kafka topics as events occur. Flink jobs process these streams into "derived datasets" — materialized views joining historical and real-time signals. For customer support, this might combine account history, current session behavior and inventory status into one unified context object. The Engine exposes this through a managed MCP server.

Streaming agents is Confluent's proprietary framework for building AI agents that run natively on Flink. These agents monitor data streams and trigger automatically based on conditions — they don't wait for prompts. The framework includes simplified agent definitions, built-in observability and native Claude integration from Anthropic. It's available in open preview on Confluent's platform.

Flink Agents is the open-source framework developed with Alibaba Cloud, LinkedIn and Ververica. It brings event-driven agent capabilities directly to Apache Flink, allowing organizations to build streaming agents without committing to Confluent's managed platform. They handle operational complexity themselves but avoid vendor lock-in.

Competition heats up for agent-ready data infrastructure

Confluent isn't alone in recognizing that AI agents need different data infrastructure. 

The day before Confluent's announcement, rival Redpanda introduced its own Agentic Data Plane — combining streaming, SQL and governance specifically for AI agents. Redpanda acquired Oxla's distributed SQL engine to give agents standard SQL endpoints for querying data in motion or at rest. The platform emphasizes MCP-aware connectivity, full observability of agent interactions and what it calls "agentic access control" with fine-grained, short-lived tokens.

The architectural approaches differ. Confluent emphasizes stream processing with Flink to create derived datasets optimized for agents. Redpanda emphasizes federated SQL querying across disparate sources. Both recognize agents need real-time context with governance and observability.

Beyond direct streaming competitors, Databricks and Snowflake are fundamentally analytical platforms adding streaming capabilities. Their strength is complex queries over large datasets, with streaming as an enhancement. Confluent and Redpanda invert this: Streaming is the foundation, with analytical and AI workloads built on top of data in motion.

How streaming context works in practice

Among the users of Confluent's system is transportation vendor Busie. The company is building a modern operating system for charter bus companies that helps them manage quotes, trips, payments and drivers in real time. 

"Data streaming is what makes that possible," Louis Bookoff, Busie co-founder and CEO told VentureBeat. "Using Confluent, we move data instantly between different parts of our system instead of waiting for overnight updates or batch reports. That keeps everything in sync and helps us ship new features faster.

Bookoff noted that the same foundation is what will make gen AI valuable for his customers.

"In our case, every action like a quote sent or a driver assigned becomes an event that streams through the system immediately," Bookoff said. "That live feed of information is what will let our AI tools respond in real time with low latency rather than just summarize what already happened."

The challenge, however, is how to understand context. When thousands of live events flow through the system every minute, AI models need relevant, accurate data without getting overwhelmed.

 "If the data isn't grounded in what is happening in the real world, AI can easily make wrong assumptions and in turn take wrong actions," Bookoff said. "Stream processing solves that by continuously validating and reconciling live data against activity in Busie."

What this means for enterprise AI strategy

Streaming context architecture signals a fundamental shift in how AI agents consume enterprise data. 

AI agents require continuous context that blends historical understanding with real-time awareness — they need to know what happened, what's happening and what might happen next, all at once.

For enterprises evaluating this approach, start by identifying use cases where data staleness breaks the agent. Fraud detection, anomaly investigation and real-time customer intervention fail with batch pipelines that refresh hourly or daily. If your agents need to act on events within seconds or minutes of them occurring, streaming context becomes necessary rather than optional.

"When you're building applications on top of foundation models, because they're inherently probabilistic, you use data and context to steer the model in a direction where you want to get some kind of outcome," Falconer said. "The better you can do that, the more reliable and better the outcome."

Security's AI dilemma: Moving faster while risking more

Presented by Splunk, a Cisco Company


As AI rapidly evolves from a theoretical promise to an operational reality, CISOs and CIOs face a fundamental challenge: how to harness AI's transformative potential while maintaining the human oversight and strategic thinking that security demands. The rise of agentic AI is reshaping security operations, but success requires balancing automation with accountability.

The efficiency paradox: Automation without abdication

The pressure to adopt AI is intense. Organizations are being pushed to reduce headcount or redirect resources toward AI-driven initiatives, often without fully understanding what that transformation entails. The promise is compelling: AI can reduce investigation times from 60 minutes to just 5 minutes, potentially delivering 10x productivity improvements for security analysts.

However, the critical question isn't whether AI can automate tasks — it's which tasks should be automated and where human judgment remains irreplaceable. The answer lies in understanding that AI excels at accelerating investigative workflows, but remediation and response actions still require human validation. Taking a system offline or quarantining an endpoint can have massive business impact. An AI making that call autonomously could inadvertently cause the very disruption it's meant to prevent.

The goal isn't to replace security analysts but to free them for higher-value work. With routine alert triage automated, analysts can focus on red team/blue team exercises, collaborate with engineering teams on remediation, and engage in proactive threat hunting. There's no shortage of security problems to solve — there's a shortage of security experts to address them strategically.

The trust deficit: Showing your work

While confidence in AI's ability to improve efficiency is high, skepticism about the quality of AI-driven decisions remains significant. Security teams need more than just AI-generated conclusions — they need transparency into how those conclusions were reached.

When AI determines an alert is benign and closes it, SOC analysts need to understand the investigative steps that led to that determination. What data was examined? What patterns were identified? What alternative explanations were considered and ruled out?

This transparency builds trust in AI recommendations, enables validation of AI logic, and creates opportunities for continuous improvement. Most importantly, it maintains the critical human-in-the-loop for complex judgment calls that require nuanced understanding of business context, compliance requirements, and potential cascading impacts.

The future likely involves a hybrid model where autonomous capabilities are integrated into guided workflows and playbooks, with analysts remaining involved in complex decisions.

The adversarial advantage: Fighting AI with AI — carefully

AI presents a dual-edged sword in security. While we're carefully implementing AI with appropriate guardrails, adversaries face no such constraints. AI lowers the barrier to entry for attackers, enabling rapid exploit development and vulnerability discovery at scale. What was once the domain of sophisticated threat actors could soon be accessible to script kiddies armed with AI tools.

The asymmetry is striking: defenders must be thoughtful and risk-averse, while attackers can experiment freely. If we make a mistake implementing autonomous security responses, we risk taking down production systems. If an attacker's AI-driven exploit fails, they simply try again with no consequences.

This creates an imperative to use AI defensively, but with appropriate caution. We must learn from attackers' techniques while maintaining the guardrails that prevent our AI from becoming the vulnerability. The recent emergence of malicious MCP (Model Context Protocol) supply chain attacks demonstrates how quickly adversaries exploit new AI infrastructure.

The skills dilemma: Building capabilities while maintaining core competencies

As AI handles more routine investigative work, a concerning question emerges: will security professionals' fundamental skills atrophy over time? This isn't an argument against AI adoption — it's a call for intentional skill development strategies. Organizations must balance AI-enabled efficiency with programs that maintain core competencies. This includes regular exercises that require manual investigation, cross-training that deepens understanding of underlying systems, and career paths that evolve roles rather than eliminate them.

The responsibility is shared. Employers must provide tools, training, and culture that enable AI to augment rather than replace human expertise. Employees must actively engage in continuous learning, treating AI as a collaborative partner rather than a replacement for critical thinking.

The identity crisis: Governing the agent explosion

Perhaps the most underestimated challenge ahead is identity and access management in an agentic AI world. IDC estimates 1.3 billion agents by 2028 — each requiring identity, permissions, and governance. The complexity compounds exponentially.

Overly permissive agents represent significant risk. An agent with broad administrative access could be socially engineered into taking destructive actions, approving fraudulent transactions, or exfiltrating sensitive data. The technical shortcuts engineers take to "just make it work" — granting excessive permissions to expedite deployment — create vulnerabilities that adversaries will exploit.

Tool-based access control offers one path forward, granting agents only the specific capabilities they need. But governance frameworks must also address how LLMs themselves might learn and retain authentication information, potentially enabling impersonation attacks that bypass traditional access controls.

The path forward: Start with compliance and reporting

Amid these challenges, one area offers immediate, high-impact opportunity: continuous compliance and risk reporting. AI's ability to consume vast amounts of documentation, interpret complex requirements, and generate concise summaries makes it ideal for compliance and reporting work that has traditionally consumed enormous analysts’ time. This represents a low-risk, high-value entry point for AI in security operations.

The data foundation: Enabling the AI-powered SOC

None of these AI capabilities can succeed without addressing the fundamental data challenges facing security operations. SOC teams struggle with siloed data and disparate tools. Success requires a deliberate data strategy that prioritizes accessibility, quality, and unified data contexts. Security-relevant data must be immediately available to AI agents without friction, properly governed to ensure reliability, and enriched with metadata that provides the business context AI cannot understand.

Closing thought: Innovation with intentionality

The autonomous SOC is emerging — not as a light switch to flip, but as an evolutionary journey requiring continuous adaptation. Success demands that we embrace AI's efficiency gains while maintaining the human judgment, strategic thinking, and ethical oversight that security requires.

We're not replacing security teams with AI. We're building collaborative, multi-agent systems where human expertise guides AI capabilities toward outcomes that neither could achieve alone. That's the promise of the agentic AI era — if we're intentional about how we get there.


Tanya Faddoul, VP Product, Customer Strategy and Chief of Staff for Splunk, a Cisco Company. Michael Fanning is Chief Information Security Officer for Splunk, a Cisco Company.

Cisco Data Fabric provides the needed data architecture powered by Splunk Platform — unified data fabric, federated search capabilities, comprehensive metadata management — to unlock AI and SOC’s full potential. Learn more about Cisco Data Fabric.


Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. For more information, contact sales@venturebeat.com.

Agentic AI is all about the context — engineering, that is

Presented by Elastic


As organizations scramble to enact agentic AI solutions, accessing proprietary data from all the nooks and crannies will be key

By now, most organizations have heard of agentic AI, which are systems that “think” by autonomously gathering tools, data and other sources of information to return an answer. But here’s the rub: reliability and relevance depend on delivering accurate context. In most enterprises, this context is scattered across various unstructured data sources, including documents, emails, business apps, and customer feedback.

As organizations look ahead to 2026, solving this problem will be key to accelerating agentic AI rollouts around the world, says Ken Exner, chief product officer at Elastic.

"People are starting to realize that to do agentic AI correctly, you have to have relevant data," Exner says. "Relevance is critical in the context of agentic AI, because that AI is taking action on your behalf. When people struggle to build AI applications, I can almost guarantee you the problem is relevance.”

Agents everywhere

The struggle could be entering a make-or-break period as organizations scramble for competitive edge or to create new efficiencies. A Deloitte study predicts that by 2026, more than 60% of large enterprises will have deployed agentic AI at scale, marking a major increase from experimental phases to mainstream implementation. And researcher Gartner forecasts that by the end of 2026, 40% of all enterprise applications will incorporate task-specific agents, up from less than 5% in 2025. Adding task specialization capabilities evolves AI assistants into context-aware AI agents.

Enter context engineering

The process for getting the relevant context into agents at the right time is known as context engineering. It not only ensures that an agentic application has the data it needs to provide accurate, in-depth responses, it helps the large language model (LLM) understand what tools it needs to find and use that data, and how to call those APIs.

While there are now open-source standards such as the Model Context Protocol (MCP) that allow LLMs to connect to and communicate with external data, there are few platforms that let organizations build precise AI agents that use your data and combine retrieval, governance, and orchestration in one place, natively.

Elasticsearch has always been a leading platform for the core of context engineering. It recently released a new feature within Elasticsearch called Agent Builder, which simplifies the entire operational lifecycle of agents: development, configuration, execution, customization, and observability.

Agent Builder helps build MCP tools on private data using various techniques, including Elasticsearch Query Language, a piped query language for filtering, transforming, and analyzing data, or workflow modeling. Users can then take various tools and combine them with prompts and an LLM to build an agent.

Agent Builder offers a configurable, out-of-the-box conversational agent that allows you to chat with the data in the index, and it also gives users the ability to build one from scratch using various tools and prompts on top of private data.

"Data is the center of our world at Elastic. We’re trying to make sure that you have the tools you need to put that data to work," Exner explains. "The second you open up Agent Builder, you point it to an index in Elasticsearch, and you can begin chatting with any data you connect this to, any data that’s indexed in Elasticsearch — or from external sources through integrations.”

Context engineering as a discipline

Prompt and context engineering is becoming a discipli. It’s not something you need a computer science degree in, but more classes and best practices will emerge, because there’s an art to it.

"We want to make it very simple to do that," Exner says. "The thing that people will have to figure out is, how do you drive automation with AI? That’s what’s going to drive productivity. The people who are focused on that will see more success."

Beyond that, other context engineering patterns will emerge. The industry has gone from prompt engineering to retrieval-augmented generation, where information is passed to the LLM in a context window, to MCP solutions that help LLMs with tool selection. But it won't stop there.

"Given how fast things are moving, I will guarantee that new patterns will emerge quite quickly," Exner says. "There will still be context engineering, but they’ll be new patterns for how to share data with an LLM, how to get it to be grounded in the right information. And I predict more patterns that make it possible for the LLM to understand private data that it’s not been trained on."

Agent Builder is available now as a tech preview. Get started with an Elastic Cloud Trial, and check out the documentation for Agent Builder here.


Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. For more information, contact sales@venturebeat.com.

Your IT stack is the enemy: How 84% of attacks evade detection by turning trusted tools against you

It’s 3:37 am on a Sunday in Los Angeles, and one of the leading financial services firms on the West Coast is experiencing the second week of a living-off-the-land (LOTL) attack. A nation-state cyberattack squad has targeted the firm’s pricing, trading and valuation algorithms for cryptocurrency gain. Using common tools, the nation state has penetrated the firm’s infrastructure and is slowly weaponizing it for its own gain.

According to CrowdStrike’s 2025 Global Threat Report, nearly 80% of modern attacks, including those in finance, are now malware-free, relying on adversaries exploiting valid credentials, remote monitoring tools and administrative utilities with breakout times (sometimes less than a minute).

No one in the SOC or across the cybersecurity leadership team suspects anything is wrong. But there are unmistakable signals that an attack is underway.

The upsurge in credential theft, business email compromise and exploit of zero-day vulnerabilities is creating the ideal conditions for LOTL attacks to proliferate. Bitdefender’s recent research found that 84% of modern attacks use LOTL techniques, bypassing traditional detection systems. In nearly 1 in 5 cases, attackers increasingly aided by automation and streamlined toolkits exfiltrated sensitive data within the first hour of compromise.

LOTL-based tactics now account for the majority of modern cyber intrusions, with advanced persistent threats (APTs) often lingering undetected for weeks or months before hackers exfiltrate valuable data, according to IBM’s X-Force 2025 Threat Intelligence Index.

The financial repercussions are staggering. CrowdStrike’s 2025 threat research puts the average cost of ransomware-related downtime at $1.7 million per incident, which can balloon to $2.5 million in the public sector. For industry leaders, the stakes are so high that security budgets now rival those of core profit centers.

Your most trusted tools are an attacker’s arsenal

"These are the tools that you cannot disable because your administrators are using them, your applications are using them, your [employees] are using them, but attackers [are using them, too]," Martin Zugec, technical solutions director at Bitdefender, said at RSAC-2025 earlier this year. "You cannot disable them because you will impact the business."

CrowdStrike’s 2025 report confirms that adversaries routinely exploit utilities such as PowerShell, Windows management instrumentation (WMI), PsExec, remote desktop protocol (RDP), Microsoft Quick Assist, Certutil, Bitsadmin, MSBuild and more to persist inside enterprises and evade detection. LOTL tools of the trade leave no digital exhaust, making it extremely difficult to spot an attack in progress.

Threat actors increasingly exploit techniques such as bring your own vulnerable driver (BYOVD) and LOTL to disable endpoint detection and response (EDR) agents and conceal malicious activity within legitimate system operations," Gartner notes in a recent report. "By leveraging common OS tools, such as PowerShell, MSHTA and Certutil, they complicate detection and hide in the noise of EDR alerts."

CrowdStrike’s ransomware survey reveals that 31% of ransomware incidents begin with the misuse of legitimate remote monitoring and management tools, proving that even enterprise IT utilities are rapidly weaponized by attackers.

The documented realities in CrowdStrike's reports corroborate the industry's deeper research: The IT stack itself is now the attack vector, and those relying on traditional controls and signature-based detection are dangerously behind the curve.

Behavioral clues hiding in plain sight

Adversaries who rely on LOTL techniques are notorious for their patience.

Attacks that once required malware and attention-grabbing exploits have given way to a new norm: Adversaries blending into the background, using the very administrative and remote management tools security teams depend on.

As Bitdefender's Zugec pointed out: “We are mostly seeing that the playbook attackers use works so well they just repeat it at scale. They don’t break in, they log in. They don’t use new malware. They just use the tools that already exist on the network.”

Zugec described a textbook LOTL breach: No malware, no new tools. BitLocker, PowerShell, common admin scripts; everything looked routine until the files were gone and no one could trace it back. That’s where threat actors are winning today.

Adversaries are using normality as their camouflage. Many of the admins’ most trusted and used tools are the very reason LOTL attacks have scaled so quickly and quietly. Zugec is brutally honest: “It has never been as easy to get inside the network as it is right now.” What was once a breach of perimeter is now a breach by familiarity, invisible to legacy tools and indistinguishable from routine administration.

CrowdStrike’s 2025 Global Threat Report captures the scale of this phenomenon in numbers that should command every board’s attention. The reports’ authors write: “In 2024, 79% of detections CrowdStrike observed were malware-free [a significant rise from 40% in 2019], indicating adversaries are instead using hands-on-keyboard techniques that blend in with legitimate user activity and impede detection. This shift toward malware-free attack techniques has been a defining trend over the past five years."

The report’s researchers also found that breakout times for successful attacks continue to shrink; the average is just 48 minutes, the fastest 51 seconds.

Zugec’s advice for defenders working in this new paradigm is blunt and pragmatic. “Instead of just chasing something else, figure out how we can take all these capabilities that we have, all these technologies, and make them work together and fuel each other.” The first step: “Understanding your attack surface. Just getting familiar with how the attackers operate, what they do, not five weeks ago, but right now, should be the first step.”

He urges teams to learn what normal looks like inside their own environment and use this baseline to spot what’s truly out of place, so defenders stop chasing endless alerts and start responding only when it matters.

Take complete ownership of your tech stack now

LOTL attacks don’t just exploit trusted tools and infrastructures, they take advantage of an organizations’ culture and daily ability to compete.

Staying secure means making constant vigilance a core value, backed by zero trust and microsegmentation as cultural anchors. These are just the first steps. Consider the NIST Zero Trust Architecture (SP 800-207) as an organizational backbone and playbook to tackle LOTL head-on:

  • Limit privileges now on all accounts and delete long-standing accounts for contractors that haven’t been used in years: Apply least-privilege access across all admin and user accounts to stop attackers from escalating.

  • Enforce microsegmentation: Divide your network into secure zones; this will help confine attackers, limit movement and shrink the blast radius if something goes wrong.

  • Harden tool access and audit who is using them: Restrict, monitor and log PowerShell, WMI and other utilities. Use code signing, constrained language modes and limit access to trusted personnel.

  • Adopt NIST zero trust principles: Continuously verify identity, device hygiene and access context as outlined in SP 800-207, making adaptive trust the default.

  • Centralize behavioral analytics and logging: Use extended monitoring to flag unusual activities with system tools before an incident escalates.

  • Deploy adaptive detection if you have an existing platform that can scale and provide this at a minimal charge: Employ EDR/XDR to hunt for suspicious patterns, especially when attackers use legitimate tools in ways that sidestep traditional alerting.

  • Red team regularly: Actively test defenses with simulated attacks and know how adversaries misuse trusted tools to penetrate routine security.

  • Elevate security awareness and make it muscle memory: Train users and admins on LOTL methods, social engineering and what subtle signals betray compromise.

  • Update and inventory: Maintain application inventories, patch known vulnerabilities and conduct frequent security audits.

Bottom line: The financial services firm referenced at the beginning of this story eventually recovered from its LOTL attack. Today, their models, the CI/CD process for AI development and gen AI R&D are managed by a team of cybersecurity managers with decades of experience locking down U.S. Department of Defense sites and vaults.

LOTL attacks are real, growing, lethal and require a new mindset by everyone in cybersecurity.

SerpAPI calls Reddit lawsuit a threat to the ‘free and open web’

SerpApi Reddit

SerpAPI said it will “vigorously defend” itself after being sued by Reddit for allegedly scraping and reselling data from the platform via Google Search results.

The response. SerpAPI called Reddit’s language “inflammatory” and said it was “extremely disappointed” to learn of the lawsuit without prior communication.

  • “Our work is guided by a simple principle: public search data should be accessible,” the company said.
  • SerpAPI argued its position is backed by the First Amendment and called Reddit’s actions a threat to “the free and open Web we all enjoy.”

What they’re saying. According to SerpApi:

  • “For eight years, SerpApi has operated transparently and lawfully, helping countless developers, researchers, and businesses build on top of publicly available search data. Our technology is used across multiple industries, from SEO, marketing, and advertising to copyright verification, background checks, news monitoring, and now AI. Our work is guided by a simple principle: public search data should be accessible. And accessibility cannot exist without clean unified data structures, speed, and automation possibilities.”

Catch up quick. Reddit sued SerpAPI, Perplexity, Oxylabs, and AWMProxy last week, claiming they scraped Reddit content from Google results “at an industrial scale” and hid their identities to bypass restrictions. Reddit:

  • Claimed it set a “trap” for Perplexity to prove scraping.
  • Is seeking financial damages and a ban on further data use.

Zoom out. Reddit licenses its data to OpenAI and Google. Meanwhile, Google and Reddit are reportedly exploring a deeper AI partnership that could bring Reddit discussions directly into AI Overviews and other Google experiences.

Why we care. This fight isn’t just about data – it’s about control. The big tech platforms are battling over who owns the information that powers search results and AI-generated answers, while brands struggle to understand what’s driving rankings, visibility, and attribution.

SerpApi’s statement. Our Response to Reddit, Inc. v. SerpApi, LLC: Defending the First Amendment

How to use Google Ads Promotion assets (a step-by-step checklist)

How to use Promotion assets in Google Ads

With Black Friday, Cyber Monday, and the holiday season fast approaching, you’re probably knee-deep in holiday account planning.

Today, we’re zeroing in on a Google Ads feature that you can use all year ‘round, but may be particularly useful to you and your customers during the holidays: Promotion assets.

What are promotion assets in Google Ads?

Promotion assets (formerly known as promotion extensions) are an optional addition to your Search or Performance Max campaigns. They allow you to highlight special deals, sales, and discounts your business is currently running, alongside your standard ad headlines and descriptions.

Promotion assets can show on Google Search, and on Google Maps if you have a local promotion offer.

Why should you use promotion assets in Google Ads rather than editing your ad headlines?

While you absolutely can edit your search ad text to include promotional messaging, promotion assets offer a few unique benefits. Specifically, they:

  • Stand out in search results: Promotion assets may be bold or have a box around them, so they can really capture a user’s attention and increase your click-through rate.
  • Allow for flexibility: You can highlight deals and offers without constantly editing and re-submitting your core ad text. You can even schedule them to go live (and turn off) in advance, which saves you lots of time.

Why can’t you use promotion assets with Shopping campaigns in Google Ads?

To show special deals and sales with your Shopping ads, you need to set that up in Google Merchant Center rather than Google Ads. These are called Merchant Promotions, and they run through a separate system.

If you’re an ecommerce business, head over to Merchant Center to set up your deals and promotions.

How to create promotion assets in Google Ads

To create a new promotion asset, go to Assets > Assets on the left-side navigation in Google Ads, tap the + button, and choose Promotion.

You can add promotion assets at the account, campaign, or ad group level.

If you define multiple assets across the various levels, Google will use the most specific level available. For example, ad group-level promotion assets would be selected over campaign-level promotion assets.

Which holidays allow for promotion assets?

Part of creating a promotion asset is selecting an “occasion” for your promotion. While you must select from Google’s list of occasions, there is a wide variety of options.

Some are seasonal (like Halloween or Valentine’s Day), but many are generic and flexible, allowing you to run promotions year-round. For example:

  • Fall Sale
  • Winter Sale
  • Spring Sale
  • Summer Sale

This means that although you need an “occasion,” promotion assets are not just for big tentpole holidays. Any kind of business can have promotions, sales, and discounts throughout the year.

What additional details are required to use promotion assets?

To ensure your promotion assets get approved, and provide a good user experience, you need to pay close attention to several “fiddly” details. Inconsistency between your asset and your landing page is a common reason for disapproval.

Here’s a checklist of some of those details:

  • Language and Currency
  • Discount Type: For example
    • Monetary Discount: e.g., “$20 off”
    • Percent Discount: e.g., “20% off”
    • Up to Discount: e.g., “Up to $100 off”
  • Item Text (20 Characters): This is where you specify the actual product or service on sale. Keep it brief and relevant, e.g., “shoes” or “house cleaning.”
  • Final URL: This link must take the user directly to the specific promotion for that specific item. Do not link to a generic homepage.
  • Promo code (optional)
  • Minimum order value (e.g., “Valid on orders over $50”) (optional)
  • Terms and Conditions (optional)

Three types of scheduling for promotion assets

Promotion assets give you three layers of scheduling control. While we appreciate Google’s thoroughness, the use cases can be a bit confusing! Here’s what you need to know:

1. Promotion Scheduling: the dates your promotion is active

Understandably, the promotion asset is only eligible to show when the promotion is actually running. This means you cannot advertise a promotion before it has started or after it has ended.

2. Asset Scheduling: the dates when your promotion asset can run

This controls when your promotion asset will be eligible to run, which may or may not be the same as the actual promotion dates.

  • For example, if your business is running a sale for six weeks but only wants to advertise it via Google Ads during the last two weeks, you can set the promotion asset to be live only during those final 14 days.

3. Ad Scheduling: time of day and day of week

You can set additional scheduling rules to show your promotion asset only on specific days or times of the week (e.g., only during evenings or on weekends).
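
Put together, the three layers behave like a logical AND: the asset can only serve when the promotion is live, the asset's own window is open, and the current day and hour pass the ad schedule. Here is a minimal sketch of that combination (a hypothetical helper, not a Google Ads API call):

    from datetime import datetime, date

    def promotion_asset_eligible(
        now: datetime,
        promo_start: date, promo_end: date,   # 1. promotion scheduling
        asset_start: date, asset_end: date,   # 2. asset scheduling
        allowed_days: set[int],               # 3. ad scheduling: 0=Mon ... 6=Sun
        allowed_hours: range,
    ) -> bool:
        in_promo = promo_start <= now.date() <= promo_end
        in_asset_window = asset_start <= now.date() <= asset_end
        in_ad_schedule = now.weekday() in allowed_days and now.hour in allowed_hours
        return in_promo and in_asset_window and in_ad_schedule

    # Six-week sale, advertised only during the final two weeks, Friday-Sunday evenings only.
    print(promotion_asset_eligible(
        datetime(2025, 11, 28, 19, 0),
        promo_start=date(2025, 10, 20), promo_end=date(2025, 11, 30),
        asset_start=date(2025, 11, 17), asset_end=date(2025, 11, 30),
        allowed_days={4, 5, 6}, allowed_hours=range(17, 23),
    ))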

This article is part of our ongoing Search Engine Land series, Everything you need to know about Google Ads in less than 3 minutes. In each edition, Jyll highlights a different Google Ads feature, and what you need to know to get the best results from it – all in a quick 3-minute read.

Here's how you can highlight deals and sales in your Search and Performance Max campaigns without touching your core ad copy.

Experts Report Sharp Increase in Automated Botnet Attacks Targeting PHP Servers and IoT Devices

Cybersecurity researchers are calling attention to a spike in automated attacks targeting PHP servers, IoT devices, and cloud gateways by various botnets such as Mirai, Gafgyt, and Mozi. "These automated campaigns exploit known CVE vulnerabilities and cloud misconfigurations to gain control over exposed systems and expand botnet networks," the Qualys Threat Research Unit (TRU) said in a report

New AI-Targeted Cloaking Attack Tricks AI Crawlers Into Citing Fake Info as Verified Facts

Cybersecurity researchers have flagged a new security issue in agentic web browsers like OpenAI ChatGPT Atlas that exposes underlying artificial intelligence (AI) models to context poisoning attacks. In the attack devised by AI security company SPLX, a bad actor can set up websites that serve different content to browsers and AI crawlers run by ChatGPT and Perplexity. The technique has been […]
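
For illustration only, here is roughly what the serving trick looks like under the hood: a site keys its response on the User-Agent header and hands AI crawlers a different page than human visitors get. The crawler tokens and page contents below are assumptions for the sake of the example, and fetching pages with both kinds of user agents is exactly how defenders can spot this behavior.

    from http.server import BaseHTTPRequestHandler, HTTPServer

    AI_CRAWLER_TOKENS = ("GPTBot", "PerplexityBot")  # assumed User-Agent substrings

    HUMAN_PAGE = b"<html><body>Normal marketing copy.</body></html>"
    CRAWLER_PAGE = b"<html><body>Fabricated 'facts' planted for AI summaries.</body></html>"

    class CloakingHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # Serve a different page when the request looks like an AI crawler.
            ua = self.headers.get("User-Agent", "")
            body = CRAWLER_PAGE if any(tok in ua for tok in AI_CRAWLER_TOKENS) else HUMAN_PAGE
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("127.0.0.1", 8000), CloakingHandler).serve_forever()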

Microsoft’s Brad Smith makes nuanced AI pitch: Huge potential, real concerns, and a Jon Stewart clip

Former Washington Gov. Chris Gregoire and Microsoft President Brad Smith at the 2025 Cascade Innovation Corridor Conference. (GeekWire Photo / Lisa Stiffler)

It’s rare for a tech executive to cue up a video mocking themselves — but that’s just what Microsoft President Brad Smith did on Tuesday at the Cascadia Innovation Corridor conference in Seattle. Smith played a clip from The Daily Show in which comedian Jon Stewart lampooned his and Microsoft CEO Satya Nadella’s interviews about AI’s impact on jobs.

The segment poked fun at the idea that displaced workers might become “prompt engineers” — a new job Stewart rebranded as “types questions guy.”

It was a self-aware moment in a talk that balanced enthusiasm for artificial intelligence’s potential with sober reflections on its hype and potential pitfalls.

The Microsoft leader called AI the “next great general purpose technology” on par with electricity. He said AI will transform sectors including health, education, biotech, aerospace, agriculture, climate and others.

That was a theme during Tuesday’s event. Former Washington Gov. Chris Gregoire, who leads the Cascadia Innovation Corridor group, kicked off the day by calling AI “a defining technology of our generation.”

Smith, who in his three decades at Microsoft has witnessed tech bubbles and bursts, also offered a “breadth of perspective” on AI that he hinted might be lacking in Silicon Valley.

“In so many ways, the sky is the limit,” Smith said. “That is exciting, but I don’t want to just be another tech bro who says, ‘Hey, great, here it comes. Get ready, get out your wallet.'”

AI-driven employment threats are becoming increasingly real in the tech sector and beyond. Amazon on Tuesday announced a huge round of layoffs, slashing 14,000 corporate and tech jobs. Earlier this year Microsoft laid off 15,000 employees worldwide. The cuts aren’t all tied to AI, but many executives are talking about worker efficiency gains thanks to the tech.

Despite the recent layoffs, many industry and elected leaders in the Cascadia region, which stretches from Vancouver, B.C., through Seattle and down to Portland, see AI as a promising economic engine that can build on the area’s strong tech foundation. That includes Microsoft and Amazon as well as a growing slate of AI startups, plus institutions such as the University of Washington, University of British Columbia, Allen Institute for AI and others.

But Smith — who manages to strike a persona blending tech evangelist, politician and favorite uncle — also acknowledged concerns about disparities in AI access, whether looking locally at rural versus urban divides, or the gap between AI use in affluent and low-income countries that lack widespread electricity and internet connections.

He also tackled the meta questions around the responsible use of AI and encouraged society to get out in front of the technology with appropriate guardrails.

“What are we trying to do as an industry, as a region, as a planet, as a species? Are we trying to build machines that are better than people? Are we trying to build machines that will help people become smarter and better?” he asked.

“If the experience that we’ve all had with social media over the last 15 years teaches us anything at all,” Smith continued, “it is that the best time to ask these questions and to debate them is before technology answers them for us.”

RELATED: Cascadia’s AI paradox: A world-leading opportunity threatened by rising costs and a talent crunch

Windows 11 videos demonstrating account and hardware requirements bypass purged from YouTube — platform says content ‘encourages dangerous or illegal activities that risk serious physical harm or death’

A YouTuber's videos telling people how to use Windows 11 without a Microsoft account and how to install it on unsupported hardware were allegedly violating community guidelines on dangerous and illegal activities.

El-Erian Warns of AI’s “Rational Bubble” and Unaddressed Risks

“Some AI names will end up in tears,” declared Mohamed El-Erian, Allianz’s chief economic advisor, during a recent segment on CNBC’s ‘Squawk on the Street.’ His candid assessment cut through the prevailing market euphoria, offering a nuanced perspective on the artificial intelligence revolution. El-Erian, a former PIMCO CEO, engaged with the program’s hosts, providing commentary […]

The post El-Erian Warns of AI’s “Rational Bubble” and Unaddressed Risks appeared first on StartupHub.ai.

AI’s Unprecedented Infrastructure Revolution: Scarcity, Specialization, and a Cultural Reset

“I’ve seen nothing like this. I’m fairly certain no one’s seen anything like this. The Internet in the late 90s, early 2000s was big… this makes it… 10x is an understatement. It’s 100x what the Internet was.” This stark assessment by Amin Vahdat, VP and GM of AI and Infrastructure at Google, encapsulates the central […]

The post AI’s Unprecedented Infrastructure Revolution: Scarcity, Specialization, and a Cultural Reset appeared first on StartupHub.ai.

AI for Math Initiative: DeepMind’s Bold Research Play

Google DeepMind's AI for Math Initiative unites top research institutions to pioneer AI in mathematical discovery, promising significant breakthroughs and accelerated scientific progress.

The post AI for Math Initiative: DeepMind’s Bold Research Play appeared first on StartupHub.ai.

Outlier Talent and the Unscripted Symphony of Innovation

“Outlier talent will always be outlier talent,” declared Alex Pall, half of the Grammy-winning duo The Chainsmokers and co-founder of Mantis VC, an observation that cuts directly to the heart of creativity in an age increasingly shaped by artificial intelligence. This potent statement, emerging during a wide-ranging conversation with Jack Altman on the *Uncapped* podcast, […]

The post Outlier Talent and the Unscripted Symphony of Innovation appeared first on StartupHub.ai.

Mitrione: AI is alive and well and continues to power the market

AI’s Ascendancy and Market Concentration “AI is alive and well and continues to power the market,” stated RaeAnn Mitrione, Investment Management Partner at Callan Family Office, during her discussion with Frank Holland at the CNBC studio. The recent surge of NVIDIA past a $5 trillion market capitalization underscores the pervasive influence of artificial intelligence on […]

The post Mitrione: AI is alive and well and continues to power the market appeared first on StartupHub.ai.

AI layoffs hit Big Tech: Here’s what to know

Partsinevelos, speaking with Andrew Ross Sorkin, framed the discussion around a central query: are the reported AI layoffs in Big Tech genuine, or are they serving as a convenient “scapegoat” for other underlying issues in the employment landscape? She highlighted recent job cuts across major tech companies—Amazon, Applied Materials, Meta, and Google Cloud—some explicitly attributing […]

The post AI layoffs hit Big Tech: Here’s what to know appeared first on StartupHub.ai.

Impala AI Targets LLM Inference Costs with $11M Seed

Impala AI has secured $11 million to launch an AI stack that promises to cut large language model inference costs by up to 13x for enterprises.

The post Impala AI Targets LLM Inference Costs with $11M Seed appeared first on StartupHub.ai.

Reflectiz raises $22M to advance web exposure management

Web exposure management platform Reflectiz raised $22 million to help enterprises secure their websites from risks posed by third-party scripts and open-source components.

The post Reflectiz raises $22M to advance web exposure management appeared first on StartupHub.ai.

The AI Infrastructure Gold Rush: Opportunities, Risks, and Strategic Moats

The burgeoning artificial intelligence landscape, characterized by unprecedented infrastructure buildout and escalating capital expenditure, is creating both immense opportunities and significant strategic challenges for the world’s leading tech companies. This dynamic was a central theme in a recent CNBC discussion where Alex Kantrowitz, Founder of Big Technology and CNBC contributor, provided incisive commentary on Nvidia’s […]

The post The AI Infrastructure Gold Rush: Opportunities, Risks, and Strategic Moats appeared first on StartupHub.ai.

Beta’s unique electric airplane flies into Seattle to wow state officials and aviation experts

The ALIA CX300 electric airplane from Beta Technologies on approach at Boeing Field in Seattle. (Steve Rice Photo)

More than 117 years after Seattle residents first saw a flying machine in the sky, a unique aircraft over Jet City can still turn heads.

That happened this week with the arrival of Beta Technologies‘ all-electric ALIA CX300 conventional takeoff and landing aircraft as it dropped into King County International Airport – Boeing Field.

Photographer Steve Rice captured the strange-looking airplane with a rear propeller and posted images on Reddit, where aviation geeks launched into a debate about e-planes, range, charging times, vertical takeoff and landing aircraft, and more.

Vermont-based Beta wasn’t just doing a fly-by. The company brought the plane to Seattle for an official demonstration of the ALIA in an event that drew state officials, aviation experts, and industry leaders from across Washington.

In a news release Tuesday, Beta said Washington has a “deep-rooted aviation heritage that has long positioned the state as a global leader in aerospace innovation and manufacturing.” And the company said the state is now “actively advancing the future of flight through strategic investments in sustainable aviation and the critical infrastructure needed to support next-generation technologies.”

Beta founder and CEO Kyle Clark called the event at Boeing Field “a step toward realizing a future where electric aviation is accessible, reliable, and benefits local communities.” 

Founded in 2017, Beta is building two electric aircraft — the fixed-wing ALIA CTOL, and the ALIA VTOL, a vertical takeoff and landing aircraft — at a production facility in Vermont.

The inaugural flight of Beta’s first production model airplane was last November. The ALIA CTOL has a range of 336 nautical miles, and Beta’s planes are designed to carry passengers or cargo.

The company has also developed and is rolling out a network of charging infrastructure for use across airports and the electric aviation ecosystem.

Beta filed for an initial public offering earlier this month with plans to sell 25 million shares at $27 to $33 each — a price range that could value the company at $7.2 billion.

Vancouver, B.C.-based Helijet International previously placed orders with Beta for a fleet of eVTOL aircraft.

Other electric and hybrid aircraft makers are getting their planes off the ground in Washington, including Seattle-based Aero-TEC and Everett, Wash.-based magniX. Arlington, Wash.-based Eviation Aircraft paused work on its Alice airplane earlier this year.

With earnings on tap, Microsoft touches $4 trillion again after reaching OpenAI deal

Microsoft reports earnings Wednesday afternoon for the September quarter. (GeekWire File Photo / Todd Bishop)

With a new OpenAI partnership in hand, Microsoft is going into its earnings report Wednesday afternoon with a resolution to one of the biggest questions about its business.

The company’s market value reached $4 trillion again as Wall Street reacted to the details of the new Microsoft-OpenAI agreement, which gives Microsoft a 27% equity stake in OpenAI’s new for-profit entity, and a commitment for $250 billion in cloud purchasing by the ChatGPT maker.

Analysts expect the tech giant to report another strong quarter, fueled primarily by continued momentum in its Azure cloud business and growing adoption of its Copilot AI tools.

Quarterly revenue is expected to be about $75.4 billion for the first quarter of Microsoft’s 2026 fiscal year, which ended Sept. 30, according to numbers tracked by Yahoo Finance. That would represent a 15% jump compared to the $65.6 billion reported in the same period last year. 

Analysts expect earnings per share of $3.66, up about 11% year-over-year from $3.30.

Investors will be paying close attention to the growth rate in Microsoft’s Azure cloud business, with some analysts expecting as much as 39% growth (in constant currency, excluding the impact of exchange rates). Hitting this mark would exceed the company’s prior guidance and maintain the 39% growth pace set in the previous quarter.

Yet the potential for an AI bubble will no doubt be the focus of questions on the company’s earnings conference call. Amid surging investment and growing valuations in the AI sector, some analysts and tech leaders are warning that the enthusiasm could outpace the business realities.

Microsoft and Google parent Alphabet will both report numbers on Wednesday afternoon, and Amazon the following day, making for quick comparisons across the major cloud platforms.

As of the most recent quarter, ended in June, Microsoft reported more than $75 billion in annual Azure revenue for its just-ended fiscal year, compared to an annual run rate that had surpassed $50 billion for Google Cloud and a run rate of nearly $124 billion for Amazon Web Services (based on its $30.9 billion revenue in the June quarter).

Check back with GeekWire on Wednesday afternoon for full coverage.

Worth a mention: Seattle tech vets take on Google Alerts with Alertmouse, a startup to track who’s talking about you

The co-founders of Alertmouse, from left: Nathan Kriege, Rand Fishkin and Adam Doppelt, along with Britt Klontz, founder of PR firm Vada Communications, who helped beta test the product. (LinkedIn Photos)

A trio of veteran entrepreneurs have joined forces to create a new Seattle startup — and any mentions of the company across the internet will likely be tracked by what they’re building.

Alertmouse generates email alerts for people and brands who want to know what’s being said about them — or anything else they’re interested in — online. The goal is to provide a better offering than other monitoring tools, most notably Google Alerts, which Alertmouse calls “so bad it might as well not exist.”

The startup was created by co-founders Rand Fishkin (SparkToro, Snackbar Studio), Adam Doppelt (Urbanspoon, Dwellable), and Nathan Kriege (Blueprint AI, Fresh Chalk).

Fishkin, the CEO, posted about Alertmouse on LinkedIn this week, saying that while the side project has turned into a full-fledged business, he’s not leaving his other two jobs.

“It doesn’t take a ton of my time but it has been really fun to build this thing that I desperately needed,” Fishkin said in a video on his post, before listing his grievances with Google Alerts, including how it “doesn’t pick up everything you want, it sends you useless alerts” and more.

Fishkin said tracking his mentions or those related to his companies, whether it’s in a news article or in a Reddit thread, allows him to monitor what’s being said and jump in if necessary to reply.

Alertmouse says it searches its index of reachable websites and pages two or three times each day for the words, phrases, or rules a user has entered, then sends an email listing the pages that contain them.

“It’s not rocket science, but it takes a lot of clever programming, testing, and iteration to make a good alert service,” the company says in its FAQ.
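
A toy version of that loop, with hypothetical page lists, query terms, and mail settings (this is not Alertmouse's code, just a sketch of the behavior the company describes), might look like this:

    import smtplib
    import urllib.request
    from email.message import EmailMessage

    WATCHED_PAGES = ["https://example.com/blog", "https://example.org/news"]  # hypothetical
    QUERY_TERMS = ["Alertmouse", "Rand Fishkin"]

    def find_mentions() -> list[str]:
        hits = []
        for url in WATCHED_PAGES:
            try:
                html = urllib.request.urlopen(url, timeout=10).read().decode("utf-8", "ignore")
            except OSError:
                continue  # unreachable pages are simply skipped this cycle
            if any(term.lower() in html.lower() for term in QUERY_TERMS):
                hits.append(url)
        return hits

    def send_alert(hits: list[str]) -> None:
        msg = EmailMessage()
        msg["Subject"] = f"{len(hits)} new mention(s) found"
        msg["From"], msg["To"] = "alerts@example.com", "you@example.com"
        msg.set_content("\n".join(hits))
        with smtplib.SMTP("localhost") as smtp:  # assumes a local mail relay
            smtp.send_message(msg)

    if __name__ == "__main__":
        pages = find_mentions()
        if pages:
            send_alert(pages)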

In an interview with GeekWire this week, Fishkin said there are enterprise tracking tools that do what Alertmouse does, such as Mentioned, Hootsuite, and Brandwatch, but they can be cost-prohibitive.

“Google Alerts has been this free alternative for a long time, but sometime in the last decade, maybe even before that, it just stopped sending me anything decent,” he said. “I have no idea what they’re doing under the hood. I suspect it’s a defunct product that no one maintains anymore, but I couldn’t tell you what’s really going on.”

On a website loaded with cheesy puns, Alertmouse has four pricing tiers, including Nibble (free), Slice ($120/year), Wedge ($600/year), and Wheel ($1,200/year).

Alertmouse attracted 1,000 sign-ups in the first several hours it was live, and Fishkin credits the fun interface and language on the website, and the fact that it’s easy to use.

“We wanted to make a brand that no one could confuse for AI,” Fishkin laughed. “This is not an AI company. There’s going to be no venture capital, there’s no AI under the hood. It’s just really simple, straightforward, fun, delightful humans.”

Fishkin, an SEO expert who founded and led Moz, a Seattle-based maker of marketing software tools, co-founded SparkToro in 2018. The audience research tool helps marketers and others understand their target audiences. He raised $2.15 million last year for his new independent video game studio, Snackbar Studio.

Doppelt and Kriege previously worked on vacation rental startup Dwellable (sold to HomeAway in 2015), local professional recommendation site Fresh Chalk, and task management company Blueprint AI together. Last year they teamed up to create a resource website for everything you’d ever want to know about smoke detectors.

Fishkin and Doppelt are also part of a dedicated group of “Dungeons & Dragons” players.

The Alertmouse website says the startup has no plans to hire. But that could change after a morning in which lots of people were emailing with questions.

“If it keeps going like this we might have to bring someone on,” Fishkin said.

Here's NVIDIA’s Vera Rubin AI Superchip — 88 Cores, Two GPUs, Gobs Of Memory And Next-Level Design

NVIDIA held its annual GPU Technology Conference (GTC) in Washington D.C. yesterday, and as a surprise showing in the middle of his keynote, company CEO Jensen Huang pulled out a Vera Rubin Superchip, marking the first time that this product was shown to the public. The part looks quite different from even the GB300 Blackwell Ultra Superchip,

AMD Radeon AI PRO R9700 Performance For OpenCL Workloads

On Monday the AMD Radeon AI PRO R9700 officially arrived at Internet retailers and is successfully selling at the $1299 price point. Some models have since sold out, but as of writing two days later some Radeon AI PRO R9700 graphics cards remain available at that competitive price point. On Monday I provided some initial benchmarks of the AMD Radeon AI PRO R9700 for vLLM AI inferencing with more AI benchmarks on the way... While the craze is all about AI in 2025, the Radeon AI PRO R9700 does work for other non-AI workloads too and in this article is a look at its competitive OpenCL performance with great value compared to the NVIDIA RTX competition.

(PR) Inseego Launches the FX4200 Enterprise 5G Fixed Wireless Access Cellular Router

Inseego Corp., a global leader in 5G mobile broadband and 5G fixed wireless access (FWA) solutions, today introduced a new approach to enterprise FWA with a completely new 5G hardware platform, the Inseego Wavemaker FX4200, and updated software suite, Inseego Connect. Designed to bridge the gap between performance and ease-of-use for enterprise wireless networks, this innovative approach to FWA pairs the power and functionality of enterprise network solutions with the simplicity and ease-of-management of small- and medium-sized business (SMB)-oriented solutions.

"In order to take advantage of the power of 5G for business, organizations have been forced to choose between feature-heavy solutions that can be complicated and expensive to deploy, and simplistic products that can't scale or meet business needs," said Juho Sarvikas, CEO of Inseego. "With the FX4200, X700 mesh Wi-Fi, and the innovative Inseego Connect software, we're eliminating that tradeoff. We've built the solution that the market has been asking for, and it will propel growth in FWA and 5G use in business."

Inno3D Intros GeForce RTX 5060 LP Low-profile Graphics Card

Inno3D today rolled out the GeForce RTX 5060 LP, a low-profile graphics card capable of maxed-out gaming at 1080p, including with ray tracing. The card is strictly half-height (low-profile), and is 2 slots thick. It uses an aluminium heatsink with a tightly packed fin-stack sitting on top of a copper baseplate, with copper heat pipes spreading heat across. This heatsink is ventilated by three 50 mm fans.

The card draws power from a single 8-pin power connector, and display outputs include two standard DisplayPort 2.1b and one full-size HDMI 2.1b. The power connector points toward the tail end, to ensure clearance along the top edge. In all, the Inno3D RTX 5060 LP measures 17.8 cm x 6.9 cm x 4.1 cm (WxHxD). It sticks to NVIDIA-reference clock speeds, boosting up to 2497 MHz. Based on the 5 nm GB206 silicon and the Blackwell graphics architecture, the RTX 5060 offers 3,840 CUDA cores, 30 RT cores, 120 Tensor cores, 120 TMUs, and 48 ROPs. It comes with 8 GB of 28 Gbps GDDR7 memory across a 128-bit wide memory bus. Inno3D did not announce pricing, and availability for now is limited to the Chinese market.

(PR) GlobalFoundries Plans Billion-Euro Investment to Expand Chip Manufacturing in Germany

GlobalFoundries (GF) today announced plans to invest €1.1 billion to expand its manufacturing capabilities at its Dresden, Germany site. The investment will enable a production capacity increase to more than one million wafers per year by the end of 2028, making it the largest site of its kind in Europe.

The expansion, known as project SPRINT, is expected to be supported by the German federal government and the State of Saxony under the framework of the European Chips Act, with EU approval for the full program expected later this year. This investment underscores Saxony's role as a critical hub for semiconductor manufacturing and innovation and reinforces Europe's strategic goal of supply chain resilience.

NVIDIA First Company in History at $5 Trillion Value After Major Announcements at GTC 2025

NVIDIA's stock jumped 5% on Tuesday, closing at a record high and pushing the company's valuation to $4.89 trillion, just shy of the $5 trillion milestone. The surge followed a wave of new announcements at its GTC event in Washington, D.C., covering AI, supercomputing, and major industry partnerships. NVIDIA's shares have risen more than 50% year-to-date, more than doubling since April, fueled by demand for GPUs powering cloud data centers from Amazon, Google, and Microsoft. The company has also invested up to $100 billion in OpenAI, one of its largest customers. In his keynote, CEO Jensen Huang said the company expects $500 billion in GPU sales by the end of 2026, a realistic target if we consider that NVIDIA generated over $100 billion in revenue in just the first two quarters of this year.

During the GTC event, NVIDIA revealed plans with the U.S. Department of Energy to build seven new supercomputers, including one powered by 10,000 Blackwell GPUs. NVIDIA also introduced NVQLink, a new open systems architecture designed to accelerate the development of quantum supercomputers. Moreover, NVIDIA yesterday showcased its highly anticipated "Vera Rubin" Superchip, a single package combining two Rubin GPUs with a Vera CPU featuring 88 cores and 176 threads. While competition from AMD, Qualcomm, and in-house chips developed by hyperscalers continues to grow, NVIDIA remains positioned at the center of the AI infrastructure boom, edging closer to the $5 trillion milestone valuation that seems set to become reality in a matter of days.

(PR) HighPoint Launches New Rocket 1624A PCIe Gen 5 Switch Adapter

HighPoint Technologies, Inc., a leading manufacturer of high-performance PCIe storage and connectivity solutions, has unveiled the newest member of its industry leading PCIe Switch Solutions - the Rocket 1624A, a compact, high-efficiency variant of the acclaimed PCIe Gen 5 x16 Rocket 1628A Switch Adapter. The Rocket 1624A redefines PCIe scalability by combining dedicated NVMe storage connectivity with full-bandwidth PCIe expansion capability for GPUs, specialized accelerators, and other high-performance peripherals - all through a cost-effective Gen 5 switching platform.

Priced competitively at USD $899, the Rocket 1624A is designed for system integrators, datacenter administrators, and solution providers seeking to unlock the full potential of PCIe Gen 5 connectivity by enhancing the versatility and value of the target computing platform.

(PR) Acer Debuts New Premium Chromebook Plus Enterprise 714 Line

Acer America today debuted the premium Acer Chromebook Plus Enterprise 714 laptop line designed for businesses and organizations that need the latest technology to work in the cloud more efficiently and securely. In addition, the new Acer Chromebook Plus Enterprise 514 is now shipping in North America.

The new Chromebook Plus Enterprise models come with the business capabilities of ChromeOS unlocked, ensuring best-in-class security, simple management, flexible access, and enhanced administrative support. ChromeOS has built-in Google AI features that help employees do their best work, whether it's creating compelling content, leading meetings, or other tasks. In addition, the new Chromebooks feature a Quick Insert key that encourages creativity and efficiency by providing one-touch access to tools, menus, and other applications.

NVIDIA's "Vera Rubin" Superchip System Pictured for the First Time

NVIDIA yesterday held its GTC conference in Washington D.C., showing its latest "Vera Rubin" Superchip. Pictured for the first time is the combination of two "Rubin" GPUs paired with a single "Vera" CPU carrying 88 custom NVIDIA cores and 176 threads in a single package. NVIDIA quoted performance targets of roughly 50 PetaFLOPS of FP4 compute per Rubin GPU, which yields about 100 PetaFLOPS FP4 for the two-GPU Superchip. The company said engineering samples are already moving through labs and set mass production goals for 2026, with broader shipments and deployments into 2027.

Each Rubin GPU appears to integrate two reticle-sized compute chiplets (2x830mm²?) paired with eight HBM4 stacks, delivering about 288 GB of HBM4 per GPU and roughly 576 GB of HBM4 on the full Superchip. NVIDIA also populated the board with SOCAMM2 LPDDR5X modules to provide large, low-latency system memory, with some older briefings indicating around 1.5 TB of LPDDR5X per Vera CPU on typical trays. The Vera CPU itself uses an 88-core, 176-thread Arm-based custom design and shows signs of a multi-chiplet layout with a distinct I/O chiplet nearby. With "Grace," NVIDIA relied on Arm's Neoverse design, but with Vera, the design team brought the CPU core design in-house to extract maximum performance. Additionally, NVLink bandwidth climbs to approximately 1.8 TB/s to sustain heavy CPU-to-GPU traffic in demanding workloads such as AI inference and training.

(PR) ViewSonic Showcases Range of Display Options to Elevate Productivity and Entertainment

ViewSonic Corp., a leading global provider of visual and edtech solutions, is celebrating the 2025 holiday season with a dynamic showcase of displays at the Pepcom Holiday Spectacular!, October 29th in New York City. The latest offerings from ViewSonic deliver powerful performance, sleek portability, and simplified connectivity, ideal for content creators, gamers, and remote working professionals.

This year's collection of displays features high-resolution desktop monitors, ultra-portable OLED displays, and projectors for work and entertainment needs. ViewSonic is debuting the VP3276T-4K, a 32-inch ColorPro monitor that is Pantone Validated and features a Thunderbolt 4 docking station. Also being demonstrated is the lightweight M1X projector that is part of the AtmosKIT Autoplay GO Bundle and festive decoration kit to create a showstopping holiday experience.

GMKtec Launches NucBox M7 Ultra Mini PC Powered by AMD Ryzen 7 Pro 6850U

GMKtec has introduced the NucBox M7 Ultra compact Mini PC. The system is powered by AMD's Ryzen 7 PRO 6850U, an 8-core, 16-thread Zen 3+ processor built on TSMC's 6 nm process, paired with Radeon 680M graphics featuring 12 CUs running at 2200 MHz. The NucBox M7 Ultra supports up to 64 GB of DDR5-4800 memory and includes PCIe 4.0 M.2 storage with up to 16 TB of total expansion (2× M.2 2280). Display output options include HDMI 2.1, DisplayPort 2.0 and dual USB4 supporting up to 8K resolution across four screens. Connectivity is handled by Wi-Fi 6E, Bluetooth 5.2, and dual 2.5 GbE LAN ports.

Front I/O includes two USB 3.2 Gen 2 ports, a 3.5 mm combo jack, USB4, and an OCulink connector. The rear panel adds two USB 2.0 ports, HDMI, DP, USB4, dual LAN, and a security lock slot. The system ships with Windows 11 Pro pre-installed and also supports Linux distributions such as Ubuntu. The GMKtec NucBox M7 Ultra is available globally starting at $309.99 for the base model (no DDR, SSD or OS), with configurations up to 32 GB RAM and 1 TB SSD priced at $429.99. Early buyers before November 9 will receive a free USB Hub expansion dock.

(PR) Tight DRAM Supply to Boost DDR5 Contract Prices—Profitability in 2026 Expected to Surpass HBM3e

TrendForce's latest investigations show that server DRAM contract prices are strengthening in 4Q25, driven by ongoing data center expansion among global CSPs. This momentum is lifting overall DRAM pricing. Although final contract pricing for the quarter is still being negotiated, suppliers are showing a greater willingness to raise quotes as CSPs increase order volumes.

TrendForce has accordingly revised its 4Q25 outlook for conventional DRAM pricing upward, from an earlier forecast of 8-13% growth to 18-23%, with a strong likelihood of further upward revision.

Thermaltake confirms Intel Nova Lake/LGA 1954 support with new MINECUBE CPU cooler

Thermaltake accidentally confirms Intel LGA 1954 support for its newest CPU cooler
On the web page for its new MINECUBE 360 Ultra CPU cooler, which we saw at Computex 2025, Thermaltake accidentally revealed that its new cooler supports Intel’s upcoming Nova Lake CPUs. Specifically, the cooler was listed as supporting Intel’s next-generation LGA 1954 socket, which […]

The post Thermaltake confirms Intel Nova Lake/LGA 1954 support with new MINECUBE CPU cooler appeared first on OC3D.

The State Of Startups In 7 Charts: These Sectors And Stages Are Down As AI Megarounds Dominate In 2025

Venture funding has most definitely rebounded since the 2022 correction, but there’s a sharp divide between who’s getting funding and who’s not.

That was the overarching theme from our third-quarter market reports, which showed that global startup funding in Q3 totaled $97 billion, marking only the fourth quarter above $90 billion since Q3 2022.

Still, there are stark differences between the 2021 market peak and now, as contributing reporter Joanna Glasner noted in a couple of recent columns. Just as we saw four years ago, funding is frothy and often seems to be driven by investor FOMO. Some companies are even raising follow-on rounds at head-spinning speeds.

But the funding surge this time is also much, much more concentrated — namely in outsized rounds for AI companies.

With that, let’s take a look at the charts that illustrate the major private-market and startup funding themes as we head into the final quarter of 2025.

AI funding continues to drive venture growth

Nearly half — 46% — of startup funding globally in Q3 went to AI companies, Crunchbase data shows. Almost a third went to a single company: Anthropic, which raised $13 billion last quarter.

Even with an astonishing $45 billion going to artificial intelligence startups in Q3, it was only the third-highest quarter on record for AI funding, with Q4 2024 and Q1 2025 each clocking in higher.

Megarounds gobble up lion’s share

It shouldn’t come as too much of a surprise that AI has also skewed investment heavily toward megarounds, which we define as funding deals of $100 million or more.

The percentage of overall funding going into such deals hit a record high this year, with an astonishing 60% of global and 70% of U.S. venture capital going to $100 million-plus rounds, per Crunchbase data.

Even with several months left in the year, it also seems plausible that the total dollars going into such deals will match or top what we saw in 2021, which marked a peak for startup funding not scaled before or since.

The difference? Back then, startup dollars were widely distributed, going to a whole host of sectors — from food tech to health tech to robotics — and to early-stage, late-stage and in-between companies alike.

Contrast that with recent quarters, when the LLM giants and other large, established, AI-centric companies are getting the largest slice of venture dollars.

Seed deals slide further

As megarounds have increased, seed deals have declined.

The number of seed deals has shown a steady downward trend in recent quarters, Crunchbase data shows, even as total dollars invested at the stage have stayed relatively steady. That indicates that while seed deals are growing larger, they're also harder to come by.

Early-stage funding has essentially flatlined, despite larger rounds to companies working on robotics, biotech, AI and other technologies.

The AI haves and have-nots

AI has enthralled investors for the past three years.

What are they less interested in? Old standbys like cybersecurity and biotech. Biotech investment as a share of overall funding recently hit a 20-year low. Crunchbase data shows that cybersecurity investment, while still relatively steady, also retreated somewhat in Q3 2025. That’s notable given that many cybersecurity companies are integrating AI into their offerings.

Still, other sectors that benefit heavily from AI-driven automation are seeing a surge in investment. Perhaps most notable is legal tech, which hit an all-time high last month on the back of large rounds for companies promising to automate much of the drudgery of the profession.

Among the other sectors buoyed by AI is human resources software (including AI-powered recruitment and hiring offerings).

Other data points of note

Other interesting points that emerged from our Q3 reports and recent coverage include:

Looking ahead

The increasing concentration of capital into a small cadre of large AI companies — not to mention the interconnectedness of those deals — raises some obvious questions. Are we in a bubble? And given that nearly half of venture capital in recent years has been tied up in AI, what happens to the startup ecosystem if or when it pops?

Illustration: Dom Guzman

Microsoft sets its sights on a universal gaming ecosystem


In a rare deep-dive on Microsoft's gaming strategy with TPBN, CEO Satya Nadella outlined the company's evolving vision for Xbox and Windows gaming. His comments follow Microsoft's 2023 acquisition of Activision Blizzard, as the company reconsiders the future of Xbox – not just as a console, but as a broader platform.

No Wait, A “More Slim” Galaxy S26 Edge Is Still In The Cards

With the flak that Apple is reportedly receiving for its ultra-slim iPhone Air, there was an evolving consensus for a time that Samsung would simply trash its upcoming Galaxy S26 Edge. However, new reporting from the Netherlands now indicates that the project is not only alive but thriving, and is all set to debut next year, albeit with a hefty delay. Samsung Galaxy S26 Edge is likely to debut months after the rest of the S26 variants To wit, the Netherlands' GalaxyClub is now reporting that Samsung continues to develop a smartphone under the relatively silly alias of "More Slim." […]

Read full article at https://wccftech.com/no-wait-a-more-slim-galaxy-s26-edge-is-still-in-the-cards/

Hideo Kojima Wasn’t Aware He Was Offered A Matrix Video Game Project

Hideo Kojima was as surprised as anyone to learn this week that he had been offered a video game project based on the Matrix franchise, as no one ever told him such a conversation took place between the Wachowskis and Konami. Earlier today, the legendary game designer and creator of the Metal Gear and Death Stranding series clarified the statements made a few days ago by Christopher Bergstresser, the Senior Vice President of Strategic Planning and Business Development at Konami between 1996 and 2002. In his post on X, Kojima-san confirmed that, back in 1999, he and the Wachowskis exchanged […]

Read full article at https://wccftech.com/hideo-kojima-wasnt-aware-he-was-offered-a-matrix-video-game-project/

A New Geekbench 6 Test Shows Samsung Exynos 2600 Holding Up Surprisingly Well Against Qualcomm’s Snapdragon 8 Elite Gen 5

A new Geekbench 6 test is doing the rounds today on social media, showing the raw performance of what appears to be the final configuration of Samsung's upcoming flagship Exynos 2600 chip. Unfortunately, the chip fails to exceed the proven benchmark scores of Qualcomm's Snapdragon 8 Elite Gen 5, but manages to significantly narrow down the erstwhile performance gap between the two SoCs. Samsung Exynos 2600 falls behind Qualcomm's Snapdragon 8 Elite Gen 5 in both single-core and multi-core GeekBench 6 tests First, let's go over the Exynos 2600 setup used in the test: For comparison, here is the CPU […]

Read full article at https://wccftech.com/a-new-geekbench-6-test-shows-samsung-exynos-2600-holding-up-surprisingly-well-against-qualcomms-snapdragon-8-elite-gen-5/

KTC Launches G27P6M 2K OLED Monitor, Featuring 4th Gen LG OLED Panel At Sub-$300 Price As A Limited Time Deal

This is another cheap OLED 1440p gaming monitor, which boasts up to a 280 Hz refresh rate and uses LG's latest Tandem OLED panel.
KTC Unveils G27P6M OLED 280Hz@1440p Gaming Monitor With LG Primary RGB Tandem OLED; Official Price $422, but Limited Time Deal Price is $281
Looks like Lenovo isn't the only one willing to offer its OLED gaming monitors at the cheapest price. KTC, a popular Chinese monitor maker, joined the race and has unveiled its latest offering, the G27P6M, which is based on LG's Primary RGB Tandem OLED panel. The monitor launched recently, offering competitive specifications, […]

Read full article at https://wccftech.com/ktc-launches-g27p6m-2k-oled-monitor-featuring-4th-gen-lg-oled/

PlayStation Portal Leak Suggests Highly Requested Feature Could Be Coming Soon

The PlayStation Portal handheld system may be about to get a highly-requested feature soon, judging from a PS App leak. Over on the PlayStation Portal official subreddit, user GetTheWetsOn shared a screenshot captured from the PS App of the Deliver at All Costs store page, which seemingly confirms that cloud streaming of purchased digital games could be coming to the PlayStation Portal for PS Plus Premium subscribers. The user also stated that the same confirmation, which has since been removed, could be found on the Dead Space remake store page. While there's no indication this new PlayStation Portal streaming option […]

Read full article at https://wccftech.com/playstation-portal-leak-suggests-highly-requested-feature-is-coming-soon/

American Chip Startup ‘Substrate’ Vows to End the U.S. Dependence on ASML Through a New Lithography Technique

It seems like the US chip industry is going through a 'massive' revolution, and with that, a startup, Substrate, has decided to tap into the lithography segment, which ASML heavily dominates. Substrate Intends to Utilize X-Rays Over EUV For Lithography, Claiming It To Be a Cheaper Alternative When it comes to chip lithography equipment, the US is entirely dependent on companies like ASML, as the nation lacks a domestic technology that can rival the Dutch chipmaker. However, in a new report by Bloomberg, it seems like there's a startup by the name of 'Substrate' out there that plans to end […]

Read full article at https://wccftech.com/american-chip-startup-substrate-vows-to-end-dependency-on-asml/

New SK Hynix 7200 MT/s DDR5 Memory Spotted; 2 Gb B-Die And 4 Gb M-Die Prepared

SK Hynix has reportedly prepared several new DDR5 chips that can achieve a native speed of 7200 MT/s, as spotted on a popular e-commerce platform. Three More SK Hynix Memory Chips Get Prepared, Offering 7200 MT/s Speed; SK Hynix Reportedly Prepares 2 Gb and 4 Gb Chips as Well Last week, we reported that SK Hynix is preparing 3 Gb (Gigabit) DDR5 A-Die chips, which are rated for 7200 MT/s, above the current JEDEC standard of 6400 MT/s. It appears that SK Hynix isn't just limiting itself to A-Die 3 Gb chips, but has also silently prepared three more dies, […]

Read full article at https://wccftech.com/new-sk-hynix-7200-mt-s-ddr5-memory-spotted-2-gb-b-die-and-4-gb-m-die-prepared/

New World Gets Killed Off by Amazon Just as People Were Returning to It

It was practically a given following the announcement of the mass layoffs at Amazon Game Studios, but we now have confirmation that New World, the MMORPG developed by the team based in Irvine, California, is effectively on life support. Amazon has clarified that Season 10: Nighthaven, which launched earlier this month, will be the last content update because further development 'is not sustainable'. The servers are, however, guaranteed to remain up through December 31, 2026, at the least. Complete closure of the servers will likely follow shortly after that date, although Amazon said it will notify players at least six […]

Read full article at https://wccftech.com/new-world-gets-killed-off-amazon-just-as-people-were-returning/

Remedy Isn’t Satisfied with Recent Sales, But Remains Confident and Aims for ‘Significant Commercial Success’ by 2030

This morning, Remedy Entertainment published its Q3 2025 financial report. As expected, it's not great news for the Finnish studio. Tero Virtala, the company's CEO for nine years, stepped down earlier this month after Remedy told investors that the game FBC: Firebreak had effectively failed. This past quarter, just as the studio celebrated thirty years since its founding, the financial highlights were indeed very negative. However, interim CEO Markus Mäki reassured investors (and fans) that the remaining projects are on track, and the studio is still confident in its ability to achieve 'significant commercial success' by 2030. Despite challenges […]

Read full article at https://wccftech.com/remedy-isnt-satisfied-recent-sales-remains-confident-significant-commercial-success/

Geostar pioneers GEO as traditional SEO faces 25% decline from AI chatbots, Gartner says

The moment Mack McConnell knew everything about search had changed came last summer at the Paris Olympics. His parents, independently and without prompting, had both turned to ChatGPT to plan their day's activities in the French capital. The AI recommended specific tour companies, restaurants, and attractions — businesses that had won a new kind of visibility lottery.

"It was almost like this intuitive interface that older people were as comfortable with using as younger people," McConnell recalled in an exclusive interview with VentureBeat. "I could just see the businesses were now being recommended."

That observation has now become the foundation of Geostar, a Pear VC-backed startup that's racing to help businesses navigate what may be the most significant shift in online discovery since Google's founding. 

The company, which recently emerged from stealth with impressive early customer traction, is betting that the rise of AI-powered search represents a significant opportunity to reinvent how companies get found online. The global AI search engine market alone is projected to grow from $43.63 billion in 2025 to $108.88 billion by 2032.

Already the fastest-growing company in PearX's latest cohort, Geostar is fast approaching $1 million in annual recurring revenue in just four months — with only two founders and no employees.

Why Gartner predicts traditional search volume will decline 25% by 2026

The numbers tell a stark story of disruption. Gartner predicts that traditional search engine volume will decline by 25% by 2026, largely due to the rise of AI chatbots. Google's AI Overviews now appear on billions of searches monthly. Princeton University researchers have found that optimizing for these new AI systems can increase visibility by up to 40%.

"Search used to mean that you had to make Google happy," McConnell explained. "But now you have to optimize for four different Google interfaces — traditional search, AI Mode, Gemini, and AI Overviews — each with different criteria. And then ChatGPT, Claude, and Perplexity each work differently on top of that."

This fragmentation is creating chaos for businesses that have spent decades perfecting their Google search strategies. A recent Forrester study found that 95% of B2B buyers plan to use generative AI in future purchase decisions. Yet most companies remain woefully unprepared for this shift.

"Anybody who's not on this right now is losing out," said Cihan Tas, Geostar's co-founder and chief technology officer. "We see lawyers getting 50% of their clients through ChatGPT now. It's just such a massive shift."

How language models read the web differently than search engines ever did

What Geostar and a growing cohort of competitors call Generative Engine Optimization or GEO represents a fundamental departure from traditional search engine optimization. Where SEO focused primarily on keywords and backlinks, GEO requires understanding how large language models parse, understand, and synthesize information across the entire web.

The technical challenges are formidable. Every website must now function as what Tas calls "its own little database" capable of being understood by dozens of different AI crawlers, each with unique requirements and preferences. Google's systems pull from their existing search index. ChatGPT relies heavily on structured data and specific content formats. Perplexity shows a marked preference for Wikipedia and authoritative sources.

"Now the strategy is actually being concise, clear, and answering the question, because that's directly what the AI is looking for," Tas explained. "You're actually tuning for somewhat of an intelligent model that makes decisions similarly to how we make decisions."

Consider schema markup, the structured data that helps machines understand web content. While only 30% of websites currently implement comprehensive schema, research shows that pages with proper markup are 36% more likely to appear in AI-generated summaries. Yet most businesses don't even know what schema markup is, let alone how to implement it effectively.
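
For readers who have not seen it, schema markup is simply structured data embedded in a page using the schema.org vocabulary, usually as a JSON-LD script tag. A minimal sketch, with illustrative values for a hypothetical product page:

    import json

    # Build a schema.org Product description; every value here is a placeholder.
    product_schema = {
        "@context": "https://schema.org",
        "@type": "Product",
        "name": "DMARC Monitoring Suite",  # hypothetical product
        "description": "Email authentication monitoring for enterprises.",
        "brand": {"@type": "Brand", "name": "ExampleCo"},
        "offers": {
            "@type": "Offer",
            "price": "99.00",
            "priceCurrency": "USD",
            "availability": "https://schema.org/InStock",
        },
    }

    # Embed as a JSON-LD <script> tag so crawlers, traditional and AI alike, can parse it.
    jsonld_tag = (
        '<script type="application/ld+json">'
        + json.dumps(product_schema, indent=2)
        + "</script>"
    )
    print(jsonld_tag)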

Inside Geostar's AI agents that optimize websites continuously without human intervention

Geostar's solution embodies a broader trend in enterprise software: the rise of autonomous AI agents that can take action on behalf of businesses. The company embeds what it calls "ambient agents" directly into client websites, continuously optimizing content, technical configurations, and even creating new pages based on patterns learned across its entire customer base.

"Once we learn something about the way content performs, or the way a technical optimization performs, we can then syndicate that same change across the remaining users so everyone in the network benefits," McConnell said.

For RedSift, a cybersecurity company, this approach yielded a 27% increase in AI mentions within three months. In one case, Geostar identified an opportunity to rank for "best DMARC vendors," a high-value search term in the email security space. The company's agents created and optimized content that achieved first-page rankings on both Google and ChatGPT within four days.

"We're doing the work of an agency that charges $10,000 a month," McConnell said, noting that Geostar's pricing ranges from $1,000 to $3,000 monthly. "AI creates a situation where, for the first time ever, you can take action like an agency, but you can scale like software."

Why brand mentions without links now matter more than ever in the AI era

The implications of this shift extend far beyond technical optimizations. In the SEO era, a mention without a link was essentially worthless. In the age of AI, that calculus has reversed. AI systems can analyze vast amounts of text to understand sentiment and context, meaning that brand mentions on Reddit, in news articles, or across social media now directly influence how AI systems describe and recommend companies.

"If the New York Times mentions a company without linking to it, that company would actually benefit from that in an AI system," McConnell explained. "AI has the ability to do mass analysis of huge amounts of text, and it will understand the sentiment around that mention."

This has created new vulnerabilities. Research from the Indian Institute of Technology and Princeton found that AI systems show systematic bias toward third-party sources over brand-owned content. A company's own website might be less influential in shaping AI perceptions than what others say about it online.

The shifting landscape has also disrupted traditional metrics of success. Where SEO focused on rankings and click-through rates, GEO must account for what researchers call impression metrics — how prominently and positively a brand appears within AI-generated responses, even when users never click through to the source.

A growing market as SEO veterans and new players rush to dominate AI optimization

Geostar is hardly alone in recognizing this opportunity. Companies like Brandlight, Profound, and Goodie are all racing to help businesses navigate the new landscape. The SEO industry, worth approximately $80 billion globally, is scrambling to adapt, with established players like Semrush and Ahrefs rushing to add AI visibility tracking features.

But the company's founders, who previously built and sold a Y-Combinator-backed e-commerce optimization startup called Monto, believe their technical approach gives them an edge. Unlike competitors who largely provide dashboards and recommendations, Geostar's agents actively implement changes.

"Everyone is taking the same solutions that worked in the last era and just saying, 'We'll do this for AI instead,'" McConnell argued. "But when you think about what AI is truly capable of, it can actually do the work for you."

The stakes are particularly high for small and medium-sized businesses. While large corporations can afford to hire specialized consultants or build internal expertise, smaller companies risk becoming invisible in AI-mediated search. Geostar sees this as its primary market opportunity: nearly half of the 33.2 million small businesses in America invest in SEO. Among the roughly 418,000 law firms in the U.S., many spend between $2,500 and $5,000 monthly on search optimization to stay competitive in local markets.

From Kurdish village to PearX: The unlikely partnership building the future of search

For Tas, whose journey to Silicon Valley began in a tiny Kurdish village in Turkey with just 50 residents, the current moment represents both opportunity and responsibility. His mother's battle with cancer prevented him from finishing college, leading him to teach himself programming and eventually partner with McConnell — whom he worked with for an entire year before they ever met in person.

"We're not just copy and pasting a solution that was existing before," Tas emphasized. "This is something that's different and was uniquely possible today."

Looking forward, the transformation of search appears to be accelerating rather than stabilizing. Industry observers predict that search functionality will soon be embedded in productivity tools, wearables, and even augmented reality interfaces. Each new surface will likely have its own optimization requirements, further complicating the landscape.

"Soon, search will be in our eyes, in our ears," McConnell predicted. "When Siri breaks out of her prison, whatever that Jony Ive and OpenAI are building together will be like a multimodal search interface."

The technical challenges are matched by ethical ones. As businesses scramble to influence AI recommendations, questions arise about manipulation, fairness, and transparency. There's currently no oversight body or established best practices for GEO, creating what some critics describe as a Wild West environment.

As businesses grapple with these changes, one thing seems certain: the era of simply optimizing for Google is over. In its place is emerging a far more complex ecosystem where success requires understanding not just how machines index information, but how they think about it, synthesize it, and ultimately decide what to recommend to humans seeking answers.

For the millions of businesses whose survival depends on being discovered online, mastering this new paradigm isn't just an opportunity — it's an existential imperative. The question is no longer whether to optimize for AI search, but whether companies can adapt quickly enough to remain visible as the pace of change accelerates.

McConnell's parents at the Olympics were a preview of what's already becoming the norm. They didn't search for tour companies in Paris. They didn't scroll through results or click on links. They simply asked ChatGPT what to do — and the AI decided which businesses deserved their attention.

In the new economy of discovery, the businesses that win won't be the ones that rank highest. They'll be the ones AI chooses to recommend.

Why your SEO and PPC teams need shared standards to unlock mutual gains

Most marketing teams still treat SEO and PPC as budget rivals, not as complementary systems facing the same performance challenges.

In practice, these relationships fall into three types:

  • Parasitism: One benefits at the other’s expense.
  • Commensalism: One benefits while the other remains unaffected.
  • Mutualism: Both thrive through shared optimization and accountability.

Only mutualism creates sustainable performance gains – and it’s the shift marketing teams need to make next.

Mutualism: Solving joint problems

One glaring problem unites online marketers: we’re getting less traffic for the same budget.

Navigating the coming years requires more than the coexistence many teams mistake for collaboration. 

We need mutualism – shared technical standards that optimize for both organic visibility and paid performance. 

Shared accountability drives lower acquisition costs, faster market response, and sustainable gains that neither channel can achieve alone.

Here’s what it looks like in practice: 

  • Fostering a culture of experimentation and learning.
  • PPC tests messaging while SEO builds long-term content assets.
  • SEO uncovers search intent that PPC capitalizes on immediately.
  • Both channels learn from shared incrementality testing (guerrilla testing).
  • Cross-pollination of keyword intelligence and conversion data.
  • Combined technical standards (modified Core Web Vitals weights) align engineering with marketing goals.
  • Feedback loops accelerate market insights and reduce wasted spend.

Stabilizing performance during SEO volatility

During SEO penalties and core updates, PPC can maintain traffic until recovery. 

Core updates cause fluctuations in organic rankings and user behavior, which, in turn, can affect ad relevance and placements.

The reverse also applies: do you involve SEO when CPC prices surge?

PPC-only landing pages affect the Core Web Vitals of entire sites, influencing Google’s default assumptions for URLs without enough traffic to calculate individual scores. 

Paid pages are penalized for slow loading just as much as organic ones, impacting Quality Score and, ultimately, bids.

New-market launch considerations

PPC should answer a simple question: Are we getting the types of results we expect and want? 

Setting clear PPC baselines by market and country provides valuable, real-time keyword and conversion data that SEO teams can use to strengthen organic strategies. 

By analyzing which PPC clicks drive signups or demo requests, SEO teams can prioritize content and keyword targets with proven high intent. 

Sharing PPC insights enables organic search teams to make smarter decisions, improve rankings, and drive better-qualified traffic.

[Image: stepped framework – Global Baseline, Market Baseline, Country Baseline, and Click Yield – showing how to evaluate performance metrics at each level and track conversion outcomes from clicks, in support of a new-market launch PPC strategy that gathers insights to strengthen SEO.]
Source: SMX Advanced Berlin presentation by the author.

Dig deeper: The end of SEO-PPC silos: Building a unified search strategy for the AI era

Building unified performance measurement

One key question to ask is: how do we measure incrementality? 

We need to quantify the true, additional contribution PPC and SEO drive above the baseline. 

Guerrilla testing offers a lo-fi way to do this – turning campaigns on or off in specific markets to see whether organic conversions are affected. 

A more targeted test involves turning off branded campaigns. 

PPC ads on branded terms can capture conversions that would have occurred organically, making paid results appear stronger and SEO weaker. 

That’s exactly what Arturs Cavniss’ company did – and here are the results.

"Line and bar chart titled 'Cost and Purchases Over Time.' The chart shows daily advertising costs (bars) and purchases (line) from June to September 2025, with a highlighted section labeled 'Brand Spend Turned Off.' During this highlighted period, costs drop sharply while purchases remain steady, illustrating the impact of disabling branded ad spend."

For teams ready to operate in a more sophisticated way, several options are available. 

One worth exploring is Robyn, an open-source, AI/ML-powered marketing mix modeling (MMM) package.

Core Web Vitals

Core Web Vitals measure loading performance, interactivity, and visual stability – key factors influencing search visibility and overall user experience. 

Google’s Lighthouse tool weights these and related lab metrics as follows when calculating its performance score.

Metric                      Weight
First Contentful Paint      10%
Speed Index                 10%
Largest Contentful Paint    25%
Total Blocking Time         30%
Cumulative Layout Shift     25%

Core Web Vitals:

  • Affect PPC performance through CLS metrics.
  • Influence SEO rankings through page experience signals.
  • Give engineering teams clear benchmarks that align development efforts with marketing goals.

You can create a modified weighted system to reflect a combined SEO and PPC baseline. (Here’s a quick MVP spreadsheet to get started.)
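
For example, a minimal sketch of such a blended weighting might look like the following. The SEO weights mirror the Lighthouse table above, while the PPC weights and the 50/50 blend are assumptions you would tune against your own Quality Score and conversion data.

```python
# Minimal sketch of a combined SEO + PPC weighting for lab performance metrics.
LIGHTHOUSE_WEIGHTS = {   # SEO-oriented baseline (Lighthouse performance score)
    "first_contentful_paint": 0.10,
    "speed_index": 0.10,
    "largest_contentful_paint": 0.25,
    "total_blocking_time": 0.30,
    "cumulative_layout_shift": 0.25,
}

PPC_WEIGHTS = {          # hypothetical: emphasize fast first paint for ad clicks
    "first_contentful_paint": 0.25,
    "speed_index": 0.15,
    "largest_contentful_paint": 0.30,
    "total_blocking_time": 0.15,
    "cumulative_layout_shift": 0.15,
}

def combined_score(metric_scores: dict[str, float], seo_share: float = 0.5) -> float:
    """Blend the two weighting schemes; metric_scores are 0-100 per metric."""
    ppc_share = 1.0 - seo_share
    return sum(
        (seo_share * LIGHTHOUSE_WEIGHTS[m] + ppc_share * PPC_WEIGHTS[m]) * score
        for m, score in metric_scores.items()
    )

page = {
    "first_contentful_paint": 85,
    "speed_index": 78,
    "largest_contentful_paint": 62,
    "total_blocking_time": 70,
    "cumulative_layout_shift": 90,
}
print(f"Combined SEO/PPC performance score: {combined_score(page):.1f}/100")
```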

However, SEO-focused weightings don’t capture PPC’s Quality Score requirements or conversion optimization needs. 

Clicking an ad link can be slower than an organic one because Google’s ad network introduces extra processes – additional data handling and script execution – before the page loads. 

The hypothesis is that ad clicks may consistently load slower than organic ones due to these extra steps in the ad-serving process. 

This suggests that performance standards designed for organic results may not fully represent the experience of paid users.

Microsoft Ads Liaison Navah Hopkins notes that paid pages are penalized for slow loading just as severely as organic ones – a factor that directly affects Quality Score and bids.

[Slide: "Before SEOs grumble..." – PPC-only landing pages can drag down Core Web Vitals for the entire site, leading Google to make poor default assumptions about low-traffic URLs and ultimately hurting SEO performance.]
Source: SMX Advanced Berlin presentation by the author.

SEOs should also take responsibility for improving PPC-only landing pages, even without being asked. As Jono Alderson explains:

  • “All of your PPC-only landing pages are affecting the CWV of your whole site (and, Google’s default assumptions for all of your URLs that don’t have enough traffic to calculate), and thus f*ck with your SEO.”

PPC-only landing pages influence the Core Web Vitals of entire sites, shaping Google’s assumptions for low-traffic URLs.

INP gains importance with agentic AI

Agentic AI’s sensitivity to interaction delays has made Interaction to Next Paint (INP) a critical performance metric.

INP measures how quickly a website responds when a human or AI agent interacts with a page – clicking, scrolling, or filling out forms while completing tasks. 

When response times lag, agents fail tasks, abandon the site, and may turn to competitors. 

INP doesn’t appear in lab tools such as Chrome Lighthouse because synthetic tests don’t simulate real interactions; PageSpeed Insights only reports it from its field (real-user) data. 

Real user monitoring helps reveal what’s happening in practice, but it still can’t capture the full picture for AI-driven interactions.
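
If you do collect RUM data, summarizing it the way Google does – at the 75th percentile – is straightforward. A small sketch, assuming you already beacon per-interaction INP values in milliseconds (the sample values here are invented):

```python
# Hedged sketch: summarizing INP from real-user-monitoring (RUM) events.
def percentile(values: list[float], pct: float) -> float:
    """Nearest-rank percentile, good enough for a quick RUM summary."""
    ordered = sorted(values)
    rank = max(int(round(pct / 100 * len(ordered))) - 1, 0)
    return ordered[rank]

# Per-interaction INP measurements (ms) collected by your RUM beacon.
inp_events_ms = [48, 72, 90, 110, 135, 160, 210, 240, 310, 520]

p75 = percentile(inp_events_ms, 75)
# Google's published thresholds: good <= 200 ms, poor > 500 ms.
verdict = "good" if p75 <= 200 else "needs improvement" if p75 <= 500 else "poor"
print(f"p75 INP: {p75} ms ({verdict})")
```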

Bringing quality scoring to SEO

PPC practitioners have long relied on Quality Score – a 1-10 rating based on expected CTR, ad relevance, and landing page experience – to optimize landing pages and reduce costs. 

SEO lacks an equivalent unified metric, leaving teams to juggle separate signals like Core Web Vitals, keyword relevance, and user engagement without a clear prioritization framework.

You can create a company-wide quality score for pages to incentivize optimization and align teams while maintaining channel-specific goals. 

This score can account for page type, with sub-scores for trial, demo, or usage pages – adaptable to the content that drives the most business value.

The system should account for overlapping metrics across subscores yet remain simple enough for all teams – SEO, PPC, engineering, and product – to understand and act on. 

A unified scoring model gives everyone a common language and turns distributed accountability into daily practice. 

When both channels share quality standards, teams can prioritize fixes that strengthen organic rankings and paid performance simultaneously.
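
As one possible starting point, here is an illustrative sketch of a page-type-aware quality score on a 1-10 scale. The signal names, weights, and page types are assumptions to adapt to your own business, not an established standard.

```python
# Illustrative sketch of a company-wide page quality score with
# page-type sub-scores. All weights and signals are assumptions.
PAGE_TYPE_WEIGHTS = {
    # each page type emphasizes the signals that matter most for it
    "demo":  {"core_web_vitals": 0.30, "keyword_relevance": 0.20,
              "engagement": 0.20, "conversion_rate": 0.30},
    "trial": {"core_web_vitals": 0.25, "keyword_relevance": 0.25,
              "engagement": 0.20, "conversion_rate": 0.30},
    "blog":  {"core_web_vitals": 0.20, "keyword_relevance": 0.40,
              "engagement": 0.30, "conversion_rate": 0.10},
}

def page_quality_score(page_type: str, signals: dict[str, float]) -> float:
    """Return a 1-10 score (PPC Quality Score style) from 0-1 signal values."""
    weights = PAGE_TYPE_WEIGHTS[page_type]
    weighted = sum(weights[name] * signals[name] for name in weights)
    return round(1 + weighted * 9, 1)   # map 0-1 onto a 1-10 scale

demo_page = {"core_web_vitals": 0.8, "keyword_relevance": 0.7,
             "engagement": 0.6, "conversion_rate": 0.4}
print(f"Demo page quality score: {page_quality_score('demo', demo_page)}/10")
```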

Giving a comprehensive view across channels

Display advertising and SEO rarely share performance metrics, yet both pursue the same goal – converting impressions into engaged users. 

Clicks per thousand impressions (CPTI) measures the number of clicks generated per 1,000 impressions, creating a shared language for evaluating content effectiveness across paid display and organic search.

For display teams, CPTI reveals which creative and targeting combinations drive engagement beyond vanity metrics like reach. 

For SEO teams, applying CPTI to search impressions (via Google Search Console) shows which pages and queries convert visibility into traffic – exposing content that ranks well but fails to earn clicks.

This shared metric allows teams to compare efficiency directly: if a blog post drives 50 clicks per 1,000 organic impressions while a display campaign with similar visibility generates only 15 clicks, the performance gap warrants investigation.
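
The calculation itself is trivial, which is part of its appeal. A quick sketch using the 50-versus-15 example above (the numbers are illustrative; in practice the inputs come from Google Search Console and your display platform):

```python
# Sketch of CPTI (clicks per 1,000 impressions) as a shared metric
# across organic search and display. Inputs are illustrative.
def cpti(clicks: int, impressions: int) -> float:
    return clicks / impressions * 1000 if impressions else 0.0

channels = {
    "blog post (organic, via Search Console)": (50, 1_000),
    "display campaign (similar visibility)":   (15, 1_000),
}

for name, (clicks, impressions) in channels.items():
    print(f"{name}: {cpti(clicks, impressions):.1f} clicks per 1,000 impressions")
```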

[Slide: "Reverse CPM" – how to calculate when content has "paid for itself" and reached ROI, e.g., 1 million monthly impressions should generate 1,000+ clicks. Concept inspired by Navah Hopkins.]
Source: SMX Advanced Berlin presentation by the author.

Reverse CPM offers another useful lens. It measures how long content takes to “pay for itself” – the point where it reaches ROI. 

For example, if an article earns 1 million impressions in a month, it should deliver roughly 1,000 clicks. 

As generative AI continues to reshape traffic patterns, this metric will need refinement.
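
One way to operationalize reverse CPM is as a simple payback calculation. In this sketch, the 1,000-clicks-per-million-impressions benchmark comes from the example above, while the production cost and value per click are invented assumptions you would replace with your own figures.

```python
# Hedged sketch of a "reverse CPM" payback calculation: how many months
# until a piece of content has "paid for itself".
def months_to_roi(production_cost: float, monthly_impressions: int,
                  value_per_click: float, click_yield_per_1k: float = 1.0) -> float:
    monthly_clicks = monthly_impressions / 1000 * click_yield_per_1k
    monthly_value = monthly_clicks * value_per_click
    return production_cost / monthly_value if monthly_value else float("inf")

# An article earning 1M impressions/month at the 1-click-per-1,000 benchmark,
# with a $3,000 production cost and an assumed $1.50 value per click:
print(f"{months_to_roi(3000, 1_000_000, 1.5):.1f} months to ROI")
```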

Feedback loops

The most valuable insights emerge when SEO and PPC teams share operational intelligence rather than compete for credit. 

PPC provides quick keyword performance data to respond to market trends faster, while SEO uncovers emerging search intent that PPC can immediately act on. 

Together, these feedback loops create compound advantages.

SEO signals PPC should act on:

  • Google is testing a feature that impacts SEO rankings and traffic – PPC can maintain visibility during organic volatility.
  • SEO keyword research uncovers search intent, emerging keywords, seasonal patterns, and regional differences in query popularity.
  • Long-tail insights reveal shifting search intents after core updates, signaling format and content opportunities.

PPC signals SEO should act on: 

  • Some PPC keywords are effectively “dead.” They’ll never convert and are better handled by SEO.
  • PPC competitors bidding on brand keywords expose gaps in brand protection strategy.
  • PPC data highlights which product messaging, features, or offers resonate most with users, informing content priorities.

[Mind map: "Dead Keywords in PPC" – why keywords go dead (SERP feature dominance, zero-click searches, other visibility losses), the impact on campaigns (wasted budget, lower expected CTR, Quality Score damage), and deeper effects (long-term account health, shifting user intent, reduced competitiveness).]

When both channels share intelligence, insights extend beyond marketing performance into product and business strategy.

  • Product managers exploring new features benefit from unified search data across both channels.
  • Joining Merchant Center and Google Search Console data in BigQuery provides a strong foundation for ecommerce attribution (see the query sketch below).
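
As a rough sketch of that join using the BigQuery Python client – every project, dataset, table, and column name here is a hypothetical placeholder, so verify the actual Merchant Center and Search Console export schemas in your own project before reusing any of them:

```python
# Rough sketch (hypothetical identifiers) of joining Merchant Center
# product data with Search Console performance in BigQuery.
from google.cloud import bigquery  # pip install google-cloud-bigquery

QUERY = """
SELECT
  p.offer_id,
  p.title,
  SUM(g.impressions) AS organic_impressions,
  SUM(g.clicks)      AS organic_clicks
FROM `my_project.merchant_center.products` AS p                 -- hypothetical export table
JOIN `my_project.searchconsole.searchdata_url_impression` AS g  -- hypothetical GSC export table
  ON g.url = p.link                                             -- join on the product URL
GROUP BY p.offer_id, p.title
ORDER BY organic_clicks DESC
LIMIT 100
"""

client = bigquery.Client()
for row in client.query(QUERY).result():
    print(row.offer_id, row.title, row.organic_impressions, row.organic_clicks)
```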

These feedback loops don’t require expensive tools – only an organizational commitment to regular cross-channel reviews in which teams share what’s working, what’s failing, and what deserves coordinated testing.

Optimizing the system, not the channel

Treat technical performance as shared infrastructure, not channel-specific optimization. 

Teams that implement unified Core Web Vitals standards, cross-channel attribution models, and distributed accountability systems will capture opportunities that siloed operations miss. 

As agentic AI adoption accelerates and digital marketing grows more complex, symbiotic SEO-PPC operations become a competitive advantage rather than a luxury.

The new SEO sales tactic: Selling the AI dream

Something’s shifting in how SEO services are being marketed, and if you’ve been shopping for help with search lately, you’ve probably noticed it.

AI search demand is real – but so is the spin

Over the past few months, “AI SEO” has emerged as a distinct service offering. 

Browse service provider websites, scroll through Fiverr, or sit through sales presentations, and you’ll see it positioned as something fundamentally new and separate from traditional SEO. 

Some are packaging it as “GEO” (generative engine optimization) or “AEO” (answer engine optimization), with separate pricing, distinct deliverables, and the implication that you need both this and traditional SEO to compete.

The pitch goes like this: 

  • “Traditional SEO handles Google and Bing. But now you need AI SEO for ChatGPT, Perplexity, Claude, and other AI search platforms. They work completely differently and require specialized optimization.”

The data helps explain why the industry is moving so quickly.

AI-sourced traffic jumped 527% year-over-year from early 2024 to early 2025. 

Service providers are responding to genuine market demand for AI search optimization.

But here’s what I’ve observed after evaluating what these AI SEO services actually deliver. 

Many of these so-called new tactics are the same SEO fundamentals – just repackaged under a different name.

As a marketer responsible for budget and results, understanding this distinction matters. 

It affects how you allocate resources, evaluate agency partners, and structure your search strategy. 

Let’s dig into what’s really happening so you can make smarter decisions about where to invest.

The AI SEO pitch: What you’re hearing in sales calls

The typical AI SEO sales deck has become pretty standardized. 

  • First comes the narrative about how search is fragmenting across platforms. 
  • Then, the impressive dashboard showing AI visibility metrics. 
  • Finally, the recommendation to treat AI optimization as a separate workstream, often with separate pricing.

Here are the most common claims I’m hearing.

‘AI search is fundamentally different and requires specialized optimization’ 

They’ll show you how ChatGPT, Perplexity, and Claude are changing search behavior, and they’re not wrong about that. 

Research shows that 82% of consumers agree that “AI-powered search is more helpful than traditional search engines,” signaling how search behavior is evolving.

‘You need to optimize for how AI platforms chunk and retrieve content’ 

The pitch emphasizes passage-level optimization, structured data, and Q&A formatting specifically for AI retrieval. 

They’ll discuss how AI values mentions and citations differently than backlinks and how entity recognition matters more than keywords.

‘Only 22% of marketers are monitoring AI visibility; you need to act now’

This creates urgency around a supposedly new practice that requires immediate investment.

The urgency is real. Only 22% of marketers have set up LLM brand visibility monitoring, but the question is whether this requires a separate “AI SEO” service or an expansion of your existing search strategy.

Understanding the rebranding trend

To be clear, the AI capabilities are real. What’s new is the positioning – familiar SEO practices rebranded to sound more revolutionary than they are.

When you examine what’s actually being recommended (passage-level content structure, semantic clarity, Q&A formatting, earning citations and mentions), you will find that these practices have been core to SEO for years. 

Google introduced passage ranking in 2020 and featured snippets back in 2014.

Research from Fractl, Search Engine Land, and MFour found that generative engine optimization “is based on similar value systems that advanced SEOs, content marketers, and digital PR teams are already experts in.”

Let me show you what I mean.

What you’re hearing: “AI-powered semantic analysis and predictive keyword intelligence.”

  • What’s actually happening: Keyword research using advanced tools to analyze search volume, competition, user intent, and content opportunities. The strategic fundamentals (understanding what your audience is searching for and why) haven’t changed.

What you’re hearing: “Machine learning content optimization that aligns with AI algorithms.”

  • What’s actually happening: Analyzing top-ranking content, understanding user intent, identifying content gaps, and creating comprehensive content. AI tools can accelerate analysis, which is valuable. But the strategic work (determining what topics matter for your business, how to position your expertise, and what content will drive conversions) still requires human insight.

What you’re hearing: “Entity-based authority building for AI platforms.”

  • What’s actually happening: Building quality mentions and citations, earning coverage from reputable sources, and establishing expertise in your industry. Authority building is inherently relationship-driven and time-dependent. No AI tool shortcuts to becoming a recognized expert in your space.

Dig deeper: AI search is booming, but SEO is still not dead

Where real differences exist (and why fundamentals still matter)

I want to be fair here. There’s genuine debate in the SEO community about whether optimizing for AI-powered search represents a distinct discipline or an evolution of existing practices.

The differences are real.

  • AI search handles queries differently from traditional search. 
  • Users write longer, conversational prompts rather than short keywords.
  • AI platforms use query fan-out to match multiple sub-queries. 
  • Optimization happens at the passage or chunk level rather than the page level. 
  • Authority signals shift from links and engagement to mentions and citations.

These differences affect execution, but the strategic foundation remains consistent.

You still need to:

  • Understand what users are trying to accomplish.
  • Create content demonstrating genuine expertise.
  • Build authority and credibility.
  • Ensure content is technically accessible.
  • Optimize for relevance and user intent.

And here’s something that reinforces the overlap.

SEO professionals recently discovered that ChatGPT’s Atlas browser directly uses Google search results. 

Even AI-powered search platforms are relying on traditional search infrastructure.

So yes, there are platform-specific tactics that matter. 

The question for you as a marketer isn’t whether differences exist (they do). 

The real question is whether those differences justify treating this as an entirely separate service with its own strategy and budget.

Or are they simply tactical adaptations of the same fundamental approach?

Dig deeper: GEO and SEO: How to invest your time and efforts wisely

The risk of chasing platform-specific tactics

The “separate AI SEO service” approach comes with a real risk.

It can shift focus toward short-term, platform-specific tactics at the expense of long-term fundamentals.

I’m seeing recommendations that feel remarkably similar to the black hat SEO tactics we saw a decade ago.

These tactics might work today, but they’re playing a dangerous game.

Dig deeper: Black hat GEO is real – Here’s why you should pay attention

AI platforms are still in their infancy. Their spam detection systems aren’t yet as mature as Google’s or Bing’s, but that will change, likely faster than many expect.

AI platforms like Perplexity are building their own search indexes (hundreds of billions of documents). 

They’ll need to develop the same core systems traditional search engines have: 

  • Site quality scoring.
  • Authority evaluation.
  • Anti-spam measures. 

They’re supposedly buying link data from third-party providers, recognizing that understanding authority requires signals beyond just content analysis.

The pattern is predictable

We’ve seen this with Google. 

In the early days, keyword stuffing and link schemes worked great.

Then, Google developed Panda and Penguin updates that devastated sites relying on those tactics. 

Overnight, sites lost 50-90% of their traffic.

The same thing will likely happen with AI platforms. 

Sites gaming visibility now with spammy tactics will face serious problems when these platforms implement stronger quality and spam detection. 

As one SEO veteran put it, “It works until it doesn’t.”

This is why fundamentals matter more than ever

Building around platform-specific tactics is like building on sand. 

Focus instead on fundamentals – creating valuable content, earning authority, demonstrating expertise, and optimizing for intent – and you’ll have something sustainable across platforms.

Where AI genuinely helps

I’m not anti-AI. Used well, it meaningfully improves SEO workflows and results.

AI excels at large-scale research and ideation – analyzing competitor content, spotting gaps, and mapping topic clusters in minutes.

For one client, it surfaced 73 subtopics we hadn’t fully considered. 

But human expertise was still essential to align those ideas with business goals and strategic priorities.

AI also transforms data analysis and workflow automation – from reporting and rank tracking to technical monitoring – freeing more time for strategy.

AI clearly helps. The real question is whether these AI offerings bring truly new strategies or familiar ones powered by better tools.

What to watch for when evaluating services

After working with clients to evaluate various service models, I’ve seen consistent patterns in proposals that overpromise and underdeliver.

  • They lead with technology, not strategy: If the conversation jumps immediately to tools and dashboards rather than starting with your business goals, that suggests a tools-first rather than strategy-first approach.
  • Vague explanations of their approach: Watch for responses about “proprietary algorithms” or “advanced machine learning” without concrete explanations of what specific problems this solves.
  • Focus on vanity metrics: “We generated 500 AI citations!” sounds impressive but doesn’t answer: Did qualified traffic increase? Did conversion rates improve? How did search contribute to revenue?
  • Case studies that focus on visibility, not business results: They might have increased AI mentions or improved rankings, but did it drive revenue growth? Did it increase qualified leads?

Questions to ask instead

When evaluating any service provider, ask:

  • How would you approach our business? Walk me through your strategic process. The best approaches start by understanding your business, not showcasing tools. If they jump immediately to AI tools or technical tactics without understanding your business context, that’s a red flag.
  • How do you determine content strategy and prioritization? Look for answers that balance data insights with business context and audience understanding, not just what AI tools suggest would perform well.
  • What specific results have you achieved for similar businesses? Push for concrete business metrics (revenue growth, lead generation, conversion improvements), not just traffic or ranking increases.
  • How do you integrate optimization across traditional search and AI platforms? This reveals whether they view these as separate disciplines requiring separate work or as interconnected parts of a unified search strategy.

What actually drives long-term success

After working in SEO for 20 years, through multiple algorithm updates and trend cycles, I keep coming back to the same fundamentals:

  • Deep audience understanding drives every strategic decision.
  • Quality and expertise still win (search algorithms are increasingly sophisticated at evaluating content quality).
  • Authority building takes time and authenticity (you can’t automate trust and credibility).
  • Business alignment drives meaningful results (rankings and AI citations are means to an end: revenue growth, customer acquisition, or whatever your primary business goals are).

Dig deeper: Thriving in AI search starts with SEO fundamentals

What sustainable SEO looks like in the AI era

AI is genuinely changing how we work in search marketing – and that’s mostly positive. 

The tools make us more efficient and enable analysis that wasn’t previously practical.

But AI only enhances good strategy. It doesn’t replace it. 

Fundamentals still matter – along with audience understanding, quality, and expertise.

Search behavior is fragmenting across Google, ChatGPT, Perplexity, and social platforms, but the principles that drive visibility and trust remain consistent.

Real advantage doesn’t come from the newest tools or the flashiest “GEO” tactics. 

It comes from a clear strategy, deep market understanding, strong execution of fundamentals, and smart use of technology to strengthen human expertise.

Don’t get distracted by hype or dismiss innovation. The balance lies in thoughtful AI integration within a solid strategic framework focused on business goals.

That’s what delivers sustainable results – whether people find you through Google, ChatGPT, or whatever comes next.

Discover Practical AI Tactics for GRC — Join the Free Expert Webinar

Artificial Intelligence (AI) is rapidly transforming Governance, Risk, and Compliance (GRC). It's no longer a future concept—it's here, and it's already reshaping how teams operate. AI's capabilities are profound: it's speeding up audits, flagging critical risks faster, and drastically cutting down on time-consuming manual work. This leads to greater efficiency, higher accuracy, and a more

Preparing for the Digital Battlefield of 2026: Ghost Identities, Poisoned Accounts, & AI Agent Havoc

BeyondTrust’s annual cybersecurity predictions point to a year where old defenses will fail quietly, and new attack vectors will surge. Introduction The next major breach won’t be a phished password. It will be the result of a massive, unmanaged identity debt. This debt takes many forms: it’s the “ghost” identity from a 2015 breach lurking in your IAM, the privilege sprawl from thousands of new

Russian Hackers Target Ukrainian Organizations Using Stealthy Living-Off-the-Land Tactics

Organizations in Ukraine have been targeted by threat actors of Russian origin with an aim to siphon sensitive data and maintain persistent access to compromised networks. The activity, according to a new report from the Symantec and Carbon Black Threat Hunter Team, targeted a large business services organization for two months and a local government entity in the country for a week. The attacks

10 npm Packages Caught Stealing Developer Credentials on Windows, macOS, and Linux

Cybersecurity researchers have discovered a set of 10 malicious npm packages that are designed to deliver an information stealer targeting Windows, Linux, and macOS systems. "The malware uses four layers of obfuscation to hide its payload, displays a fake CAPTCHA to appear legitimate, fingerprints victims by IP address, and downloads a 24MB PyInstaller-packaged information stealer that harvests

Alphabet’s AI Reckoning: Cloud Momentum vs. Search Durability

The burgeoning influence of artificial intelligence presents a dual narrative for tech giants, particularly Alphabet, as it navigates both profound opportunities and existential threats to its established revenue streams. As CNBC’s MacKenzie Sigalos reported on “Worldwide Exchange,” ahead of Alphabet’s recent earnings call, the company finds itself at a critical juncture, balancing the burgeoning momentum […]

The post Alphabet’s AI Reckoning: Cloud Momentum vs. Search Durability appeared first on StartupHub.ai.

TestSprite raises $6.7M to fix AI generated code testing

TestSprite raised $6.7 million to address the critical bottleneck of AI generated code testing, enabling developers to validate AI-written code at unprecedented speeds.

The post TestSprite raises $6.7M to fix AI generated code testing appeared first on StartupHub.ai.

Amazon’s AI Power Play: Inside the $11 Billion Indiana Data Center

Amazon’s latest $11 billion AI data center in Indiana signals a profound shift in the foundational infrastructure powering the artificial intelligence revolution, an investment of unprecedented scale that underscores the intense competition for AI compute. In less than a year, Amazon transformed vast Indiana cornfields into its largest AI data center yet, an astonishing feat […]

The post Amazon’s AI Power Play: Inside the $11 Billion Indiana Data Center appeared first on StartupHub.ai.

AI Browser Security: The Peril of the Premature Launch

“The rush to get these things to market has not allowed them to be secured.” This stark assessment from Dave McGinnis, Global Partner for Cyber Threat Management Offering Group at IBM, encapsulates the central tension explored in a recent episode of IBM’s Security Intelligence podcast. Host Matt Kosinski, alongside McGinnis and fellow panelists Suja Viswesan […]

The post AI Browser Security: The Peril of the Premature Launch appeared first on StartupHub.ai.

Radical’s full-size prototype for a stratospheric drone makes first flight

A prototype for Radical’s Evenstar stratospheric solar-powered airplane flies over its Oregon test range. (Radical Photo)

Seattle-based Radical says it has put a full-size prototype for a solar-powered drone through its first flight, marking one low-altitude step in the startup’s campaign to send robo-planes into the stratosphere for long-duration military and commercial missions.

“It’s a 120-foot-wingspan aircraft that only weighs 240 pounds,” Radical CEO James Thomas told GeekWire. “We’re talking about something that has a wingspan just a bit bigger than a Boeing 737, but it only weighs a little bit more than a person. So, it’s a pretty extreme piece of engineering, and we’re really proud of what our team has achieved so far.”

Last month’s flight test was conducted at the Tillamook UAS Test Range in Oregon, which is one of the sites designated by the Federal Aviation Administration for testing uncrewed aerial systems. Thomas declined to delve into the details about the flight’s duration or maximum altitude, other than to say that it was a low-altitude flight.

“We take off from the top of a car, and takeoff speeds are very low, so it flies just over 15 miles an hour on the ground or at low altitudes,” he said. (Thomas later added that the car was a Subaru, a choice he called “a Pacific Northwest move, I guess.”)

The prototype ran on battery power alone, but future flights will make use of solar arrays mounted on the plane’s wings to keep it in the air at altitudes as high as 65,000 feet for months at a time. For last month’s test, engineers added ballast to the prototype to match the weight of the solar panels and batteries required for stratospheric flight. Thomas said he expects high-altitude tests to begin next year.

  • Radical CEO James Thomas and teammates monitor the first flight test of a full-size Evenstar prototype. (Radical Photo)
  • The prototype is mounted on top of a car for takeoff. (Radical Photo)
  • Radical’s prototype rises from the top of its launch car. (Radical Photo)
  • The Evenstar prototype takes to the air. (Radical Photo)
  • The prototype has a wingspan of 120 feet. (Radical Photo)

Thomas and his fellow co-founder, chief technology officer Cyriel Notteboom, are veterans of Prime Air, Amazon’s effort to field a fleet of delivery drones. They left Amazon in mid-2022 to launch Radical and have since raised more than $4.5 million in funding. September’s test of a full-size drone follows up on the 24-hour-plus flight of a 13-pound subscale prototype in 2023.

The company’s manufacturing operation is based in Seattle’s Ballard neighborhood. There are currently six people on the team, plus a new hire, Thomas said. “We’re still lean,” he said. “To make this airplane work, it has to be really efficient, right? Really efficient electronics and aerodynamics. And you also need a really efficient team.”

Thomas said Radical has attracted interest from potential customers, but he shied away from discussing details. “We’re working with groups in the government and also commercially,” he said. “Obviously there are applications at the end of this that span everything from imagery through telecommunications and weather forecasting. There are a lot of people really interested in the technology, and the thing that stops us from serving those customers is not having a product up in the sky. So, that’s what we’re working through.”

Radical’s solar-powered airplane, known as Evenstar, is just one example of a class of aircraft known as high-altitude platform stations, or HAPS. Thomas and his teammates use a different term to refer to Evenstar. They call it a StratoSat, because it’s designed to take on many of the tasks typically assigned to satellites — but without the costs and the hassles associated with launching a spacecraft.

Potential applications include doing surveillance from a vantage point that’s difficult to attack, providing telecommunication links in areas where connectivity is constrained, monitoring weather patterns and conducting atmospheric research.

“We have customers who are really excited about the way that this can improve how we understand Earth’s weather systems and climate,” Thomas said. “That’s an application that we’re really excited to get into.”

Evenstar will carry payloads weighing up to about 33 pounds (15 kilograms). “That was based on analysis about major use cases,” Thomas explained. “That payload is enough to carry high-bandwidth, direct-to-device radio communications, or to carry ultra-high-resolution imaging equipment.”

Radical isn’t the only company working on solar-powered aircraft built for long-duration flights in the stratosphere. Other entrants in the market include AeroVironment, SoftBank, BAE Systems, Swift Engineering, Kea Aerospace, Korea Aerospace Industries and NewSpace Research & Technologies. Airbus’ solar-powered Zephyr set the record for long-duration stratospheric flight in 2022 with a 64-day test mission that ended in a crash.

Among those who tried but failed to field stratospheric solar drones are Alphabet, which closed down Titan Aerospace in 2016; and Facebook, which abandoned Project Aquila in 2018.

Thomas said the outlook for high-flying solar planes has brightened in the past decade.

“The key supporting technologies have matured enormously,” he said. “Commercial battery energy density has doubled in that 10-year time period. Solar cells are 10 times cheaper than they were just 10 years ago. And then you have advances in compute and AI, and all of these things feed into the situation we have now, where it’s actually possible to make the models close — whereas when we run the 10-year-old numbers, we can’t close the models.”

The way Thomas sees it, the concept behind Radical isn’t all that radical anymore.

“Not only do our models say this will work, but we have flight data that agrees with our models, and says this is a technology that can serve its purpose and unlock the potential of persistent infrastructure in the sky,” he said. “I can see why other people are pursuing it. It’s not a new idea. It’s one that people have wanted to crack for a long time, and we’re at this critical inflection point where it’s finally possible.”

(PR) AAEON Releases UP Xtreme ARL Edge

AAEON's UP brand, predominantly known for its industrial-grade developer board series, today announced the release of the UP Xtreme ARL Edge, its first Mini PC to feature the new Intel Core Ultra 200H Series platform (formerly Arrow Lake). Primarily designed to bring AI functionality to applications such as industrial robots and AMRs, the UP Xtreme ARL Edge boasts a ruggedized enclosure with fanless operation, capable of operating in temperatures as wide as -20°C to 60°C. Moreover, the Mini PC is equipped with a 9 V to 36 V DC power input range, while also offering impressive resistance to shock and vibration.

Despite its fanless operation, the UP Xtreme ARL Edge offers a choice of Intel Core Ultra (Series 2) processors, with default models offering the Intel Core Ultra 5 processor 225H, Intel Core Ultra 7 processor 255H, or the Intel Core Ultra 7 processor 265H, with the latter capable of utilizing the platform's enhanced integrated CPU, GPU, and NPU to provide up to 97 TOPS of AI performance.

(PR) Electronic Arts Reports Q2 FY26 Results

Electronic Arts Inc. today announced preliminary financial results for its second fiscal quarter ended September 30, 2025. "Across our broad portfolio - from EA SPORTS to Battlefield, The Sims, and skate. - our teams continue to create high-quality experiences that connect and inspire players around the world," said Andrew Wilson, CEO of Electronic Arts. "The creativity, passion, and innovation of our teams are at the heart of everything we do."

Selected Operating Highlights and Metrics
  • Net bookings for the quarter totaled $1.818 billion, down 13% year-over-year, driven largely by the extraordinary release of College Football 25 in the prior year period.
  • EA SPORTS Madden NFL 26 delivered net bookings growth year-over-year in the quarter, with players returning to the title.
  • Apex Legends returned to net bookings growth on a year-over-year basis in Q2, growing double digits, as the team continues to deliver new experiences that drove deeper engagement.
  • EA SPORTS FC 26 HD net bookings were up mid single digits year-over-year versus EA SPORTS FC 25 HD net bookings in the quarter, after adjusting for differences in deluxe edition content timing.
  • The successful launches of skate. and Battlefield 6 underscore the strength of EA's long-term strategy to build community-driven experiences centered on creativity, connection, and long-term growth.

(PR) Logitech Announces Q2 Fiscal Year 2026 Results

Logitech International today announced financial results for the second quarter of Fiscal Year 2026.
  • Sales were $1.19 billion, up 6 percent in US dollars and 4 percent in constant currency compared to Q2 of the prior year.
  • GAAP gross margin was 43.4 percent, down 20 basis points compared to Q2 of the prior year. Non-GAAP gross margin was 43.8 percent, down 30 basis points compared to Q2 of the prior year.
  • GAAP operating income was $191 million, up 19 percent compared to Q2 of the prior year. Non-GAAP operating income was $230 million, up 19 percent compared to Q2 of the prior year.
  • GAAP earnings per share (EPS) was $1.15, up 21 percent compared to Q2 of the prior year. Non-GAAP EPS was $1.45, up 21 percent compared to Q2 of the prior year.
  • Cash flow from operations was $229 million. The quarter-ending cash balance was $1.4 billion.
  • The Company returned $340 million to shareholders through its annual dividend payment and share repurchases.

Turtle Beach Launches PC Edition of Victrix Pro BFG Reloaded Modular Controller

Leading gaming accessories maker Turtle Beach Corporation and its Victrix brand, today announced the launch of the new Victrix Pro BFG Reloaded Wireless Modular Controller - PC Edition. The Victrix Pro BFG Controllers have long been coveted by competitive esports gamers the world over, and the refined Victrix Pro BFG Reloaded Wireless Modular Controller - PC Edition adds powerful features. These latest upgrades include magnetic, anti-drift Hall Effect thumbsticks and triggers, a new touch-sensitive trackpad on the front of the controller, back buttons that can be mapped to any controller input as well as to keyboard and mouse inputs, and a 1 kHz polling rate for even faster input that's available when using the controller in wired mode.

The Victrix Pro BFG Reloaded Wireless Modular Controller - PC Edition is built for serious PC gamers and is available in North America as a Best Buy retail exclusive and directly from Turtle Beach at www.turtlebeach.com. Globally, the controller is also available on turtlebeach.com and participating retailers for $189.99|£159.99|€179.99 MSRP.

(PR) SK hynix Announces 3Q25 Financial Results

SK hynix Inc. announced today that it has recorded 24.4489 trillion won in revenues, 11.3834 trillion won in operating profit (with an operating margin of 47%), and 12.5975 trillion won in net profit (with a net margin of 52%) in the third quarter. The company achieved its highest-ever quarterly performance, driven by the full-scale rise in prices of DRAM and NAND, as well as the increasing shipments of high-performance products for AI servers. In particular, operating profit exceeded 10 trillion won for the first time in the company's history.

As demand across the memory segment has soared due to customers' expanding investments in AI infrastructure, SK hynix once again surpassed the record-high performance of the previous quarter due to increased sales of high value-added products such as 12-high HBM3E and DDR5 for servers. Driven by surging demand for AI servers, shipments of high-capacity DDR5s of 128 GB or more have more than doubled from the previous quarter. In NAND, the portion of AI server eSSD, which commands a price premium, expanded significantly as well. Building on this strong performance, the company's cash and cash equivalents at the end of the third quarter increased by 10.9 trillion won from the previous quarter, reaching 27.9 trillion won. Meanwhile, interest bearing debt stood at 24.1 trillion won, enabling the company to successfully transition to a net cash position of 3.8 trillion won.

Intel Nova Lake LGA1954 Socket Keeps Cooler Compatibility with LGA1851: Thermaltake

A leaked product installation guide by Thermaltake confirms that Intel's upcoming desktop socket for its Core Ultra 400 series "Nova Lake-S" processors, the LGA1954, will retain cooler compatibility with the current LGA1851 and previous LGA1700. This was rumored as far back as May 2025, but now has confirmation from a major CPU cooler manufacturer. This means you should be able to reuse your CPU coolers purchased as far back as 2021 for your 12th Gen Core "Alder Lake" build, with your future "Nova Lake" build. The LGA1954 socket and package is expected to have similar physical dimensions to current Intel desktop chips, with the company increasing socket pin counts by reducing the size of the contact points, and making the island—the central region of the land grid that has some SMDs—smaller.

(PR) QNAP Launches All-Flash NASbook TBS-h574TX with Pre-installed Enterprise E1.S SSDs

QNAP Systems, Inc., a leading computing, and storage solutions innovator, today announced new models of the acclaimed TBS-h574TX all-flash NASbook, which come pre-installed with enterprise-grade E1.S SSDs. Available with two raw capacities, 9.6 TB or 19.2 TB, the new models are purpose-built for high-throughput post-production workflows including video editing, visual effects (VFX), and animation. With support for hot-swappable E1.S SSDs, the TBS-h574TX enables uninterrupted ingest-to-delivery operations—empowering on-location shoots, small-scale video production teams, SOHO users, and mobile media professionals to collaborate seamlessly and maintain peak productivity.

"Speed and reliability are critical in media production. By integrating QNAP-validated E1.S SSDs into the TBS-h574TX, users no longer need to worry about drive compatibility. They can power on, configure, and get straight to editing." said Andy Chuang, Product Manager of QNAP, adding "This NASbook combines portable design, all-flash performance, and hot-swappable SSDs to offer a uniquely compact, powerful, and zero-downtime experience—so teams can focus on creativity anytime, anywhere with peace of mind."

(PR) Seagate Technology Reports Fiscal First Quarter 2026 Financial Results

Seagate Technology Holdings plc (NASDAQ: STX), a leading innovator of mass-capacity data storage, today reported financial results for its fiscal first quarter ended October 3, 2025. "Seagate delivered strong September quarter results, with revenue growth of 21% year-over-year and non-GAAP EPS exceeding the high end of our guided range. Our performance underscores the team's strong execution and robust customer demand for our high-capacity storage products," said Dave Mosley, Seagate's chair and chief executive officer.

"With clear visibility into sustained demand strength, we are ramping shipments of our areal density-leading Mozaic HAMR products, which are now qualified with five of the world's largest cloud customers. These products address customers' performance, durability and TCO needs at scale to continue supporting demand for existing use cases such as social media video platforms as well as growth driven by new AI applications. AI is transforming how content is being consumed and generated, increasing the value of data and storage and Seagate is well positioned for continued profitable growth," Mosley concluded.

(PR) Durabook Introduces Next-Generation R10 Copilot+ PC Rugged Tablet

Durabook, the global rugged mobile solutions brand owned by Twinhead International Corporation, today announced the launch of its next-generation AI-powered fully rugged R10 tablet. Equipped with high-performance Intel Core Ultra 200V series processors, the 10" device is one of the first Copilot+ PC rugged tablets on the market. Redefining versatility in the tablet world, the R10 can be paired with a detachable backlit keyboard, seamlessly transforming it into a 2-in-1 rugged laptop PC. This adaptable design delivers the ideal balance of performance, reliability, and mobility, empowering users with a powerful and intelligent rugged device that fuses cutting-edge AI capabilities with Durabook's hallmark durability and field-proven design.

Twinhead's CEO, Fred Kao, said: "Durabook devices are built to meet the needs of professionals who depend on powerful, reliable technology to stay productive in any environment. The compact and versatile R10 redefines the 10-inch rugged tablet category by providing AI-enhanced productivity supported by smart engineering for optimal usability. The R10's adaptive design and customisation capability make it the perfect partner for field service operatives working across a wide range of sectors, including industrial manufacturing, warehouse management, automotive diagnostics, public safety, utilities, transport and logistics."

Seattle startup TestSprite raises $6.7M to become ‘testing backbone’ for AI-generated code

TestSprite founders Yunhao Jiao (left) and Rui Li. (TestSprite Photo)

In the era of AI-generated software, developers still need to make sure their code is clean. That’s where TestSprite wants to help.

The Seattle startup announced $6.7 million in seed funding to expand its platform that automatically tests and monitors code written by AI tools such as GitHub Copilot, Cursor, and Windsurf.

TestSprite’s autonomous agent integrates directly into development environments, running tests throughout the coding process rather than as a separate step after deployment.

“As AI writes more code, validation becomes the bottleneck,” said CEO Yunhao Jiao. “TestSprite solves that by making testing autonomous and continuous, matching AI speed.”

The platform can generate and run front- and back-end tests during development to ensure AI-written code works as expected, help AI IDEs (Integrated Development Environments) fix software based on TestSprite’s integration testing reports, and continuously update and rerun test cases to monitor deployed software for ongoing reliability.

Founded last year, TestSprite says its user base grew from 6,000 to 35,000 in three months, and revenue has doubled each month since launching its 2.0 version and new Model Context Protocol (MCP) integration. The company employs about 25 people.

Jiao is a former engineer at Amazon and a natural language processing researcher. He co-founded TestSprite with Rui Li, a former Google engineer.

Jiao said TestSprite doesn’t compete with AI coding copilots, but complements them by focusing on continuous validation and test generation. Developers can trigger tests using simple natural-language commands, such as “Test my payment-related features,” directly inside their IDEs.

The seed round was led by Bellevue, Wash.-based Trilogy Equity Partners, with participation from Techstars, Jinqiu Capital, MiraclePlus, Hat-trick Capital, Baidu Ventures, and EdgeCase Capital Partners. Total funding to date is about $8.1 million.

Crash Bandicoot Netflix series in the works – reports claim

It looks like Crash Bandicoot is the newest video game classic to move to Netflix Netflix is the king of video game adaptations. In recent years, Netflix has adapted Castlevania, Tomb Raider, Splinter Cell, Sonic the Hedgehog, and even Cyberpunk 2077. Now, the streaming giant is reportedly developing a new animated series based on Crash […]

The post Crash Bandicoot Netflix series in the works – reports claim appeared first on OC3D.

I Was Fired From My Own Startup. Here’s What Every Founder Should Know About Letting Go

By Yakov Filippenko

No founder plans for the day they get fired from their own company.

You plan for funding rounds, product launches and exits, but not for the boardroom moment when everyone raises their hand, and you realize your journey inside the company is over.

It happened to me. I called that board meeting. I set the vote. We had to choose who would stay, me or my co-founder. The vote didn’t go my way.

In movies, this is where the music swells and the credits roll. Steve Jobs after John Sculley. Travis Kalanick after Bill Gurley. In real life, there’s no cinematic pause. No final scene. Just the quiet realization that everything you built now belongs to someone else.

What follows isn’t drama, either. It’s disorientation. And like most founders, I had no idea how to handle it.

Don’t fill the silence too fast

Yakov Filippenko, founder and CEO at Intch

When it ended, I filled my calendar with aimless meetings. Five or six a day. Not because they had any real purpose, but because it felt strange not to be doing business. For more than 10 years, I’d never had a day when I didn’t have to think about work. A startup teaches you to fix things fast.

When you’re out, though, there’s nothing left to fix. Only yourself. Getting pushed out isn’t like missing a quarterly target. It’s like losing the story you’ve been telling yourself for years.

The hardest part is that you don’t know who to blame.

Investors? They were doing their job. Yourself? Every decision made sense in context. So the frustration lands on the person closest to you. Your co-founder. It’s not about logic. I would say it is more of a defense mechanism. It’s how the mind tries to make sense of loss.

Learn to see the pattern

For months, I kept asking: What did we do wrong? It took me a couple of years to see the pattern.

Later, working inside a venture fund helped me see the truth. I saw the same story play out again and again. Founders repeating the same emotional arc, as follows:

  • Expectation of an M&A deal;
  • Long wait for the deal;
  • The deal collapses;
  • The startup stalls;
  • Expectations diverge; and then
  • Resentment between co-founders

Every time, the same sequence. And when the dream fades, blame fills the gap.

The pattern itself is that the anger toward a co-founder is often a projection of disappointment from a failed deal. If that energy isn’t processed consciously, it finds its own way out, usually as anger. You can’t really be mad at yourself; you did everything right. The other side acted in their own interest. So it lands on the person next to you, your co-founder and your team, and for them, it’s you.

And that’s where I have a bit of a claim toward investors because they often see this dynamic coming and could at least warn founders about it.

Once I recognized the pattern, I stopped seeing my story as a failure. It was part of a cycle almost every founder goes through, only most don’t talk about it.

Trade strategy for emotional tools

Traditional business tools didn’t help. OKRs, planning sessions, strategy off-sites, none of it worked on the inner collapse that comes when your identity and your company split apart.

This led me to begin studying Gestalt therapy. It gave me the language to understand how situations like this actually work, their cycles, causes and effects, and how to think about them with the right awareness and perspective. One part of building startups isn’t about pivots or fundraising. It’s realizing how much of yourself you’ve tied to the story you’re telling the world.

The point is to first get conscious of your anger, and then let it out.

Acceptance comes in stages

Acceptance doesn’t show up all at once. It arrives in pieces.

For me, the first piece came when I watched another founder go through the same breakdown and recognized every stage.

The second came when my first startup was acquired. Not at the valuation I’d dreamed of, but enough to accept that it continued without me. The third came with my current company, Intch, which is built from calm, not from fear.

I no longer measure success by control, but by clarity.

What I’d tell a founder in that room

Here’s what I’d share now with another entrepreneur who finds themselves in the same situation.

  • You’re losing a story, not your worth. Give yourself space to grieve it.
  • Don’t let anger choose a target. Name the pattern instead.
  • Find mirrors. Other founders are walking through the same steps.
  • Business tools have limits. Emotional tools matter here.
  • Acceptance comes in stages. You’ll recognize them when they arrive.

Founders are trained to manage everything except their own psychology. But startups are way more than capital and code. They run on the emotional architecture of the people who build them. And when that structure breaks, rebuilding it is the most important startup you’ll ever work on.


 Yakov Filippenko is a seasoned entrepreneur with more than 10 years of experience in IT and technologies, as well as scaling businesses internationally. As a product manager at Yandex, he led a team that grew the product’s user base from 500,000 to 1.2 million and secured its entry into the international market. Subsequently, he co-founded SailPlay, which he scaled to 45 countries and eventually exited, after it was acquired by Retail Rocket in 2018. In 2021, Filippenko launched Intch, an AI-powered platform connecting part-time professionals with flexible roles.

Illustration: Dom Guzman

Ninja Gaiden 4 – Achieve S Rank Easily in All Missions With This Trick

With its many accessibility options, Ninja Gaiden 4 is one of the most approachable entries in the series, allowing players to experience its high-speed action without letting the high challenge level get in the way of enjoyment. Those who want to truly appreciate the game, however, will wish to master many of its intricacies, put their skills to the test, and attempt to achieve an S rank for completing each of the story chapters. Here are some tips to help you understand the mission scoring system and what you should always strive to do to achieve such a high rank. […]

Read full article at https://wccftech.com/how-to/ninja-gaiden-4-achieve-s-rank-easily-in-all-missions-with-this-trick/

Apple Is ‘Not Yet In Talks With TSMC’ For Its A16 Process, As Its Current Focus Likely Lies In Developing Several 2nm Chipsets Next Year

Apple has not entered talks with TSMC to use its A16, or 1.6nm process

The A20 and A20 Pro will be Apple’s first chipsets fabricated on TSMC’s 2nm process, pretty much highlighting the company’s propensity to jump to the newest manufacturing nodes as quickly as possible to have an advantage over the competition. On the same lithography, we expect the California-based giant to introduce a total of four chipsets, and after a couple of generations, Apple will switch to an even more advanced technology. The most obvious transition would be TSMC’s A16, or 1.6nm, but a report says neither company has entered talks for this node. Future Apple chipsets are expected to take advantage of […]

Read full article at https://wccftech.com/apple-not-yet-in-talks-with-tsmc-over-a16-process/

John Romero Says He’s Talking with Many Companies to Finish the Game That Was Being Funded by Microsoft

John Romero Games logo features skull design alongside person with glasses in black jacket.

John Romero might not be a name that youngsters recognize, but he was a legend of the early days of the first-person shooter genre, co-founding id Software and making videogames like Wolfenstein 3D, Doom, Hexen, and Quake, to name a few. Nowadays, he makes smaller titles at Romero Games. The studio's most recent title was the Mafia-themed turn-based strategy game Empire of Sin, launched in late 2020 to a mixed reception. More recently, John Romero and his fellow developers signed a deal with Microsoft for their next project, but that deal went awry along with the latest Xbox layoffs. The […]

Read full article at https://wccftech.com/john-romero-talking-companies-finish-game-funded-by-microsoft/

Microsoft CEO: We’re Now the Largest Gaming Publisher and Want to Be Everywhere; The Real Competitor Is TikTok

THIS IS AN XBOX text with Microsoft CEO Satya Nadella, Samsung devices and Xbox gaming console in the background.

Microsoft CEO Satya Nadella was featured in a live interview on TBPN (Technology Business Programming Network) discussing various topics, including the company's updated multiplatform strategy on the gaming front. Nadella pointed out that following the acquisition of Activision Blizzard, Microsoft is now the largest gaming publisher in terms of revenue. The goal, then, is to be everywhere the consumer is, just like with Office. The Microsoft CEO then interestingly referred to TikTok, or, to be more accurate, short-form video as a whole, as the true competitor of gaming. Remember, the biggest gaming business is the Windows business. And of course, […]

Read full article at https://wccftech.com/microsoft-ceo-largest-gaming-publisher-want-to-be-everywhere-competition-tiktok/

ZipWik – ZipWik transforms static files into a single, shareable smart link


ZipWik lets you turn several documents—PDFs, slides, images or spreadsheets—into one simple document and a link you can share anywhere, from WhatsApp to Slack. You can control who sees it, set it to expire, and skip the hassle of sending large attachments. ZipWik also shows you what happens after you share: who viewed it, how long they spent, whether they downloaded or shared it, and which documents got the most attention. It’s an easy way to share files, stay in control, and actually understand how people engage with your content.

No more struggling with large attachments, sharing documents you can’t control, or being unable to combine document formats: ZipWik does it all. Try it today.

View startup

Active Exploits Hit Dassault and XWiki — CISA Confirms Critical Flaws Under Attack

Threat actors are actively exploiting multiple security flaws impacting Dassault Systèmes DELMIA Apriso and XWiki, according to alerts issued by the U.S. Cybersecurity and Infrastructure Security Agency (CISA) and VulnCheck. The vulnerabilities are listed below: CVE-2025-6204 (CVSS score: 8.0), a code injection vulnerability in Dassault Systèmes DELMIA Apriso that could allow an attacker to […]

Arm’s GitHub Copilot Agentic AI: Cloud Migration’s Next Leap

Arm and GitHub's new Migration Assistant Custom Agent for GitHub Copilot Agentic AI fundamentally transforms cloud migration to Arm-based infrastructure.

The post Arm’s GitHub Copilot Agentic AI: Cloud Migration’s Next Leap appeared first on StartupHub.ai.

Vesence lands $9M to bring rigorous AI review to law firms

Vesence's $9M seed round fuels its mission to embed rigorous AI review agents directly into Microsoft Office, promising law firms unparalleled precision and compliance.

The post Vesence lands $9M to bring rigorous AI review to law firms appeared first on StartupHub.ai.

Cartesia’s Sonic-3 TTS laughs and emotes at human speed

Cartesia's Sonic-3 uses a State Space Model architecture to deliver emotionally expressive AI speech, including laughter, at speeds faster than a human can respond.

The post Cartesia’s Sonic-3 TTS laughs and emotes at human speed appeared first on StartupHub.ai.

Salesforce Agentic AI: The Enterprise Evolution

Salesforce's 'Agentic AI' strategy, featuring Agentforce 360 and Forward Deployed Engineers, aims to fundamentally redefine enterprise operations with unified, workflow-spanning AI agents.

The post Salesforce Agentic AI: The Enterprise Evolution appeared first on StartupHub.ai.

Polygraf AI Closes $9.5M Funding Round to Scale Its Secure AI Solutions for Enterprise Defense and Intelligence

Polygraf AI, based in Austin, Texas, announced the closing of its $9.5M seed round, with participation from DOMiNO Ventures, Allegis Capital, Alumni Ventures, DataPower VC, and previous investors, to accelerate its mission to bring clarity and trust to enterprise AI. With the new $9.5M seed round, Polygraf AI is building the next generation of enterprise AI […]

The post Polygraf AI Closes $9.5M Funding Round to Scale Its Secure AI Solutions for Enterprise Defense and Intelligence appeared first on StartupHub.ai.

NVIDIA BlueField-4 Powers AI Factory OS

NVIDIA BlueField-4 is poised to redefine AI infrastructure, offering unprecedented compute power, 800Gb/s throughput, and advanced security for gigascale AI factories.

The post NVIDIA BlueField-4 Powers AI Factory OS appeared first on StartupHub.ai.

Primaa raises €7M to advance AI cancer diagnostics

Biotech company Primaa raised €7 million to expand its AI software that helps pathologists improve the speed and accuracy of cancer diagnostics.

The post Primaa raises €7M to advance AI cancer diagnostics appeared first on StartupHub.ai.

FitResume – AI Resume Generator, Job Tailoring and many more


Fitresume.app is a free, AI‑driven resume builder that generates ATS‑friendly resumes, lets you choose polished templates, and custom‑tailors every resume to the exact wording of each job description. Ready to download as a PDF in seconds. Beyond writing, it tracks every application, follow‑up, and interview while visualizing your entire pipeline with an interactive Sankey diagram, so you stay organised and land offers faster.

View startup

CEO of spyware maker Memento Labs confirms one of its government customers was caught using its malware

Security researchers found a government hacking campaign that relies on Windows spyware developed by surveillance tech maker Memento Labs. When reached by TechCrunch, the spyware maker's chief executive blamed a government customer for getting caught.

NVIDIA Tackles AI Energy Consumption with Gigawatt Blueprint

NVIDIA's Omniverse DSX blueprint provides a standardized, energy-efficient framework for designing and operating gigawatt-scale AI factories, directly addressing AI energy consumption.

The post NVIDIA Tackles AI Energy Consumption with Gigawatt Blueprint appeared first on StartupHub.ai.

NVIDIA Open Models Broaden AI Innovation Access

NVIDIA's new open models and data across language, robotics, and biology are set to democratize advanced AI and accelerate innovation.

The post NVIDIA Open Models Broaden AI Innovation Access appeared first on StartupHub.ai.

OpenAI Restructuring and Amazon’s AI Paradox Reshape Tech Landscape

The capital requirements and strategic maneuvering defining the artificial intelligence frontier are starkly evident in recent developments, from OpenAI’s finalized restructuring to Amazon’s contrasting AI investment strategy. CNBC’s Morgan Brennan recently spoke with CNBC Business News reporter MacKenzie Sigalos, delving into the implications of these pivotal shifts for the broader tech ecosystem and workforce. Their […]

The post OpenAI Restructuring and Amazon’s AI Paradox Reshape Tech Landscape appeared first on StartupHub.ai.

NVIDIA AI Factory Government: Securing Public Sector AI

NVIDIA's AI Factory for Government provides a secure, full-stack AI reference design, enabling federal agencies to deploy mission-critical AI with stringent security.

The post NVIDIA AI Factory Government: Securing Public Sector AI appeared first on StartupHub.ai.

NVIDIA IGX Thor Powers Real-Time AI at the Industrial Edge

NVIDIA IGX Thor is an industrial-grade platform delivering 8x the AI compute of its predecessor, enabling real-time physical AI for critical industrial and medical applications.

The post NVIDIA IGX Thor Powers Real-Time AI at the Industrial Edge appeared first on StartupHub.ai.

Battlefield Launches RedSec Free-to-Play Battle Royale Spin-Off With up to 100-Player Matches

As predicted by the early leaks and rumors surrounding the new game, EA has officially announced Battlefield RedSec as a free-to-play battle royale game "built on Battlefield's iconic DNA," with the new battle royale shooter launching on PC via Steam, the Epic Games Store, and the EA App, and on PS5 and Xbox Series X|S consoles. Players will face off in 100-player battle royale (in 25 teams of four or 50 teams of two), squad mode, and a mission-based elimination mode. The game is set in Fort Lyndon, a government testing facility in California that has become a war zone and the biggest Battlefield map to date. As you might expect of an urban setting, the battlefield varies from tight interior spaces to wide-open city streets, and much of the environment in RedSec is destructible.

Battlefield RedSec calls for, at minimum, an Intel Core i5-8400 or AMD Ryzen 5 2600, an AMD Radeon RX 5600 XT, NVIDIA GeForce RTX 2060, or Intel Arc A380, and 16 GB of RAM. As is increasingly the case with multiplayer games these days, RedSec also requires that players have TPM 2.0 and Secure Boot enabled, effectively locking out any potential gamers on Linux, including the Valve Steam Deck. RedSec also gives creators access to Portal, the updated Battlefield UGC and custom game creator, replete with all the vehicles and weapons from Battlefield RedSec.

NVIDIA Could Receive Approval for Blackwell AI Chip in China, Marking a Major “Bonus” for Its Market Share in the Region

Unbranded chip held on stage with spiral backdrop.

NVIDIA's market position in China could see a significant boost following the Trump-Xi meeting, as President Trump hints at discussing 'Blackwell' AI chips for Beijing. NVIDIA's Blackwell AI Chip Will Be a Topic of Discussion at the Trump-Xi Meeting, With a Potential Breakthrough in Sight The Chinese market has been a significant challenge for Jensen Huang since the US-China trade hostilities began, and now it seems there might be a sigh of relief on the horizon for NVIDIA. According to a report by Bloomberg, President Trump has suggested discussing NVIDIA's Blackwell AI chip with his Chinese counterpart, indicating that chips could […]

Read full article at https://wccftech.com/nvidia-could-receive-approval-for-blackwell-ai-chip-in-china/

Filing: Amazon cuts more than 2,300 jobs in Washington state as part of broader layoffs

GeekWire File Photo

Amazon will lay off 2,303 corporate employees in Washington state, primarily in its Seattle and Bellevue offices, according to a filing with the state Employment Security Department that provides the first geographic breakdown of the company’s 14,000 global job cuts.

A detailed list included with the state filing shows a wide array of impacted roles, including software engineers, program managers, product managers, and designers, as well as a significant number of recruiters and human resources staff. 

Senior and principal-level roles are among those being cut, aligning with a company-wide push to use the cutbacks to help reduce bureaucracy and operate more efficiently.

Amazon announced the cuts Tuesday morning, part of a larger push by CEO Andy Jassy to streamline the company. Jassy had previously told Amazon employees in June that efficiency gains from AI would likely lead to a smaller corporate workforce over time.

In a memo from HR chief Beth Galetti, the company signaled that further cutbacks will continue into 2026. Reuters reported Monday that the number of layoffs could ultimately total as many as 30,000 people, which is still possible as the layoffs continue into next year.

Lilly Blackwell Drug Discovery: A New Era

Lilly's new AI factory, powered by NVIDIA Blackwell GPUs, marks a pivotal shift in drug discovery, promising unprecedented speed and scale in pharmaceutical innovation.

The post Lilly Blackwell Drug Discovery: A New Era appeared first on StartupHub.ai.

NVIDIA AI-RAN: Open Source Rewrites Wireless Innovation

NVIDIA's open-sourcing of Aerial software, coupled with DGX Spark, is democratizing AI-native 5G and 6G development, accelerating wireless innovation at an unprecedented pace.

The post NVIDIA AI-RAN: Open Source Rewrites Wireless Innovation appeared first on StartupHub.ai.

Celestica CEO Mionis: AI is a Must-Have Utility, Not a Bubble

“AI right now used to be a nice-to-have. It’s a utility, it’s a must-have.” This declarative statement from Celestica President and CEO Rob Mionis on CNBC’s Mad Money with Jim Cramer cuts directly to the core of the current technological zeitgeist. It frames artificial intelligence not as a speculative fad or a nascent technology still […]

The post Celestica CEO Mionis: AI is a Must-Have Utility, Not a Bubble appeared first on StartupHub.ai.

Google AI Studio Unleashes “Vibe Coding” Revolutionizing AI Agent Development

The era of complex, code-heavy AI development is rapidly giving way to an intuitive, natural language-driven approach, dramatically democratizing creation. At the forefront of this shift is Google AI Studio, a platform designed to accelerate the journey from concept to fully functional AI application in minutes. This new “vibe coding” experience, showcased by Logan Kilpatrick, […]

The post Google AI Studio Unleashes “Vibe Coding” Revolutionizing AI Agent Development appeared first on StartupHub.ai.

AI Fusion Energy: NVIDIA, GA Unveil Digital Twin Breakthrough

NVIDIA and General Atomics have launched an AI-enabled digital twin for fusion reactors, dramatically accelerating the path to commercial AI fusion energy.

The post AI Fusion Energy: NVIDIA, GA Unveil Digital Twin Breakthrough appeared first on StartupHub.ai.

CyberRidge raises $26M to advance optical security for fiber networks

CyberRidge launched with $26 million to develop its optical security system that protects data in fiber-optic networks from eavesdropping.

The post CyberRidge raises $26M to advance optical security for fiber networks appeared first on StartupHub.ai.

Apple Bringing Water Resistance To iPad mini, OLED Displays To The MacBook Air, iPad Air, And iPad mini

Laptop with colorful display in front of large text OLED on a blue background.

Apple is finally getting ready to introduce OLED displays in a wider range of its products. However, don't expect a broad-based debut soon, especially given the Cupertino giant's tendency to move at a glacial pace when introducing new technology. Apple is gearing up to introduce OLED displays in the future versions of the MacBook Air, iPad Air, and iPad mini, with water resistance added for good measure Bloomberg's legendary tipster, Mark Gurman, is out with another scoop today, focusing on a much-anticipated display overhaul for the MacBook Air, iPad Air, and iPad mini, all of which are now slated to […]

Read full article at https://wccftech.com/apple-is-testing-oled-displays-for-the-macbook-air-ipad-air-and-ipad-mini-with-water-resistance-also-in-the-offing/

IBM's open source Granite 4.0 Nano AI models are small enough to run locally directly in your browser

In an industry where model size is often seen as a proxy for intelligence, IBM is charting a different course — one that values efficiency over enormity, and accessibility over abstraction.

The 114-year-old tech giant's four new Granite 4.0 Nano models, released today, range from just 350 million to 1.5 billion parameters, a fraction of the size of their server-bound cousins from the likes of OpenAI, Anthropic, and Google.

These models are designed to be highly accessible: the 350M variants can run comfortably on a modern laptop CPU with 8–16GB of RAM, while the 1.5B models typically require a GPU with at least 6–8GB of VRAM for smooth performance — or sufficient system RAM and swap for CPU-only inference. This makes them well-suited for developers building applications on consumer hardware or at the edge, without relying on cloud compute.
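
For readers who want to try this locally, here is a minimal sketch of what CPU inference with one of the Nano checkpoints might look like via the Hugging Face Transformers library. The model id and prompt are illustrative assumptions for this article, not taken from IBM's documentation.

```python
# Minimal local-inference sketch using Hugging Face Transformers on CPU.
# The model id below is illustrative only; the exact repository names for
# the Granite 4.0 Nano checkpoints are an assumption, not confirmed here.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ibm-granite/granite-4.0-350m"  # hypothetical/illustrative id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float32)
model.eval()

prompt = "Explain in two sentences why small local language models are useful."
inputs = tokenizer(prompt, return_tensors="pt")

# Instruct-tuned variants would normally go through the tokenizer's chat
# template; plain generation is enough to illustrate the local workflow.
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=96)

print(tokenizer.decode(output[0], skip_special_tokens=True))
```

On a recent laptop CPU the 350M-class checkpoints should respond in seconds; the 1B-class variants are better served by the 6–8 GB of GPU VRAM IBM suggests above.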

In fact, the smallest ones can even run locally in your own web browser, as Joshua Lochner, aka Xenova, creator of Transformers.js and a machine learning engineer at Hugging Face, wrote on the social network X.

All the Granite 4.0 Nano models are released under the Apache 2.0 license — perfect for use by researchers and enterprise or indie developers, even for commercial usage.

They are natively compatible with llama.cpp, vLLM, and MLX and are certified under ISO 42001 for responsible AI development — a standard IBM helped pioneer.

But in this case, small doesn't mean less capable — it might just mean smarter design.

These compact models are built not for data centers, but for edge devices, laptops, and local inference, where compute is scarce and latency matters.

And despite their small size, the Nano models are showing benchmark results that rival or even exceed the performance of larger models in the same category.

The release is a signal that a new AI frontier is rapidly forming — one not dominated by sheer scale, but by strategic scaling.

What Exactly Did IBM Release?

The Granite 4.0 Nano family includes four open-source models now available on Hugging Face:

  • Granite-4.0-H-1B (~1.5B parameters) – Hybrid-SSM architecture

  • Granite-4.0-H-350M (~350M parameters) – Hybrid-SSM architecture

  • Granite-4.0-1B – Transformer-based variant, parameter count closer to 2B

  • Granite-4.0-350M – Transformer-based variant

The H-series models — Granite-4.0-H-1B and H-350M — use a hybrid state space architecture (SSM) that combines efficiency with strong performance, ideal for low-latency edge environments.

Meanwhile, the standard transformer variants — Granite-4.0-1B and 350M — offer broader compatibility with tools like llama.cpp, designed for use cases where hybrid architecture isn’t yet supported.

In practice, the transformer 1B model is closer to 2B parameters, but aligns performance-wise with its hybrid sibling, offering developers flexibility based on their runtime constraints.

“The hybrid variant is a true 1B model. However, the non-hybrid variant is closer to 2B, but we opted to keep the naming aligned to the hybrid variant to make the connection easily visible,” explained Emma, Product Marketing lead for Granite, during a Reddit "Ask Me Anything" (AMA) session on r/LocalLLaMA.

A Competitive Class of Small Models

IBM is entering a crowded and rapidly evolving market of small language models (SLMs), competing with offerings like Qwen3, Google's Gemma, LiquidAI’s LFM2, and even Mistral’s dense models in the sub-2B parameter space.

While OpenAI and Anthropic focus on models that require clusters of GPUs and sophisticated inference optimization, IBM’s Nano family is aimed squarely at developers who want to run performant LLMs on local or constrained hardware.

In benchmark testing, IBM’s new models consistently top the charts in their class. According to data shared on X by David Cox, VP of AI Models at IBM Research:

  • On IFEval (instruction following), Granite-4.0-H-1B scored 78.5, outperforming Qwen3-1.7B (73.1) and other 1–2B models.

  • On BFCLv3 (function/tool calling), Granite-4.0-1B led with a score of 54.8, the highest in its size class.

  • On safety benchmarks (SALAD and AttaQ), the Granite models scored over 90%, surpassing similarly sized competitors.

Overall, the Granite-4.0-1B achieved a leading average benchmark score of 68.3% across general knowledge, math, code, and safety domains.

This performance is especially significant given the hardware constraints these models are designed for.

They require less memory, run faster on CPUs or mobile devices, and don’t need cloud infrastructure or GPU acceleration to deliver usable results.

Why Model Size Still Matters — But Not Like It Used To

In the early wave of LLMs, bigger meant better — more parameters translated to better generalization, deeper reasoning, and richer output.

But as transformer research matured, it became clear that architecture, training quality, and task-specific tuning could allow smaller models to punch well above their weight class.

IBM is banking on this evolution. By releasing open, small models that are competitive in real-world tasks, the company is offering an alternative to the monolithic AI APIs that dominate today’s application stack.

In fact, the Nano models address three increasingly important needs:

  1. Deployment flexibility — they run anywhere, from mobile to microservers.

  2. Inference privacy — users can keep data local with no need to call out to cloud APIs.

  3. Openness and auditability — source code and model weights are publicly available under an open license.

Community Response and Roadmap Signals

IBM’s Granite team didn’t just launch the models and walk away — they took to Reddit’s open source community r/LocalLLaMA to engage directly with developers.

In an AMA-style thread, Emma (Product Marketing, Granite) answered technical questions, addressed concerns about naming conventions, and dropped hints about what’s next.

Notable confirmations from the thread:

  • A larger Granite 4.0 model is currently in training

  • Reasoning-focused models ("thinking counterparts") are in the pipeline

  • IBM will release fine-tuning recipes and a full training paper soon

  • More tooling and platform compatibility is on the roadmap

Users responded enthusiastically to the models’ capabilities, especially in instruction-following and structured response tasks. One commenter summed it up:

“This is big if true for a 1B model — if quality is nice and it gives consistent outputs. Function-calling tasks, multilingual dialog, FIM completions… this could be a real workhorse.”

Another user remarked:

“The Granite Tiny is already my go-to for web search in LM Studio — better than some Qwen models. Tempted to give Nano a shot.”

Background: IBM Granite and the Enterprise AI Race

IBM’s push into large language models began in earnest in late 2023 with the debut of the Granite foundation model family, starting with models like Granite.13b.instruct and Granite.13b.chat. Released for use within its Watsonx platform, these initial decoder-only models signaled IBM’s ambition to build enterprise-grade AI systems that prioritize transparency, efficiency, and performance. The company open-sourced select Granite code models under the Apache 2.0 license in mid-2024, laying the groundwork for broader adoption and developer experimentation.

The real inflection point came with Granite 3.0 in October 2024 — a fully open-source suite of general-purpose and domain-specialized models ranging from 1B to 8B parameters. These models emphasized efficiency over brute scale, offering capabilities like longer context windows, instruction tuning, and integrated guardrails. IBM positioned Granite 3.0 as a direct competitor to Meta’s Llama, Alibaba’s Qwen, and Google's Gemma — but with a uniquely enterprise-first lens. Later versions, including Granite 3.1 and Granite 3.2, introduced even more enterprise-friendly innovations: embedded hallucination detection, time-series forecasting, document vision models, and conditional reasoning toggles.

The Granite 4.0 family, launched in October 2025, represents IBM’s most technically ambitious release yet. It introduces a hybrid architecture that blends transformer and Mamba-2 layers — aiming to combine the contextual precision of attention mechanisms with the memory efficiency of state-space models. This design allows IBM to significantly reduce memory and latency costs for inference, making Granite models viable on smaller hardware while still outperforming peers in instruction-following and function-calling tasks. The launch also includes ISO 42001 certification, cryptographic model signing, and distribution across platforms like Hugging Face, Docker, LM Studio, Ollama, and watsonx.ai.

Across all iterations, IBM’s focus has been clear: build trustworthy, efficient, and legally unambiguous AI models for enterprise use cases. With a permissive Apache 2.0 license, public benchmarks, and an emphasis on governance, the Granite initiative not only responds to rising concerns over proprietary black-box models but also offers a Western-aligned open alternative to the rapid progress from teams like Alibaba’s Qwen. In doing so, Granite positions IBM as a leading voice in what may be the next phase of open-weight, production-ready AI.

A Shift Toward Scalable Efficiency

In the end, IBM’s release of Granite 4.0 Nano models reflects a strategic shift in LLM development: from chasing parameter count records to optimizing usability, openness, and deployment reach.

By combining competitive performance, responsible development practices, and deep engagement with the open-source community, IBM is positioning Granite as not just a family of models — but a platform for building the next generation of lightweight, trustworthy AI systems.

For developers and researchers looking for performance without overhead, the Nano release offers a compelling signal: you don’t need 70 billion parameters to build something powerful — just the right ones.

Microsoft’s Copilot can now build apps and automate your job — here’s how it works

Microsoft is launching a significant expansion of its Copilot AI assistant on Tuesday, introducing tools that let employees build applications, automate workflows, and create specialized AI agents using only conversational prompts — no coding required.

The new capabilities, called App Builder and Workflows, mark Microsoft's most aggressive attempt yet to merge artificial intelligence with software development, enabling the estimated 100 million Microsoft 365 users to create business tools as easily as they currently draft emails or build spreadsheets.

"We really believe that a main part of an AI-forward employee, not just developers, will be to create agents, workflows and apps," Charles Lamanna, Microsoft's president of business and industry Copilot, said in an interview with VentureBeat. "Part of the job will be to build and create these things."

The announcement comes as Microsoft deepens its commitment to AI-powered productivity tools while navigating a complex partnership with OpenAI, the creator of the underlying technology that powers Copilot. On the same day, OpenAI completed its restructuring into a for-profit entity, with Microsoft receiving a 27% ownership stake valued at approximately $135 billion.

How natural language prompts now create fully functional business applications

The new features transform Copilot from a conversational assistant into what Microsoft envisions as a comprehensive development environment accessible to non-technical workers. Users can now describe an application they need — such as a project tracker with dashboards and task assignments — and Copilot will generate a working app complete with a database backend, user interface, and security controls.

"If you're right inside of Copilot, you can now have a conversation to build an application complete with a backing database and a security model," Lamanna explained. "You can make edit requests and update requests and change requests so you can tune the app to get exactly the experience you want before you share it with other users."

The App Builder stores data in Microsoft Lists, the company's lightweight database system, and allows users to share finished applications via a simple link—similar to sharing a document. The Workflows agent, meanwhile, automates routine tasks across Microsoft's ecosystem of products, including Outlook, Teams, SharePoint, and Planner, by converting natural language descriptions into automated processes.

A third component, a simplified version of Microsoft's Copilot Studio agent-building platform, lets users create specialized AI assistants tailored to specific tasks or knowledge domains, drawing from SharePoint documents, meeting transcripts, emails, and external systems.

All three capabilities are included in the existing $30-per-month Microsoft 365 Copilot subscription at no additional cost — a pricing decision Lamanna characterized as consistent with Microsoft's historical approach of bundling significant value into its productivity suite.

"That's what Microsoft always does. We try to do a huge amount of value at a low price," he said. "If you go look at Office, you think about Excel, Word, PowerPoint, Exchange, all that for like eight bucks a month. That's a pretty good deal."

Why Microsoft's nine-year bet on low-code development is finally paying off

The new tools represent the culmination of a nine-year effort by Microsoft to democratize software development through its Power Platform — a collection of low-code and no-code development tools that has grown to 56 million monthly active users, according to figures the company disclosed in recent earnings reports.

Lamanna, who has led the Power Platform initiative since its inception, said the integration into Copilot marks a fundamental shift in how these capabilities reach users. Rather than requiring workers to visit a separate website or learn a specialized interface, the development tools now exist within the same conversational window they already use for AI-assisted tasks.

"One of the big things that we're excited about is Copilot — that's a tool for literally every office worker," Lamanna said. "Every office worker, just like they research data, they analyze data, they reason over topics, they also will be creating apps, agents and workflows."

The integration offers significant technical advantages, he argued. Because Copilot already indexes a user's Microsoft 365 content — emails, documents, meetings, and organizational data — it can incorporate that context into the applications and workflows it builds. If a user asks for "an app for Project Spartan," Copilot can draw from existing communications to understand what that project entails and suggest relevant features.

"If you go to those other tools, they have no idea what the heck Project Spartan is," Lamanna said, referencing competing low-code platforms from companies like Google, Salesforce, and ServiceNow. "But if you do it inside of Copilot and inside of the App Builder, it's able to draw from all that information and context."

Microsoft claims the apps created through these tools are "full-stack applications" with proper databases secured through the same identity systems used across its enterprise products — distinguishing them from simpler front-end tools offered by competitors. The company also emphasized that its existing governance, security, and data loss prevention policies automatically apply to apps and workflows created through Copilot.

Where professional developers still matter in an AI-powered workplace

While Microsoft positions the new capabilities as accessible to all office workers, Lamanna was careful to delineate where professional developers remain essential. His dividing line centers on whether a system interacts with parties outside the organization.

"Anything that leaves the boundaries of your company warrants developer involvement," he said. "If you want to build an agent and put it on your website, you should have developers involved. Or if you want to build an automation which interfaces directly with your customers, or an app or a website which interfaces directly with your customers, you want professionals involved."

The reasoning is risk-based: external-facing systems carry greater potential for data breaches, security vulnerabilities, or business errors. "You don't want people getting refunds they shouldn't," Lamanna noted.

For internal use cases — approval workflows, project tracking, team dashboards — Microsoft believes the new tools can handle the majority of needs without IT department involvement. But the company has built "no cliffs," in Lamanna's terminology, allowing users to migrate simple apps to more sophisticated platforms as needs grow.

Apps created in the conversational App Builder can be opened in Power Apps, Microsoft's full development environment, where they can be connected to Dataverse, the company's enterprise database, or extended with custom code. Similarly, simple workflows can graduate to the full Power Automate platform, and basic agents can be enhanced in the complete Copilot Studio.

"We have this mantra called no cliffs," Lamanna said. "If your app gets too complicated for the App Builder, you can always edit and open it in Power Apps. You can jump over to the richer experience, and if you're really sophisticated, you can even go from those experiences into Azure."

This architecture addresses a problem that has plagued previous generations of easy-to-use development tools: users who outgrow the simplified environment often must rebuild from scratch on professional platforms. "People really do not like easy-to-use development tools if I have to throw everything away and start over," Lamanna said.

What happens when every employee can build apps without IT approval

The democratization of software development raises questions about governance, maintenance, and organizational complexity — issues Microsoft has worked to address through administrative controls.

IT administrators can view all applications, workflows, and agents created within their organization through a centralized inventory in the Microsoft 365 admin center. They can reassign ownership, disable access at the group level, or "promote" particularly useful employee-created apps to officially supported status.

"We have a bunch of customers who have this approach where it's like, let 1,000 apps bloom, and then the best ones, I go upgrade and make them IT-governed or central," Lamanna said.

The system also includes provisions for when employees leave. Apps and workflows remain accessible for 60 days, during which managers can claim ownership — similar to how OneDrive files are handled when someone departs.

Lamanna argued that most employee-created apps don't warrant significant IT oversight. "It's just not worth inspecting an app that John, Susie, and Bob use to do their job," he said. "It should concern itself with the app that ends up being used by 2,000 people, and that will pop up in that dashboard."

Still, the proliferation of employee-created applications could create challenges. Users have expressed frustration with Microsoft's increasing emphasis on AI features across its products, with some giving the Microsoft 365 mobile app one-star ratings after a recent update prioritized Copilot over traditional file access.

The tools also arrive as enterprises grapple with "shadow IT" — unsanctioned software and systems that employees adopt without official approval. While Microsoft's governance controls aim to provide visibility, the ease of creating new applications could accelerate the pace at which these systems multiply.

The ambitious plan to turn 500 million workers into software builders

Microsoft's ambitions for the technology extend far beyond incremental productivity gains. Lamanna envisions a fundamental transformation of what it means to be an office worker — one where building software becomes as routine as creating spreadsheets.

"Just like how 20 years ago you put on your resume that you could use pivot tables in Excel, people are going to start saying that they can use App Builder and workflow agents, even if they're just in the finance department or the sales department," he said.

The numbers he's targeting are staggering. With 56 million people already using Power Platform, Lamanna believes the integration into Copilot could eventually reach 500 million builders. "Early days still, but I think it's certainly encouraging," he said.

The features are currently available only to customers in Microsoft's Frontier Program — an early access initiative for Microsoft 365 Copilot subscribers. The company has not disclosed how many organizations participate in the program or when the tools will reach general availability.

The announcement fits within Microsoft's larger strategy of embedding AI capabilities throughout its product portfolio, driven by its partnership with OpenAI. Under the restructured agreement announced Tuesday, Microsoft will have access to OpenAI's technology through 2032, including models that achieve artificial general intelligence (AGI) — though such systems do not yet exist. Microsoft has also begun integrating Copilot into its new companion apps for Windows 11, which provide quick access to contacts, files, and calendar information.

The aggressive integration of AI features across Microsoft's ecosystem has drawn mixed reactions. While enterprise customers have shown interest in productivity gains, the rapid pace of change and ubiquity of AI prompts have frustrated some users who prefer traditional workflows.

For Microsoft, however, the calculation is clear: if even a fraction of its user base begins creating applications and automations, it would represent a massive expansion of the effective software development workforce — and further entrench customers in Microsoft's ecosystem. The company is betting that the same natural language interface that made ChatGPT accessible to millions can finally unlock the decades-old promise of empowering everyday workers to build their own tools.

The App Builder and Workflows agents are available starting today through the Microsoft 365 Copilot Agent Store for Frontier Program participants.

Whether that future arrives depends not just on the technology's capabilities, but on a more fundamental question: Do millions of office workers actually want to become part-time software developers? Microsoft is about to find out if the answer is yes — or if some jobs are better left to the professionals.

Google DeepMind’s BlockRank could reshape how AI ranks information

Google DeepMind researchers have developed BlockRank, a new method for ranking and retrieving information more efficiently in large language models (LLMs).

  • BlockRank is detailed in a new research paper, Scalable In-Context Ranking with Generative Models.
  • BlockRank is designed to solve a challenge called In-context Ranking (ICR), or the process of having a model read a query and multiple documents at once to decide which ones matter most.
  • As far as we know, BlockRank is not being used by Google (e.g., Search, Gemini, AI Mode, AI Overviews) right now – but it could be used at some point in the future.

What BlockRank changes. ICR is expensive and slow. Models use a process called “attention,” where every word compares itself to every other word, so the cost grows quadratically with the length of the input. Ranking hundreds of documents at once quickly becomes prohibitively expensive for LLMs.

How BlockRank works. BlockRank restructures how an LLM “pays attention” to text. Instead of every document attending to every other document, each one focuses only on itself and the shared instructions.

  • The model’s query section has access to all the documents, allowing it to compare them and decide which one best answers the question.
  • This transforms the model’s attention cost from quadratic (very slow) to linear (much faster) growth.
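
To make that structural idea concrete, below is a toy sketch of the block-structured attention pattern the description implies: each document block attends only to itself and to the shared instructions, while the query block attends to everything. This is a reconstruction for illustration, not DeepMind's actual BlockRank code, and the segment sizes and names are made up.

```python
# Toy block-structured attention mask, reconstructed from the description
# above (not the actual BlockRank implementation). True = attention allowed.
import numpy as np

def blockrank_style_mask(instr_len, doc_lens, query_len):
    """Instructions are shared; each document sees only itself plus the
    instructions; the query block sees the whole context."""
    total = instr_len + sum(doc_lens) + query_len
    mask = np.zeros((total, total), dtype=bool)

    # Instruction block attends to itself.
    mask[:instr_len, :instr_len] = True

    # Each document block attends to itself and to the shared instructions.
    offset = instr_len
    for n in doc_lens:
        mask[offset:offset + n, offset:offset + n] = True  # self-attention
        mask[offset:offset + n, :instr_len] = True          # shared instructions
        offset += n

    # The query block attends to everything, so documents can be compared.
    mask[offset:, :] = True
    return mask

# Example: 16 instruction tokens, 100 documents of 64 tokens, 32 query tokens.
m = blockrank_style_mask(16, [64] * 100, 32)
print(m.shape, int(m.sum()))  # allowed attention pairs
```

Counting the allowed pairs as the number of documents grows shows roughly linear scaling, whereas a dense mask over the same context grows with the square of the total token count.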

By the numbers. In experiments using Mistral-7B, Google’s team found that BlockRank:

  • Ran 4.7× faster than standard fine-tuned models when ranking 100 documents.
  • Scaled smoothly to 500 documents (about 100,000 tokens) in roughly one second.
  • Matched or beat leading listwise rankers like RankZephyr and FIRST on benchmarks such as MSMARCO, Natural Questions (NQ), and BEIR.

Why we care. BlockRank could change how future AI-driven retrieval and ranking systems work to reward user intent, clarity, and relevance. That means (in theory) clear, focused content that aligns with why a person is searching (not just what they type) should increasingly win.

What’s next. Google/DeepMind researchers are continuing to redefine what it means to “rank” information in the age of generative AI. The future of search is advancing fast – and it’s fascinating to watch it evolve in real time.

NVIDIA Boosts Navy AI Training with DGX GB300

NVIDIA's DGX GB300 system is empowering the Naval Postgraduate School with advanced NVIDIA Navy AI training, enabling secure, on-premises generative AI and high-fidelity digital twin simulations for critical defense applications.

The post NVIDIA Boosts Navy AI Training with DGX GB300 appeared first on StartupHub.ai.

NVIDIA Charts America’s AI Future with Industrial-Scale Vision

NVIDIA's GTC Washington, D.C., keynote unveiled a strategic blueprint for America's AI future, emphasizing national infrastructure, physical AI, and industry transformation.

The post NVIDIA Charts America’s AI Future with Industrial-Scale Vision appeared first on StartupHub.ai.

NVIDIA AI Fuels US Economic Development

NVIDIA is driving significant AI economic development across the US by partnering with states, cities, and universities to democratize AI access and foster innovation.

The post NVIDIA AI Fuels US Economic Development appeared first on StartupHub.ai.

Microsoft’s OpenAI Bet Yields 10x Return, Igniting AI Infrastructure Race

Microsoft’s staggering ten-fold return on its OpenAI investment, now valued at $135 billion, signals a new era where strategic AI stakes redefine corporate power and valuation. This monumental gain, highlighted by CNBC’s MacKenzie Sigalos, follows a significant corporate restructure at OpenAI that redefines its partnership terms with Microsoft, granting the tech giant a 27% equity […]

The post Microsoft’s OpenAI Bet Yields 10x Return, Igniting AI Infrastructure Race appeared first on StartupHub.ai.

Desktop Commander raises €1.1M to advance AI desktop automation

Desktop Commander raised €1.1 million to develop its AI tool that allows non-technical users to automate computer tasks using natural language.

The post Desktop Commander raises €1.1M to advance AI desktop automation appeared first on StartupHub.ai.

Grasp raises $7M to advance its multi-agent AI for finance

AI startup Grasp raised $7 million to expand its multi-agent platform that automates complex financial analysis and reporting for consultants and investment banks.

The post Grasp raises $7M to advance its multi-agent AI for finance appeared first on StartupHub.ai.

Energy as the New Geopolitical Currency in the AI Race

“Knowledge used to be power, now power is knowledge.” This stark redefinition, articulated by U.S. Secretary of the Interior Doug Burgum during a CNBC “Power Lunch” interview, cuts to the core of the contemporary global power struggle. Speaking with Brian Sullivan, Burgum outlined a comprehensive strategy for the United States to secure its position in […]

The post Energy as the New Geopolitical Currency in the AI Race appeared first on StartupHub.ai.

CoreStory raises $32M to advance AI legacy code modernization

AI startup CoreStory raised $32 million to help enterprises modernize legacy software with its platform that automatically documents and analyzes old code.

The post CoreStory raises $32M to advance AI legacy code modernization appeared first on StartupHub.ai.

Microsoft Windows Server Update Service Is Under Attack, What You Need To Know

Windows Server 2025 is currently open to a remote code execution exploit via the Windows Server Update Service, and at the time of this writing a fix from Microsoft has yet to fully patch the issue. Reports to The Register indicate that Microsoft's attempt to patch the exploit earlier this month didn't stop active exploitation, contrary to Microsoft's […]

(PR) HPE to Build "Mission" and "Vision" Supercomputers Featuring NVIDIA Vera Rubin

HPE today announced, in partnership with the U.S. Department of Energy (DOE), National Nuclear Security Administration (NNSA) and Los Alamos National Laboratory (LANL), that it has been selected to deliver two state-of-the-art supercomputers, named "Mission" and "Vision". The next-generation systems will be based on the new direct liquid-cooled HPE Cray Supercomputing GX5000 system and feature upcoming NVIDIA Vera Rubin Superchips. Mission and Vision are part of the DOE's $370 million investment to accelerate scientific discovery, advance AI initiatives and strengthen national security.

"For decades, HPE and Los Alamos National Laboratory have collaborated on innovative supercomputing designs that deliver powerful capabilities to solve complex scientific challenges and bolster national security efforts," said Trish Damkroger, senior vice president and general manager, HPC & AI Infrastructure Solutions at HPE. "We are proud to continue powering the lab's journey with the upcoming Mission and Vision systems. These innovations will be among the first to feature next-generation HPE Cray supercomputing architecture to drive AI innovation and scientific impact."

(PR) Sandisk Launches Officially Licensed FIFA World Cup 2026 Product Lineup

Sandisk kicked off the countdown to the FIFA World Cup 2026 today with the launch of its collection of officially licensed products. Purpose-built for what's set to be one of the most content-rich sporting events in history, the Sandisk Official Licensed Product Collection for the FIFA World Cup 2026 empowers fans, creators, and professionals alike to capture, preserve, and relive the most iconic moments from the world's biggest stage in sports.

Blending heritage with innovation, the design-led products honor host nations and iconic moments, from whistle-inspired USB-C drives and SSDs in tournament colors to pro-level memory cards built to capture history-making moments. Each product proudly bears official FIFA World Cup 2026 licensing marks and host nation-inspired details, making them authentic pieces of football history.

(PR) Supermicro Expands NVIDIA Collaboration, Focuses on U.S.-Made AI Systems for Government Use

Super Micro Computer, Inc. (SMCI), a Total IT Solution Provider for AI/ML, HPC, Cloud, Storage, and 5G/Edge, is showcasing its advanced AI infrastructure solutions at NVIDIA GTC in Washington, D.C. this week, highlighting systems tailored to meet the stringent requirements of federal customers. Supermicro announced its plans to deliver next-generation NVIDIA AI platforms, including the NVIDIA Vera Rubin NVL144 and NVIDIA Vera Rubin NVL144 CPX in 2026. Additionally, Supermicro introduces U.S.-manufactured, TAA (Trade Agreements Act)-compliant systems, including the high-density 2OU NVIDIA HGX B300 8-GPU system with up to 144 GPUs per rack and an expanded portfolio featuring a Super AI Station based on NVIDIA GB300 and the new rack-scale NVIDIA GB200 NVL4 HPC solutions.

"Our expanded collaboration with NVIDIA and our focus on U.S.-based manufacturing position Supermicro as a trusted partner for federal AI deployments. With our corporate headquarters, manufacturing, and R&D all based in San Jose, California, in the heart of Silicon Valley, we have an unparalleled ability and capacity to deliver first-to-market solutions are developed, constructed, validated (and manufactured) for American federal customers," said Charles Liang, president and CEO, Supermicro. "The result of many years of working hand-in-hand with our close partner NVIDIA—also based in Silicon Valley—Supermicro has cemented its position as a pioneer of American AI infrastructure development."

(PR) Don't Nod Reveals New Trailer for Aphelion

French developer and publisher DON'T NOD has presented a new trailer for Aphelion, its upcoming cinematic third-person action-adventure game launching in 2026, at the ID@Xbox Showcase. The trailer reveals Ariane's fellow astronaut Thomas Cross as a playable character, and showcases brand-new stealth sequences, the never-before-seen alien antagonist, new environments, and the in-game spacesuit patch designed in collaboration with the European Space Agency (ESA).

Aphelion is a sci-fi action-adventure on the edge of the solar system. In the shoes of ESA astronauts Ariane and Thomas, players will explore and survey the uncharted planet Persephone and solve the mystery of the crash, all while trying to survive in the terrifying presence of an unknown enemy. At its heart, the game is an emotional tale about love, resilience, hope, and what we bring with us when everything is lost.

(PR) Creative Technology Launches Kickstarter Campaign for Sound Blaster Re:Imagine

Creative Technology, the company that brought the world the original Sound Blaster and transformed PC audio in the 90s, today announces Sound Blaster Re:Imagine, a next-generation modular audio hub that redefines what a sound card can be. The campaign goes live on Kickstarter on October 28, 2025 (10am EST).

Since its debut in 1989, Sound Blaster has shipped more than 400 million devices worldwide, shaping the soundtrack of the digital age. The original Sound Blaster gave PCs a voice, powering the rise of multimedia, gaming, and digital creativity. Sound Blaster Re:Imagine builds on that heritage - taking the DNA of Sound Blaster and evolving it into a modern, modular platform designed for creators, gamers, and anyone who lives at the intersection of work and play.

(PR) MAINGEAR Introduces aiDAPTIV+ Package for Pro RS & Pro WS Workstations Co-Developed With Phison

MAINGEAR, a leading provider of high-performance custom PCs, today announced a new aiDAPTIV+ package, co-developed with Phison Electronics, a global leader in NAND flash controllers and storage solutions, for its Pro RS and Pro WS workstations. The aiDAPTIV+ add-on enables full-parameter fine-tuning and large-model inference on mainstream GPUs, helping teams move faster while keeping data private and on-prem. Live demos of a MAINGEAR workstation equipped with aiDAPTIV+ will be available at Phison's booth during NVIDIA GTC Washington, D.C.

AI teams need on-prem training and inference performance without the unpredictability of cloud costs or the exposure of sensitive data. The aiDAPTIV+ package combines MAINGEAR's powerful, enterprise-ready workstations with Phison's aiDAPTIV+ intelligent SSD caching to expand effective VRAM, enabling larger models and longer contexts at the edge, with predictable costs and IT-friendly deployment.

(PR) Giga Computing Showcases Scalable Next-Gen AI and Visualization Solutions at NVIDIA GTC DC 2025

Giga Computing, a subsidiary of GIGABYTE and an industry innovator and leader in AI hardware and advanced cooling solutions, today announced its participation in NVIDIA GTC DC (Oct. 28-29). Given the importance of AI and scalable solutions, Giga Computing is demonstrating how innovation in hardware and software can drive the transformation into the AI-driven era. These solutions will empower developers, researchers, and creators to achieve more, from the desktop to the data center, and discussions are being held at GIGABYTE booth #528.

The booth features four flagship GIGABYTE systems: the AI TOP ATOM, the W775-V10 workstation, the XL44-SX2 (NVIDIA RTX PRO Server), and a liquid-cooled G4L4-SD3 AI server. Together, these GIGABYTE solutions are built on the NVIDIA Blackwell architecture to enable efficient, high-performance AI and visualization workloads spanning every compute tier.

(PR) ASUS IoT Announces PE3000N Based On NVIDIA Jetson Thor

ASUS IoT today unveils PE3000N, a compact edge-AI platform engineered to meet the advanced requirements of next-generation robotics and intelligent automation. Accelerated by the cutting-edge NVIDIA Jetson Thor platform, it pairs an advanced NVIDIA Blackwell GPU with a powerful 14-core Arm CPU and an industry-leading 128 GB of LPDDR5X memory, delivering an impressive 2,070 FP4 TFLOPS of AI processing power in a highly space-efficient form factor that makes it ideal for integration into robotic systems where both space and energy efficiency are critical. With its robust architecture, the PE3000N, powered by the Jetson T5000 module, enables developers and integrators to achieve new levels of autonomy, sensor fusion, and AI-driven control for industrial, commercial, and smart infrastructure deployments.

Rugged reliability for challenging environments
Engineered for durability, PE3000N incorporates MIL-STD-810H industrial-grade connectors and a low-profile chassis to withstand demanding operating conditions. With support for up to four optional 25GbE links and 16 GMSL cameras, it enables high-bandwidth sensor fusion and advanced machine vision, even in the most challenging environments. The wide 12-60 V DC input and ignition support provide stable, battery-friendly operation across diverse settings - from factory floors and autonomous vehicles to smart-city infrastructure. With an operating temperature range from -20°C up to 60°C, PE3000N ensures resilient performance and secure data handling, making it a trusted solution for mission-critical robotics, automation, and edge AI deployments.

(PR) NVIDIA IGX Thor Robotics Processor Brings Real-Time Physical AI to the Industrial and Medical Edge

AI is moving from the digital world into the physical one. Across factory floors and operating rooms, machines are evolving into collaborators that can see, sense and make decisions in real time. To accelerate this transformation, NVIDIA today unveiled NVIDIA IGX Thor, a powerful, industrial-grade platform built to bring real-time physical AI directly to the edge, combining high-speed sensor processing, enterprise-grade reliability and functional safety in a small module for the desktop.

Delivering up to 8x the AI compute performance of its predecessor, NVIDIA IGX Orin, IGX Thor enables developers to build intelligent systems that perceive, reason and act faster, safer and smarter than ever. Early adopters include industrial, robotics, medical and healthcare leaders such as Diligent Robotics, EndoQuest Robotics, Hitachi Rail, Joby Aviation, Maven and the SETI Institute, while CMR Surgical is evaluating IGX Thor to advance its medical capabilities.

(PR) Quantum Machines Announces NVIDIA NVQLink Integration

Quantum Machines (QM), the leading provider of quantum control solutions, today announced its integration with NVIDIA NVQLink, the new open platform for real-time orchestration between quantum and classical computing resources. This marks a major step that extends QM's first-of-its-kind, field-proven, µs-latency quantum-classical integration solution.

Building on the foundation of NVIDIA DGX Quantum - the first system to connect a quantum controller directly with the NVIDIA accelerated computing stack - QM's platform will support the new NVQLink open architecture, providing seamless interoperability between quantum processors (QPUs), control hardware, CPUs, and GPUs. The result is real-time data exchange and control at microsecond latency, enabling the demanding workloads required for logical qubits and large-scale quantum error correction.

Amazon layoffs reaction: ‘Thought I was a top performer but guess I’m expendable’

Amazon’s headquarters campus in Seattle. (GeekWire Photo / Kurt Schlosser)

Reaction to a huge round of layoffs rippled across Amazon and beyond on Tuesday as the Seattle-based tech giant confirmed that it was slashing 14,000 corporate and tech jobs.

We’ve rounded up some of what’s being said online and/or shared with GeekWire:

‘Never been laid off before’

A megathread on Reddit served as a collection of comments by impacted employees who posted about their level, location, org and years of service at Amazon.

Workers across ads, recruitment, robotics, retail, Prime Video, Amazon Games, business development, North American Stores, finance, devices and services, Amazon Autos, and more used the thread to vent.

  • “TPM II for Amazon Robotics, 6.5 years there. Still processing this, I’ve never been laid off before.”
  • “L6 SDEIII, started as SDEI 7 years ago. I went L4 to L6 in 3 years. My last performance review I got raising the bar. Thought I was a top performer but guess I’m expendable.”
  • “Never been laid off before feels overwhelming on VISA! Someone please help me understand next steps in terms of VISA, if I am not able to get H1b sponsoring job in next 90 days will I have to uproot everything here and go back?”
  • “I heard AWS layoffs come after re:invent to avoid customer disruption and bad press.”
  • “It’s heartbreaking how impersonal and abrupt these layoffs have become. People who’ve given years to a company are finding out in minutes that they’re done.”

Bad news via text?

Kristi Coulter, author of Exit Interview: The Life and Death of My Ambitious Career, a memoir about what she learned in her 12 years at Amazon, weighed in about the timing of apparent text messages that were sent to impacted employees.

“Wait, I’m sorry: Amazon made people relocate, switch their kids’ schools, and bookend their days with traffic for RTO only to lay them off via a 3 a.m. text? What happened to the vibe and conversations that only being together at the office could allow?” Coulter wrote on LinkedIn.

‘Reduced functionality’

Some employees shared how they were quickly locked out of work laptops, expressing confusion about whether that was how they were supposed to learn about being terminated.

“I lost access to everything immediately :( ,” one Reddit user said.

Others discussed how they should have found time to transfer important work examples or positive interactions related to their performance over to personal computers.

“One thing I would recommend for everyone is to back up your personal files onto your personal laptop,” one user said on Reddit. “I used to keep all my accolades and praise in a quip file along with all my 2×2 write ups and MBR/QBR write ups cataloging my wins. When I found out I got laid off my head was spinning so I went outside for a walk, by the time I returned I was locked out of my laptop and no longer had access to anything.”

Is this Amazon’s way of saying 100% laid off?

Any Amazon folks on the timeline – seen this before? #Amazon #layoffs #amazonlayoffs pic.twitter.com/1MCxoXjfHQ

— Aravind Naveen (@MydAravind) October 28, 2025

Why layoffs now?

Amazon human resources chief Beth Galetti pinned the layoffs in part on the need to reduce bureaucracy and become more efficient in the new era of artificial intelligence. Others looked for deeper meaning in the cuts.

In a post on LinkedIn, Yahoo Finance Executive Editor Brian Sozzi said the stock price is likely a key consideration when it comes to top execs and the Amazon board signing off on such mass layoffs.

Amazon’s stock was up about 1% on Tuesday to $229 per share.

“If the layoffs keep jacking up the stock price, maybe I can retire instead,” one longtime employee told GeekWire.

Entrepreneur and investor Jason Calacanis posted on X that AI is coming for middle managers and those with “rote jobs” faster than anyone expected. He encouraged workers to become founders and start their own companies before it’s too late.

Hard-hit divisions

Mid-level managers in Amazon’s retail division were heavily impacted by Tuesday’s cuts, according to internal data obtained by Business Insider.

More than 78% of the roles eliminated were held by managers assigned L5 to L7 designations, BI reported. (L5 is typically the starting point for managers at Amazon, with more seniority assigned to higher levels.)

BI also said that U.S.-focused data showed that more than 80% of employees laid off Tuesday worked in Amazon’s retail business, spanning e-commerce, human resources, and logistics.

Bloomberg and others reported that significant cuts are also being felt by Amazon’s video games unit.

Steve Boom, VP of audio, Twitch, and games, said in a memo shared with The Verge that “significant role reductions” would be felt at studios in Irvine and San Diego, Calif., as well as on Amazon’s central publishing teams.

“We have made the difficult decision to halt a significant amount of our first-party AAA game development work — specifically around MMOs [massively multiplayer online games] — within Amazon Game Studios,” Boom wrote.

Current titles in Amazon’s MMO lineup include “New World: Aeternum,” “Throne and Liberty,” and “Lost Ark.” Amazon also previously announced that it would be developing a “Lord of the Rings” MMO.

‘Ripple effects throughout the community’

Amazon employees and others line up at a food truck near Amazon offices in Seattle’s South Lake Union neighborhood. (GeekWire File Photo / Kurt Schlosser)

Jon Scholes, president and CEO of the Downtown Seattle Association (DSA), has previously praised Amazon for its mandate calling for employees to return to the office five days per week, saying that the foot traffic from thousands of tech workers in the city is a necessary element to helping downtown Seattle rebound from the pandemic.

On Tuesday, Scholes reacted to Amazon’s layoffs in a statement to GeekWire:

“As downtown’s largest employer, a workforce change of this scale has ripple effects throughout the community — on individual employees and families and our small businesses that rely on the weekday foot traffic customer base. In addition, these jobs buttress our tax base that helps fund the city services we all depend on. Employers have options for where they locate jobs, and we want to ensure downtown Seattle is the most attractive place to invest and grow. We must provide vibrancy and a predictable regulatory environment in a competitive landscape because other cities would welcome the jobs currently based in downtown.”

RPCS3 GPU recommendations increase due to dropped driver support

AMD and Nvidia have forced RPCS3 to increase its recommended GPU requirements The team behind RPCS3, the PlayStation 3 emulator, has announced that it has increased its recommended GPU requirements for Windows. This is due to AMD and Nvidia’s decision to drop driver support for older Radeon and GeForce graphics cards. Now, the emulator’s recommended […]

The post RPCS3 GPU recommendations increase due to dropped driver support appeared first on OC3D.

Whatnot Lands $225M Series F, More Than Doubles Valuation to $11.5B Since January

Whatnot, a live shopping platform and marketplace, has closed a $225 million Series F round, more than doubling its valuation to $11.5 billion in less than 10 months.

DST Global and CapitalG co-led the financing, which brings the Los Angeles-based company’s total raised to about $968 million since its 2019 inception. Whatnot had raised $265 million in a Series E round at a nearly $5 billion valuation in January.

New investors Sequoia Capital and Alkeon Capital participated in the Series F, alongside returning backers Greycroft, Andreessen Horowitz, Avra and Bond. Other investors include Y Combinator, Lightspeed Venture Partners and Liquid 2 Ventures.

As part of the latest financing, Whatnot says it will initiate a tender offer where select current investors will buy up to $126 million worth of shares.

Funding to e-commerce startups globally so far this year totals $7.1 billion, per Crunchbase data. That compares to $11.3 billion raised by e-commerce startups globally in all of 2024. This year’s numbers are also down significantly from post-pandemic funding totals, which surged to $93 billion in 2021.

‘Retail’s new normal’

Live commerce is the combination of livestreaming and online shopping. Grant LaFontaine, co-founder and CEO of Whatnot, said in an announcement that his startup is “proving that live shopping is retail’s new normal.”

Whatnot co-founders Logan Head and Grant LaFontaine. Courtesy photo.

The company says more than $6 billion worth of items have been sold on its platform in 2025 so far, more than twice its total for all of 2024. Its app facilitates the buying and selling of collectibles like trading cards and toys through live video auctions. It also offers items such as clothing and sneakers. It competes with the likes of eBay, which currently does not offer a livestreaming option. It’s also a competitor to TikTok Shop.

“Whatnot brought the live shopping wave to the US, the UK, and Europe and has turned it into one of the fastest growing marketplaces of all time,” Laela Sturdy, Whatnot board member and managing partner at CapitalG, Alphabet’s independent growth fund, said in a release.

The company plans to use its new funds to invest in its platform, roll out new features and “evolve” its policies. It is also accelerating its international expansion, adding to its current 900-person workforce by hiring across multiple departments.

Pixel 10a CAD Renders Show A Pixel 9a Clone

Google Pixel smartphone in hand, blurred background.

Google appears to be playing it safe with its upcoming budget offering, the Pixel 10a. If the latest CAD renders are anything to go by, the tech giant is eschewing flashy design changes and opting for a predictable, if somewhat boring, overall design language. Almost nothing appears to have changed between the Pixel 9a and the upcoming Pixel 10a, as per the new CAD renders. As per the CAD renders published by the X user OnLeaks on behalf of Android Headlines, the following can be easily concluded: As for the budget offering's rumored specs, the following is known at the […]

Read full article at https://wccftech.com/pixel-10a-cad-renders-show-a-pixel-9a-clone/

DON’T NOD Admits Lost Records: Bloom & Rage Missed Expectations, Signs Deal With Netflix to Create New Game Based on “A Major IP”

“Lost Records: Bloom & Rage” title screen with four characters.

Developer and publisher DON'T NOD has published its latest financial release covering its half-year results for 2025, which includes a few notable updates from the studio, like how its most recent release, Lost Records: Bloom & Rage, performed "below expectations," and that the studio signed a deal with Netflix to make a narrative game based on "a major IP." It's definitely a disappointing result for DON'T NOD, particularly considering that its last major releases, Banishers: Ghosts of New Eden and Jusant, also fell below expectations last year. The studio's total operating revenue took a 5% dip […]

Read full article at https://wccftech.com/lost-records-bloom-and-rage-missed-expectations-dont-nod-signs-deal-with-netflix/

Base iPhone 17 OLED Panel Is Around 42% Cheaper To Make Than ‘Pro’ Models, Despite Apple Making ProMotion Technology A Standard

Base iPhone 17 OLED is cheaper to make, claims new report

Apple revamped its iPhone 17 lineup this year by introducing ProMotion technology to the base model, making it one of the best decisions it could ever make for its flagship smartphone family. Best of all, it brings a host of other upgrades while retaining that $699 price point, which is probably why the iPhone 17 has garnered immense popularity worldwide, particularly in China. Part of the reason Apple has been able to keep this price unchanged from the iPhone 16 is that it has kept display costs low. According to the latest report, the OLED panel in the iPhone 17 costs around 42 […]

Read full article at https://wccftech.com/iphone-17-oled-panel-around-42-percent-to-make-than-pro-models/

Oppo Has Just A Few Hours Now To Hand Over To Apple Evidentiary Documents On A Former Engineer Who Stole Apple Watch Secrets

A glowing Apple logo-headed figure with a sword confronts a hooded figure holding a sword, with oppo written in neon green.

In the ongoing high-stakes court battle between Oppo and Apple, the former has only a few hours left to complete a transfer of required documents and device forensic reports on an ex-Apple engineer who stands accused of stealing proprietary intellectual property (IP) at the behest of Oppo. Apple accuses Oppo of using its former employee, Chen Shi, to steal Apple Watch secrets. Before going further, let's summarize what has happened in this high-stakes saga so far: Apple is asking the court for injunctive relief on four counts: For its part, Oppo maintains that it has conducted a comprehensive search of […]

Read full article at https://wccftech.com/oppo-has-just-a-few-hours-left-to-hand-over-to-apple-evidentiary-documents-on-a-former-engineer-who-stole-apple-watch-secrets/

Sony’s WH-1000XM4 Wireless Headphones Cost Less Than Half The Price Of An AirPods Max But Fulfill Your ANC & Long Battery Life Needs For Under $200 On Amazon

Sony WH-1000XM4 wireless headphones are available for under $200 on Amazon

In a market that is littered with countless options, Sony successfully stands out with its family of wireless headphones that offer comfort, impeccable audio, a boatload of features, and value, though the latter is subjective, especially if you are not on the hunt for the WH-1000XM6, which cost a jaw-dropping $458 on Amazon. Sure, the latter are the crème de la crème of wireless headphones, but if your primary objective is affordability, you will want to pick the WH-1000XM4, which are available at the same online retailer, but at a more affordable $198, or 43 percent off. Despite being two […]

Read full article at https://wccftech.com/sony-wh-1000xm4-wireless-headphones-cost-less-than-half-the-price-of-airpods-max-on-amazon/

Call of Duty: Black Ops 7 is Reportedly “Far Behind” Battlefield 6 In Pre-Orders Leading Up to Launch

Call of Duty: Black Ops 7

A new report from GamesIndustry.Biz, based on data provided by Alinea Analytics, shows that in the lead-up to launch, Call of Duty: Black Ops 7 trails "far behind" the numbers that Battlefield 6 was able to pull. Setting the parameters here: this is based on data from Steam pre-order sales for Battlefield 6 and Call of Duty: Black Ops 7, 18 days ahead of their respective launches. Within that 18-day lead-up period, Battlefield 6 was able to sell close to a million copies in pre-orders. Black Ops 7 has only managed 200K pre-order copies sold. These numbers start to look […]

Read full article at https://wccftech.com/call-of-duty-black-ops-7-far-behind-battlefield-6-pre-order-sales/

CORSAIR Unveils Its Flagship MP700 PRO XT PCIe 5.0 SSD, Offering Up To 14,900 MB/s Of Read Speeds

Corsair MP700 PRO XT PCIe 5.0 x4 NVMe SSD on a desk next to a laptop.

After Team Group, now Corsair also claims to have reached 14,900 MB/s of read speeds on its latest PCIe 5.0 SSD. CORSAIR Launches MP700 PRO XT and Compact 2242 Form Factor MP700 MICRO PCIe 5.0 SSDs with Blazing Fast Read/Write Speeds. One of the leading hardware and peripheral manufacturers, CORSAIR, has released its two new high-performance PCIe 5.0 SSDs for enthusiasts, offering the best-in-class performance for PC builders. The first SSD is the MP700 PRO XT, which is its flagship offering, delivering up to 14,900 MB/s of sequential read speeds and up to 14,500 MB/s of sequential write speeds. If […]

Read full article at https://wccftech.com/corsair-unveils-its-flagship-mp700-pro-xt-pcie-5-0-ssd-offering-up-to-14900-mb-s-of-read-speeds/

DayZ Creator Says AI Fears Remind Him of People Worrying About Google & Wikipedia; ‘Regardless of What We Do, AI Is Here’

Unbranded game controller with futuristic AI head wearing headphones beside portrait of Dean Hall

With each passing month, artificial intelligence creeps into more industries. That does not exclude the gaming industry, which has long used artificial intelligence to populate its virtual worlds. Still, the generative AI that is taking root everywhere offers much more power, and also much greater risk, compared to what gaming developers were used to. Big companies like Microsoft, Amazon, and EA are already laying off (or thinking about laying off) employees to invest further into artificial intelligence. What do the actual developers think about this artificial intelligence revolution? Their takes, as you would expect, are quite varied. The creator of […]

Read full article at https://wccftech.com/dayz-creator-says-ai-fears-remind-him-people-worrying-about-google-wikipedia-ai-is-here/

Thermaltake Confirms One Of Its Existing AIO Coolers Will Be Compatible With LGA 1954 Socket

ASRock motherboard with exposed CPU socket and TOUGHRAM RGB in a high-performance PC build, CPU at 53°C, GPU at 28°C.

The latest AIO cooler from Thermaltake will work with Intel's upcoming LGA 1954 platform, as spotted on the official website. Thermaltake Lists LGA 1954 as a Compatible Socket for MINECUBE 360 Ultra ARGB Sync AIO Cooler, Confirming Support for Intel Nova Lake. Popular cooler and PC case maker Thermaltake has officially listed the Intel LGA 1954 socket as a compatible platform for one of its latest AIO coolers. Thermaltake's MINECUBE 360 Ultra ARGB Sync, which was showcased at Computex this year, lists the LGA 1954 on its compatibility list, which confirms that the cooler won't just be compatible with the […]

Read full article at https://wccftech.com/thermaltake-confirms-one-of-its-existing-aio-coolers-will-be-compatible-with-lga-1954-socket/

A ‘Rocket League Minus the Cars’ 3v3 F2P Arcade Game Superball Shadow-Dropped on PC and Xbox Series X/S

SUPERBALL text with a futuristic ball and action-packed scene.

Pathea Games, the studio known for games like My Time at Portia, My Time at Sandrock, and the upcoming My Time at Evershine, has just shadow-dropped something you'd be more likely to expect from Velan Studios after its game Knockout City, or even Psyonix as a spin-off from Rocket League: Superball, a new free-to-play 3v3 arcade hero football game that's out now on PC and Xbox Series X/S. Announced during the ID@Xbox and IGN Showcase, Superball is described as a mash-up between Rocket League, something that's made extremely obvious with its giant ball and arena style, and Overwatch with […]

Read full article at https://wccftech.com/rocket-league-minus-cars-f2p-superball-pathea-games/

ClearWork – ClearWork maps business processes and plans digital transformations


ClearWork helps companies transform their operations by first automatically discovering and mapping their actual, end-to-end processes. Unlike old-school methods that rely on manual workshops and guesswork, our AI analyzes real user activity to give a precise, objective view of current operations and pinpoint friction points.

From there, we use AI to help you model and plan an optimized future state that's grounded in your operational reality. Finally, we provide an AI co-pilot, powered by your own data, and orchestrate automated, cross-platform workflows to ensure new processes are not only planned but also executed and sustained across the organization.

Fortanix and NVIDIA partner on AI security platform for highly regulated industries

Data security company Fortanix Inc. announced a new joint solution with NVIDIA: a turnkey platform that allows organizations to deploy agentic AI within their own data centers or sovereign environments, backed by NVIDIA’s "confidential computing" GPUs.

“Our goal is to make AI trustworthy by securing every layer—from the chip to the model to the data," said Fortanix CEO and co-founder Anand Kashyap, in a recent video call interview with VentureBeat. "Confidential computing gives you that end-to-end trust so you can confidently use AI with sensitive or regulated information.”

The solution arrives at a pivotal moment for industries such as healthcare, finance, and government — sectors eager to embrace AI but constrained by strict privacy and regulatory requirements.

Fortanix’s new platform, powered by NVIDIA Confidential Computing, enables enterprises to build and run AI systems on sensitive data without sacrificing security or control.

“Enterprises in finance, healthcare and government want to harness the power of AI, but compromising on trust, compliance, or control creates insurmountable risk,” said Anuj Jaiswal, chief product officer at Fortanix, in a press release. “We’re giving enterprises a sovereign, on-prem platform for AI agents—one that proves what’s running, protects what matters, and gets them to production faster.”

Secure AI, Verified from Chip to Model

At the heart of the Fortanix–NVIDIA collaboration is a confidential AI pipeline that ensures data, models, and workflows remain protected throughout their lifecycle.

The system uses a combination of Fortanix Data Security Manager (DSM) and Fortanix Confidential Computing Manager (CCM), integrated directly into NVIDIA’s GPU architecture.

“You can think of DSM as the vault that holds your keys, and CCM as the gatekeeper that verifies who’s allowed to use them," Kashyap said. "DSM enforces policy, CCM enforces trust.”

DSM serves as a FIPS 140-2 Level 3 hardware security module that manages encryption keys and enforces strict access controls.

CCM, introduced alongside this announcement, verifies the trustworthiness of AI workloads and infrastructure using composite attestation—a process that validates both CPUs and GPUs before allowing access to sensitive data.

Only when a workload is verified by CCM does DSM release the cryptographic keys necessary to decrypt and process data.

“The Confidential Computing Manager checks that the workload, the CPU, and the GPU are running in a trusted state," explained Kashyap. "It issues a certificate that DSM validates before releasing the key. That ensures the right workload is running on the right hardware before any sensitive data is decrypted.”

This “attestation-gated” model creates what Fortanix describes as a provable chain of trust extending from the hardware chip to the application layer.

It’s an approach aimed squarely at industries where confidentiality and compliance are non-negotiable.
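
To make the attestation-gated pattern concrete, the following is a minimal sketch in Python. All class, method, and certificate names are hypothetical illustrations of the flow described above, not Fortanix's actual API: a CCM-like verifier issues a certificate only when the measured CPU, GPU, and workload state match expected values, and a DSM-like vault releases the decryption key only against that certificate.

```python
# Minimal illustrative sketch of attestation-gated key release (hypothetical
# names; not Fortanix's actual API). A CCM-like verifier issues a certificate
# only when the measured CPU, GPU, and workload state match expected values;
# a DSM-like vault releases the decryption key only against that certificate.

from dataclasses import dataclass


@dataclass(frozen=True)
class AttestationReport:
    cpu_measurement: str      # e.g. a TDX/SEV-style CPU measurement (assumed)
    gpu_measurement: str      # e.g. a confidential-GPU attestation value (assumed)
    workload_digest: str      # hash of the AI workload being launched (assumed)


class ConfidentialComputingManager:
    """Plays the CCM role: checks that the stack is in a trusted state."""

    def __init__(self, expected: AttestationReport) -> None:
        self.expected = expected

    def attest(self, report: AttestationReport) -> str | None:
        # Composite attestation: CPU, GPU, and workload must all match.
        if report == self.expected:
            return "cert:trusted-workload"  # stand-in for a signed certificate
        return None


class DataSecurityManager:
    """Plays the DSM role: holds keys and releases them only to attested callers."""

    def __init__(self, valid_cert: str, key: bytes) -> None:
        self._valid_cert = valid_cert
        self._key = key

    def release_key(self, cert: str | None) -> bytes:
        if cert != self._valid_cert:
            raise PermissionError("attestation failed: key withheld")
        return self._key


# Usage: the AI workload asks the CCM for a certificate, then presents it to the DSM.
expected = AttestationReport("cpu-ok", "gpu-ok", "sha256:model-pipeline")
ccm = ConfidentialComputingManager(expected)
dsm = DataSecurityManager("cert:trusted-workload", key=b"\x00" * 32)

cert = ccm.attest(AttestationReport("cpu-ok", "gpu-ok", "sha256:model-pipeline"))
data_key = dsm.release_key(cert)  # succeeds only because attestation passed
```

In the real platform, the certificate would be a cryptographically signed attestation verified by DSM rather than a string comparison, but the gating logic sketched here is the same: no valid attestation, no key.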

From Pilot to Production—Without the Security Trade-Off

According to Kashyap, the partnership marks a step forward from traditional data encryption and key management toward securing entire AI workloads.

Kashyap explained that enterprises can deploy the Fortanix–NVIDIA solution incrementally, using a lift-and-shift model to migrate existing AI workloads into a confidential environment.

“We offer two form factors: SaaS with zero footprint, and self-managed. Self-managed can be a virtual appliance or a 1U physical FIPS 140-2 Level 3 appliance," he noted. "The smallest deployment is a three-node cluster, with larger clusters of 20–30 nodes or more.”

Customers already running AI models—whether open-source or proprietary—can move them onto NVIDIA’s Hopper or Blackwell GPU architectures with minimal reconfiguration.

For organizations building out new AI infrastructure, Fortanix’s Armet AI platform provides orchestration, observability, and built-in guardrails to speed up time to production.

“The result is that enterprises can move from pilot projects to trusted, production-ready AI in days rather than months,” Jaiswal said.

Compliance by Design

Compliance remains a key driver behind the new platform’s design. Fortanix’s DSM enforces role-based access control, detailed audit logging, and secure key custody—elements that help enterprises demonstrate compliance with stringent data protection regulations.

These controls are essential for regulated industries such as banking, healthcare, and government contracting.

The company emphasizes that the solution is built for both confidentiality and sovereignty.

For governments and enterprises that must retain local control over their AI environments, the system supports fully on-premises or air-gapped deployment options.

Fortanix and NVIDIA have jointly integrated these technologies into the NVIDIA AI Factory Reference Design for Government, a blueprint for building secure national or enterprise-level AI systems.

Future-Proofed for a Post-Quantum Era

In addition to current encryption standards such as AES, Fortanix supports post-quantum cryptography (PQC) within its DSM product.

As global research in quantum computing accelerates, PQC algorithms are expected to become a critical component of secure computing frameworks.

“We don’t invent cryptography; we implement what’s proven,” Kashyap said. “But we also make sure our customers are ready for the post-quantum era when it arrives.”

Real-World Flexibility

While the platform is designed for on-premises and sovereign use cases, Kashyap emphasized that it can also run in major cloud environments that already support confidential computing.

Enterprises operating across multiple regions can maintain consistent key management and encryption controls, either through centralized key hosting or replicated key clusters.

This flexibility allows organizations to shift AI workloads between data centers or cloud regions—whether for performance optimization, redundancy, or regulatory reasons—without losing control over their sensitive information.

Fortanix converts usage into “credits,” which correspond to the number of AI instances running within a factory environment. The structure allows enterprises to scale incrementally as their AI projects grow.

Fortanix will showcase the joint platform at NVIDIA GTC, held October 27–29, 2025, at the Walter E. Washington Convention Center in Washington, D.C. Visitors can find Fortanix at booth I-7 for live demonstrations and discussions on securing AI workloads in highly regulated environments.

About Fortanix

Fortanix Inc. was founded in 2016 in Mountain View, California, by Anand Kashyap and Ambuj Kumar, both former Intel engineers who worked on trusted execution and encryption technologies. The company was created to commercialize confidential computing—then an emerging concept—by extending the security of encrypted data beyond storage and transmission to data in active use, according to TechCrunch and the company’s own About page.

Kashyap, who previously served as a senior security architect at Intel and VMware, and Kumar, a former engineering lead at Intel, drew on years of work in trusted hardware and virtualization systems. Their shared insight into the gap between research-grade cryptography and enterprise adoption drove them to found Fortanix, according to Forbes and Crunchbase.

Today, Fortanix is recognized as a global leader in confidential computing and data security, offering solutions that protect data across its lifecycle—at rest, in transit, and in use.

Fortanix serves enterprises and governments worldwide with deployments ranging from cloud-native services to high-security, air-gapped systems.

"Historically we provided encryption and key-management capabilities," Kashyap said. "Now we’re going further to secure the workload itself—specifically AI—so an entire AI pipeline can run protected with confidential computing. That applies whether the AI runs in the cloud or in a sovereign environment handling sensitive or regulated data.

New TEE.Fail Side-Channel Attack Extracts Secrets from Intel and AMD DDR5 Secure Enclaves

A group of academic researchers from Georgia Tech, Purdue University, and Synkhronix have developed a side-channel attack called TEE.Fail that allows for the extraction of secrets from the trusted execution environment (TEE) in a computer's main processor, including Intel's Software Guard eXtensions (SGX) and Trust Domain Extensions (TDX) and AMD's Secure Encrypted Virtualization with Secure

New Android Trojan 'Herodotus' Outsmarts Anti-Fraud Systems by Typing Like a Human

Cybersecurity researchers have disclosed details of a new Android banking trojan called Herodotus that has been observed in active campaigns targeting Italy and Brazil to conduct device takeover (DTO) attacks. "Herodotus is designed to perform device takeover while making first attempts to mimic human behaviour and bypass behaviour biometrics detection," ThreatFabric said in a report shared with

Researchers Expose GhostCall and GhostHire: BlueNoroff's New Malware Chains

Threat actors tied to North Korea have been observed targeting the Web3 and blockchain sectors as part of twin campaigns tracked as GhostCall and GhostHire. According to Kaspersky, the campaigns are part of a broader operation called SnatchCrypto that has been underway since at least 2017. The activity is attributed to a Lazarus Group sub-cluster called BlueNoroff, which is also known as APT38,

CyDeploy wants to create a replica of a company’s system to help it test updates before pushing them out — catch it at Disrupt 2025

Tina Williams-Koroma said CyDeploy uses machine learning to understand what happens on a company’s machine and then creates a “digital twin” where system administrators can test updates.

Nvidia and partners to build seven AI supercomputers for the U.S. gov't with over 100,000 Blackwell GPUs —combined performance of 2,200 ExaFLOPS of compute

Nvidia, Oracle, and the U.S. Department of Energy will build seven ExaFLOPS-class AI supercomputers for Argonne National Laboratory — including the Oracle-built Equinox and Solstice systems with over 100,000 Blackwell GPUs delivering up to 2,200 FP4 ExaFLOPS — to power next-generation AI and scientific research.

OpenAI and Microsoft sign agreement to restructure OpenAI into a public benefit corporation with Microsoft retaining 27% stake — non-profit 'OpenAI Foundation' to oversee 'OpenAI PBC'

OpenAI is restructuring into a public benefit corporation with Microsoft retaining a 27% stake in the new "OpenAI PBC," worth roughly $135 billion. OpenAI PBC will still be overseen by the non-profit OpenAI Inc., soon to be renamed the OpenAI Foundation. Both companies are intertwined until at least 2032 with major cloud computing contracts.

Fake Nvidia GTC stream hosting deepfake Jensen Huang crypto scam garners 100,000 YouTube viewers, AI-generated hoax generates 5x more views than real event

Unsuspecting YouTube viewers looking for Nvidia's GTC keynote on Tuesday might well have found themselves accidentally watching a Jensen Huang deepfake promoting a cryptocurrency scam, after YouTube promoted the video over the official stream.

Musk says Samsung's Texas fab outclasses TSMC's US-based fabs — with AI5 still in development, questions remain over whether Tesla will need advanced tools

Elon Musk's statement that Samsung's Taylor, Texas fab is more advanced than TSMC's Fab 21 in Arizona reflects the newer 3nm-era tools being installed there. However, this advantage has little relevance for Tesla's AI5 processor, which likely relies on SF4A FinFET technology and gains minimal benefit from those capabilities.

AI’s True Impact: Productivity, Not Layoffs, Driving CEO Agendas

Despite widespread anxieties about artificial intelligence decimating the workforce, Steve Odland, CEO of The Conference Board, offers a more nuanced, and perhaps more optimistic, perspective: AI is not primarily a job killer, but a catalyst for productivity. He contends that while AI will profoundly reshape the professional landscape, current large-scale layoffs stem more from broader […]

The post AI’s True Impact: Productivity, Not Layoffs, Driving CEO Agendas appeared first on StartupHub.ai.

Google Backs AI Cybersecurity Startups in Latin America

Google's new accelerator program is investing in 11 AI cybersecurity startups in Latin America, aiming to fortify the region's digital defenses.

The post Google Backs AI Cybersecurity Startups in Latin America appeared first on StartupHub.ai.

E-commerce consumption could bump 20% because of agentic AI, says Mizuho’s Dan Dolev

Dan Dolev, Mizuho’s managing director and senior analyst covering the fintech and payments space, spoke with the host of CNBC’s “The Exchange” following the announcement of a strategic partnership between PayPal and OpenAI. The discussion centered on the potential total addressable market for “agentic commerce” and the specific upside for PayPal in this burgeoning domain, […]

The post E-commerce consumption could bump 20% because of agentic AI, says Mizuho’s Dan Dolev appeared first on StartupHub.ai.

Nano Banana’s Creative Revolution: Unpacking DeepMind’s Viral Image Model

Google DeepMind’s Nano Banana, the image model that recently captivated the internet, represents a pivotal moment in the democratization and evolution of digital creativity. Its creators, Principal Scientist Oliver Wang and Group Product Manager Nicole Brichtova, recently sat down with a16z partners Yoko Li and Guido Appenzeller to unravel the model’s origins, its unexpected viral […]

The post Nano Banana’s Creative Revolution: Unpacking DeepMind’s Viral Image Model appeared first on StartupHub.ai.

Google Gemini for Home: The AI Assistant’s Next Evolution

Google Gemini for Home is rolling out in early access, upgrading smart assistants with advanced conversational AI and introducing a premium subscription for enhanced features.

The post Google Gemini for Home: The AI Assistant’s Next Evolution appeared first on StartupHub.ai.

Mem0 raises $24M to cure AI’s digital amnesia

Mem0 is tackling AI's "digital amnesia" with a universal memory layer, aiming to become the foundational database for the next generation of intelligent agents.

The post Mem0 raises $24M to cure AI’s digital amnesia appeared first on StartupHub.ai.

Amazon’s AI-Driven Efficiency Reshapes Big Tech Workforce

The transformative power of artificial intelligence, while heralding unprecedented innovation, is simultaneously catalyzing a profound restructuring of the tech workforce, a reality starkly illustrated by Amazon’s recent corporate layoffs. As CNBC’s MacKenzie Sigalos reported on “Money Movers,” Amazon is embarking on a multi-year efficiency drive, predominantly focused on “hollowing out layers of middle management.” This […]

The post Amazon’s AI-Driven Efficiency Reshapes Big Tech Workforce appeared first on StartupHub.ai.

AI Reshapes M&A Landscape, Trillions in Value Up for Grabs

The convergence of advanced artificial intelligence and a uniquely poised global economy is setting the stage for an unprecedented era of mergers and acquisitions, fundamentally altering how companies operate and how value is created. This transformative period, characterized by both immense opportunity and inherent risks, was a central theme in Ken Moelis’s discussion with CNBC’s […]

The post AI Reshapes M&A Landscape, Trillions in Value Up for Grabs appeared first on StartupHub.ai.

Pomelli AI: Google’s Play for SMB Marketing

Google Labs' new Pomelli AI aims to democratize on-brand social media campaign generation for SMBs by leveraging AI to understand and replicate brand identity.

The post Pomelli AI: Google’s Play for SMB Marketing appeared first on StartupHub.ai.

AI Valuations Spark Bubble Fears Amidst Broader Market Optimism

A stark warning echoes from the latest CNBC Fed Survey: nearly 80% of respondents believe AI stocks are currently overvalued, with a quarter deeming them “extremely overvalued.” This sentiment, highlighted by CNBC Senior Economics Reporter Steve Liesman on “Squawk on the Street,” paints a picture of growing apprehension within the investment community regarding the sustainability […]

The post AI Valuations Spark Bubble Fears Amidst Broader Market Optimism appeared first on StartupHub.ai.

Building AI Unicorns: Lessons from Casetext’s $650M Exit

“I cannot believe that they are doing it this way.” This sentiment, articulated by Jake Heller, co-founder and CEO of Casetext, encapsulates the entrepreneurial spark that ignited his $650 million AI legal startup, maker of CoCounsel, recently acquired by Thomson Reuters. His candid talk at the AI Startup School on June 17th, 2025, offered a masterclass in […]

The post Building AI Unicorns: Lessons from Casetext’s $650M Exit appeared first on StartupHub.ai.

(PR) NVIDIA Launches BlueField-4 DPUs with 800 Gb/s Throughput for AI Data Centers

AI factories continue to grow at unprecedented scale, processing structured, unstructured and emerging AI-native data. With demand for trillion-token workloads exploding, a new class of infrastructure is required to keep pace. At NVIDIA GTC Washington, D.C., NVIDIA revealed the NVIDIA BlueField-4 data processing unit, part of the full-stack BlueField platform that accelerates gigascale AI infrastructure, delivering massive computing performance, supporting 800 Gb/s of throughput and enabling high-performance inference processing.

Powered by software-defined acceleration across AI data storage, networking and security, NVIDIA BlueField-4 transforms data centers into secure, intelligent AI infrastructure—designed to accelerate every workload, in every AI factory. It's purpose-built as the end-to-end engine for a new class of AI storage platforms, bringing AI data storage acceleration to the foundation of AI data pipelines for efficient data processing and breakthrough performance at scale.

OneXPlayer Officially Reveals Water-Cooled AMD Strix Halo-Powered OneXFly Apex Gaming Handheld

OneXPlayer has officially announced its latest gaming handheld, the OneXFly Apex, which puts the exciting AMD Ryzen AI Max+ 395 APU and Radeon 8060 graphics into a compact handheld form factor with some interesting cooling and power tricks. The new Windows gaming handheld from OneXPlayer is clearly aimed to combat recent announcements from the likes of GPD, replete with a detachable battery, just like GPD's Win 5. Unlike the Win 5, however, OneXPlayer also saw fit to equip the OneXFly Apex with a liquid cooling system to keep the AMD Ryzen AI Max+ 395 in check. OneXPlayer says that the powerful APU is capable of drawing as much as 120 W with this cooling solution, claiming that it is the first Windows gaming handheld to achieve this feat. The Apex will come with an 8-inch, 120 Hz IPS display with a maximum rated brightness of 500 nits and 100% coverage of the sRGB color space.

The water-cooling solution is a detachable tower containing the radiator, pump, and reservoir, much like the XMG Neo 17's Oasis system we reviewed prior. In handheld mode, without the water cooling tower, the OneXFly's APU is said to be capable of a sustained 80 W TDP, with up to 100 W supposedly also possible. This is all powered by an 85 Wh external battery in a similar piggyback configuration to GPD's Win 5 detachable battery. OneXPlayer showed off some comparative testing putting the device up against another handheld equipped with the AMD Ryzen Z2 Extreme, and the Strix Halo-powered device expectedly blew the smaller APU out of the water when it came to gaming tests. As is the case with other portable devices using the same APU, the OneXPlayer OneXFly Apex will be available with up to 128 GB of LPDDR5x-8000 memory and a 2 TB NVMe SSD (with another M.2 slot available for upgrades). While the device is clearly intended primarily as a gaming handheld, OneXPlayer is openly marketing the Apex as a do-it-all machine, especially considering the water cooling dock.

(PR) NVIDIA to Build Seven New AI Supercomputers for U.S. Government

NVIDIA today announced that it is working with the U.S. Department of Energy's national labs and the nation's leading companies to build America's AI infrastructure to support scientific discovery and economic growth and to power the next industrial revolution.

"We are at the dawn of the AI industrial revolution that will define the future of every industry and nation," said Jensen Huang, founder and CEO of NVIDIA. "It is imperative that America lead the race to the future—this is our generation's Apollo moment. The next wave of inventions, discoveries and progress will be determined by our nation's ability to scale AI infrastructure. Together with our partners, we are building the most advanced AI infrastructure ever created, ensuring that America has the foundation for a prosperous future, and that the world's AI runs on American innovation, openness and collaboration, for the benefit of all."

(PR) NVIDIA Introduces NVQLink — Connecting Quantum and GPU Computing for 17 Quantum Builders and Nine Scientific Labs

NVIDIA today announced NVIDIA NVQLink, an open system architecture for tightly coupling the extreme performance of GPU computing with quantum processors to build accelerated quantum supercomputers.

Researchers from leading supercomputing centers at national laboratories including Brookhaven National Laboratory, Fermi Laboratory, Lawrence Berkeley National Laboratory (Berkeley Lab), Los Alamos National Laboratory, MIT Lincoln Laboratory, the Department of Energy's Oak Ridge National Laboratory, Pacific Northwest National Laboratory and Sandia National Laboratories guided the development of NVQLink, helping accelerate next-generation work on quantum computing. NVQLink provides an open approach to quantum integration, supporting 17 QPU builders, five controller builders and nine U.S. national labs.

Almost 90% of Windows Games Run on Linux, Notes Report

Linux gaming has quietly reached a new inflection point. A recent Boiling Steam summary of crowd-sourced ProtonDB compatibility reports shows that about 89.7% of Windows titles now at least launch on Linux systems. The numbers break down into a few categories. Games rated "Platinum," meaning they install, run, and save on Linux without requiring user intervention, made up 42% of new releases tracked in October, up from 29% the previous year. At the same time, the share of titles that refuse to launch, the so-called "Borked" cohort, has fallen to roughly 3.8%, a group that still includes deliberate blocks such as March of Giants, which explicitly detects Wine and Proton and exits to the desktop.

The most persistent obstacles are not obscure indies but anti-cheat middleware and contractual choices. Easy Anti-Cheat, BattlEye, and similar systems remain the primary gatekeepers for online multiplayer, and enabling them on Linux is often more a negotiation than a mere technical flip of a switch. When a studio approves Steam Deck support, desktop Linux compatibility frequently follows within a single build cycle, suggesting the code paths are already unified and only sign-off is pending.

(PR) Razer Unveils Huntsman V3 Pro and V3 Pro Tenkeyless 8KHz Esports Keyboards

Razer, the leading global lifestyle brand for gamers, today unveiled the Razer Huntsman V3 Pro 8KHz and Razer Huntsman V3 Pro Tenkeyless 8KHz, its most advanced esports gaming keyboards to date. The Huntsman V3 Pro 8KHz and Huntsman V3 Pro Tenkeyless 8KHz build on the award-winning Huntsman legacy, introducing next-generation responsiveness and refined keystroke feel for a truly competitive edge.

"The Huntsman V3 Pro 8KHz is a reflection of our relentless pursuit of esports excellence. With the evolution of our Analog Optical Switches and the introduction of 8000 Hz HyperPolling, we've pushed performance to new heights," said Barrie Ooi, Head of Razer's PC Gaming Division. "It delivers the speed, control and precision that elite players demand. It's a showcase of what happens when engineering meets competitive ambition."

(PR) PNY Unveils CS3250 M.2 NVMe PCIe Gen 5 x4 SSD, Transforming Storage with Lightning-Fast Performance

PNY announced the addition of the CS3250 M.2 NVMe PCIe Gen 5 x4 SSD to its lineup of solid-state drives. The CS3250 pushes the limits of storage technology with ultra-fast NVMe PCIe Gen 5 x4 performance. With sequential read speeds of up to 14,900 MB/s and write speeds up to 14,000 MB/s, it delivers the speed and responsiveness required for today's most demanding workloads. Designed for AI developers, gamers, content creators, and performance-driven professionals, the CS3250 sets a new benchmark for high-end computing.

Enhanced Computing
Built for the future of computing, the CS3250 harnesses next-gen NVMe PCIe Gen 5 x4 technology to deliver next-level performance, making it the ultimate solution for powering AI image generation, AAA titles, and demanding workloads. Whether you are pushing the limits of creativity or performance, the CS3250 ensures lightning-fast load times, seamless multitasking, and unbeatable responsiveness, empowering professionals and enthusiasts alike - raising the bar for premium storage solutions.

(PR) Endorfy Presents Arx 500 White ARGB PC Case

After the success of the highly acclaimed and award-winning Arx 500 and Arx 700 cases, ENDORFY presents their younger, equally ambitious sibling. The new Arx 500 White ARGB, finished in an elegant white color scheme, is a natural evolution of the series and another step toward a complete product portfolio that makes it possible to build a reliable, visually consistent ENDORFY ecosystem. Designed with attention to detail, the white Arx 500 impresses with perfectly matched shades of white, ensuring that it looks great both right out of the box and after long-term use. It's a blend of performance and design that creators, gamers and professionals alike will appreciate.

Technology In Its Purest Form
Behind its beautiful form lies thoughtful engineering. The spacious interior can accommodate up to seven fans and radiators up to 360 mm, and it's compatible with ATX, microATX, and Mini-ITX motherboards. Straight out of the box, the case comes equipped with four pre-installed Stratus 140 White PWM ARGB fans, developed in collaboration with Synergy Cooling. Each operates between 200 and 1400 RPM, delivering not only excellent airflow but also silence.

Corsair delivers peak PCIe 5.0 speeds with its new MP700 PRO XT

Corsair extends its PCIe 5.0 offerings with its MP700 PRO XT and MP700 Micro Corsair has just added two new SSDs to its PCIe 5.0 storage lineup, promising high-end SSD performance and Microsoft DirectStorage support. Catering to the high-end market, Corsair’s new MP700 PRO XT SSD promises performance levels that reach the limits of the […]

The post Corsair delivers peak PCIe 5.0 speeds with its new MP700 PRO XT appeared first on OC3D.

NVIDIA Shows Next-Gen Vera Rubin Superchip For The First Time, Two Massive GPUs Primed For Production Next Year

NVIDIA circuit board displayed on stage shows TWW 2538 on chips.

NVIDIA has shown off its next-gen Vera Rubin Superchip for the first time at GTC in Washington, primed to spark the next wave of AI. NVIDIA Has Received Its First Rubin GPUs In The Labs, Ready For Vera Rubin Superchip Mass Production Next Year, Around The Same Time or Earlier. At GTC October 2025, NVIDIA's CEO Jensen Huang showcased the next-gen Vera Rubin Superchip. This is the first time that we are seeing an actual sample of the motherboard, or Superchip as NVIDIA loves to call it, featuring the Vera CPU and two massive Rubin GPUs. The motherboard also hosts […]

Read full article at https://wccftech.com/nvidia-shows-next-gen-vera-rubin-superchip-two-massive-gpus-production-next-year/

NVIDIA Unveils a Massive Partnership With Nokia, Bringing Next-Gen 6G Connectivity By Leveraging the Power of AI

Announcing Nokia to build AI-native 6G on new NVIDIA ARC Aerial RAN Computer on stage with Nokia MIMO Radio displayed.

NVIDIA has announced a surprise partnership with Nokia to bring 6G connectivity by utilizing the firm's new AI-RAN products, involving Grace CPUs and Blackwell GPUs. NVIDIA's Collaboration With Nokia Allows Merging CUDA & Computing Tech With Existing RAN Infrastructure. Team Green has managed to integrate AI into everything mainstream, and it seems that the telecommunications industry is now expected to benefit from the next wave of AI's computing capabilities. At the GTC 2025 keynote, NVIDIA's CEO announced a pivotal partnership with Nokia, formally entering the race for achieving 6G connectivity through a new suite of AI-RAN products combined with Nokia's […]

Read full article at https://wccftech.com/nvidia-announces-a-massive-partnership-with-nokia-bringing-next-gen-6g-connectivity/

Amazon Game Studios Hit With “Significant” Cuts Amid Mass 14,000+ Layoff

New World game artwork with fiery and lush landscapes, featuring a warrior face with glowing eyes at the center.

Amazon is laying off more than 14,000 corporate jobs today, and per a report from Bloomberg, the video games division, Amazon Game Studios, is not immune to the cuts. While Amazon doesn't specify exactly how many people from its video games division will be laid off, a statement from Steve Boom, Amazon's head of audio, Twitch, and games, does call the cut "significant," and says that the cuts are happening despite Amazon being "proud" of the success it has had. While the studio's MMO, New World, isn't mentioned by name, the statement does say that Amazon is halting its game […]

Read full article at https://wccftech.com/amazon-video-game-division-hit-significant-cuts-amid-mass-14000-layoff/

Snapdragon 8 Elite Gen 6 Rumored To Get LPDDR6 RAM & UFS 5.0 Support For Faster AI Operations, But Tipster Shares Questionable Lithography Details

Snapdragon 8 Elite Gen 6 details shared by tipster

Qualcomm will keep pace with Apple and announce its first 2nm chipset in late 2026, the Snapdragon 8 Elite Gen 6, directly succeeding the Snapdragon 8 Elite Gen 5. A tipster now shares some partial specifications of the chipset, claiming that it will feature LPDDR6 RAM and UFS 5.0 storage, bringing in a wave of improvements. However, the rumor also mentions that the Snapdragon 8 Elite Gen 6 will utilize TSMC’s more advanced ‘N2P’ process, a claim that has been refuted on a previous occasion. Based on TSMC’s 2nm production timeline, its N2 wafers will be available in higher volume for customers like […]

Read full article at https://wccftech.com/snapdragon-8-elite-gen-6-to-get-lpddr6-and-ufs-5-0-support-but-will-stick-with-tsmc-n2-process/

Final Fantasy VII Rebirth Zack Gameplay Overhaul Mod Will Introduce New Skills and Mechanics

Final Fantasy VII Rebirth key art

Zack Fair's gameplay in Final Fantasy VII Rebirth is set to be significantly expanded by a new mod introducing new mechanics and skills for an overhauled combat experience. This Zack gameplay overhaul mod is being developed by NSK, the modder behind the Zack and Sephiroth Combat Fix mod, which addressed some issues for the two characters and expanded their possibilities when added to the regular combat party outside their small playable segments. Judging from the video showcase shared a few days ago on YouTube, the changes being made to Zack's gameplay are going to be significant, leveraging his unique Charge mechanics […]

Read full article at https://wccftech.com/final-fantasy-vii-rebirth-zack-gameplay-mod/

Battlefield 6 Season 1, Battlefield REDSEC Now Live, Full Season 1 Roadmap Revealed

Battlefield Redsec title screen with armed soldiers walking on a street amidst explosions.

It's a big day for Battlefield 6, with both its Season 1 update now live for players to jump into, and its new free-to-play battle royale mode, Battlefield REDSEC, also now available. EA and Battlefield Studios confirmed yesterday what was already rumored, that REDSEC would be revealed and launched today, and now it's here for all players on PC, PS5, and Xbox Series X/S. Once the gameplay trailer that was teased yesterday was over, the mode and the new season were officially live for all players to jump into, and we got our first major question of the day answered. […]

Read full article at https://wccftech.com/battlefield-6-season-1-battlefield-redsec-out-now-pc-ps5-xbox-series-x-s/

Lenovo Launches Legion Pro 27Q-10, The Cheapest QHD OLED Monitor At Just $337

Lenovo Legion desktop setup with RGB keyboard, monitor displaying LEGION, and headset on desk.

The Pro 27Q-10 is probably the cheapest QHD OLED gaming monitor available on the market and is currently available for just 2,399 Yuan in China. Lenovo Debuts Legion Pro Series OLED Monitors, Starting at $337; Available in Both 2K and 4K Variants with Up To 280Hz Refresh Rate. Competition in the OLED display category is getting aggressive, and while we already have some QHD OLED gaming monitors available for as low as $450-$500, Lenovo just brought the price to under $350. Lenovo, the most popular PC brand on earth, isn't just involved in desktops and laptops; it is also

Read full article at https://wccftech.com/lenovo-launches-legion-pro-27q-10-the-cheapest-qhd-oled-monitor-at-just-337/

President Trump to Meet NVIDIA’s CEO Jensen Huang at a Time When the U.S. & China Have Agreed on the Framework for a Trade Deal

Unbranded chip held on stage with spiral backdrop.

President Trump is expected to meet with NVIDIA's CEO, Jensen Huang, during his visit to South Korea, where he will congratulate him on the firm's recent achievements. President Trump Will Congratulate NVIDIA On Producing The First Blackwell Chip Wafer In the US. Well, the timing of a meeting between President Trump and Jensen Huang is indeed a 'massive' coincidence, to say the least, especially since both the US and China have agreed on a trade deal framework, which is expected to reduce hostilities between the two nations. While speaking with business leaders in Tokyo, Japan, President Trump announced his meeting […]

Read full article at https://wccftech.com/president-trump-to-meet-nvidia-ceo-jensen-huang/

Microsoft Will No Longer Have Any Say In OpenAI’s Upcoming “Apple iPhone Killer” Consumer Device Decisions

Apple logo in fiery orange and OpenAI logo in metallic blue appear side by side in dramatic background.

OpenAI has been working for quite a while now with the famous Apple designer, Jony Ive, to come up with a consumer AI device, one that would supposedly render smartphones obsolete, devastating Apple's legendary moat around its iPhones in the process. Now, we have just received the clearest sign yet that OpenAI is indeed working on such a device. What's more, Microsoft will no longer exercise any influence over the upcoming "Apple iPhone killer." OpenAI and Microsoft have successfully renegotiated their tie-up, removing the latter's influence over the former's upcoming "Apple iPhone killer" consumer device, among other things. Microsoft and […]

Read full article at https://wccftech.com/microsoft-will-no-longer-have-any-say-in-openais-upcoming-apple-iphone-killer-consumer-device-decisions/

Sandbox Racer Wreckreation Out Now on PC, PS5, and Xbox Series X/S

Wreckreation logo above a landscape with sports cars racing on twisting tracks and roads.

Wreckreation, the sandbox open-world arcade racing game from Three Fields Entertainment, a studio founded by former Criterion developers who worked on the Burnout series, is out now on PC, PS5, and Xbox Series X/S. Published by THQ Nordic, Wreckreation gives players the freedom to create whatever kinds of tracks they want, from the kinds of things you'd only expect to see in Hot Wheels Unleashed to something super realistic if that's more your speed, and race the wide variety of vehicles on them to your heart's content. With more than 400 square kilometres of space to create tracks in and […]

Read full article at https://wccftech.com/sandbox-racer-wreckreation-out-now-pc-ps5-xbox-series/

Smart Glasses Can Be The Future Of Chip Manufacturing And Smartphone & AI GPU Production, Says Vuzix’s Enterprise Solutions Head

Vuzix Z100 smart glasses displaying Dinner at 8?, battery and Wi-Fi icons, and 5:45 PM on the lens.

The advent of AI and Meta's launch of its smart glasses have injected new life into the sector; after Google decided to shelve its smart glasses in 2023, interest has picked back up. In fact, Meta CEO Mark Zuckerberg has gone as far as to suggest that, courtesy of AI, users who do not use smart glasses can find themselves at a cognitive disadvantage. To understand the smart glasses industry and how the gadgets can impact consumer electronics manufacturing, semiconductor fabrication and AI GPU production, we decided to talk to Vuzix Corporation's President of Enterprise Solutions, Dr. Chris Parkinson. Vuzix […]

Read full article at https://wccftech.com/smart-glasses-can-be-the-future-of-chip-manufacturing-and-smartphone-ai-gpu-production-says-vuzixs-enterprise-solutions-head/

Intel Foundry Reportedly in Bold Pursuit of Former TSMC Executive Who Drove the Company’s High-End Chip Breakthroughs

Logos of tsmc and intel overlaid on semiconductor chip background.

TSMC's former SVP, known for his key role in driving the Taiwan giant's chip technologies, is reportedly being pursued to join Intel Foundry, which could be a significant hiring move for Team Blue. Intel's Pursuit of TSMC's Former Executive Shows the Firm's 'Hunger' Towards a Comeback in the Chip Industry Intel has been scaling up its chipmaking ambitions since the change in leadership, and under CEO Lip-Bu Tan, the foundry division has vowed to gain recognition in the semiconductor industry. Structural changes are being made within the department, including adjustments to the management hierarchy and the approach towards specific chip […]

Read full article at https://wccftech.com/intel-foundry-reportedly-pursuing-former-tsmc-executive/

Guild Wars 2: Visions of Eternity Expansion Out Now on PC

A colorful bird flies over a vibrant fantasy landscape with waterfalls and cliffs.

Developer ArenaNet has launched the sixth major expansion for Guild Wars 2 today, with Visions of Eternity now available to players on PC. Visions of Eternity adds a new island called Castora, with two new maps to explore, a new storyline, and plenty more. The new storyline kicks off with whispers and rumors about the island of Castora, with the Tyrian Alliance stepping in to uncover more about the magical island once they discover that the Inquest has begun sniffing around for it. Alongside two new maps included with the new expansion, Shipwreck Strand and Starlit Weald, players […]

Read full article at https://wccftech.com/guild-wars-2-visions-of-eternity-expansion-out-now-pc/

Tampo – Manage team and personal tasks in one app


Tampo is a modern task and team management platform built for startups and growing teams. It helps you organize projects, assign tasks, and collaborate seamlessly—all in one place. With features like multi-user assignments, real-time tracking, and smart filters, Tampo simplifies team coordination without sacrificing power. Designed to be fast, intuitive, and mobile-friendly, Tampo is the productivity partner your team needs to get more done, together.

View startup

Bill Gates urges world to ‘refocus’ climate goals, pushes back on emissions targets

Cipher executive editor Amy Harder and Bill Gates at the Breakthrough Energy Summit in Seattle on Oct. 19, 2022. (GeekWire Photo / Lisa Stiffler)

Less than two weeks ahead of the United Nations climate conference, Bill Gates posted a memo on his personal blog encouraging folks to just calm down about climate change.

“Although climate change will have serious consequences — particularly for people in the poorest countries — it will not lead to humanity’s demise. People will be able to live and thrive in most places on Earth for the foreseeable future,” Gates wrote.

The missive seems to run counter to earlier climate actions taken by the Microsoft co-founder and billionaire, but also echoes Gates’ long-held priorities and perspectives. In some regards, it’s the framing, timing and broader political context that heighten the memo’s impact.

What the world needs to do, he said, is to shift the goals away from reducing carbon emissions and keeping warming below agreed-upon temperature targets.

“This is a chance to refocus on the metric that should count even more than emissions and temperature change: improving lives,” he wrote. “Our chief goal should be to prevent suffering, particularly for those in the toughest conditions who live in the world’s poorest countries.”

More than four years ago, Gates published “How to Avoid a Climate Disaster,” a book highlighting the urgency and necessity of cutting carbon emissions and promoting the need to reduce “green premiums” in order to make climate-friendly technologies as cheap as unsustainable alternatives.

“It’ll be tougher than anything humanity’s ever done, and only by staying constant in working on this over the next 30 years do we have a chance to do it,” Gates told GeekWire in 2021. “Having some people who think it’s easy will be an impediment. Having people who think that it’s not important will be an impediment.”

Gates’ clean energy efforts go back even earlier. In 2006 he helped launch the next-gen nuclear company TerraPower, which is currently building its first reactor in Wyoming. In 2015 he founded Breakthrough Energy Ventures, a $1 billion fund to support carbon-cutting startups, which evolved into Breakthrough Energy, an umbrella organization tackling clean tech policies, funding for researchers and data generation.

Earlier this year, however, Gates began taking steps that suggested a cooling commitment to the challenge.

Roughly two months after President Trump took office in January, and as clean energy policies and funding began getting axed, Breakthrough Energy laid off staff. In May Gates announced he would direct nearly all of his wealth to his eponymous global health foundation, deploying $200 billion through the organization over two decades.

At the same time, many of the key points in the memo published today reflect statements that Gates has made in the past.

In both his new post and at a 2022 global climate summit organized in Seattle by Breakthrough Energy, Gates urged people to focus on reducing green premiums more than on cutting emissions as a key benchmark.

“If you keep the primary measures, which is the emissions reductions in the near term, you’re going to be very depressed,” Gates said. At his summit talk, he shared optimism that new innovations were arriving quickly and would address climate challenges.

A curious paradox in Gates’ stance is the reality that people living in lower-income nations and in regions important to the Gates Foundation are often hardest hit by the rising temperatures and natural disasters that are stoked by increased carbon emissions.

Gates acknowledged that truth in his post this week, and said that solutions such as engineering drought-tolerant crops and making air conditioning more widespread can address some of those harms. At the Seattle summit three years ago, one of the Breakthrough Energy executives likewise said the organization was going to increase its investment into technologies for adapting to climate change.

On Nov. 10, global climate leaders will meet in Brazil for COP30 to discuss climate progress and issues. Gates has often attended the event, but the New York Times reported that won’t be the case this year.

UN efforts meanwhile continue to emphasize the importance of reducing emissions. A statement today from the organization notes that while carbon emissions are curving downward, it’s not happening fast enough.

The world needs to raise its climate ambitions, the statement continues, “to avoid the worst climate impacts by limiting warming to 1.5°C this century, as science demands.”

GitHub's Agent HQ aims to solve enterprises' biggest AI coding problem: Too many agents, no central control

GitHub is making a bold bet that enterprises don't need another proprietary coding agent: They need a way to manage all of them.

At its Universe 2025 conference, the Microsoft-owned developer platform announced Agent HQ. The new architecture transforms GitHub into a unified control plane for managing multiple AI coding agents from competitors including Anthropic, OpenAI, Google, Cognition and xAI. Rather than forcing developers into a single agent experience, the company is positioning itself as the essential orchestration layer beneath them all.

Agent HQ represents GitHub's attempt to apply its collaboration platform approach to AI agents. Just as the company transformed Git, pull requests and CI/CD into collaborative workflows, it's now trying to do the same with a fragmented AI coding landscape.

The announcement marks what GitHub calls the transition from "wave one" to "wave two" of AI-assisted development. According to GitHub's Octoverse report, 80% of new developers use Copilot in their first week, and AI has helped drive a large overall increase in use of the GitHub platform.

"Last year, the big announcements for us, and what we were saying as a company, is wave one is done, that was kind of code completion," GitHub's COO Mario Rodriguez told VentureBeat. "We're into this wave two era, [which] is going to be multimodal, it's going to be agentic and it's going to have these new experiences that will feel AI native."

What is Agent HQ?

GitHub already updated its GitHub Copilot coding tool for the agentic era with the debut of GitHub Copilot Agent in May.

Agent HQ transforms GitHub into an open ecosystem that unites multiple AI coding agents on a single platform. Over the coming months, coding agents from Anthropic, OpenAI, Google, Cognition, xAI and others will become available directly within GitHub as part of existing paid GitHub Copilot subscriptions.

The architecture maintains GitHub's core primitives. Developers still work with Git, pull requests and issues. They still use their preferred compute, whether GitHub Actions or self-hosted runners. What changes is the layer above: agents from multiple vendors can now operate within GitHub's security perimeter, using the same identity controls, branch permissions and audit logging that enterprises already trust for human developers.

This approach differs fundamentally from standalone tools. When developers use Cursor or grant repository access to Claude, those agents typically receive broad permissions across entire repositories. Agent HQ compartmentalizes access at the branch level and wraps all agent activity in enterprise-grade governance controls.

Mission Control: One interface for all agents

At the heart of Agent HQ is Mission Control. It's a unified command center that appears consistently across GitHub's web interface, VS Code, mobile apps and the command line. Through Mission Control, developers can assign work to multiple agents simultaneously. They can track progress and manage permissions, all from a single pane of glass.

The technical architecture addresses a critical enterprise concern: Security. Unlike standalone agent implementations where users must grant broad repository access, GitHub's Agent HQ implements granular controls at the platform level.

"Our coding agent has a set of security controls and capabilities that are built natively into the platform, and that's what we're providing to all of these other agents as well," Rodriguez explained. "It runs with a GitHub token that is very locked down to what it can actually do."

Agents operating through Agent HQ can only commit to designated branches. They run within sandboxed GitHub Actions environments with firewall protections. They operate under strict identity controls. Rodriguez explained that even if an agent goes rogue, the firewall prevents it from accessing external networks or exfiltrating data unless those protections are explicitly disabled.

Technical differentiation: MCP integration and custom agents

Beyond managing third-party agents, GitHub is introducing two technical capabilities that set Agent HQ apart from alternative approaches like Cursor's standalone editor or Anthropic's Claude integration.

Custom agents via AGENTS.md files: Enterprises can now create source-controlled configuration files that define specific rules, tools and guardrails for how Copilot behaves. For example, a company could specify "prefer this logger" or "use table-driven tests for all handlers." This permanently encodes organizational standards without requiring developers to re-prompt every time.

"Custom agents have an immense amount of product market fit within enterprises, because they could just codify a set of skills that the coordination can do, then standardize on those and get really high quality output," Rodriguez said.

The AGENTS.md specification allows teams to version control their agent behavior alongside their code. When a developer clones a repository, they automatically inherit the custom agent rules. This solves a persistent problem with AI coding tools: Inconsistent output quality when different team members use different prompting strategies.
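To make that concrete, a hypothetical AGENTS.md checked into a repository might look something like the sketch below. The rule wording and file layout here are illustrative assumptions, not GitHub's published template.

```markdown
# AGENTS.md (illustrative sketch, not an official GitHub template)

## Coding conventions
- Prefer the team's structured logger over ad-hoc print statements.
- Use table-driven tests for all handlers.
- Never commit directly to main; work on a feature branch and open a pull request.

## Tools and boundaries
- Run the repository's existing lint and test tasks before proposing changes.
- Use only dependencies from the internal package registry.
```

Because the file is source-controlled, every clone of the repository inherits the same rules, which is how the consistency problem described above is meant to be addressed.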

Native Model Context Protocol (MCP) support: VS Code now includes a GitHub MCP Registry. Developers can discover, install and enable MCP servers with a single click. They can then create custom agents that combine these tools with specific system prompts.

This positions GitHub as the integration point between the emerging MCP ecosystem and actual developer workflows. MCP, introduced by Anthropic but rapidly gaining industry support, is becoming a de facto standard for agent-to-tool communication. By supporting the full specification, GitHub can orchestrate agents that need access to external services without each agent implementing its own integration logic.
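For readers unfamiliar with MCP itself, the sketch below shows roughly what an agent-facing tool server looks like using the open-source Python MCP SDK; the server name, tool, and stubbed return value are hypothetical and are not part of GitHub's registry.

```python
# Minimal MCP tool server sketch using the Python MCP SDK (assumes `pip install mcp`).
# The tool is a stub for illustration; a registry such as GitHub's would let developers
# discover and enable servers like this one from within VS Code.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("example-tools")

@mcp.tool()
def open_issue_count(repo: str) -> int:
    """Return a stubbed count of open issues for the given repository."""
    # A real server would query an issue tracker here.
    return 42

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio so an MCP-capable agent can call it
```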

Plan Mode and agentic code review

GitHub is also shipping new capabilities within VS Code itself. Plan Mode allows developers to collaborate with Copilot on building step-by-step project approaches. The AI asks clarifying questions before any code is written. Once approved, the plan can be executed either locally in VS Code or by cloud-based agents.

The feature addresses a common failure mode in AI coding: Beginning implementation before requirements are fully understood. By forcing an explicit planning phase, GitHub aims to reduce wasted effort and improve output quality.

More significantly, GitHub's code review feature is becoming agentic. The new implementation will use GitHub's CodeQL engine, which previously focused largely on security vulnerabilities, to also identify bugs and maintainability issues. The code review agent will automatically scan agent-generated pull requests before human review. This creates a two-stage quality gate.

"Our code review agent will be able to make calls into the CodeQL engine to then find a set of bugs," Rodriguez explained. "We're extending the engine and we're going to be able to tap into that engine also to find bugs."

Enterprise considerations: What to do now

For enterprises already deploying multiple AI coding tools, Agent HQ offers a path to consolidation without forcing tool elimination.

GitHub's multi-agent approach provides vendor flexibility and reduces lock-in risk. Organizations can test multiple agents within a unified security perimeter and switch providers without retraining developers. The tradeoff is potentially less optimized experiences compared to specialized tools that tightly integrate UI and agent behavior.

Rodriguez's recommendation is clear: Begin with custom agents. This allows enterprises to codify organizational standards that agents follow consistently. Once established, organizations can layer in additional third-party agents to expand capabilities.

"Go and do agent coding, custom agents and start playing with that," he said. "That is a capability available tomorrow, and it allows you to really start shaping your SDLC to be personalized to you, your organization and your people."

Intuit learned to build AI agents for finance the hard way: Trust lost in buckets, earned back in spoonfuls

Building AI for financial software requires a different playbook than consumer AI, and Intuit's latest QuickBooks release provides an example.

The company has announced Intuit Intelligence, a system that orchestrates specialized AI agents across its QuickBooks platform to handle tasks including sales tax compliance and payroll processing. These new agents augment existing accounting and project management agents (which have also been updated) as well as a unified interface that lets users query data across QuickBooks, third-party systems and uploaded files using natural language.

The new developments follow years of investment and improvement in Intuit's GenOS, allowing the company to build AI capabilities that reduce latency and improve accuracy.

But the real news isn't what Intuit built — it's how they built it and why their design decisions will make AI more usable. The company's latest AI rollout represents an evolution built on hard-won lessons about what works and what doesn't when deploying AI in financial contexts.

What the company learned is sobering: Even when its accounting agent improved transaction categorization accuracy by 20 percentage points on average, it still received complaints about errors.

"The use cases that we're trying to solve for customers include tax and finance; if you make a mistake in this world, you lose trust with customers in buckets and we only get it back in spoonfuls," Joe Preston, Intuit's VP of product and design, told VentureBeat.

The architecture of trust: Real data queries over generative responses

Intuit's technical strategy centers on a fundamental design decision. For financial queries and business intelligence, the system queries actual data, rather than generating responses through large language models (LLMs).

Also critically important: That data isn't all in one place. Intuit's technical implementation allows QuickBooks to ingest data from multiple distinct sources: native Intuit data, OAuth-connected third-party systems like Square for payments and user-uploaded files such as spreadsheets containing vendor pricing lists or marketing campaign data. This creates a unified data layer that AI agents can query reliably.

"We're actually querying your real data," Preston explained. "That's very different than if you were to just copy, paste out a spreadsheet or a PDF and paste into ChatGPT."

This architectural choice means that the Intuit Intelligence system functions more as an orchestration layer. It's a natural language interface to structured data operations. When a user asks about projected profitability or wants to run payroll, the system translates the natural language query into database operations against verified financial data.
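Intuit has not published its implementation, but the pattern Preston describes, translating a question into a query over verified data instead of letting the model generate the answer, can be sketched roughly as follows. The function names, schema, and routing logic are assumptions made for illustration only.

```python
# Rough sketch of a "query translation" layer over verified data (illustrative only;
# none of these names come from Intuit's GenOS stack).
import sqlite3

def question_to_sql(question: str) -> tuple[str, tuple]:
    """Stand-in for an LLM call that returns a read-only (sql, params) pair."""
    # A real system would prompt the model with the schema; one mapping is hard-coded here.
    if "profit" in question.lower():
        return ("SELECT SUM(amount) FROM invoices WHERE status = ?", ("paid",))
    raise ValueError("unsupported question")

def answer(question: str, db_path: str = "books.db") -> float:
    sql, params = question_to_sql(question)
    # Guardrail: only read-only queries are executed, and the result comes from real
    # records rather than from generated text.
    if not sql.lstrip().upper().startswith("SELECT"):
        raise ValueError("only SELECT queries are allowed")
    with sqlite3.connect(db_path) as conn:
        (total,) = conn.execute(sql, params).fetchone()
    return total or 0.0
```

The point of the pattern is that the model's output is checked and executed against the ledger, so a wrong answer surfaces as an inspectable query rather than as a fluent but unverifiable paragraph.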

This matters because Intuit's internal research has uncovered widespread shadow AI usage. When surveyed, 25% of accountants using QuickBooks admitted they were already copying and pasting data into ChatGPT or Google Gemini for analysis.

Intuit's approach treats AI as a query translation and orchestration mechanism, not a content generator. This reduces the hallucination risk that has plagued AI deployments in financial contexts.

Explainability as a design requirement, not an afterthought

Beyond the technical architecture, Intuit has made explainability a core user experience across its AI agents. This goes beyond simply providing correct answers: It means showing users the reasoning behind automated decisions.

When Intuit's accounting agent categorizes a transaction, it doesn't just display the result; it shows the reasoning. This isn't marketing copy about explainable AI, it's actual UI displaying data points and logic.

"It's about closing that trust loop and making sure customers understand the why," Alastair Simpson, Intuit's VP of design, told VentureBeat.

This becomes particularly critical when you consider Intuit's user research: While half of small businesses describe AI as helpful, nearly a quarter haven't used AI at all. The explanation layer serves both populations: Building confidence for newcomers, while giving experienced users the context to verify accuracy.

The design also enforces human control at critical decision points. This approach extends beyond the interface. Intuit connects users directly with human experts, embedded in the same workflows, when automation reaches its limits or when users want validation.

Navigating the transition from forms to conversations

One of Intuit's more interesting challenges involves managing a fundamental shift in user interfaces. Preston described it as having one foot in the past and one foot in the future.

"This isn't just Intuit, this is the market as a whole," said Preston. "Today we still have a lot of customers filling out forms and going through tables full of data. We're investing a lot into leaning in and questioning the ways that we do it across our products today, where you're basically just filling out, form after form, or table after table, because we see where the world is headed, which is really a different form of interacting with these products."

This creates a product design challenge: How do you serve users who are comfortable with traditional interfaces while gradually introducing conversational and agentic capabilities?

Intuit's approach has been to embed AI agents directly into existing workflows. This means not forcing users to adopt entirely new interaction patterns. The payments agent appears alongside invoicing workflows; the accounting agent enhances the existing reconciliation process rather than replacing it. This incremental approach lets users experience AI benefits without abandoning familiar processes.

What enterprise AI builders can learn from Intuit's approach

Intuit's experience deploying AI in financial contexts surfaces several principles that apply broadly to enterprise AI initiatives.

Architecture matters for trust: In domains where accuracy is critical, consider whether you need content generation or data query translation. Intuit's decision to treat AI as an orchestration and natural language interface layer dramatically reduces hallucination risk and avoids using AI as a generative system.

Explainability must be designed in, not bolted on: Showing users why the AI made a decision isn't optional when trust is at stake. This requires deliberate UX design. It may constrain model choices.

User control preserves trust during accuracy improvements: Intuit's accounting agent improved categorization accuracy by 20 percentage points. Yet, maintaining user override capabilities was essential for adoption.

Transition gradually from familiar interfaces: Don't force users to abandon forms for conversations. Embed AI capabilities into existing workflows first. Let users experience benefits before asking them to change behavior.

Be honest about what's reactive versus proactive: Current AI agents primarily respond to prompts and automate defined tasks. True proactive intelligence that makes unprompted strategic recommendations remains an evolving capability.

Address workforce concerns with tooling, not just messaging: If AI is meant to augment rather than replace workers, provide workers with AI tools. Show them how to leverage the technology.

For enterprises navigating AI adoption, Intuit's journey offers a clear directive. The winning approach prioritizes trustworthiness over capability demonstrations. In domains where mistakes have real consequences, that means investing in accuracy, transparency and human oversight before pursuing conversational sophistication or autonomous action.

Simpson frames the challenge succinctly: "We didn't want it to be a bolted-on layer. We wanted customers to be in their natural workflow, and have agents doing work for customers, embedded in the workflow."

Ferguson’s AI balancing act: Washington governor wants to harness innovation while minimizing harms

Washington Gov. Bob Ferguson speaks at Seattle AI Week, at the AI House on Pier 70 along the city’s waterfront. (GeekWire Photo / Todd Bishop)

Washington state Gov. Bob Ferguson is threading the needle when it comes to artificial intelligence.

Ferguson made a brief appearance at the opening reception for Seattle AI Week on Monday evening, speaking at AI House on Pier 70 about his approach to governing the consequential technology.

“I view my job as maximizing the benefits and minimizing harms,” said Ferguson, who took office earlier this year.

Ferguson called AI one of the “top five biggest challenges” he thinks about daily, both professionally and personally.

In a follow-up interview with GeekWire, the governor said AI “could totally transform our government, as well as the private sector, in many ways.”

His comments came just as Amazon, the largest employer in Washington state, said it would eliminate about 14,000 corporate jobs, citing a need to reduce bureaucracy and become more efficient in the new era of artificial intelligence.

Ferguson told the crowd that the future of work and “loss of jobs that come with the technology” is on his mind.

The governor highlighted Washington’s AI Task Force, created during his tenure as attorney general, which is studying issues from algorithmic bias to data security. The group’s next set of recommendations arrives later this year and could shape upcoming legislation, he said.

States are moving ahead with their own AI rules in the absence of a comprehensive federal framework. Washington appears to sit in the pragmatic middle of this fast-moving regulatory landscape — using executive action and an expert task force to build guidelines, while watching experiments in states such as California and Colorado.

Seattle city leaders are also getting involved. Seattle Mayor Bruce Harrell last month announced a “responsible AI plan” that provides guidelines for Seattle’s use of artificial intelligence and its support of the AI tech sector as an economic driver.

(GeekWire Photo / Taylor Soper)

Ferguson said he’s aware of how AI can “really revolutionize our economy and state in so many ways,” from healthcare to education to wildfire detection.

But he also flagged his concerns — both as a policymaker and parent. The governor, who has 17-year-old twins, said he worries about the technology’s impact on young people, referencing reports of teen suicides linked to AI chatbots.

Despite those concerns, Ferguson maintained an upbeat tone during his remarks at Seattle AI Week, citing the region’s technical talent and economic opportunity from the technology.

He noted that the state, amid a $16 billion budget shortfall this year, kept $300,000 in funding for the AI House, the new waterfront startup hub that hosted Monday’s event.

“There is no better place anywhere in the United States for this innovation than right here in the Northwest,” he said.

Related: A tale of two Seattles in the age of AI: Harsh realities and new hope for the tech community

Helion gives behind-the-scenes tour of secretive 60-foot fusion prototype as it races to deployment

Stacks of pallets containing power units that deliver massive pulses of energy to Helion’s Polaris fusion generator. (Helion Photo)

EVERETT, Wash. — In an industrial stretch of Everett is a boxy, windowless building called Ursa. Inside that building is a vault built from concrete blocks up to 5 feet thick with an additional layer of radiation-absorbing plastic. Within that vault is Polaris, a machine that could change the world.

Helion Energy is trying to replicate the physics that fuel the sun and the stars — hence the celestial naming theme — to provide nearly limitless power on earth through fusion reactions.

The company recently invited a small group of journalists to visit its headquarters and see Polaris, which is the seventh iteration of its fusion generator and the prototype for a commercial facility called Orion that broke ground this summer in Malaga in Central Washington.

David Kirtley, Helion CEO, at the Malaga, Wash., site where the company broke ground this summer on its planned commercial fusion plant. (LinkedIn Photo)

Few people outside of Helion have been provided such access; photographs were not allowed.

“We run these systems right now at 100 million degrees, about 10 times the temperature of the sun, and compress them to high pressure… the same pressure as the bottom of the Marianas Trench,” said Helion CEO and co-founder David Kirtley, referencing the deepest part of the ocean.

Polaris and its vault occupy a relatively small footprint inside Ursa. The majority of the space is filled with 2,500 power units. They’re configured into 4-foot-by-4-foot pallets, lined up in rows and stacked seven high. The units are packed with capacitors that are charged from the grid to provide super-high-intensity pulses of electricity — 100 gigawatts of peak power — that create the temperatures and pressure needed for fusion reactions.

All of that energy is carried through miles and miles of coaxial cables filled with copper, aluminum and custom-metal alloys. End-to-end, the cables would stretch across Washington state and back again — roughly 720 miles. They flow in thick, black bundles from the pallets into the vault. They curl on the floor in giant heaps before connecting to the tubular-shaped, 60-foot-long Polaris generator.

The ultimate goal is for the generator to force lightweight ions to fuse, creating a super hot plasma that expands, pushing on a magnetic field that surrounds it. The energy created by that expansion is directly captured and carried back to the capacitors to recharge them so the process can be repeated over and over again.

And the small amount of extra power that’s produced by fusion goes into the electrical grid for others to use — or at least that’s the plan for the future.

‘Worth being aggressive’

Helion is building fusion generators that smash together deuterium and helium-3 isotopes in super hot, super high pressure conditions to produce power. (Helion Illustration)

Helion is a contender in a global race to generate fusion power for a rapidly escalating demand for electricity, driven in part by data centers and AI. No one so far has been able to make and capture enough energy from fusion to commercialize the process, but dozens of companies — including three other competitors in the Pacific Northwest — are trying.

The company aims to begin producing energy at the Malaga site by 2028, power that Microsoft has agreed to purchase. If it hits this extremely ambitious target — and many are highly skeptical — it could be the first company in the world to do so.

“There is a level of risk, of being aggressive with program development, new technology and timelines,” Kirtley said. “But I think it’s worth it. Fusion is the same process that happens in the stars. It has the promise of very low cost electricity that’s clean and safe and base load and always on. And so it’s worth being aggressive.”

Some in the sector worry that Helion will miss the mark and cast doubt on a sector that is working hard to prove itself. At a June event, the head of R&D for fusion competitor Zap Energy questioned Helion’s deadline.

“I don’t see a commercial application in the next few years happening,” said Ben Levitt. “There is a lot of complicated science and engineering still to be discovered and to be applied.”

Others are willing to take the bet. Helion has raised more than $1 billion from investors that include SoftBank, Lightspeed Venture Partners and Sam Altman, who is OpenAI’s CEO and co-founder, as well as Helion’s longtime chair of its board of directors. The company is able to unlock an additional $1.8 billion if it hits Polaris milestones.

The generator has been operating since December, running all day, five days a week, creating fusion, Kirtley said.

Energy without ignition

A section of Trenta, Helion’s sixth fusion generator prototype, which is no longer in service. (GeekWire File Photo / Lisa Stiffler)

Helion is highly cautious — some would say too cautious — in sharing details on its progress. Helion officials say they must hold their tech close to the vest as Chinese competitors have stolen pieces of their intellectual property; critics say the secrecy makes it difficult for the scientific community to verify their likelihood of success in a very risky, highly technical field.

In August, Kirtley shared an online post about Helion’s power-producing strategy, which upends the conventional approach.

Most efforts are trying to achieve ignition in their fusion generators, which is a condition where the reactions produce more power than is required for fusion to occur. This feat was first accomplished at a national lab in California in 2022 — but even that did not yield enough energy to put electricity on the grid.

Helion is not aiming for ignition but rather for a system that is so efficient it can capture enough energy from fusion without reaching that state.

Kirtley compares the strategy for producing power to regenerative braking in electric vehicles. Simply put, an EV’s battery gets the car moving, and regenerative braking by the driver puts energy back into the battery to help it run longer. In the fusion generator, the capacitors provide that initial power, and the fusion reaction resupplies the energy and a little bit more.

“We can recover electricity at high efficiency,” Kirtley said. Compared to other commercial fusion approaches, “we require a lot less fusion. Fusion is the hard part. My goal, ironically, is to do the minimum amount of fusion that we can deliver a product to the customer and generate electricity.”
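A toy energy balance makes the point behind that "minimum amount of fusion" goal clearer; the efficiency and gain figures below are illustrative assumptions, not Helion's numbers.

```python
# Toy per-pulse energy balance for a pulsed fusion machine with direct electricity recovery.
# All figures are illustrative assumptions, not data from Helion.

def net_energy_per_pulse(e_in: float, recovery_eff: float, fusion_gain: float) -> float:
    """Energy returned to the capacitors minus the energy they supplied."""
    e_fusion = fusion_gain * e_in                   # energy released by fusion reactions
    e_recovered = recovery_eff * (e_in + e_fusion)  # expansion pushes back on the magnetic field
    return e_recovered - e_in

# At 95% recovery, a fusion gain of just 10% already yields a small surplus per pulse:
print(net_energy_per_pulse(e_in=100.0, recovery_eff=0.95, fusion_gain=0.10))  # ~4.5
# At 50% recovery, the fusion output would have to match the input energy just to break even:
print(net_energy_per_pulse(e_in=100.0, recovery_eff=0.50, fusion_gain=1.00))  # 0.0
```

In other words, the more efficiently the machine recovers the energy it puts in, the less fusion it needs before each pulse comes out ahead, which is the trade-off Kirtley is describing.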

The glow from a super hot plasma generated inside Polaris, Helion’s seventh fusion prototype device. (Helion Photo)
