Reading view

(PR) AAEON Releases UP Xtreme ARL Edge

AAEON's UP brand, predominantly known for its industrial-grade developer board series, today announced the release of the UP Xtreme ARL Edge, its first Mini PC to feature the new Intel Core Ultra 200H Series platform (formerly Arrow Lake). Primarily designed to bring AI functionality to applications such as industrial robots and AMRs, the UP Xtreme ARL Edge boasts a ruggedized, fanless enclosure capable of operating across a -20°C to 60°C temperature range. The Mini PC also supports a wide 9 V to 36 V DC power input range and offers impressive resistance to shock and vibration.

Despite its fanless operation, the UP Xtreme ARL Edge offers a choice of Intel Core Ultra (Series 2) processors, with default models featuring the Intel Core Ultra 5 processor 225H, Intel Core Ultra 7 processor 255H, or Intel Core Ultra 7 processor 265H. The latter can leverage the platform's enhanced CPU, GPU, and NPU to deliver up to 97 TOPS of AI performance.

(PR) Electronic Arts Reports Q2 FY26 Results

Electronic Arts Inc. today announced preliminary financial results for its second fiscal quarter ended September 30, 2025. "Across our broad portfolio - from EA SPORTS to Battlefield, The Sims, and skate. - our teams continue to create high-quality experiences that connect and inspire players around the world," said Andrew Wilson, CEO of Electronic Arts. "The creativity, passion, and innovation of our teams are at the heart of everything we do."

Selected Operating Highlights and Metrics
  • Net bookings for the quarter totaled $1.818 billion, down 13% year-over-year, driven largely by the extraordinary release of College Football 25 in the prior year period.
  • EA SPORTS Madden NFL 26 delivered net bookings growth year-over-year in the quarter, with players returning to the title.
  • Apex Legends returned to net bookings growth on a year-over-year basis in Q2, growing double digits, as the team continues to deliver new experiences that drove deeper engagement.
  • EA SPORTS FC 26 HD net bookings were up mid single digits year-over-year versus EA SPORTS FC 25 HD net bookings in the quarter, after adjusting for differences in deluxe edition content timing.
  • The successful launches of skate. and Battlefield 6 underscore the strength of EA's long-term strategy to build community-driven experiences centered on creativity, connection, and long-term growth.

(PR) Logitech Announces Q2 Fiscal Year 2026 Results

Logitech International today announced financial results for the second quarter of Fiscal Year 2026.
  • Sales were $1.19 billion, up 6 percent in US dollars and 4 percent in constant currency compared to Q2 of the prior year.
  • GAAP gross margin was 43.4 percent, down 20 basis points compared to Q2 of the prior year. Non-GAAP gross margin was 43.8 percent, down 30 basis points compared to Q2 of the prior year.
  • GAAP operating income was $191 million, up 19 percent compared to Q2 of the prior year. Non-GAAP operating income was $230 million, up 19 percent compared to Q2 of the prior year.
  • GAAP earnings per share (EPS) was $1.15, up 21 percent compared to Q2 of the prior year. Non-GAAP EPS was $1.45, up 21 percent compared to Q2 of the prior year.
  • Cash flow from operations was $229 million. The quarter-ending cash balance was $1.4 billion.
  • The Company returned $340 million to shareholders through its annual dividend payment and share repurchases.

Turtle Beach Launches PC Edition of Victrix Pro BFG Reloaded Modular Controller

Leading gaming accessories maker Turtle Beach Corporation and its Victrix brand today announced the launch of the new Victrix Pro BFG Reloaded Wireless Modular Controller - PC Edition. The Victrix Pro BFG controllers have long been coveted by competitive esports gamers the world over, and the refined PC Edition adds powerful new features. These upgrades include magnetic, anti-drift Hall Effect thumbsticks and triggers, a new touch-sensitive trackpad on the front of the controller, back buttons that can be mapped to any controller input as well as to keyboard and mouse inputs, and a 1 kHz polling rate for even faster input when using the controller in wired mode.

The Victrix Pro BFG Reloaded Wireless Modular Controller - PC Edition is built for serious PC gamers and is available in North America as a Best Buy retail exclusive and directly from Turtle Beach at www.turtlebeach.com. Globally, the controller is also available on turtlebeach.com and participating retailers for $189.99|£159.99|€179.99 MSRP.

(PR) SK hynix Announces 3Q25 Financial Results

SK hynix Inc. announced today that it has recorded 24.4489 trillion won in revenues, 11.3834 trillion won in operating profit (with an operating margin of 47%), and 12.5975 trillion won in net profit (with a net margin of 52%) in the third quarter. The company achieved its highest-ever quarterly performance, driven by the full-scale rise in prices of DRAM and NAND, as well as the increasing shipments of high-performance products for AI servers. In particular, operating profit exceeded 10 trillion won for the first time in the company's history.

As demand across the memory segment has soared on the back of customers' expanding investments in AI infrastructure, SK hynix once again surpassed the previous quarter's record-high performance, driven by increased sales of high value-added products such as 12-high HBM3E and DDR5 for servers. Amid surging demand for AI servers, shipments of high-capacity DDR5 modules of 128 GB or more have more than doubled from the previous quarter. In NAND, the portion of AI server eSSDs, which command a price premium, expanded significantly as well. Building on this strong performance, the company's cash and cash equivalents at the end of the third quarter increased by 10.9 trillion won from the previous quarter, reaching 27.9 trillion won. Meanwhile, interest-bearing debt stood at 24.1 trillion won, enabling the company to successfully transition to a net cash position of 3.8 trillion won.

Intel Nova Lake LGA1954 Socket Keeps Cooler Compatibility with LGA1851: Thermaltake

A leaked product installation guide from Thermaltake confirms that Intel's upcoming desktop socket for its Core Ultra 400 series "Nova Lake-S" processors, the LGA1954, will retain cooler compatibility with the current LGA1851 and the previous LGA1700. This was rumored as far back as May 2025, but it now has confirmation from a major CPU cooler manufacturer. It means you should be able to reuse CPU coolers purchased as far back as 2021 for a 12th Gen Core "Alder Lake" build with a future "Nova Lake" build. The LGA1954 socket and package are expected to have similar physical dimensions to current Intel desktop chips, with the company increasing the socket pin count by reducing the size of the contact points and making the island—the central region of the land grid that carries some SMDs—smaller.

(PR) QNAP Launches All-Flash NASbook TBS-h574TX with Pre-installed Enterprise E1.S SSDs

QNAP Systems, Inc., a leading computing and storage solutions innovator, today announced new models of the acclaimed TBS-h574TX all-flash NASbook, which come pre-installed with enterprise-grade E1.S SSDs. Available in two raw capacities, 9.6 TB or 19.2 TB, the new models are purpose-built for high-throughput post-production workflows including video editing, visual effects (VFX), and animation. With support for hot-swappable E1.S SSDs, the TBS-h574TX enables uninterrupted ingest-to-delivery operations—empowering on-location shoots, small-scale video production teams, SOHO users, and mobile media professionals to collaborate seamlessly and maintain peak productivity.

"Speed and reliability are critical in media production. By integrating QNAP-validated E1.S SSDs into the TBS-h574TX, users no longer need to worry about drive compatibility. They can power on, configure, and get straight to editing." said Andy Chuang, Product Manager of QNAP, adding "This NASbook combines portable design, all-flash performance, and hot-swappable SSDs to offer a uniquely compact, powerful, and zero-downtime experience—so teams can focus on creativity anytime, anywhere with peace of mind."

(PR) Seagate Technology Reports Fiscal First Quarter 2026 Financial Results

Seagate Technology Holdings plc (NASDAQ: STX), a leading innovator of mass-capacity data storage, today reported financial results for its fiscal first quarter ended October 3, 2025. "Seagate delivered strong September quarter results, with revenue growth of 21% year-over-year and non-GAAP EPS exceeding the high end of our guided range. Our performance underscores the team's strong execution and robust customer demand for our high-capacity storage products," said Dave Mosley, Seagate's chair and chief executive officer.

"With clear visibility into sustained demand strength, we are ramping shipments of our areal density-leading Mozaic HAMR products, which are now qualified with five of the world's largest cloud customers. These products address customers' performance, durability and TCO needs at scale to continue supporting demand for existing use cases such as social media video platforms as well as growth driven by new AI applications. AI is transforming how content is being consumed and generated, increasing the value of data and storage and Seagate is well positioned for continued profitable growth," Mosley concluded.

(PR) Durabook Introduces Next-Generation R10 Copilot+ PC Rugged Tablet

Durabook, the global rugged mobile solutions brand owned by Twinhead International Corporation, today announced the launch of its next-generation AI-powered fully rugged R10 tablet. Equipped with high-performance Intel Core Ultra 200V series processors, the 10" device is one of the first Copilot+ PC rugged tablets on the market. Redefining versatility in the tablet world, the R10 can be paired with a detachable backlit keyboard, seamlessly transforming it into a 2-in-1 rugged laptop PC. This adaptable design delivers the ideal balance of performance, reliability, and mobility, empowering users with a powerful and intelligent rugged device that fuses cutting-edge AI capabilities with Durabook's hallmark durability and field-proven design.

Twinhead's CEO, Fred Kao, said: "Durabook devices are built to meet the needs of professionals who depend on powerful, reliable technology to stay productive in any environment. The compact and versatile R10 redefines the 10-inch rugged tablet category by providing AI-enhanced productivity supported by smart engineering for optimal usability. The R10's adaptive design and customisation capability make it the perfect partner for field service operatives working across a wide range of sectors, including industrial manufacturing, warehouse management, automotive diagnostics, public safety, utilities, transport and logistics."

Seattle startup TestSprite raises $6.7M to become ‘testing backbone’ for AI-generated code

TestSprite founders Yunhao Jiao (left) and Rui Li. (TestSprite Photo)

In the era of AI-generated software, developers still need to make sure their code is clean. That’s where TestSprite wants to help.

The Seattle startup announced $6.7 million in seed funding to expand its platform that automatically tests and monitors code written by AI tools such as GitHub Copilot, Cursor, and Windsurf.

TestSprite’s autonomous agent integrates directly into development environments, running tests throughout the coding process rather than as a separate step after deployment.

“As AI writes more code, validation becomes the bottleneck,” said CEO Yunhao Jiao. “TestSprite solves that by making testing autonomous and continuous, matching AI speed.”

The platform can generate and run front- and back-end tests during development to ensure AI-written code works as expected, help AI IDEs (Integrated Development Environments) fix software based on TestSprite’s integration testing reports, and continuously update and rerun test cases to monitor deployed software for ongoing reliability.

Founded last year, TestSprite says its user base grew from 6,000 to 35,000 in three months, and revenue has doubled each month since launching its 2.0 version and new Model Context Protocol (MCP) integration. The company employs about 25 people.

Jiao is a former engineer at Amazon and a natural language processing researcher. He co-founded TestSprite with Rui Li, a former Google engineer.

Jiao said TestSprite doesn’t compete with AI coding copilots, but complements them by focusing on continuous validation and test generation. Developers can trigger tests using simple natural-language commands, such as “Test my payment-related features,” directly inside their IDEs.

The seed round was led by Bellevue, Wash.-based Trilogy Equity Partners, with participation from Techstars, Jinqiu Capital, MiraclePlus, Hat-trick Capital, Baidu Ventures, and EdgeCase Capital Partners. Total funding to date is about $8.1 million.

Crash Bandicoot Netflix series in the works – reports claim

It looks like Crash Bandicoot is the newest video game classic to make the move to Netflix. Netflix is the king of video game adaptations. In recent years, the streaming service has adapted Castlevania, Tomb Raider, Splinter Cell, Sonic the Hedgehog, and even Cyberpunk 2077. Now, the streaming giant is reportedly developing a new animated series based on Crash […]

I Was Fired From My Own Startup. Here’s What Every Founder Should Know About Letting Go

By Yakov Filippenko

No founder plans for the day they get fired from their own company.

You plan for funding rounds, product launches and exits, but not for the boardroom moment when everyone raises their hand, and you realize your journey inside the company is over.

It happened to me. I called that board meeting. I set the vote. We had to choose who would stay, me or my co-founder. The vote didn’t go my way.

In movies, this is where the music swells and the credits roll. Steve Jobs after John Sculley. Travis Kalanick after Bill Gurley. In real life, there’s no cinematic pause. No final scene. Just the quiet realization that everything you built now belongs to someone else.

What follows isn’t drama, either. It’s disorientation. And like most founders, I had no idea how to handle it.

Don’t fill the silence too fast

Yakov Filippenko, founder and CEO at Intch

When it ended, I filled my calendar with aimless meetings. Five or six a day. Not because they had any real purpose, but because it felt strange not to be doing business. For more than 10 years, I’d never had a day when I didn’t have to think about work. A startup teaches you to fix things fast.

When you’re out, though, there’s nothing left to fix. Only yourself. Getting pushed out isn’t like missing a quarterly target. It’s like losing the story you’ve been telling yourself for years.

The hardest part is that you don’t know who to blame.

Investors? They were doing their job. Yourself? Every decision made sense in context. So the frustration lands on the person closest to you. Your co-founder. It’s not about logic. I would say it is more of a defense mechanism. It’s how the mind tries to make sense of loss.

Learn to see the pattern

For months, I kept asking: What did we do wrong? It took me a couple of years to see the pattern.

Later, working inside a venture fund helped me see the truth. I saw the same story play out again and again. Founders repeating the same emotional arc, as follows:

  • Expectation of an M&A deal;
  • Long wait for the deal;
  • The deal collapses;
  • The startup stalls;
  • Expectations diverge; and then
  • Resentment between co-founders.

Every time, the same sequence. And when the dream fades, blame fills the gap.

The pattern itself is that the anger toward a co-founder is often a projection of disappointment from a failed deal. If that energy isn’t processed consciously, it finds its own way out, usually as anger. You can’t really be mad at yourself; you did everything right. The other side acted in their own interest. So it lands on the people closest to you: your co-founder and your team. And for them, it lands on you.

And that’s where I have a bit of a grievance with investors, because they often see this dynamic coming and could at least warn founders about it.

Once I recognized the pattern, I stopped seeing my story as a failure. It was part of a cycle almost every founder goes through, only most don’t talk about it.

Trade strategy for emotional tools

Traditional business tools didn’t help. OKRs, planning sessions, strategy off-sites, none of it worked on the inner collapse that comes when your identity and your company split apart.

This led me to begin studying Gestalt therapy. It gave me the language to understand how situations like this actually work - their cycles, causes, and effects - and how to think about them with the right awareness and perspective. Part of building startups isn’t about pivots or fundraising; it’s realizing how much of yourself you’ve tied to the story you’re telling the world.

The point is to first get conscious of your anger, and then let it out.

Acceptance comes in stages

Acceptance doesn’t show up all at once. It arrives in pieces.

For me, the first piece came when I watched another founder go through the same breakdown and recognized every stage.

The second came when my first startup was acquired. Not at the valuation I’d dreamed of, but enough to accept that it continued without me. The third came with my current company, Intch, which is built from calm, not from fear.

I no longer measure success by control, but by clarity.

What I’d tell a founder in that room

Here’s what I’d share now with another entrepreneur who finds themselves in the same situation.

  • You’re losing a story, not your worth. Give yourself space to grieve it.
  • Don’t let anger choose a target. Name the pattern instead.
  • Find mirrors. Other founders are walking through the same steps.
  • Business tools have limits. Emotional tools matter here.
  • Acceptance comes in stages. You’ll recognize them when they arrive.

Founders are trained to manage everything except their own psychology. But startups are way more than capital and code. They run on the emotional architecture of the people who build them. And when that structure breaks, rebuilding it is the most important startup you’ll ever work on.


 Yakov Filippenko is a seasoned entrepreneur with more than 10 years of experience in IT and technologies, as well as scaling businesses internationally. As a product manager at Yandex, he led a team that grew the product’s user base from 500,000 to 1.2 million and secured its entry into the international market. Subsequently, he co-founded SailPlay, which he scaled to 45 countries and eventually exited, after it was acquired by Retail Rocket in 2018. In 2021, Filippenko launched Intch, an AI-powered platform connecting part-time professionals with flexible roles.

Illustration: Dom Guzman

Ninja Gaiden 4 – Achieve S Rank Easily in All Missions With This Trick

With its many accessibility options, Ninja Gaiden 4 is one of the most approachable entries in the series, allowing players to experience its high-speed action without letting the high challenge level get in the way of enjoyment. Those who want to truly appreciate the game, however, will wish to master many of its intricacies, put their skills to the test, and attempt to achieve an S rank for completing each of the story chapters. Here are some tips to help you understand the mission scoring system and what you should always strive to do to achieve such a high rank. […]

Read full article at https://wccftech.com/how-to/ninja-gaiden-4-achieve-s-rank-easily-in-all-missions-with-this-trick/

Apple Is ‘Not Yet In Talks With TSMC’ For Its A16 Process, As Its Current Focus Likely Lies In Developing Several 2nm Chipsets Next Year

Apple has not entered talks with TSMC to use its A16, or 1.6nm process

The A20 and A20 Pro will be Apple’s first chipsets fabricated on TSMC’s 2nm process, pretty much highlighting the company’s propensity to jump to the newest manufacturing nodes as quickly as possible to have an advantage over the competition. On the same lithography, we expect the California-based giant to introduce a total of four chipsets, and after a couple of generations, Apple will switch to an even more advanced technology. The most obvious transition would be TSMC’s A16, or 1.6nm, but a report says neither company has entered talks for this node. Future Apple chipsets are expected to take advantage of […]

Read full article at https://wccftech.com/apple-not-yet-in-talks-with-tsmc-over-a16-process/

John Romero Says He’s Talking with Many Companies to Finish the Game That Was Being Funded by Microsoft

John Romero might not be a name that youngsters recognize, but he was a legend of the early days of the first-person shooter genre, co-founding id Software and making video games like Wolfenstein 3D, Doom, Hexen, and Quake, to name a few. Nowadays, he makes smaller titles at Romero Games. The studio's most recent title was the Mafia-themed turn-based strategy game Empire of Sin, launched in late 2020 to a mixed reception. More recently, John Romero and his fellow developers signed a deal with Microsoft for their next project, but that deal fell through amid the latest Xbox layoffs. The […]

Read full article at https://wccftech.com/john-romero-talking-companies-finish-game-funded-by-microsoft/

Microsoft CEO: We’re Now the Largest Gaming Publisher and Want to Be Everywhere; The Real Competitor Is TikTok

Microsoft CEO Satya Nadella was featured in a live interview on TBPN (Technology Business Programming Network) discussing various topics, including the company's updated multiplatform strategy on the gaming front. Nadella pointed out that following the acquisition of Activision Blizzard, Microsoft is now the largest gaming publisher in terms of revenue. The goal, then, is to be everywhere the consumer is, just like with Office. The Microsoft CEO then interestingly pointed to TikTok, or, to be more accurate, short-form video as a whole, as the true competitor of gaming. Remember, the biggest gaming business is the Windows business. And of course, […]

Read full article at https://wccftech.com/microsoft-ceo-largest-gaming-publisher-want-to-be-everywhere-competition-tiktok/

ZipWik – ZipWik transforms static files into a single, shareable smart link


ZipWik lets you turn several documents—PDFs, slides, images or spreadsheets—into one simple document and a link you can share anywhere, from WhatsApp to Slack. You can control who sees it, set it to expire, and skip the hassle of sending large attachments. ZipWik also shows you what happens after you share: who viewed it, how long they spent, whether they downloaded or shared it, and which documents got the most attention. It’s an easy way to share files, stay in control, and actually understand how people engage with your content.

No more struggling with large attachments, documents you cannot control, or the inability to combine document formats - ZipWik does it all. Try it today.

Active Exploits Hit Dassault and XWiki — CISA Confirms Critical Flaws Under Attack

Threat actors are actively exploiting multiple security flaws impacting Dassault Systèmes DELMIA Apriso and XWiki, according to alerts issued by the U.S. Cybersecurity and Infrastructure Security Agency (CISA) and VulnCheck. The vulnerabilities are listed below - CVE-2025-6204 (CVSS score: 8.0) - A code injection vulnerability in Dassault Systèmes DELMIA Apriso that could allow an attacker to

Arm’s GitHub Copilot Agentic AI: Cloud Migration’s Next Leap

Arm and GitHub's new Migration Assistant Custom Agent for GitHub Copilot Agentic AI fundamentally transforms cloud migration to Arm-based infrastructure.

Vesence lands $9M to bring rigorous AI review to law firms

Vesence's $9M seed round fuels its mission to embed rigorous AI review agents directly into Microsoft Office, promising law firms unparalleled precision and compliance.

Cartesia’s Sonic-3 TTS laughs and emotes at human speed

Cartesia's Sonic-3 uses a State Space Model architecture to deliver emotionally expressive AI speech, including laughter, at speeds faster than a human can respond.

Salesforce Agentic AI: The Enterprise Evolution

Salesforce's 'Agentic AI' strategy, featuring Agentforce 360 and Forward Deployed Engineers, aims to fundamentally redefine enterprise operations with unified, workflow-spanning AI agents.

Polygraf AI Closes $9.5M Funding Round to Scale Its Secure AI Solutions for Enterprise Defense and Intelligence

Polygraf AI, based in Austin, Texas, announced the closing of its $9.5M seed round, with participation from DOMiNO Ventures, Allegis Capital, Alumni Ventures, DataPower VC, and previous investors, to accelerate its mission to bring clarity and trust to enterprise AI. With the new $9.5M seed round, Polygraf AI is building the next generation of enterprise AI […]

NVIDIA BlueField-4 Powers AI Factory OS

NVIDIA BlueField-4 is poised to redefine AI infrastructure, offering unprecedented compute power, 800Gb/s throughput, and advanced security for gigascale AI factories.

Primaa raises €7M to advance AI cancer diagnostics

Biotech company Primaa raised €7 million to expand its AI software that helps pathologists improve the speed and accuracy of cancer diagnostics.

FitResume – AI Resume Generator, Job Tailoring and many more


Fitresume.app is a free, AI‑driven resume builder that generates ATS‑friendly resumes, lets you choose polished templates, and custom‑tailors every resume to the exact wording of each job description. Ready to download as a PDF in seconds. Beyond writing, it tracks every application, follow‑up, and interview while visualizing your entire pipeline with an interactive Sankey diagram, so you stay organised and land offers faster.

CEO of spyware maker Memento Labs confirms one of its government customers was caught using its malware

Security researchers found a government hacking campaign that relies on Windows spyware developed by surveillance tech maker Memento Labs. When reached by TechCrunch, the spyware maker's chief executive blamed a government customer for getting caught.

NVIDIA Tackles AI Energy Consumption with Gigawatt Blueprint

NVIDIA's Omniverse DSX blueprint provides a standardized, energy-efficient framework for designing and operating gigawatt-scale AI factories, directly addressing AI energy consumption.

OpenAI Restructuring and Amazon’s AI Paradox Reshape Tech Landscape

The capital requirements and strategic maneuvering defining the artificial intelligence frontier are starkly evident in recent developments, from OpenAI’s finalized restructuring to Amazon’s contrasting AI investment strategy. CNBC’s Morgan Brennan recently spoke with CNBC Business News reporter MacKenzie Sigalos, delving into the implications of these pivotal shifts for the broader tech ecosystem and workforce. Their […]

NVIDIA IGX Thor Powers Real-Time AI at the Industrial Edge

NVIDIA IGX Thor is an industrial-grade platform delivering 8x the AI compute of its predecessor, enabling real-time physical AI for critical industrial and medical applications.

Battlefield Launches RedSec Free-to-Play Battle Royale Spin-Off With up to 100-Player Matches

As predicted by the early leaks and rumors surrounding the new game, EA has officially announced Battlefield RedSec as a free-to-play battle royale game "built on Battlefield's iconic DNA," with the new battle royale shooter launching on PC via Steam, the Epic Games Store, and the EA App, and on PS5 and Xbox Series X|S consoles. Players will face off in a 100-player battle royale (in 25 teams of four or 50 teams of two), squad mode, and a mission-based elimination mode. The game is set in Fort Lyndon, a government testing facility in California that has become a war zone and the biggest Battlefield map to date. As you might expect of an urban setting, the battlefield varies from tight interior spaces to wide-open city streets, and much of the environment in RedSec is destructible.

Battlefield RedSec calls for, at minimum, an Intel Core i5-8400 or AMD Ryzen 5 2600, an AMD Radeon RX 5600 XT, NVIDIA GeForce RTX 2060, or Intel Arc A380, and 16 GB of RAM. As is increasingly the case with multiplayer games these days, RedSec also requires that players have TPM 2.0 and Secure Boot enabled, effectively locking out any potential gamers on Linux, including the Valve Steam Deck. RedSec also gives creators access to Portal, the updated Battlefield UGC and custom game creator, replete with all the vehicles and weapons from Battlefield RedSec.

NVIDIA Could Receive Approval for Blackwell AI Chip in China, Marking a Major “Bonus” for Its Market Share in the Region

NVIDIA's market position in China could see a significant boost following the Trump-Xi meeting, as President Trump hints at discussing 'Blackwell' AI chips for Beijing. NVIDIA's Blackwell AI chip will be a topic of discussion at the Trump-Xi meeting, with a potential breakthrough in sight. The Chinese market has been a significant challenge for Jensen Huang since the US-China trade hostilities began, and now, it seems like there might be a sigh of relief on the horizon for NVIDIA. According to a report by Bloomberg, President Trump has suggested discussing NVIDIA's Blackwell AI chip with his Chinese counterpart, indicating that chips could […]

Read full article at https://wccftech.com/nvidia-could-receive-approval-for-blackwell-ai-chip-in-china/

Filing: Amazon cuts more than 2,300 jobs in Washington state as part of broader layoffs

Amazon will lay off 2,303 corporate employees in Washington state, primarily in its Seattle and Bellevue offices, according to a filing with the state Employment Security Department that provides the first geographic breakdown of the company’s 14,000 global job cuts.

A detailed list included with the state filing shows a wide array of impacted roles, including software engineers, program managers, product managers, and designers, as well as a significant number of recruiters and human resources staff. 

Senior and principal-level roles are among those being cut, aligning with a company-wide push to use the cutbacks to help reduce bureaucracy and operate more efficiently.

Amazon announced the cuts Tuesday morning, part of a larger push by CEO Andy Jassy to streamline the company. Jassy had previously told Amazon employees in June that efficiency gains from AI would likely lead to a smaller corporate workforce over time.

In a memo from HR chief Beth Galetti, the company signaled that further cutbacks will continue into 2026. Reuters reported Monday that the layoffs could ultimately total as many as 30,000 people, a figure that remains possible as the cuts continue into next year.

NVIDIA AI-RAN: Open Source Rewrites Wireless Innovation

NVIDIA's open-sourcing of Aerial software, coupled with DGX Spark, is democratizing AI-native 5G and 6G development, accelerating wireless innovation at an unprecedented pace.

Celestica CEO Mionis: AI is a Must-Have Utility, Not a Bubble

“AI right now used to be a nice-to-have. It’s a utility, it’s a must-have.” This declarative statement from Celestica President and CEO Rob Mionis on CNBC’s Mad Money with Jim Cramer cuts directly to the core of the current technological zeitgeist. It frames artificial intelligence not as a speculative fad or a nascent technology still […]

Google AI Studio Unleashes “Vibe Coding” Revolutionizing AI Agent Development

The era of complex, code-heavy AI development is rapidly giving way to an intuitive, natural language-driven approach, dramatically democratizing creation. At the forefront of this shift is Google AI Studio, a platform designed to accelerate the journey from concept to fully functional AI application in minutes. This new “vibe coding” experience, showcased by Logan Kilpatrick, […]

AI Fusion Energy: NVIDIA, GA Unveil Digital Twin Breakthrough

NVIDIA and General Atomics have launched an AI-enabled digital twin for fusion reactors, dramatically accelerating the path to commercial AI fusion energy.

CyberRidge raises $26M to advance optical security for fiber networks

CyberRidge launched with $26 million to develop its optical security system that protects data in fiber-optic networks from eavesdropping.

Apple Bringing Water Resistance To iPad mini, OLED Displays To The MacBook Air, iPad Air, And iPad mini

Apple is finally getting ready to introduce OLED displays in a wider range of its products. However, don't expect a broad-based debut soon, especially given the Cupertino giant's tendency to move at a glacial pace when introducing new technology. Apple is gearing up to introduce OLED displays in future versions of the MacBook Air, iPad Air, and iPad mini, with water resistance added for good measure. Bloomberg's legendary tipster, Mark Gurman, is out with another scoop today, focusing on a much-anticipated display overhaul for the MacBook Air, iPad Air, and iPad mini, all of which are now slated to […]

Read full article at https://wccftech.com/apple-is-testing-oled-displays-for-the-macbook-air-ipad-air-and-ipad-mini-with-water-resistance-also-in-the-offing/

IBM's open source Granite 4.0 Nano AI models are small enough to run locally directly in your browser

In an industry where model size is often seen as a proxy for intelligence, IBM is charting a different course — one that values efficiency over enormity, and accessibility over abstraction.

The 114-year-old tech giant's four new Granite 4.0 Nano models, released today, range from just 350 million to 1.5 billion parameters, a fraction of the size of their server-bound cousins from the likes of OpenAI, Anthropic, and Google.

These models are designed to be highly accessible: the 350M variants can run comfortably on a modern laptop CPU with 8–16GB of RAM, while the 1.5B models typically require a GPU with at least 6–8GB of VRAM for smooth performance — or sufficient system RAM and swap for CPU-only inference. This makes them well-suited for developers building applications on consumer hardware or at the edge, without relying on cloud compute.

In fact, the smallest ones can even run locally in your own web browser, as Joshua Lochner aka Xenova, creator of Transformers.js and a machine learning engineer at Hugging Face, wrote on the social network X.

All the Granite 4.0 Nano models are released under the Apache 2.0 license — perfect for use by researchers and enterprise or indie developers, even for commercial usage.

They are natively compatible with llama.cpp, vLLM, and MLX and are certified under ISO 42001 for responsible AI development — a standard IBM helped pioneer.

But in this case, small doesn't mean less capable — it might just mean smarter design.

These compact models are built not for data centers, but for edge devices, laptops, and local inference, where compute is scarce and latency matters.

And despite their small size, the Nano models are showing benchmark results that rival or even exceed the performance of larger models in the same category.

The release is a signal that a new AI frontier is rapidly forming — one not dominated by sheer scale, but by strategic scaling.

What Exactly Did IBM Release?

The Granite 4.0 Nano family includes four open-source models now available on Hugging Face:

  • Granite-4.0-H-1B (~1.5B parameters) – Hybrid-SSM architecture

  • Granite-4.0-H-350M (~350M parameters) – Hybrid-SSM architecture

  • Granite-4.0-1B – Transformer-based variant, parameter count closer to 2B

  • Granite-4.0-350M – Transformer-based variant

The H-series models — Granite-4.0-H-1B and H-350M — use a hybrid state-space model (SSM) architecture that combines efficiency with strong performance, ideal for low-latency edge environments.

Meanwhile, the standard transformer variants — Granite-4.0-1B and 350M — offer broader compatibility with tools like llama.cpp, designed for use cases where hybrid architecture isn’t yet supported.

In practice, the transformer 1B model is closer to 2B parameters, but aligns performance-wise with its hybrid sibling, offering developers flexibility based on their runtime constraints.

“The hybrid variant is a true 1B model. However, the non-hybrid variant is closer to 2B, but we opted to keep the naming aligned to the hybrid variant to make the connection easily visible,” explained Emma, Product Marketing lead for Granite, during a Reddit "Ask Me Anything" (AMA) session on r/LocalLLaMA.
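
As a rough illustration of how lightweight these targets are, here is a minimal sketch of loading one of the Nano models for local CPU inference with the Hugging Face transformers library. The repository name below is an assumption based on the naming above, not a confirmed model ID; check the IBM Granite collection on Hugging Face for the exact names.

```python
# Minimal local-inference sketch (assumed model ID; not an official IBM example).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ibm-granite/granite-4.0-350m"  # assumed repo name, based on the list above

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)  # small enough for a laptop CPU

messages = [{"role": "user", "content": "List three uses for a small on-device language model."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
output = model.generate(input_ids, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```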

A Competitive Class of Small Models

IBM is entering a crowded and rapidly evolving market of small language models (SLMs), competing with offerings like Qwen3, Google's Gemma, LiquidAI’s LFM2, and even Mistral’s dense models in the sub-2B parameter space.

While OpenAI and Anthropic focus on models that require clusters of GPUs and sophisticated inference optimization, IBM’s Nano family is aimed squarely at developers who want to run performant LLMs on local or constrained hardware.

In benchmark testing, IBM’s new models consistently top the charts in their class. According to data shared on X by David Cox, VP of AI Models at IBM Research:

  • On IFEval (instruction following), Granite-4.0-H-1B scored 78.5, outperforming Qwen3-1.7B (73.1) and other 1–2B models.

  • On BFCLv3 (function/tool calling), Granite-4.0-1B led with a score of 54.8, the highest in its size class.

  • On safety benchmarks (SALAD and AttaQ), the Granite models scored over 90%, surpassing similarly sized competitors.

Overall, the Granite-4.0-1B achieved a leading average benchmark score of 68.3% across general knowledge, math, code, and safety domains.

This performance is especially significant given the hardware constraints these models are designed for.

They require less memory, run faster on CPUs or mobile devices, and don’t need cloud infrastructure or GPU acceleration to deliver usable results.

Why Model Size Still Matters — But Not Like It Used To

In the early wave of LLMs, bigger meant better — more parameters translated to better generalization, deeper reasoning, and richer output.

But as transformer research matured, it became clear that architecture, training quality, and task-specific tuning could allow smaller models to punch well above their weight class.

IBM is banking on this evolution. By releasing open, small models that are competitive in real-world tasks, the company is offering an alternative to the monolithic AI APIs that dominate today’s application stack.

In fact, the Nano models address three increasingly important needs:

  1. Deployment flexibility — they run anywhere, from mobile to microservers.

  2. Inference privacy — users can keep data local with no need to call out to cloud APIs.

  3. Openness and auditability — source code and model weights are publicly available under an open license.

Community Response and Roadmap Signals

IBM’s Granite team didn’t just launch the models and walk away — they took to Reddit’s open source community r/LocalLLaMA to engage directly with developers.

In an AMA-style thread, Emma (Product Marketing, Granite) answered technical questions, addressed concerns about naming conventions, and dropped hints about what’s next.

Notable confirmations from the thread:

  • A larger Granite 4.0 model is currently in training

  • Reasoning-focused models ("thinking counterparts") are in the pipeline

  • IBM will release fine-tuning recipes and a full training paper soon

  • More tooling and platform compatibility is on the roadmap

Users responded enthusiastically to the models’ capabilities, especially in instruction-following and structured response tasks. One commenter summed it up:

“This is big if true for a 1B model — if quality is nice and it gives consistent outputs. Function-calling tasks, multilingual dialog, FIM completions… this could be a real workhorse.”

Another user remarked:

“The Granite Tiny is already my go-to for web search in LM Studio — better than some Qwen models. Tempted to give Nano a shot.”

Background: IBM Granite and the Enterprise AI Race

IBM’s push into large language models began in earnest in late 2023 with the debut of the Granite foundation model family, starting with models like Granite.13b.instruct and Granite.13b.chat. Released for use within its Watsonx platform, these initial decoder-only models signaled IBM’s ambition to build enterprise-grade AI systems that prioritize transparency, efficiency, and performance. The company open-sourced select Granite code models under the Apache 2.0 license in mid-2024, laying the groundwork for broader adoption and developer experimentation.

The real inflection point came with Granite 3.0 in October 2024 — a fully open-source suite of general-purpose and domain-specialized models ranging from 1B to 8B parameters. These models emphasized efficiency over brute scale, offering capabilities like longer context windows, instruction tuning, and integrated guardrails. IBM positioned Granite 3.0 as a direct competitor to Meta’s Llama, Alibaba’s Qwen, and Google's Gemma — but with a uniquely enterprise-first lens. Later versions, including Granite 3.1 and Granite 3.2, introduced even more enterprise-friendly innovations: embedded hallucination detection, time-series forecasting, document vision models, and conditional reasoning toggles.

The Granite 4.0 family, launched in October 2025, represents IBM’s most technically ambitious release yet. It introduces a hybrid architecture that blends transformer and Mamba-2 layers — aiming to combine the contextual precision of attention mechanisms with the memory efficiency of state-space models. This design allows IBM to significantly reduce memory and latency costs for inference, making Granite models viable on smaller hardware while still outperforming peers in instruction-following and function-calling tasks. The launch also includes ISO 42001 certification, cryptographic model signing, and distribution across platforms like Hugging Face, Docker, LM Studio, Ollama, and watsonx.ai.

Across all iterations, IBM’s focus has been clear: build trustworthy, efficient, and legally unambiguous AI models for enterprise use cases. With a permissive Apache 2.0 license, public benchmarks, and an emphasis on governance, the Granite initiative not only responds to rising concerns over proprietary black-box models but also offers a Western-aligned open alternative to the rapid progress from teams like Alibaba’s Qwen. In doing so, Granite positions IBM as a leading voice in what may be the next phase of open-weight, production-ready AI.

A Shift Toward Scalable Efficiency

In the end, IBM’s release of Granite 4.0 Nano models reflects a strategic shift in LLM development: from chasing parameter count records to optimizing usability, openness, and deployment reach.

By combining competitive performance, responsible development practices, and deep engagement with the open-source community, IBM is positioning Granite as not just a family of models — but a platform for building the next generation of lightweight, trustworthy AI systems.

For developers and researchers looking for performance without overhead, the Nano release offers a compelling signal: you don’t need 70 billion parameters to build something powerful — just the right ones.

Microsoft’s Copilot can now build apps and automate your job — here’s how it works

Microsoft is launching a significant expansion of its Copilot AI assistant on Tuesday, introducing tools that let employees build applications, automate workflows, and create specialized AI agents using only conversational prompts — no coding required.

The new capabilities, called App Builder and Workflows, mark Microsoft's most aggressive attempt yet to merge artificial intelligence with software development, enabling the estimated 100 million Microsoft 365 users to create business tools as easily as they currently draft emails or build spreadsheets.

"We really believe that a main part of an AI-forward employee, not just developers, will be to create agents, workflows and apps," Charles Lamanna, Microsoft's president of business and industry Copilot, said in an interview with VentureBeat. "Part of the job will be to build and create these things."

The announcement comes as Microsoft deepens its commitment to AI-powered productivity tools while navigating a complex partnership with OpenAI, the creator of the underlying technology that powers Copilot. On the same day, OpenAI completed its restructuring into a for-profit entity, with Microsoft receiving a 27% ownership stake valued at approximately $135 billion.

How natural language prompts now create fully functional business applications

The new features transform Copilot from a conversational assistant into what Microsoft envisions as a comprehensive development environment accessible to non-technical workers. Users can now describe an application they need — such as a project tracker with dashboards and task assignments — and Copilot will generate a working app complete with a database backend, user interface, and security controls.

"If you're right inside of Copilot, you can now have a conversation to build an application complete with a backing database and a security model," Lamanna explained. "You can make edit requests and update requests and change requests so you can tune the app to get exactly the experience you want before you share it with other users."

The App Builder stores data in Microsoft Lists, the company's lightweight database system, and allows users to share finished applications via a simple link—similar to sharing a document. The Workflows agent, meanwhile, automates routine tasks across Microsoft's ecosystem of products, including Outlook, Teams, SharePoint, and Planner, by converting natural language descriptions into automated processes.

A third component, a simplified version of Microsoft's Copilot Studio agent-building platform, lets users create specialized AI assistants tailored to specific tasks or knowledge domains, drawing from SharePoint documents, meeting transcripts, emails, and external systems.

All three capabilities are included in the existing $30-per-month Microsoft 365 Copilot subscription at no additional cost — a pricing decision Lamanna characterized as consistent with Microsoft's historical approach of bundling significant value into its productivity suite.

"That's what Microsoft always does. We try to do a huge amount of value at a low price," he said. "If you go look at Office, you think about Excel, Word, PowerPoint, Exchange, all that for like eight bucks a month. That's a pretty good deal."

Why Microsoft's nine-year bet on low-code development is finally paying off

The new tools represent the culmination of a nine-year effort by Microsoft to democratize software development through its Power Platform — a collection of low-code and no-code development tools that has grown to 56 million monthly active users, according to figures the company disclosed in recent earnings reports.

Lamanna, who has led the Power Platform initiative since its inception, said the integration into Copilot marks a fundamental shift in how these capabilities reach users. Rather than requiring workers to visit a separate website or learn a specialized interface, the development tools now exist within the same conversational window they already use for AI-assisted tasks.

"One of the big things that we're excited about is Copilot — that's a tool for literally every office worker," Lamanna said. "Every office worker, just like they research data, they analyze data, they reason over topics, they also will be creating apps, agents and workflows."

The integration offers significant technical advantages, he argued. Because Copilot already indexes a user's Microsoft 365 content — emails, documents, meetings, and organizational data — it can incorporate that context into the applications and workflows it builds. If a user asks for "an app for Project Spartan," Copilot can draw from existing communications to understand what that project entails and suggest relevant features.

"If you go to those other tools, they have no idea what the heck Project Spartan is," Lamanna said, referencing competing low-code platforms from companies like Google, Salesforce, and ServiceNow. "But if you do it inside of Copilot and inside of the App Builder, it's able to draw from all that information and context."

Microsoft claims the apps created through these tools are "full-stack applications" with proper databases secured through the same identity systems used across its enterprise products — distinguishing them from simpler front-end tools offered by competitors. The company also emphasized that its existing governance, security, and data loss prevention policies automatically apply to apps and workflows created through Copilot.

Where professional developers still matter in an AI-powered workplace

While Microsoft positions the new capabilities as accessible to all office workers, Lamanna was careful to delineate where professional developers remain essential. His dividing line centers on whether a system interacts with parties outside the organization.

"Anything that leaves the boundaries of your company warrants developer involvement," he said. "If you want to build an agent and put it on your website, you should have developers involved. Or if you want to build an automation which interfaces directly with your customers, or an app or a website which interfaces directly with your customers, you want professionals involved."

The reasoning is risk-based: external-facing systems carry greater potential for data breaches, security vulnerabilities, or business errors. "You don't want people getting refunds they shouldn't," Lamanna noted.

For internal use cases — approval workflows, project tracking, team dashboards — Microsoft believes the new tools can handle the majority of needs without IT department involvement. But the company has built "no cliffs," in Lamanna's terminology, allowing users to migrate simple apps to more sophisticated platforms as needs grow.

Apps created in the conversational App Builder can be opened in Power Apps, Microsoft's full development environment, where they can be connected to Dataverse, the company's enterprise database, or extended with custom code. Similarly, simple workflows can graduate to the full Power Automate platform, and basic agents can be enhanced in the complete Copilot Studio.

"We have this mantra called no cliffs," Lamanna said. "If your app gets too complicated for the App Builder, you can always edit and open it in Power Apps. You can jump over to the richer experience, and if you're really sophisticated, you can even go from those experiences into Azure."

This architecture addresses a problem that has plagued previous generations of easy-to-use development tools: users who outgrow the simplified environment often must rebuild from scratch on professional platforms. "People really do not like easy-to-use development tools if I have to throw everything away and start over," Lamanna said.

What happens when every employee can build apps without IT approval

The democratization of software development raises questions about governance, maintenance, and organizational complexity — issues Microsoft has worked to address through administrative controls.

IT administrators can view all applications, workflows, and agents created within their organization through a centralized inventory in the Microsoft 365 admin center. They can reassign ownership, disable access at the group level, or "promote" particularly useful employee-created apps to officially supported status.

"We have a bunch of customers who have this approach where it's like, let 1,000 apps bloom, and then the best ones, I go upgrade and make them IT-governed or central," Lamanna said.

The system also includes provisions for when employees leave. Apps and workflows remain accessible for 60 days, during which managers can claim ownership — similar to how OneDrive files are handled when someone departs.

Lamanna argued that most employee-created apps don't warrant significant IT oversight. "It's just not worth inspecting an app that John, Susie, and Bob use to do their job," he said. "It should concern itself with the app that ends up being used by 2,000 people, and that will pop up in that dashboard."

Still, the proliferation of employee-created applications could create challenges. Users have expressed frustration with Microsoft's increasing emphasis on AI features across its products, with some giving the Microsoft 365 mobile app one-star ratings after a recent update prioritized Copilot over traditional file access.

The tools also arrive as enterprises grapple with "shadow IT" — unsanctioned software and systems that employees adopt without official approval. While Microsoft's governance controls aim to provide visibility, the ease of creating new applications could accelerate the pace at which these systems multiply.

The ambitious plan to turn 500 million workers into software builders

Microsoft's ambitions for the technology extend far beyond incremental productivity gains. Lamanna envisions a fundamental transformation of what it means to be an office worker — one where building software becomes as routine as creating spreadsheets.

"Just like how 20 years ago you put on your resume that you could use pivot tables in Excel, people are going to start saying that they can use App Builder and workflow agents, even if they're just in the finance department or the sales department," he said.

The numbers he's targeting are staggering. With 56 million people already using Power Platform, Lamanna believes the integration into Copilot could eventually reach 500 million builders. "Early days still, but I think it's certainly encouraging," he said.

The features are currently available only to customers in Microsoft's Frontier Program — an early access initiative for Microsoft 365 Copilot subscribers. The company has not disclosed how many organizations participate in the program or when the tools will reach general availability.

The announcement fits within Microsoft's larger strategy of embedding AI capabilities throughout its product portfolio, driven by its partnership with OpenAI. Under the restructured agreement announced Tuesday, Microsoft will have access to OpenAI's technology through 2032, including models that achieve artificial general intelligence (AGI) — though such systems do not yet exist. Microsoft has also begun integrating Copilot into its new companion apps for Windows 11, which provide quick access to contacts, files, and calendar information.

The aggressive integration of AI features across Microsoft's ecosystem has drawn mixed reactions. While enterprise customers have shown interest in productivity gains, the rapid pace of change and ubiquity of AI prompts have frustrated some users who prefer traditional workflows.

For Microsoft, however, the calculation is clear: if even a fraction of its user base begins creating applications and automations, it would represent a massive expansion of the effective software development workforce — and further entrench customers in Microsoft's ecosystem. The company is betting that the same natural language interface that made ChatGPT accessible to millions can finally unlock the decades-old promise of empowering everyday workers to build their own tools.

The App Builder and Workflows agents are available starting today through the Microsoft 365 Copilot Agent Store for Frontier Program participants.

Whether that future arrives depends not just on the technology's capabilities, but on a more fundamental question: Do millions of office workers actually want to become part-time software developers? Microsoft is about to find out if the answer is yes — or if some jobs are better left to the professionals.

Google DeepMind’s BlockRank could reshape how AI ranks information

Google DeepMind researchers have developed BlockRank, a new method for ranking and retrieving information more efficiently in large language models (LLMs).

  • BlockRank is detailed in a new research paper, Scalable In-Context Ranking with Generative Models.
  • BlockRank is designed to solve a challenge called In-context Ranking (ICR), or the process of having a model read a query and multiple documents at once to decide which ones matter most.
  • As far as we know, BlockRank is not being used by Google (e.g., Search, Gemini, AI Mode, AI Overviews) right now – but it could be used at some point in the future.

What BlockRank changes. ICR is expensive and slow. Models use a process called “attention,” where every word compares itself to every other word. Ranking hundreds of documents at once therefore gets quadratically harder for LLMs as the context grows.

How BlockRank works. BlockRank restructures how an LLM “pays attention” to text. Instead of every document attending to every other document, each one focuses only on itself and the shared instructions.

  • The model’s query section has access to all the documents, allowing it to compare them and decide which one best answers the question.
  • This transforms the model’s attention cost from quadratic (very slow) to linear (much faster) growth, as illustrated in the sketch below.
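
To make that structure concrete, here is a minimal NumPy sketch of a BlockRank-style attention mask, based only on the description above. The layout (shared instructions, then document blocks, then the query) and all sizes are illustrative assumptions, not code from the paper.

```python
import numpy as np

def blockrank_mask(instr_len, doc_lens, query_len):
    """Boolean attention mask (True = may attend) over instructions + documents + query."""
    total = instr_len + sum(doc_lens) + query_len
    mask = np.zeros((total, total), dtype=bool)

    # Shared instructions attend to themselves.
    mask[:instr_len, :instr_len] = True

    # Each document block attends only to the instructions and to itself,
    # never to the other documents.
    start = instr_len
    for length in doc_lens:
        end = start + length
        mask[start:end, :instr_len] = True   # document -> instructions
        mask[start:end, start:end] = True    # document -> itself
        start = end

    # The query attends to everything, so it can compare the documents.
    mask[start:, :] = True
    return mask

mask = blockrank_mask(instr_len=4, doc_lens=[6, 6, 6], query_len=3)
print(mask.shape, int(mask.sum()))  # attended pairs grow roughly linearly with the document count
```

Because cross-document attention is masked out, the number of attended pairs grows roughly in proportion to the number of documents rather than with the square of the full context, which is the quadratic-to-linear shift described above.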

By the numbers. In experiments using Mistral-7B, Google’s team found that BlockRank:

  • Ran 4.7× faster than standard fine-tuned models when ranking 100 documents.
  • Scaled smoothly to 500 documents (about 100,000 tokens) in roughly one second.
  • Matched or beat leading listwise rankers like RankZephyr and FIRST on benchmarks such as MSMARCO, Natural Questions (NQ), and BEIR.

Why we care. BlockRank could change how future AI-driven retrieval and ranking systems work to reward user intent, clarity, and relevance. That means (in theory) clear, focused content that aligns with why a person is searching (not just what they type) should increasingly win.

What’s next. Google DeepMind researchers are continuing to redefine what it means to “rank” information in the age of generative AI. The future of search is advancing fast – and it’s fascinating to watch it evolve in real time.

NVIDIA Boosts Navy AI Training with DGX GB300

NVIDIA's DGX GB300 system is empowering the Naval Postgraduate School with advanced AI training, enabling secure, on-premises generative AI and high-fidelity digital twin simulations for critical defense applications.

NVIDIA Charts America’s AI Future with Industrial-Scale Vision

NVIDIA's GTC Washington, D.C., keynote unveiled a strategic blueprint for America's AI future, emphasizing national infrastructure, physical AI, and industry transformation.

NVIDIA AI Fuels US Economic Development

NVIDIA is driving significant AI economic development across the US by partnering with states, cities, and universities to democratize AI access and foster innovation.

Microsoft’s OpenAI Bet Yields 10x Return, Igniting AI Infrastructure Race

Microsoft’s staggering ten-fold return on its OpenAI investment, now valued at $135 billion, signals a new era where strategic AI stakes redefine corporate power and valuation. This monumental gain, highlighted by CNBC’s MacKenzie Sigalos, follows a significant corporate restructure at OpenAI that redefines its partnership terms with Microsoft, granting the tech giant a 27% equity […]

Desktop Commander raises €1.1M to advance AI desktop automation

Desktop Commander raised €1.1 million to develop its AI tool that allows non-technical users to automate computer tasks using natural language.

Grasp raises $7M to advance its multi-agent AI for finance

AI startup Grasp raised $7 million to expand its multi-agent platform that automates complex financial analysis and reporting for consultants and investment banks.

Energy as the New Geopolitical Currency in the AI Race

“Knowledge used to be power, now power is knowledge.” This stark redefinition, articulated by U.S. Secretary of the Interior Doug Burgum during a CNBC “Power Lunch” interview, cuts to the core of the contemporary global power struggle. Speaking with Brian Sullivan, Burgum outlined a comprehensive strategy for the United States to secure its position in […]

CoreStory raises $32M to advance AI legacy code modernization

AI startup CoreStory raised $32 million to help enterprises modernize legacy software with its platform that automatically documents and analyzes old code.

Microsoft Windows Server Update Service Is Under Attack, What You Need To Know

Windows Server 2025 is currently open to a Remote Code Execution exploit via Windows Server Update Services (WSUS), and at the time of this writing a fix from Microsoft has yet to fully patch the issue. Reports to The Register indicate that Microsoft's attempt to patch the exploit earlier this month didn't stop any active exploitation, contrary to Microsoft's

(PR) HPE to Build "Mission" and "Vision" Supercomputers Featuring NVIDIA Vera Rubin

HPE today announced, in partnership with the U.S. Department of Energy (DOE), National Nuclear Security Administration (NNSA) and Los Alamos National Laboratory (LANL), that it has been selected to deliver two state-of-the-art supercomputers, named "Mission" and "Vision". The next-generation systems will be based on the new direct liquid-cooled HPE Cray Supercomputing GX5000 system and feature upcoming NVIDIA Vera Rubin Superchips. Mission and Vision are part of the DOE's $370 million investment to accelerate scientific discovery, advance AI initiatives and strengthen national security.

"For decades, HPE and Los Alamos National Laboratory have collaborated on innovative supercomputing designs that deliver powerful capabilities to solve complex scientific challenges and bolster national security efforts," said Trish Damkroger, senior vice president and general manager, HPC & AI Infrastructure Solutions at HPE. "We are proud to continue powering the lab's journey with the upcoming Mission and Vision systems. These innovations will be among the first to feature next-generation HPE Cray supercomputing architecture to drive AI innovation and scientific impact."

(PR) Sandisk Launches Officially Licensed FIFA World Cup 2026 Product Lineup

Sandisk kicked off the countdown to the FIFA World Cup 2026 today with the launch of its collection of officially licensed products. Purpose-built for what's set to be one of the most content-rich sporting events in history, the Sandisk Official Licensed Product Collection for the FIFA World Cup 2026 empowers fans, creators, and professionals alike to capture, preserve, and relive the most iconic moments from the world's biggest stage in sports.

Blending heritage with innovation, the design-led products honor host nations and iconic moments, from whistle-inspired USB-C drives and SSDs in tournament colors to pro-level memory cards built to capture history-making moments. Each product proudly bears official FIFA World Cup 2026 licensing marks and host nation-inspired details, making them authentic pieces of football history.

(PR) Supermicro Expands NVIDIA Collaboration, Focuses on U.S.-Made AI Systems for Government Use

Super Micro Computer, Inc. (SMCI), a Total IT Solution Provider for AI/ML, HPC, Cloud, Storage, and 5G/Edge, is showcasing its advanced AI infrastructure solutions at NVIDIA GTC in Washington, D.C. this week, highlighting systems tailored to meet the stringent requirements of federal customers. Supermicro announced its plans to deliver next-generation NVIDIA AI platforms, including the NVIDIA Vera Rubin NVL144 and NVIDIA Vera Rubin NVL144 CPX in 2026. Additionally, Supermicro introduces U.S.-manufactured, TAA (Trade Agreements Act)-compliant systems, including the high-density 2OU NVIDIA HGX B300 8-GPU system with up to 144 GPUs per rack and an expanded portfolio featuring a Super AI Station based on NVIDIA GB300 and the new rack-scale NVIDIA GB200 NVL4 HPC solutions.

"Our expanded collaboration with NVIDIA and our focus on U.S.-based manufacturing position Supermicro as a trusted partner for federal AI deployments. With our corporate headquarters, manufacturing, and R&D all based in San Jose, California, in the heart of Silicon Valley, we have an unparalleled ability and capacity to deliver first-to-market solutions are developed, constructed, validated (and manufactured) for American federal customers," said Charles Liang, president and CEO, Supermicro. "The result of many years of working hand-in-hand with our close partner NVIDIA—also based in Silicon Valley—Supermicro has cemented its position as a pioneer of American AI infrastructure development."

(PR) Don't Nod Reveals New Trailer for Aphelion

French developer and publisher DON'T NOD has presented a new trailer for Aphelion, its upcoming cinematic third-person action-adventure game launching in 2026, at the ID@Xbox Showcase. The trailer reveals Ariane's fellow astronaut Thomas Cross as a playable character, and showcases brand-new stealth sequences, the never-before-seen alien antagonist, new environments, and the in-game spacesuit patch designed in collaboration with the European Space Agency (ESA).

Aphelion is a sci-fi action-adventure on the edge of the solar system. In the shoes of ESA astronauts Ariane and Thomas, players will explore and survey the uncharted planet Persephone and solve the mystery of the crash, all while trying to survive in the terrifying presence of an unknown enemy. At its heart, the game is an emotional tale about love, resilience, hope, and what we bring with us when everything is lost.

(PR) Creative Technology Launches Kickstarter Campaign for Sound Blaster Re:Imagine

Creative Technology, the company that brought the world the original Sound Blaster and transformed PC audio in the 90s, today announces Sound Blaster Re:Imagine, a next-generation modular audio hub that redefines what a sound card can be. The campaign goes live on Kickstarter on October 28, 2025 (10am EST).

Since its debut in 1989, Sound Blaster has shipped more than 400 million devices worldwide, shaping the soundtrack of the digital age. The original Sound Blaster gave PCs a voice, powering the rise of multimedia, gaming, and digital creativity. Sound Blaster Re:Imagine builds on that heritage - taking the DNA of Sound Blaster and evolving it into a modern, modular platform designed for creators, gamers, and anyone who lives at the intersection of work and play.

(PR) MAINGEAR Introduces aiDAPTIV+ Package for Pro RS & Pro WS Workstations Co-Developed With Phison

MAINGEAR, a leading provider of high-performance custom PCs, today announced a new aiDAPTIV+ package, co-developed with Phison Electronics, a global leader in NAND flash controllers and storage solutions, for its Pro RS and Pro WS workstations. The aiDAPTIV+ add-on enables full-parameter fine-tuning and large-model inference on mainstream GPUs, helping teams move faster while keeping data private and on-prem. Live demos of a MAINGEAR workstation equipped with aiDAPTIV+ will be available at Phison's booth during NVIDIA GTC Washington, D.C.

AI teams need on-prem training and inference performance without the unpredictability of cloud costs or the exposure of sensitive data. The aiDAPTIV+ package combines MAINGEAR's powerful, enterprise-ready workstations with Phison's aiDAPTIV+ intelligent SSD caching to expand effective VRAM, enabling larger models and longer contexts at the edge, with predictable costs and IT-friendly deployment.

(PR) Giga Computing Showcases Scalable Next-Gen AI and Visualization Solutions at NVIDIA GTC DC 2025

Giga Computing, a subsidiary of GIGABYTE and an industry innovator and leader in AI hardware and advanced cooling solutions, today announced its participation in NVIDIA GTC DC (Oct. 28-29). With AI and scalable solutions growing in importance, Giga Computing is demonstrating how innovation in hardware and software can drive the transformation into the AI-driven era. These solutions will empower developers, researchers, and creators to achieve more, from the desktop to the data center, with discussions being held at GIGABYTE booth #528.

The booth features four flagship GIGABYTE systems: the AI TOP ATOM, the W775-V10 workstation, the XL44-SX2 (NVIDIA RTX PRO Server), and a liquid-cooled G4L4-SD3 AI server. Together, these GIGABYTE solutions are built on the NVIDIA Blackwell architecture to enable efficient, high-performance AI and visualization workloads spanning every compute tier.

(PR) ASUS IoT Announces PE3000N Based On NVIDIA Jetson Thor

ASUS IoT today unveils PE3000N, a compact edge-AI platform engineered to meet the advanced requirements of next-generation robotics and intelligent automation. It is accelerated by the cutting-edge NVIDIA Jetson Thor platform, combining an advanced NVIDIA Blackwell GPU, a powerful 14-core Arm CPU, and an industry-leading 128 GB of LPDDR5X memory to deliver an impressive 2,070 FP4 TFLOPS of AI processing power in a highly space-efficient form factor - making it ideal for integration into robotic systems where both space and energy efficiency are critical. With its robust architecture, PE3000N, powered by the Jetson T5000 module, enables developers and integrators to achieve new levels of autonomy, sensor fusion, and AI-driven control for industrial, commercial, and smart infrastructure deployments.

Rugged reliability for challenging environments
Engineered for durability, PE3000N incorporates MIL-STD-810H industrial-grade connectors and a low-profile chassis to withstand demanding operating conditions. With support for up to four optional 25GbE links and 16 GMSL cameras, it enables high-bandwidth sensor fusion and advanced machine vision, even in the most challenging environments. The wide 12-60 V DC input and ignition support provide stable, battery-friendly operation across diverse settings - from factory floors and autonomous vehicles to smart-city infrastructure. With an operating temperature range from -20°C up to 60°C, PE3000N ensures resilient performance and secure data handling, making it a trusted solution for mission-critical robotics, automation, and edge AI deployments.

(PR) NVIDIA IGX Thor Robotics Processor Brings Real-Time Physical AI to the Industrial and Medical Edge

AI is moving from the digital world into the physical one. Across factory floors and operating rooms, machines are evolving into collaborators that can see, sense and make decisions in real time. To accelerate this transformation, NVIDIA today unveiled NVIDIA IGX Thor, a powerful, industrial-grade platform built to bring real-time physical AI directly to the edge, combining high-speed sensor processing, enterprise-grade reliability and functional safety in a small module for the desktop.

Delivering up to 8x the AI compute performance of its predecessor, NVIDIA IGX Orin, IGX Thor enables developers to build intelligent systems that perceive, reason and act faster, safer and smarter than ever. Early adopters include industrial, robotics, medical and healthcare leaders such as Diligent Robotics, EndoQuest Robotics, Hitachi Rail, Joby Aviation, Maven and SETI Institute, while CMR Surgical is evaluating IGX Thor to advance its medical capabilities.

(PR) Quantum Machines Announces NVIDIA NVQLink Integration

Quantum Machines (QM), the leading provider of quantum control solutions, today announced its integration with NVIDIA NVQLink, the new open platform for real-time orchestration between quantum and classical computing resources. This marks a major step that extends QM's first-of-its-kind, field-proven, µs-latency quantum-classical integration solution.

Building on the foundation of NVIDIA DGX Quantum - the first system to connect a quantum controller directly with the NVIDIA accelerated computing stack - QM's platform will support the new NVQLink open architecture, providing seamless interoperability between quantum processors (QPUs), control hardware, CPUs, and GPUs. The result is real-time data exchange and control at microsecond latency, enabling the demanding workloads required for logical qubits and large-scale quantum error correction.

Amazon layoffs reaction: ‘Thought I was a top performer but guess I’m expendable’

Amazon’s headquarters campus in Seattle. (GeekWire Photo / Kurt Schlosser)

Reaction to a huge round of layoffs rippled across Amazon and beyond on Tuesday as the Seattle-based tech giant confirmed that it was slashing 14,000 corporate and tech jobs.

We’ve rounded up some of what’s being said online and/or shared with GeekWire:

‘Never been laid off before’

A megathread on Reddit served as a collection of comments by impacted employees who posted about their level, location, org and years of service at Amazon.

Workers across ads, recruitment, robotics, retail, Prime Video, Amazon Games, business development, North American Stores, finance, devices and services, Amazon Autos, and more used the thread to vent.

  • “TPM II for Amazon Robotics, 6.5 years there. Still processing this, I’ve never been laid off before.”
  • “L6 SDEIII, started as SDEI 7 years ago. I went L4 to L6 in 3 years. My last performance review I got raising the bar. Thought I was a top performer but guess I’m expendable.”
  • “Never been laid off before feels overwhelming on VISA! Someone please help me understand next steps in terms of VISA, if I am not able to get H1b sponsoring job in next 90 days will I have to uproot everything here and go back?”
  • “I heard AWS layoffs come after re:invent to avoid customer disruption and bad press.”
  • “It’s heartbreaking how impersonal and abrupt these layoffs have become. People who’ve given years to a company are finding out in minutes that they’re done.”

Bad news via text?

Kristi Coulter, author of Exit Interview: The Life and Death of My Ambitious Career, a memoir about what she learned in her 12 years at Amazon, weighed in about the timing of apparent text messages that were sent to impacted employees.

“Wait, I’m sorry: Amazon made people relocate, switch their kids’ schools, and bookend their days with traffic for RTO only to lay them off via a 3 a.m. text? What happened to the vibe and conversations that only being together at the office could allow?” Coulter wrote on LinkedIn.

‘Reduced functionality’

Some employees shared how they were quickly locked out of work laptops, expressing confusion about whether that was how they were supposed to learn about being terminated.

“I lost access to everything immediately :( ,” one Reddit user said.

Others discussed how they should have found time to transfer important work examples or positive interactions related to their performance over to personal computers.

“One thing I would recommend for everyone is to back up your personal files onto your personal laptop,” one user said on Reddit. “I used to keep all my accolades and praise in a quip file along with all my 2×2 write ups and MBR/QBR write ups cataloging my wins. When I found out I got laid off my head was spinning so I went outside for a walk, by the time I returned I was locked out of my laptop and no longer had access to anything.”

Is this Amazon’s way of saying 100% laid off?

Any Amazon folks on the timeline – seen this before?#Amazon #layoffs #amazonlayoffs pic.twitter.com/1MCxoXjfHQ

— Aravind Naveen (@MydAravind) October 28, 2025

Why layoffs now?

Amazon human resources chief Beth Galetti pinned the layoffs in part on the need to reduce bureaucracy and become more efficient in the new era of artificial intelligence. Others looked for deeper meaning in the cuts.

In a post on LinkedIn, Yahoo! Finance Executive Editor Brian Sozzi said stock price is likely a key consideration when it comes to top execs and the Amazon board signing off on such mass layoffs.

Amazon’s stock was up about 1% on Tuesday to $229 per share.

“If the layoffs keep jacking up the stock price, maybe I can retire instead,” one longtime employee told GeekWire.

Entrepreneur and investor Jason Calacanis posted on X about how AI was coming for middle managers and those with “rote jobs” faster than anyone expected. He encouraged workers to become a founder and do a startup before it’s too late.

Hard-hit divisions

Mid-level managers in Amazon’s retail division were heavily impacted by Tuesday’s cuts, according to internal data obtained by Business Insider.

More than 78% of the roles eliminated were held by managers assigned L5 to L7 designations, BI reported. (L5 is typically the starting point for managers at Amazon, with more seniority assigned to higher levels.)

BI also said that U.S.-focused data showed that more than 80% of employees laid off Tuesday worked in Amazon’s retail business, spanning e-commerce, human resources, and logistics.

Bloomberg and others reported that significant cuts are also being felt by Amazon’s video games unit.

Steve Boom, VP of audio, Twitch, and games, said in a memo shared with The Verge that “significant role reductions” would be felt at studios in Irvine and San Diego, Calif., as well as on Amazon’s central publishing teams.

“We have made the difficult decision to halt a significant amount of our first-party AAA game development work — specifically around MMOs [massively multiplayer online games] — within Amazon Game Studios,” Boom wrote.

Current titles in Amazon’s MMO lineup include “New World: Aeternum,” “Throne and Liberty,” and “Lost Ark.” Amazon also previously announced that it would be developing a “Lord of the Rings” MMO.

‘Ripple effects throughout the community’

Amazon employees and others line up at a food truck near Amazon offices in Seattle’s South Lake Union neighborhood. (GeekWire File Photo / Kurt Schlosser)

Jon Scholes, president and CEO of the Downtown Seattle Association (DSA), has previously praised Amazon for its mandate calling for employees to return to the office five days per week, saying that the foot traffic from thousands of tech workers in the city is a necessary element to helping downtown Seattle rebound from the pandemic.

On Tuesday, Scholes reacted to Amazon’s layoffs in a statement to GeekWire:

“As downtown’s largest employer, a workforce change of this scale has ripple effects throughout the community — on individual employees and families and our small businesses that rely on the weekday foot traffic customer base. In addition, these jobs buttress our tax base that helps fund the city services we all depend on. Employers have options for where they locate jobs, and we want to ensure downtown Seattle is the most attractive place to invest and grow. We must provide vibrancy and a predictable regulatory environment in a competitive landscape because other cities would welcome the jobs currently based in downtown.”

RPCS3 GPU recommendations increase due to dropped driver support

AMD and Nvidia have forced RPCS3 to increase its recommended GPU requirements. The team behind RPCS3, the PlayStation 3 emulator, has announced that it has increased its recommended GPU requirements for Windows. This is due to AMD and Nvidia’s decision to drop driver support for older Radeon and GeForce graphics cards. Now, the emulator’s recommended […]

Whatnot Lands $225M Series F, More Than Doubles Valuation to $11.5B Since January

Whatnot, a live shopping platform and marketplace, has closed a $225 million Series F round, more than doubling its valuation to $11.5 billion in less than 10 months.

DST Global and CapitalG co-led the financing, which brings the Los Angeles-based company’s total raised to about $968 million since its 2019 inception. Whatnot had raised $265 million in a Series E round at a nearly $5 billion valuation in January.

New investors Sequoia Capital and Alkeon Capital participated in the Series F, alongside returning backers Greycroft, Andreessen Horowitz, Avra and Bond. Other investors include Y Combinator, Lightspeed Venture Partners and Liquid 2 Ventures.

As part of the latest financing, Whatnot says it will initiate a tender offer where select current investors will buy up to $126 million worth of shares.

Funding to e-commerce startups globally so far this year totals $7.1 billion, per Crunchbase data. That compares to $11.3 billion raised by e-commerce startups globally in all of 2024. This year’s numbers are also down significantly from post-pandemic funding totals, which surged to $93 billion in 2021.

‘Retail’s new normal’

Live commerce is the combination of livestreaming and online shopping. Grant LaFontaine, co-founder and CEO of Whatnot, said in an announcement that his startup is “proving that live shopping is retail’s new normal.”

Whatnot co-founders Logan Head and Grant LaFontaine. Courtesy photo.

The company says more than $6 billion worth of items have been sold on its platform in 2025 so far, more than twice its total for all of 2024. Its app facilitates the buying and selling of collectibles like trading cards and toys through live video auctions. It also offers items such as clothing and sneakers. It competes with the likes of eBay, which currently does not offer a livestreaming option. It’s also a competitor to TikTok Shop.

“Whatnot brought the live shopping wave to the US, the UK, and Europe and has turned it into one of the fastest growing marketplaces of all time,” Laela Sturdy, Whatnot board member and managing partner at CapitalG, Alphabet’s independent growth fund, said in a release.

The company plans to use its new funds to invest in its platform, roll out new features and “evolve” its policies. It is also accelerating its international expansion, adding to its current 900-person workforce by hiring across multiple departments.

Pixel 10a CAD Renders Show A Pixel 9a Clone

Google Pixel smartphone in hand, blurred background.

Google appears to be playing it safe with its upcoming budget offering, the Pixel 10a. If the latest CAD renders are anything to go by, the tech giant is eschewing flashy design changes and opting for a predictable, if somewhat boring, overall design language. Almost nothing appears to have changed between the Pixel 9a and the upcoming Pixel 10a, as per the CAD renders published by the X user OnLeaks on behalf of Android Headlines. As for the budget offering's rumored specs, the following is known at the […]

Read full article at https://wccftech.com/pixel-10a-cad-renders-show-a-pixel-9a-clone/

DON’T NOD Admits Lost Records: Bloom & Rage Missed Expectations, Signs Deal With Netflix to Create New Game Based on “A Major IP”

“Lost Records: Bloom & Rage” title screen with four characters.

Developer and publisher DON'T NOD has published its latest financial release covering its half-year results for 2025, which includes a few notable updates from the studio, such as the news that its most recent release, Lost Records: Bloom & Rage, performed "below expectations," and that the studio has signed a deal with Netflix to make a narrative game based on "a major IP." It's definitely a disappointing result for DON'T NOD, particularly considering that its last major releases last year, Banishers: Ghosts of New Eden and Jusant, also fell below expectations. The studio's total operating revenue took a 5% dip […]

Read full article at https://wccftech.com/lost-records-bloom-and-rage-missed-expectations-dont-nod-signs-deal-with-netflix/

Base iPhone 17 OLED Panel Is Around 42% Cheaper To Make Than ‘Pro’ Models, Despite Apple Making ProMotion Technology A Standard

Base iPhone 17 OLED is cheaper to make, claims new report

Apple revamped its iPhone 17 lineup this year by introducing ProMotion technology to the base model, making it one of the best decisions it could ever make for its flagship smartphone family. Best of all, it brings a host of other upgrades while retaining that $699 price point, which is probably why the iPhone 17 has garnered immense popularity worldwide, particularly in China. Part of why Apple has been able to keep this price unchanged from the iPhone 16 is by keeping the display costs low. According to the latest report, the OLED panel in the iPhone 17 costs around 42 […]

Read full article at https://wccftech.com/iphone-17-oled-panel-around-42-percent-to-make-than-pro-models/

Oppo Has Just A Few Hours Now To Hand Over To Apple Evidentiary Documents On A Former Engineer Who Stole Apple Watch Secrets

A glowing Apple logo-headed figure with a sword confronts a hooded figure holding a sword, with oppo written in neon green.

In the ongoing high-stakes court battle between Oppo and Apple, the former has only a few hours left to complete a transfer of required documents and device forensic reports on an ex-Apple engineer who stands accused of stealing proprietary intellectual property (IP) at the behest of Oppo. Apple accuses Oppo of using its former employee, Chen Shi, to steal Apple Watch secrets, and is asking the court for injunctive relief on four counts. For its part, Oppo maintains that it has conducted a comprehensive search of […]

Read full article at https://wccftech.com/oppo-has-just-a-few-hours-left-to-hand-over-to-apple-evidentiary-documents-on-a-former-engineer-who-stole-apple-watch-secrets/

Sony’s WH-1000XM4 Wireless Headphones Cost Less Than Half The Price Of An AirPods Max But Fulfill Your ANC & Long Battery Life Needs For Under $200 On Amazon

Sony WH-1000XM4 wireless headphones are available for under $200 on Amazon

In a market that is littered with countless options, Sony successfully stands out with its family of wireless headphones that offer comfort, impeccable audio, a boatload of features, and value, though the latter is subjective, especially if you are not on the hunt for the WH-1000XM6, which cost a jaw-dropping $458 on Amazon. Sure, the latter are the crème de la crème of wireless headphones, but if your primary objective is affordability, you will want to pick the WH-1000XM4, which are available at the same online retailer, but at a more affordable $198, or 43 percent off. Despite being two […]

Read full article at https://wccftech.com/sony-wh-1000xm4-wireless-headphones-cost-less-than-half-the-price-of-airpods-max-on-amazon/

Call of Duty: Black Ops 7 is Reportedly “Far Behind” Battlefield 6 In Pre-Orders Leading Up to Launch

Call of Duty: Black Ops 7

A new report from GamesIndustry.Biz, based on data provided by Alinea Analytics, shows that in the lead-up to launch, Call of Duty: Black Ops 7 trails "far behind" the numbers that Battlefield 6 was able to pull. Setting the parameters here: this is based on data from Steam pre-order sales for Battlefield 6 and Call of Duty: Black Ops 7, 18 days ahead of their respective launches. Within that 18-day lead-up period, Battlefield 6 was able to sell close to a million copies in pre-orders. Black Ops 7 has only managed 200K pre-order copies sold. These numbers start to look […]

Read full article at https://wccftech.com/call-of-duty-black-ops-7-far-behind-battlefield-6-pre-order-sales/

CORSAIR Unveils Its Flagship MP700 PRO XT PCIe 5.0 SSD, Offering Up To 14,900 MB/s Of Read Speeds

Corsair MP700 PRO XT PCIe 5.0 x4 NVMe SSD on a desk next to a laptop.

After Team Group, Corsair now also claims to have reached 14,900 MB/s of read speeds on its latest PCIe 5.0 SSD. One of the leading hardware and peripheral manufacturers, CORSAIR has released two new high-performance PCIe 5.0 SSDs for enthusiasts - the MP700 PRO XT and the compact 2242 form factor MP700 MICRO - offering best-in-class performance for PC builders. The first is the MP700 PRO XT, its flagship offering, delivering up to 14,900 MB/s of sequential read speeds and up to 14,500 MB/s of sequential write speeds. If […]

Read full article at https://wccftech.com/corsair-unveils-its-flagship-mp700-pro-xt-pcie-5-0-ssd-offering-up-to-14900-mb-s-of-read-speeds/

DayZ Creator Says AI Fears Remind Him of People Worrying About Google & Wikipedia; ‘Regardless of What We Do, AI Is Here’

Unbranded game controller with futuristic AI head wearing headphones beside portrait of Dean Hall

With each passing month, artificial intelligence creeps into more industries. That does not exclude the gaming industry, which has long used artificial intelligence to populate its virtual worlds. Still, the generative AI that is taking root everywhere offers much more power, and also much greater risk, compared to what gaming developers were used to. Big companies like Microsoft, Amazon, and EA are already laying off (or thinking about laying off) employees to invest further into artificial intelligence. What do the actual developers think about this artificial intelligence revolution? Their takes, as you would expect, are quite varied. The creator of […]

Read full article at https://wccftech.com/dayz-creator-says-ai-fears-remind-him-people-worrying-about-google-wikipedia-ai-is-here/

Thermaltake Confirms One Of Its Existing AIO Coolers Will Be Compatible With LGA 1954 Socket

ASRock motherboard with exposed CPU socket and TOUGHRAM RGB in a high-performance PC build, CPU at 53°C, GPU at 28°C.

The latest AIO cooler from Thermaltake will work with Intel's upcoming LGA 1954 platform, as spotted on the official website. Popular cooler and PC case maker Thermaltake has officially listed the Intel LGA 1954 socket as a compatible platform for one of its latest AIO coolers, confirming support for Intel Nova Lake. Thermaltake's MINECUBE 360 Ultra ARGB Sync, which was showcased at Computex this year, lists the LGA 1954 on its compatibility list, which confirms that the cooler won't just be compatible with the […]

Read full article at https://wccftech.com/thermaltake-confirms-one-of-its-existing-aio-coolers-will-be-compatible-with-lga-1954-socket/

A ‘Rocket League Minus the Cars’ 3v3 F2P Arcade Game Superball Shadow-Dropped on PC and Xbox Series X/S

SUPERBALL text with a futuristic ball and action-packed scene.

Pathea Games, the studio known for games like My Time at Portia, My Time at Sandrock, and the upcoming My Time at Evershine, has just shadow-dropped Superball, a new free-to-play 3v3 arcade hero football game that's out now on PC and Xbox Series X/S - something you'd be more likely to expect from Velan Studios after its game Knockout City, or even from Psyonix as a spin-off from Rocket League. Announced during the ID@Xbox and IGN Showcase, Superball is described as a mash-up between Rocket League, something that's made extremely obvious with its giant ball and arena style, and Overwatch with […]

Read full article at https://wccftech.com/rocket-league-minus-cars-f2p-superball-pathea-games/

ClearWork – ClearWork maps business processes and plans digital transformations


ClearWork helps companies transform their operations by first automatically discovering and mapping their actual, end-to-end processes. Unlike old-school methods that rely on manual workshops and guesswork, our AI analyzes real user activity to give a precise, objective view of current operations and pinpoint friction points.

From there, we use AI to help you model and plan an optimized future state that's grounded in your operational reality. Finally, we provide an AI co-pilot, powered by your own data, and orchestrate automated, cross-platform workflows to ensure new processes are not only planned but also executed and sustained across the organization.

Fortanix and NVIDIA partner on AI security platform for highly regulated industries

Data security company Fortanix Inc. announced a new joint solution with NVIDIA: a turnkey platform that allows organizations to deploy agentic AI within their own data centers or sovereign environments, backed by NVIDIA’s "confidential computing" GPUs.

“Our goal is to make AI trustworthy by securing every layer—from the chip to the model to the data," said Fortanix CEO and co-founder Anand Kashyap, in a recent video call interview with VentureBeat. "Confidential computing gives you that end-to-end trust so you can confidently use AI with sensitive or regulated information.”

The solution arrives at a pivotal moment for industries such as healthcare, finance, and government — sectors eager to embrace AI but constrained by strict privacy and regulatory requirements.

Fortanix’s new platform, powered by NVIDIA Confidential Computing, enables enterprises to build and run AI systems on sensitive data without sacrificing security or control.

“Enterprises in finance, healthcare and government want to harness the power of AI, but compromising on trust, compliance, or control creates insurmountable risk,” said Anuj Jaiswal, chief product officer at Fortanix, in a press release. “We’re giving enterprises a sovereign, on-prem platform for AI agents—one that proves what’s running, protects what matters, and gets them to production faster.”

Secure AI, Verified from Chip to Model

At the heart of the Fortanix–NVIDIA collaboration is a confidential AI pipeline that ensures data, models, and workflows remain protected throughout their lifecycle.

The system uses a combination of Fortanix Data Security Manager (DSM) and Fortanix Confidential Computing Manager (CCM), integrated directly into NVIDIA’s GPU architecture.

“You can think of DSM as the vault that holds your keys, and CCM as the gatekeeper that verifies who’s allowed to use them," Kashyap said. "DSM enforces policy, CCM enforces trust.”

DSM serves as a FIPS 140-2 Level 3 hardware security module that manages encryption keys and enforces strict access controls.

CCM, introduced alongside this announcement, verifies the trustworthiness of AI workloads and infrastructure using composite attestation—a process that validates both CPUs and GPUs before allowing access to sensitive data.

Only when a workload is verified by CCM does DSM release the cryptographic keys necessary to decrypt and process data.

“The Confidential Computing Manager checks that the workload, the CPU, and the GPU are running in a trusted state," explained Kashyap. "It issues a certificate that DSM validates before releasing the key. That ensures the right workload is running on the right hardware before any sensitive data is decrypted.”

This “attestation-gated” model creates what Fortanix describes as a provable chain of trust extending from the hardware chip to the application layer.
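
As a rough illustration of that attestation-gated flow, the following minimal Python sketch mirrors the roles described above. Every class and method name here (ConfidentialComputingManager, DataSecurityManager, verify, release_key, and the sample measurements) is a hypothetical stand-in, not Fortanix's or NVIDIA's actual API.

```python
from dataclasses import dataclass

@dataclass
class AttestationEvidence:
    cpu_report: str      # e.g., a CPU attestation quote (TDX / SEV-SNP style)
    gpu_report: str      # e.g., a GPU attestation report
    workload_hash: str   # measurement of the AI workload image

class ConfidentialComputingManager:
    """Stand-in for CCM: verifies the composite attestation and issues a certificate."""
    TRUSTED_WORKLOADS = {"sha256:demo-workload"}  # illustrative allow-list

    def verify(self, ev: AttestationEvidence) -> dict:
        # A real CCM would also cryptographically validate the CPU and GPU reports.
        if ev.workload_hash not in self.TRUSTED_WORKLOADS:
            raise PermissionError("workload measurement not trusted")
        return {"workload": ev.workload_hash, "status": "trusted"}

class DataSecurityManager:
    """Stand-in for DSM: holds keys and releases them only against a valid certificate."""
    def __init__(self):
        self._keys = {"model-key": b"\x00" * 32}  # placeholder key material

    def release_key(self, certificate: dict, key_id: str) -> bytes:
        if certificate.get("status") != "trusted":
            raise PermissionError("attestation certificate rejected")
        return self._keys[key_id]

# The key is released only after the whole stack has been vouched for.
ccm, dsm = ConfidentialComputingManager(), DataSecurityManager()
cert = ccm.verify(AttestationEvidence("cpu-quote", "gpu-report", "sha256:demo-workload"))
key = dsm.release_key(cert, "model-key")
```

In this toy version, as in the flow Kashyap describes, the decryption key never leaves the key manager unless the attestation step has first vouched for the CPU, the GPU, and the workload measurement.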

It’s an approach aimed squarely at industries where confidentiality and compliance are non-negotiable.

From Pilot to Production—Without the Security Trade-Off

According to Kashyap, the partnership marks a step forward from traditional data encryption and key management toward securing entire AI workloads.

Kashyap explained that enterprises can deploy the Fortanix–NVIDIA solution incrementally, using a lift-and-shift model to migrate existing AI workloads into a confidential environment.

“We offer two form factors: SaaS with zero footprint, and self-managed. Self-managed can be a virtual appliance or a 1U physical FIPS 140-2 Level 3 appliance," he noted. "The smallest deployment is a three-node cluster, with larger clusters of 20–30 nodes or more.”

Customers already running AI models—whether open-source or proprietary—can move them onto NVIDIA’s Hopper or Blackwell GPU architectures with minimal reconfiguration.

For organizations building out new AI infrastructure, Fortanix’s Armet AI platform provides orchestration, observability, and built-in guardrails to speed up time to production.

“The result is that enterprises can move from pilot projects to trusted, production-ready AI in days rather than months,” Jaiswal said.

Compliance by Design

Compliance remains a key driver behind the new platform’s design. Fortanix’s DSM enforces role-based access control, detailed audit logging, and secure key custody—elements that help enterprises demonstrate compliance with stringent data protection regulations.

These controls are essential for regulated industries such as banking, healthcare, and government contracting.

The company emphasizes that the solution is built for both confidentiality and sovereignty.

For governments and enterprises that must retain local control over their AI environments, the system supports fully on-premises or air-gapped deployment options.

Fortanix and NVIDIA have jointly integrated these technologies into the NVIDIA AI Factory Reference Design for Government, a blueprint for building secure national or enterprise-level AI systems.

Future-Proofed for a Post-Quantum Era

In addition to current encryption standards such as AES, Fortanix supports post-quantum cryptography (PQC) within its DSM product.

As global research in quantum computing accelerates, PQC algorithms are expected to become a critical component of secure computing frameworks.

“We don’t invent cryptography; we implement what’s proven,” Kashyap said. “But we also make sure our customers are ready for the post-quantum era when it arrives.”

Real-World Flexibility

While the platform is designed for on-premises and sovereign use cases, Kashyap emphasized that it can also run in major cloud environments that already support confidential computing.

Enterprises operating across multiple regions can maintain consistent key management and encryption controls, either through centralized key hosting or replicated key clusters.

This flexibility allows organizations to shift AI workloads between data centers or cloud regions—whether for performance optimization, redundancy, or regulatory reasons—without losing control over their sensitive information.

Fortanix converts usage into “credits,” which correspond to the number of AI instances running within a factory environment. The structure allows enterprises to scale incrementally as their AI projects grow.

Fortanix will showcase the joint platform at NVIDIA GTC, held October 27–29, 2025, at the Walter E. Washington Convention Center in Washington, D.C. Visitors can find Fortanix at booth I-7 for live demonstrations and discussions on securing AI workloads in highly regulated environments.

About Fortanix

Fortanix Inc. was founded in 2016 in Mountain View, California, by Anand Kashyap and Ambuj Kumar, both former Intel engineers who worked on trusted execution and encryption technologies. The company was created to commercialize confidential computing—then an emerging concept—by extending the security of encrypted data beyond storage and transmission to data in active use, according to TechCrunch and the company’s own About page.

Kashyap, who previously served as a senior security architect at Intel and VMware, and Kumar, a former engineering lead at Intel, drew on years of work in trusted hardware and virtualization systems. Their shared insight into the gap between research-grade cryptography and enterprise adoption drove them to found Fortanix, according to Forbes and Crunchbase.

Today, Fortanix is recognized as a global leader in confidential computing and data security, offering solutions that protect data across its lifecycle—at rest, in transit, and in use.

Fortanix serves enterprises and governments worldwide with deployments ranging from cloud-native services to high-security, air-gapped systems.

"Historically we provided encryption and key-management capabilities," Kashyap said. "Now we’re going further to secure the workload itself—specifically AI—so an entire AI pipeline can run protected with confidential computing. That applies whether the AI runs in the cloud or in a sovereign environment handling sensitive or regulated data.

New TEE.Fail Side-Channel Attack Extracts Secrets from Intel and AMD DDR5 Secure Enclaves

A group of academic researchers from Georgia Tech, Purdue University, and Synkhronix have developed a side-channel attack called TEE.Fail that allows for the extraction of secrets from the trusted execution environment (TEE) in a computer's main processor, including Intel's Software Guard eXtensions (SGX) and Trust Domain Extensions (TDX) and AMD's Secure Encrypted Virtualization with Secure

New Android Trojan 'Herodotus' Outsmarts Anti-Fraud Systems by Typing Like a Human

Cybersecurity researchers have disclosed details of a new Android banking trojan called Herodotus that has been observed in active campaigns targeting Italy and Brazil to conduct device takeover (DTO) attacks. "Herodotus is designed to perform device takeover while making first attempts to mimic human behaviour and bypass behaviour biometrics detection," ThreatFabric said in a report shared with

Researchers Expose GhostCall and GhostHire: BlueNoroff's New Malware Chains

Threat actors tied to North Korea have been observed targeting the Web3 and blockchain sectors as part of twin campaigns tracked as GhostCall and GhostHire. According to Kaspersky, the campaigns are part of a broader operation called SnatchCrypto that has been underway since at least 2017. The activity is attributed to a Lazarus Group sub-cluster called BlueNoroff, which is also known as APT38,

CyDeploy wants to create a replica of a company’s system to help it test updates before pushing them out — catch it at Disrupt 2025

Tina Williams-Koroma said CyDeploy uses machine learning to understand what happens on a company’s machine and then creates a “digital twin” where system administrators can test updates.

Nvidia and partners to build seven AI supercomputers for the U.S. gov't with over 100,000 Blackwell GPUs — combined performance of 2,200 ExaFLOPS of compute

Nvidia, Oracle, and the U.S. Department of Energy will build seven ExaFLOPS-class AI supercomputers for Argonne National Laboratory — including the Oracle-built Equinox and Solstice systems with over 100,000 Blackwell GPUs delivering up to 2,200 FP4 ExaFLOPS — to power next-generation AI and scientific research.

OpenAI and Microsoft sign agreement to restructure OpenAI into a public benefit corporation with Microsoft retaining 27% stake — non-profit 'Open AI Foundation' to oversee 'Open AI PBC'

OpenAI is restructuring into a public benefit corporation with Microsoft retaining a 27% stake in the new "OpenAI PBC," worth roughly $135 billion. OpenAI PBC will still be overseen by the non-profit OpenAI Inc., soon to be renamed OpenAI Foundation. Both companies are intertwined till at least 2032 with major cloud computing contracts.

Fake Nvidia GTC stream hosting deepfake Jensen Huang crypto scam garners 100,000 YouTube viewers, AI-generated hoax generates 5x more views than real event

Unsuspecting YouTube viewers looking for Nvidia's GTC keynote on Tuesday might well have found themselves accidentally watching a Jensen Huang deepfake promoting a cryptocurrency scam, after YouTube promoted the video over the official stream.

Musk says Samsung's Texas fab outclasses TSMC's US-based fabs — with AI5 still in development, questions remain over whether Tesla will need advanced tools

Elon Musk's statement that Samsung's Taylor, Texas fab is more advanced than TSMC's Fab 21 in Arizona reflects the newer 3nm-era tools being installed there. However, this advantage has little relevance for Tesla's AI5 processor, which likely relies on SF4A FinFET technology, which gains minimal benefit from those capabilities.

AI’s True Impact: Productivity, Not Layoffs, Driving CEO Agendas

Despite widespread anxieties about artificial intelligence decimating the workforce, Steve Odland, CEO of The Conference Board, offers a more nuanced, and perhaps more optimistic, perspective: AI is not primarily a job killer, but a catalyst for productivity. He contends that while AI will profoundly reshape the professional landscape, current large-scale layoffs stem more from broader […]

Google Backs AI Cybersecurity Startups in Latin America

Google's new accelerator program is investing in 11 AI cybersecurity startups in Latin America, aiming to fortify the region's digital defenses.

E-commerce consumption could bump 20% because of agentic AI, says Mizuho’s Dan Dolev

Dan Dolev, Mizuho’s managing director and senior analyst covering the fintech and payments space, spoke with the host of CNBC’s “The Exchange” following the announcement of a strategic partnership between PayPal and OpenAI. The discussion centered on the potential total addressable market for “agentic commerce” and the specific upside for PayPal in this burgeoning domain, […]

Nano Banana’s Creative Revolution: Unpacking DeepMind’s Viral Image Model

Google DeepMind’s Nano Banana, the image model that recently captivated the internet, represents a pivotal moment in the democratization and evolution of digital creativity. Its creators, Principal Scientist Oliver Wang and Group Product Manager Nicole Brichtova, recently sat down with a16z partners Yoko Li and Guido Appenzeller to unravel the model’s origins, its unexpected viral […]

Google Gemini for Home: The AI Assistant’s Next Evolution

Google Gemini for Home is rolling out in early access, upgrading smart assistants with advanced conversational AI and introducing a premium subscription for enhanced features.

Mem0 raises $24M to cure AI’s digital amnesia

Mem0 is tackling AI's "digital amnesia" with a universal memory layer, aiming to become the foundational database for the next generation of intelligent agents.

Amazon’s AI-Driven Efficiency Reshapes Big Tech Workforce

The transformative power of artificial intelligence, while heralding unprecedented innovation, is simultaneously catalyzing a profound restructuring of the tech workforce, a reality starkly illustrated by Amazon’s recent corporate layoffs. As CNBC’s MacKenzie Sigalos reported on “Money Movers,” Amazon is embarking on a multi-year efficiency drive, predominantly focused on “hollowing out layers of middle management.” This […]

AI Reshapes M&A Landscape, Trillions in Value Up for Grabs

The convergence of advanced artificial intelligence and a uniquely poised global economy is setting the stage for an unprecedented era of mergers and acquisitions, fundamentally altering how companies operate and how value is created. This transformative period, characterized by both immense opportunity and inherent risks, was a central theme in Ken Moelis’s discussion with CNBC’s […]

Pomelli AI: Google’s Play for SMB Marketing

Google Labs' new Pomelli AI aims to democratize on-brand social media campaign generation for SMBs by leveraging AI to understand and replicate brand identity.

AI Valuations Spark Bubble Fears Amidst Broader Market Optimism

A stark warning echoes from the latest CNBC Fed Survey: nearly 80% of respondents believe AI stocks are currently overvalued, with a quarter deeming them “extremely overvalued.” This sentiment, highlighted by CNBC Senior Economics Reporter Steve Liesman on “Squawk on the Street,” paints a picture of growing apprehension within the investment community regarding the sustainability […]

The post AI Valuations Spark Bubble Fears Amidst Broader Market Optimism appeared first on StartupHub.ai.

Building AI Unicorns: Lessons from Casetext’s $650M Exit

“I cannot believe that they are doing it this way.” This sentiment, articulated by Jake Heller, co-founder and CEO of Casetext, encapsulates the entrepreneurial spark that ignited his $650 million AI legal startup, CoCounsel, recently acquired by Thomson Reuters. His candid talk at the AI Startup School on June 17th, 2025, offered a masterclass in […]

The post Building AI Unicorns: Lessons from Casetext’s $650M Exit appeared first on StartupHub.ai.

(PR) NVIDIA Launches BlueField-4 DPUs with 800 Gb/s Throughput for AI Data Centers

AI factories continue to grow at unprecedented scale, processing structured, unstructured and emerging AI-native data. With demand for trillion-token workloads exploding, a new class of infrastructure is required to keep pace. At NVIDIA GTC Washington, D.C., NVIDIA revealed the NVIDIA BlueField-4 data processing unit, part of the full-stack BlueField platform that accelerates gigascale AI infrastructure, delivering massive computing performance, supporting 800 Gb/s of throughput and enabling high-performance inference processing.

Powered by software-defined acceleration across AI data storage, networking and security, NVIDIA BlueField-4 transforms data centers into secure, intelligent AI infrastructure—designed to accelerate every workload, in every AI factory. It's purpose-built as the end-to-end engine for a new class of AI storage platforms, bringing AI data storage acceleration to the foundation of AI data pipelines for efficient data processing and breakthrough performance at scale.

OneXPlayer Officially Reveals Water-Cooled AMD Strix Halo-Powered OneXFly Apex Gaming Handheld

OneXPlayer has officially announced its latest gaming handheld, the OneXFly Apex, which puts the exciting AMD Ryzen AI Max+ 395 APU and Radeon 8060 graphics into a compact handheld form factor with some interesting cooling and power tricks. The new Windows gaming handheld from OneXPlayer is clearly aimed at combating recent announcements from the likes of GPD, replete with a detachable battery, just like GPD's Win 5. Unlike the Win 5, however, OneXPlayer also saw fit to equip the OneXFly Apex with a liquid cooling system to keep the AMD Ryzen AI Max+ 395 in check. OneXPlayer says that the powerful APU is capable of drawing as much as 120 W with this cooling solution, claiming that it is the first Windows gaming handheld to achieve this feat. The Apex will come with an 8-inch, 120 Hz IPS display with a maximum rated brightness of 500 nits and 100% coverage of the sRGB color space.

The watercooling solution is a detachable tower containing the radiator, pump, and reservoir, much like the XMG Neo 17's Oasis system we reviewed previously. In handheld mode, without the water cooling tower, the OneXFly's APU is said to be capable of sustained 80 W TDP with up to 100 W supposedly also possible. This is all powered by an 85 Wh external battery in a similar piggyback configuration to GPD's Win 5 detachable battery. OneXPlayer showed off some comparative testing putting the device up against another handheld equipped with the AMD Ryzen Z2 Extreme, and the Strix Halo-powered device expectedly blew the smaller APU out of the water when it came to gaming tests. As is the case with other portable devices using the same APU, the OneXPlayer OneXFly Apex will be available with up to 128 GB of LPDDR5x-8000 memory and a 2 TB NVMe SSD (with another M.2 slot available for upgrades). While the device is clearly intended primarily as a gaming handheld, OneXPlayer is openly marketing the Apex as a do-it-all machine, especially considering the water cooling dock.

(PR) NVIDIA to Build Seven New AI Supercomputers for U.S. Government

NVIDIA today announced that it is working with the U.S. Department of Energy's national labs and the nation's leading companies to build America's AI infrastructure to support scientific discovery and economic growth, and to power the next industrial revolution.

"We are at the dawn of the AI industrial revolution that will define the future of every industry and nation," said Jensen Huang, founder and CEO of NVIDIA. "It is imperative that America lead the race to the future—this is our generation's Apollo moment. The next wave of inventions, discoveries and progress will be determined by our nation's ability to scale AI infrastructure. Together with our partners, we are building the most advanced AI infrastructure ever created, ensuring that America has the foundation for a prosperous future, and that the world's AI runs on American innovation, openness and collaboration, for the benefit of all."

(PR) NVIDIA Introduces NVQLink — Connecting Quantum and GPU Computing for 17 Quantum Builders and Nine Scientific Labs

NVIDIA today announced NVIDIA NVQLink, an open system architecture for tightly coupling the extreme performance of GPU computing with quantum processors to build accelerated quantum supercomputers.

Researchers from leading supercomputing centers at national laboratories including Brookhaven National Laboratory, Fermi Laboratory, Lawrence Berkeley National Laboratory (Berkeley Lab), Los Alamos National Laboratory, MIT Lincoln Laboratory, the Department of Energy's Oak Ridge National Laboratory, Pacific Northwest National Laboratory and Sandia National Laboratories guided the development of NVQLink, helping accelerate next-generation work on quantum computing. NVQLink provides an open approach to quantum integration, supporting 17 QPU builders, five controller builders and nine U.S. national labs.

Almost 90% of Windows Games Run on Linux, Notes Report

Linux gaming has quietly reached a new inflection point. A recent Boiling Steam summary of crowd-sourced ProtonDB compatibility reports shows that about 89.7% of Windows titles now at least launch on Linux systems. The results break down into a few categories. Games rated "Platinum," meaning they install, run, and save on Linux without requiring user intervention, made up 42% of new releases tracked in October, up from 29% the previous year. At the same time the share of titles that refuse to launch, the so-called "Borked" cohort, has fallen to roughly 3.8%, a group that still includes deliberate blocks such as March of Giants, which explicitly detects Wine and Proton and exits to the desktop.

The most persistent obstacles are not obscure indies but anti-cheat middleware and contractual choices. Easy Anti-Cheat, BattlEye, and similar systems remain the primary gatekeepers for online multiplayer, and enabling them on Linux is often more a negotiation than a mere technical flip of a switch. When a studio approves Steam Deck support, desktop Linux compatibility frequently follows within a single build cycle, suggesting the code paths are already unified and only sign-off is pending.

(PR) Razer Unveils Huntsman V3 Pro and V3 Pro Tenkeyless 8KHz Esports Keyboards

Razer, the leading global lifestyle brand for gamers, today unveiled the Razer Huntsman V3 Pro 8KHz and Razer Huntsman V3 Pro Tenkeyless 8KHz, its most advanced esports gaming keyboards to date. The Huntsman V3 Pro 8KHz and Huntsman V3 Pro Tenkeyless 8KHz build on the award-winning Huntsman legacy, introducing next-generation responsiveness and refined keystroke feel for a truly competitive edge.

"The Huntsman V3 Pro 8KHz is a reflection of our relentless pursuit of esports excellence. With the evolution of our Analog Optical Switches and the introduction of 8000 Hz HyperPolling, we've pushed performance to new heights," said Barrie Ooi, Head of Razer's PC Gaming Division. "It delivers the speed, control and precision that elite players demand. It's a showcase of what happens when engineering meets competitive ambition."

(PR) PNY Unveils CS3250 M.2 NVMe PCIe Gen 5 x4 SSD, Transforming Storage with Lightning-Fast Performance

PNY announced the addition of the CS3250 M.2 NVMe PCIe Gen 5 x4 SSD to its lineup of solid-state drives. The CS3250 pushes the limits of storage technology with ultra-fast NVMe PCIe Gen 5 x4 performance. With sequential read speeds of up to 14,900 MB/s and write speeds up to 14,000 MB/s, it delivers the speed and responsiveness required for today's most demanding workloads. Designed for AI developers, gamers, content creators, and performance-driven professionals, the CS3250 sets a new benchmark for high-end computing.

Enhanced Computing
Built for the future of computing, the CS3250 harnesses next-gen NVMe PCIe Gen 5 x4 technology to deliver next-level performance, making it the ultimate solution for powering AI image generation, AAA titles, and demanding workloads. Whether you are pushing the limits of creativity or performance, the CS3250 ensures lightning-fast load times, seamless multitasking, and unbeatable responsiveness, empowering professionals and enthusiasts alike - raising the bar for premium storage solutions.

(PR) Endorfy Presents Arx 500 White ARGB PC Case

After the success of the highly acclaimed and award-winning Arx 500 and Arx 700 cases, ENDORFY presents their younger, equally ambitious sibling. The new Arx 500 White ARGB, finished in an elegant white color scheme, is a natural evolution of the series and another step toward a complete product portfolio that allows users to build a reliable, visually consistent ENDORFY ecosystem. Designed with attention to detail, the white Arx 500 impresses with perfectly matched shades of white, ensuring that it looks great both right out of the box and after long-term use. It's a blend of performance and design that creators, gamers and professionals alike will appreciate.

Technology In Its Purest Form
Behind its beautiful form lies thoughtful engineering. The spacious interior can accommodate up to seven fans and radiators up to 360 mm, and it's compatible with ATX, microATX, and Mini-ITX motherboards. Straight out of the box, the case comes equipped with four pre-installed Stratus 140 White PWM ARGB fans, developed in collaboration with Synergy Cooling. Each operates between 200 and 1400 RPM, delivering not only excellent airflow but also silence.

Corsair delivers peak PCIe 5.0 speeds with its new MP700 PRO XT

Corsair extends its PCIe 5.0 offerings with the MP700 PRO XT and MP700 Micro. Corsair has just added two new SSDs to its PCIe 5.0 storage lineup, promising high-end SSD performance and Microsoft DirectStorage support. Catering to the high-end market, Corsair’s new MP700 PRO XT SSD promises performance levels that reach the limits of the […]

The post Corsair delivers peak PCIe 5.0 speeds with its new MP700 PRO XT appeared first on OC3D.

NVIDIA Shows Next-Gen Vera Rubin Superchip For The First Time, Two Massive GPUs Primed For Production Next Year

NVIDIA circuit board displayed on stage shows TWW 2538 on chips.

NVIDIA has shown off its next-gen Vera Rubin Superchip for the first time at GTC in Washington, primed to spark the next wave of AI. NVIDIA has received its first Rubin GPUs in the labs, ready for Vera Rubin Superchip mass production next year, around the same time or earlier. At GTC October 2025, NVIDIA's CEO Jensen Huang showcased the next-gen Vera Rubin Superchip. This is the first time that we are seeing an actual sample of the motherboard, or Superchip as NVIDIA loves to call it, featuring the Vera CPU and two massive Rubin GPUs. The motherboard also hosts […]

Read full article at https://wccftech.com/nvidia-shows-next-gen-vera-rubin-superchip-two-massive-gpus-production-next-year/

NVIDIA Unveils a Massive Partnership With Nokia, Bringing Next-Gen 6G Connectivity By Leveraging the Power of AI

Announcing Nokia to build AI-native 6G on new NVIDIA ARC Aerial RAN Computer on stage with Nokia MIMO Radio displayed.

NVIDIA has announced a surprise partnership with Nokia to bring 6G connectivity by utilizing the firm's new AI-RAN products, involving Grace CPUs and Blackwell GPUs. NVIDIA's collaboration with Nokia allows merging CUDA and computing tech with existing RAN infrastructure. Team Green has managed to integrate AI into everything mainstream, and it seems that the telecommunications industry is now expected to benefit from the next wave of AI's computing capabilities. At the GTC 2025 keynote, NVIDIA's CEO announced a pivotal partnership with Nokia, formally entering the race for achieving 6G connectivity through a new suite of AI-RAN products combined with Nokia's […]

Read full article at https://wccftech.com/nvidia-announces-a-massive-partnership-with-nokia-bringing-next-gen-6g-connectivity/

Amazon Game Studios Hit With “Significant” Cuts Amid Mass 14,000+ Layoff

New World game artwork with fiery and lush landscapes, featuring a warrior face with glowing eyes at the center.

Amazon is cutting more than 14,000 corporate jobs today, and per a report from Bloomberg, the video games division, Amazon Game Studios, is not immune to the cuts. While Amazon doesn't specify exactly how many people from its video games division will be laid off, a statement from Steve Boom, Amazon's head of audio, Twitch, and games, does call the cut "significant," and says that the cuts are happening despite Amazon being "proud" of the success it has had. While the studio's MMO, New World, isn't mentioned by name, the statement does say that Amazon is halting its game […]

Read full article at https://wccftech.com/amazon-video-game-division-hit-significant-cuts-amid-mass-14000-layoff/

Snapdragon 8 Elite Gen 6 Rumored To Get LPDDR6 RAM & UFS 5.0 Support For Faster AI Operations, But Tipster Shares Questionable Lithography Details

Snapdragon 8 Elite Gen 6 details shared by tipster

Qualcomm will keep pace with Apple and announce its first 2nm chipset in late 2026, the Snapdragon 8 Elite Gen 6, directly succeeding the Snapdragon 8 Elite Gen 5. A tipster now shares some partial specifications of the chipset, claiming that it will feature LPDDR6 RAM and UFS 5.0 storage, bringing in a wave of improvements. However, the rumor also mentions that the Snapdragon 8 Elite Gen 6 will utilize TSMC’s more advanced ‘N2P’ process, which has been refuted on a previous occasion. Based on TSMC’s 2nm production timeline, its N2 wafers will be available in higher volume for customers like […]

Read full article at https://wccftech.com/snapdragon-8-elite-gen-6-to-get-lpddr6-and-ufs-5-0-support-but-will-stick-with-tsmc-n2-process/

Final Fantasy VII Rebirth Zack Gameplay Overhaul Mod Will Introduce New Skills and Mechanics

Final Fantasy VII Rebirth key art

Zack Fair's gameplay in Final Fantasy VII Rebirth is set to be significantly expanded by a new mod introducing new mechanics and skills for an overhauled combat experience. This Zack gameplay overhaul mod is being developed by NSK, the modder behind the Zack and Sephiroth Combat Fix mod, which addressed some issues for the two characters and expanded their possibilities when added to the regular combat party outside their small playable segments. Judging from the video showcase shared a few days ago on YouTube, the changes being made to Zack's gameplay are going to be significant, leveraging his unique Charge mechanics […]

Read full article at https://wccftech.com/final-fantasy-vii-rebirth-zack-gameplay-mod/

Battlefield 6 Season 1, Battlefield REDSEC Now Live, Full Season 1 Roadmap Revealed

Battlefield Redsec title screen with armed soldiers walking on a street amidst explosions.

It's a big day for Battlefield 6, with both its Season 1 update now live for players to jump into, and its new free-to-play battle royale mode, Battlefield REDSEC, also now available. EA and Battlefield Studios confirmed yesterday what was already rumored, that REDSEC would be revealed and launched today, and now it's here for all players on PC, PS5, and Xbox Series X/S. Once the gameplay trailer that was teased yesterday was over, the mode and the new season were officially live for all players to jump into, and we got our first major question of the day answered. […]

Read full article at https://wccftech.com/battlefield-6-season-1-battlefield-redsec-out-now-pc-ps5-xbox-series-x-s/

Lenovo Launches Legion Pro 27Q-10, The Cheapest QHD OLED Monitor At Just $337

Lenovo Legion desktop setup with RGB keyboard, monitor displaying LEGION, and headset on desk.

The Pro 27Q-10 is probably the cheapest QHD OLED gaming monitor on the market, currently selling for just 2,399 Yuan in China. Lenovo debuts its Legion Pro series OLED monitors starting at $337, available in both 2K and 4K variants with up to a 280 Hz refresh rate. Competition in the OLED display category is getting aggressive, and while we already have some QHD OLED gaming monitors available for as low as $450-$500, Lenovo just brought the price to under $350. Lenovo, the most popular PC brand on earth, isn't just involved in desktops and laptops; it is also […]

Read full article at https://wccftech.com/lenovo-launches-legion-pro-27q-10-the-cheapest-qhd-oled-monitor-at-just-337/

President Trump to Meet NVIDIA’s CEO Jensen Huang at a Time When the U.S. & China Have Agreed on the Framework for a Trade Deal

Unbranded chip held on stage with spiral backdrop.

President Trump is expected to meet with NVIDIA's CEO, Jensen Huang, during his visit to South Korea, where he will congratulate him on the firm's recent achievements. President Trump will congratulate NVIDIA on producing the first Blackwell chip wafer in the US. Well, the timing of a meeting between President Trump and Jensen Huang is indeed a 'massive' coincidence, to say the least, especially since both the US and China have agreed on a trade deal framework, which is expected to reduce hostilities between the two nations. While speaking with business leaders in Tokyo, Japan, President Trump announced his meeting […]

Read full article at https://wccftech.com/president-trump-to-meet-nvidia-ceo-jensen-huang/

Microsoft Will No Longer Have Any Say In OpenAI’s Upcoming “Apple iPhone Killer” Consumer Device Decisions

Apple logo in fiery orange and OpenAI logo in metallic blue appear side by side in dramatic background.

OpenAI has been working for quite a while now with the famous Apple designer, Jony Ive, to come up with a consumer AI device, one that would supposedly render smartphones obsolete, devastating Apple's legendary moat around its iPhones in the process. Now, we have just received the clearest sign yet that OpenAI is indeed working on such a device. What's more, Microsoft will no longer exercise any influence over the upcoming "Apple iPhone killer." OpenAI and Microsoft have successfully renegotiated their tie-up, removing the latter's influence over the former's upcoming "Apple iPhone killer" consumer device, among other things. Microsoft and […]

Read full article at https://wccftech.com/microsoft-will-no-longer-have-any-say-in-openais-upcoming-apple-iphone-killer-consumer-device-decisions/

Sandbox Racer Wreckreation Out Now on PC, PS5, and Xbox Series X/S

Wreckreation logo above a landscape with sports cars racing on twisting tracks and roads.

Wreckreation, the sandbox open-world arcade racing game from Three Fields Entertainment, a studio founded by former Criterion developers who worked on the Burnout series, is out now on PC, PS5, and Xbox Series X/S. Published by THQ Nordic, Wreckreation gives players the freedom to create whatever kinds of tracks they want, from the kinds of things you'd only expect to see in Hot Wheels Unleashed to something super realistic if that's more your speed, and race the wide variety of vehicles on them to your heart's content. With more than 400 square kilometres of space to create tracks in and […]

Read full article at https://wccftech.com/sandbox-racer-wreckreation-out-now-pc-ps5-xbox-series/

Smart Glasses Can Be The Future Of Chip Manufacturing And Smartphone & AI GPU Production, Says Vuzix’s Enterprise Solutions Head

Vuzix Z100 smart glasses displaying Dinner at 8?, battery and Wi-Fi icons, and 5:45 PM on the lens.

The advent of AI and Meta's launch of its smart glasses have injected fresh air into the sector; after Google decided to shelve its own smart glasses in 2023, the category has seen renewed interest. In fact, Meta CEO Mark Zuckerberg has gone as far as to suggest that, courtesy of AI, users who do not use smart glasses could find themselves at a cognitive disadvantage. To understand the smart glasses industry and how the gadgets can impact consumer electronics manufacturing, semiconductor fabrication, and AI GPU production, we decided to talk to Vuzix Corporation's President of Enterprise Solutions, Dr. Chris Parkinson. Vuzix […]

Read full article at https://wccftech.com/smart-glasses-can-be-the-future-of-chip-manufacturing-and-smartphone-ai-gpu-production-says-vuzixs-enterprise-solutions-head/

Intel Foundry Reportedly in Bold Pursuit of Former TSMC Executive Who Drove the Company’s High-End Chip Breakthroughs

Logos of tsmc and intel overlaid on semiconductor chip background.

TSMC's former SVP, known for his key role in driving the Taiwan giant's chip technologies, is reportedly being pursued to join Intel Foundry, which could be a significant hiring move for Team Blue. Intel's pursuit of TSMC's former executive shows the firm's hunger for a comeback in the chip industry. Intel has been scaling up its chipmaking ambitions since the change in leadership, and under CEO Lip-Bu Tan, the foundry division has vowed to gain recognition in the semiconductor industry. Structural changes are being made within the department, including adjustments to the management hierarchy and the approach towards specific chip […]

Read full article at https://wccftech.com/intel-foundry-reportedly-pursuing-former-tsmc-executive/

Guild Wars 2: Visions of Eternity Expansion Out Now on PC

A colorful bird flies over a vibrant fantasy landscape with waterfalls and cliffs.

Developer ArenaNet has launched the sixth major expansion for Guild Wars 2 today, with Visions of Eternity now available to players on PC. Visions of Eternity adds a new island called Castora, with two new maps to explore, a new storyline, and plenty more. The new storyline kicks off with whispers and rumors of the island of Castora, with the Tyrian Alliance stepping in to uncover more about the magical island once they discover that the Inquest has begun sniffing around for Castora. Alongside two new maps included with the new expansion, Shipwreck Strand and Starlit Weald, players […]

Read full article at https://wccftech.com/guild-wars-2-visions-of-eternity-expansion-out-now-pc/

Tampo – Manage team and personal tasks in one app


Tampo is a modern task and team management platform built for startups and growing teams. It helps you organize projects, assign tasks, and collaborate seamlessly—all in one place. With features like multi-user assignments, real-time tracking, and smart filters, Tampo simplifies team coordination without sacrificing power. Designed to be fast, intuitive, and mobile-friendly, Tampo is the productivity partner your team needs to get more done, together.

Bill Gates urges world to ‘refocus’ climate goals, pushes back on emissions targets

Cipher executive editor Amy Harder and Bill Gates at the Breakthrough Energy Summit in Seattle on Oct. 19, 2022. (GeekWire Photo / Lisa Stiffler)

Less than two weeks ahead of the United Nations climate conference, Bill Gates posted a memo on his personal blog encouraging folks to just calm down about climate change.

“Although climate change will have serious consequences — particularly for people in the poorest countries — it will not lead to humanity’s demise. People will be able to live and thrive in most places on Earth for the foreseeable future,” Gates wrote.

The missive seems to run counter to earlier climate actions taken by the Microsoft co-founder and billionaire, but also echoes Gates’ long-held priorities and perspectives. In some regards, it’s the framing, timing and broader political context that heighten the memo’s impact.

What the world needs to do, he said, is to shift the goals away from reducing carbon emissions and keeping warming below agreed-upon temperature targets.

“This is a chance to refocus on the metric that should count even more than emissions and temperature change: improving lives,” he wrote. “Our chief goal should be to prevent suffering, particularly for those in the toughest conditions who live in the world’s poorest countries.”

More than four years ago, Gates published “How to Avoid a Climate Disaster,” a book highlighting the urgency and necessity of cutting carbon emissions and promoting the need to reduce “green premiums” in order to make climate-friendly technologies as cheap as unsustainable alternatives.

“It’ll be tougher than anything humanity’s ever done, and only by staying constant in working on this over the next 30 years do we have a chance to do it,” Gates told GeekWire in 2021. “Having some people who think it’s easy will be an impediment. Having people who think that it’s not important will be an impediment.”

Gates’ clean energy efforts go back even earlier. In 2006 he helped launch the next-gen nuclear company TerraPower, which is currently building its first reactor in Wyoming. In 2015 he founded Breakthrough Energy Ventures, a $1 billion fund to support carbon-cutting startups, which evolved into Breakthrough Energy, an umbrella organization tackling clean tech policies, funding for researchers and data generation.

Earlier this year, however, Gates began taking steps that suggested a cooling commitment to the challenge.

Roughly two months after President Trump took office in January, and as clean energy policies and funding began getting axed, Breakthrough Energy laid off staff. In May Gates announced he would direct nearly all of his wealth to his eponymous global health foundation, deploying $200 billion through the organization over two decades.

At the same time, many of the key points in the memo published today reflect statements that Gates has made in the past.

In both his new post and at a 2022 global climate summit organized in Seattle by Breakthrough Energy, Gates urged people to focus on reducing green premiums more than on cutting emissions as a key benchmark.

“If you keep the primary measures, which is the emissions reductions in the near term, you’re going to be very depressed,” Gates said. At his summit talk, he shared optimism that new innovations were arriving quickly and would address climate challenges.

A curious paradox in Gates’ stance is the reality that people living in lower-income nations and in regions important to the Gates Foundation are often hardest hit by the rising temperatures and natural disasters that are stoked by increased carbon emissions.

Gates acknowledged that truth in his post this week, and said that solutions such as engineering drought-tolerant crops and making air conditioning more widespread can address some of those harms. At the Seattle summit three years ago, one of the Breakthrough Energy executives likewise said the organization was going to increase its investment into technologies for adapting to climate change.

On Nov. 10, global climate leaders will meet in Brazil for COP30 to discuss climate progress and issues. Gates has often attended the event, but the New York Times reported that won’t be the case this year.

UN efforts meanwhile continue to emphasize the importance of reducing emissions. A statement today from the organization notes that while carbon emissions are curving downward, it’s not happening fast enough.

The world needs to raise its climate ambitions, the statement continues, “to avoid the worst climate impacts by limiting warming to 1.5°C this century, as science demands.”

GitHub's Agent HQ aims to solve enterprises' biggest AI coding problem: Too many agents, no central control

GitHub is making a bold bet that enterprises don't need another proprietary coding agent: They need a way to manage all of them.

At its Universe 2025 conference, the Microsoft-owned developer platform announced Agent HQ. The new architecture transforms GitHub into a unified control plane for managing multiple AI coding agents from competitors including Anthropic, OpenAI, Google, Cognition and xAI. Rather than forcing developers into a single agent experience, the company is positioning itself as the essential orchestration layer beneath them all.

Agent HQ represents GitHub's attempt to apply its collaboration platform approach to AI agents. Just as the company transformed Git, pull requests and CI/CD into collaborative workflows, it's now trying to do the same with a fragmented AI coding landscape.

The announcement marks what GitHub calls the transition from "wave one" to "wave two" of AI-assisted development. According to GitHub's Octoverse report, 80% of new developers use Copilot in their first week, and AI has helped drive a large overall increase in use of the GitHub platform.

"Last year, the big announcements for us, and what we were saying as a company, is wave one is done, that was kind of code completion," GitHub's COO Mario Rodriguez told VentureBeat. "We're into this wave two era, [which] is going to be multimodal, it's going to be agentic and it's going to have these new experiences that will feel AI native."

What is Agent HQ?

GitHub already updated its GitHub Copilot coding tool for the agentic era with the debut of GitHub Copilot Agent in May.

Agent HQ transforms GitHub into an open ecosystem that unites multiple AI coding agents on a single platform. Over the coming months, coding agents from Anthropic, OpenAI, Google, Cognition, xAI and others will become available directly within GitHub as part of existing paid GitHub Copilot subscriptions.

The architecture maintains GitHub's core primitives. Developers still work with Git, pull requests and issues. They still use their preferred compute, whether GitHub Actions or self-hosted runners. What changes is the layer above: agents from multiple vendors can now operate within GitHub's security perimeter, using the same identity controls, branch permissions and audit logging that enterprises already trust for human developers.

This approach differs fundamentally from standalone tools. When developers use Cursor or grant repository access to Claude, those agents typically receive broad permissions across entire repositories. Agent HQ compartmentalizes access at the branch level and wraps all agent activity in enterprise-grade governance controls.

Mission Control: One interface for all agents

At the heart of Agent HQ is Mission Control. It's a unified command center that appears consistently across GitHub's web interface, VS Code, mobile apps and the command line. Through Mission Control, developers can assign work to multiple agents simultaneously. They can track progress and manage permissions, all from a single pane of glass.

The technical architecture addresses a critical enterprise concern: Security. Unlike standalone agent implementations where users must grant broad repository access, GitHub's Agent HQ implements granular controls at the platform level.

"Our coding agent has a set of security controls and capabilities that are built natively into the platform, and that's what we're providing to all of these other agents as well," Rodriguez explained. "It runs with a GitHub token that is very locked down to what it can actually do."

Agents operating through Agent HQ can only commit to designated branches. They run within sandboxed GitHub Actions environments with firewall protections. They operate under strict identity controls. Rodriguez explained that even if an agent goes rogue, the firewall prevents it from accessing external networks or exfiltrating data unless those protections are explicitly disabled.
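
To illustrate the branch-level compartmentalization in the abstract, here is a minimal hypothetical sketch (not GitHub's actual token, firewall, or API) in which an agent's credential is scoped to a single working branch, so any write anywhere else is rejected before it reaches the repository:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class AgentToken:
    repo: str
    allowed_branch: str        # the agent's own working branch
    network_allowlist: tuple   # hosts an illustrative sandbox firewall would permit


def authorize_push(token: AgentToken, repo: str, branch: str) -> bool:
    """Allow a commit only to the one branch this token was scoped to."""
    return repo == token.repo and branch == token.allowed_branch


# Hypothetical names purely for illustration.
token = AgentToken("acme/web-app", "copilot/fix-login-bug", ("github.com",))
print(authorize_push(token, "acme/web-app", "copilot/fix-login-bug"))  # True
print(authorize_push(token, "acme/web-app", "main"))                   # False
```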

Technical differentiation: MCP integration and custom agents

Beyond managing third-party agents, GitHub is introducing two technical capabilities that set Agent HQ apart from alternative approaches like Cursor's standalone editor or Anthropic's Claude integration.

Custom agents via AGENTS.md files: Enterprises can now create source-controlled configuration files that define specific rules, tools and guardrails for how Copilot behaves. For example, a company could specify "prefer this logger" or "use table-driven tests for all handlers." This permanently encodes organizational standards without requiring developers to re-prompt every time.

"Custom agents have an immense amount of product market fit within enterprises, because they could just codify a set of skills that the coordination can do, then standardize on those and get really high quality output," Rodriguez said.

The AGENTS.md specification allows teams to version control their agent behavior alongside their code. When a developer clones a repository, they automatically inherit the custom agent rules. This solves a persistent problem with AI coding tools: Inconsistent output quality when different team members use different prompting strategies.
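
To make that concrete, here is a minimal, hypothetical sketch (not GitHub's actual implementation) of how repository-level AGENTS.md rules could be loaded and prepended to an agent's base instructions, so that every clone of the repository inherits the same guardrails:

```python
from pathlib import Path

# Base instructions for the agent; repo-level rules get appended below.
DEFAULT_INSTRUCTIONS = "You are a coding agent working in this repository."


def build_system_prompt(repo_root: str) -> str:
    """Prepend version-controlled AGENTS.md rules to the base prompt.

    Hypothetical illustration: because the rules live in the repository,
    they travel with every clone and branch, so all developers and agents
    see the same standards, e.g. "prefer this logger" or "use table-driven
    tests for all handlers".
    """
    rules_file = Path(repo_root) / "AGENTS.md"
    if rules_file.exists():
        rules = rules_file.read_text(encoding="utf-8")
        return f"{DEFAULT_INSTRUCTIONS}\n\nRepository rules:\n{rules}"
    return DEFAULT_INSTRUCTIONS


if __name__ == "__main__":
    print(build_system_prompt("."))
```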

Native Model Context Protocol (MCP) support: VS Code now includes a GitHub MCP Registry. Developers can discover, install and enable MCP servers with a single click. They can then create custom agents that combine these tools with specific system prompts.

This positions GitHub as the integration point between the emerging MCP ecosystem and actual developer workflows. MCP, introduced by Anthropic but rapidly gaining industry support, is becoming a de facto standard for agent-to-tool communication. By supporting the full specification, GitHub can orchestrate agents that need access to external services without each agent implementing its own integration logic.

Plan Mode and agentic code review

GitHub is also shipping new capabilities within VS Code itself. Plan Mode allows developers to collaborate with Copilot on building step-by-step project approaches. The AI asks clarifying questions before any code is written. Once approved, the plan can be executed either locally in VS Code or by cloud-based agents.

The feature addresses a common failure mode in AI coding: Beginning implementation before requirements are fully understood. By forcing an explicit planning phase, GitHub aims to reduce wasted effort and improve output quality.

More significantly, GitHub's code review feature is becoming agentic. The new implementation will use GitHub's CodeQL engine, which previously focused largely on security vulnerabilities, to identify bugs and maintainability issues. The code review agent will automatically scan agent-generated pull requests before human review. This creates a two-stage quality gate.

"Our code review agent will be able to make calls into the CodeQL engine to then find a set of bugs," Rodriguez explained. "We're extending the engine and we're going to be able to tap into that engine also to find bugs."

Enterprise considerations: What to do now

For enterprises already deploying multiple AI coding tools, Agent HQ offers a path to consolidation without forcing tool elimination.

GitHub's multi-agent approach provides vendor flexibility and reduces lock-in risk. Organizations can test multiple agents within a unified security perimeter and switch providers without retraining developers. The tradeoff is potentially less optimized experiences compared to specialized tools that tightly integrate UI and agent behavior.

Rodriguez's recommendation is clear: Begin with custom agents. This allows enterprises to codify organizational standards that agents follow consistently. Once established, organizations can layer in additional third-party agents to expand capabilities.

"Go and do agent coding, custom agents and start playing with that," he said. "That is a capability available tomorrow, and it allows you to really start shaping your SDLC to be personalized to you, your organization and your people."

Intuit learned to build AI agents for finance the hard way: Trust lost in buckets, earned back in spoonfuls

Building AI for financial software requires a different playbook than consumer AI, and Intuit's latest QuickBooks release provides an example.

The company has announced Intuit Intelligence, a system that orchestrates specialized AI agents across its QuickBooks platform to handle tasks including sales tax compliance and payroll processing. These new agents augment existing accounting and project management agents (which have also been updated) as well as a unified interface that lets users query data across QuickBooks, third-party systems and uploaded files using natural language.

The new development follows years of investment and improvement in Intuit's GenOS, allowing the company to build AI capabilities that reduce latency and improve accuracy.

But the real news isn't what Intuit built — it's how they built it and why their design decisions will make AI more usable. The company's latest AI rollout represents an evolution built on hard-won lessons about what works and what doesn't when deploying AI in financial contexts.

What the company learned is sobering: Even when its accounting agent improved transaction categorization accuracy by 20 percentage points on average, they still received complaints about errors.

"The use cases that we're trying to solve for customers include tax and finance; if you make a mistake in this world, you lose trust with customers in buckets and we only get it back in spoonfuls," Joe Preston, Intuit's VP of product and design, told VentureBeat.

The architecture of trust: Real data queries over generative responses

Intuit's technical strategy centers on a fundamental design decision. For financial queries and business intelligence, the system queries actual data, rather than generating responses through large language models (LLMs).

Also critically important: That data isn't all in one place. Intuit's technical implementation allows QuickBooks to ingest data from multiple distinct sources: native Intuit data, OAuth-connected third-party systems like Square for payments and user-uploaded files such as spreadsheets containing vendor pricing lists or marketing campaign data. This creates a unified data layer that AI agents can query reliably.

"We're actually querying your real data," Preston explained. "That's very different than if you were to just copy, paste out a spreadsheet or a PDF and paste into ChatGPT."

This architectural choice means that the Intuit Intelligence system functions more as an orchestration layer. It's a natural language interface to structured data operations. When a user asks about projected profitability or wants to run payroll, the system translates the natural language query into database operations against verified financial data.
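
As a rough illustration of that pattern (a hypothetical sketch, not Intuit's implementation), the model's only job is to pick a whitelisted query and fill in its parameters; the figures in the answer then come straight from database rows rather than from generated text:

```python
import sqlite3

# Whitelisted, parameterized queries: the language model selects an intent
# and supplies parameters, but never writes SQL or invents numbers itself.
QUERIES = {
    "revenue_by_month": (
        "SELECT strftime('%Y-%m', date) AS month, SUM(amount) "
        "FROM invoices WHERE date BETWEEN ? AND ? GROUP BY month"
    ),
}


def run_intent(conn, intent, params):
    """Execute the structured query chosen by the model against real data."""
    if intent not in QUERIES:
        raise ValueError(f"Unsupported intent: {intent}")
    return conn.execute(QUERIES[intent], params).fetchall()


if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE invoices (date TEXT, amount REAL)")
    conn.executemany(
        "INSERT INTO invoices VALUES (?, ?)",
        [("2025-01-15", 1200.0), ("2025-02-03", 800.0), ("2025-02-20", 450.0)],
    )
    # e.g. "How did revenue trend early this year?" might map to
    # ("revenue_by_month", ("2025-01-01", "2025-03-31")).
    print(run_intent(conn, "revenue_by_month", ("2025-01-01", "2025-03-31")))
```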

This matters because Intuit's internal research has uncovered widespread shadow AI usage. When surveyed, 25% of accountants using QuickBooks admitted they were already copying and pasting data into ChatGPT or Google Gemini for analysis.

Intuit's approach treats AI as a query translation and orchestration mechanism, not a content generator. This reduces the hallucination risk that has plagued AI deployments in financial contexts.

Explainability as a design requirement, not an afterthought

Beyond the technical architecture, Intuit has made explainability a core user experience across its AI agents. This goes beyond simply providing correct answers: It means showing users the reasoning behind automated decisions.

When Intuit's accounting agent categorizes a transaction, it doesn't just display the result; it shows the reasoning. This isn't marketing copy about explainable AI, it's actual UI displaying data points and logic.

"It's about closing that trust loop and making sure customers understand the why," Alastair Simpson, Intuit's VP of design, told VentureBeat.

This becomes particularly critical when you consider Intuit's user research: While half of small businesses describe AI as helpful, nearly a quarter haven't used AI at all. The explanation layer serves both populations: Building confidence for newcomers, while giving experienced users the context to verify accuracy.

The design also enforces human control at critical decision points. This approach extends beyond the interface. Intuit connects users directly with human experts, embedded in the same workflows, when automation reaches its limits or when users want validation.

Navigating the transition from forms to conversations

One of Intuit's more interesting challenges involves managing a fundamental shift in user interfaces. Preston described it as having one foot in the past and one foot in the future.

"This isn't just Intuit, this is the market as a whole," said Preston. "Today we still have a lot of customers filling out forms and going through tables full of data. We're investing a lot into leaning in and questioning the ways that we do it across our products today, where you're basically just filling out, form after form, or table after table, because we see where the world is headed, which is really a different form of interacting with these products."

This creates a product design challenge: How do you serve users who are comfortable with traditional interfaces while gradually introducing conversational and agentic capabilities?

Intuit's approach has been to embed AI agents directly into existing workflows. This means not forcing users to adopt entirely new interaction patterns. The payments agent appears alongside invoicing workflows; the accounting agent enhances the existing reconciliation process rather than replacing it. This incremental approach lets users experience AI benefits without abandoning familiar processes.

What enterprise AI builders can learn from Intuit's approach

Intuit's experience deploying AI in financial contexts surfaces several principles that apply broadly to enterprise AI initiatives.

Architecture matters for trust: In domains where accuracy is critical, consider whether you need content generation or data query translation. Intuit's decision to treat AI as an orchestration and natural language interface layer dramatically reduces hallucination risk and avoids using AI as a generative system.

Explainability must be designed in, not bolted on: Showing users why the AI made a decision isn't optional when trust is at stake. This requires deliberate UX design. It may constrain model choices.

User control preserves trust during accuracy improvements: Intuit's accounting agent improved categorization accuracy by 20 percentage points. Yet, maintaining user override capabilities was essential for adoption.

Transition gradually from familiar interfaces: Don't force users to abandon forms for conversations. Embed AI capabilities into existing workflows first. Let users experience benefits before asking them to change behavior.

Be honest about what's reactive versus proactive: Current AI agents primarily respond to prompts and automate defined tasks. True proactive intelligence that makes unprompted strategic recommendations remains an evolving capability.

Address workforce concerns with tooling, not just messaging: If AI is meant to augment rather than replace workers, provide workers with AI tools. Show them how to leverage the technology.

For enterprises navigating AI adoption, Intuit's journey offers a clear directive. The winning approach prioritizes trustworthiness over capability demonstrations. In domains where mistakes have real consequences, that means investing in accuracy, transparency and human oversight before pursuing conversational sophistication or autonomous action.

Simpson frames the challenge succinctly: "We didn't want it to be a bolted-on layer. We wanted customers to be in their natural workflow, and have agents doing work for customers, embedded in the workflow."

Ferguson’s AI balancing act: Washington governor wants to harness innovation while minimizing harms

Washington Gov. Bob Ferguson speaks at Seattle AI Week, at the AI House on Pier 70 along the city’s waterfront. (GeekWire Photo / Todd Bishop)

Washington state Gov. Bob Ferguson is threading the needle when it comes to artificial intelligence.

Ferguson made a brief appearance at the opening reception for Seattle AI Week on Monday evening, speaking at AI House on Pier 70 about his approach to governing the consequential technology.

“I view my job as maximizing the benefits and minimizing harms,” said Ferguson, who took office earlier this year.

Ferguson called AI one of the “top five biggest challenges” he thinks about daily, both professionally and personally.

In a follow-up interview with GeekWire, the governor said AI “could totally transform our government, as well as the private sector, in many ways.”

His comments came just as Amazon, the largest employer in Washington state, said it would eliminate about 14,000 corporate jobs, citing a need to reduce bureaucracy and become more efficient in the new era of artificial intelligence.

Ferguson told the crowd that the future of work and “loss of jobs that come with the technology” is on his mind.

The governor highlighted Washington’s AI Task Force, created during his tenure as attorney general, which is studying issues from algorithmic bias to data security. The group’s next set of recommendations arrives later this year and could shape upcoming legislation, he said.

States are moving ahead with their own AI rules in the absence of a comprehensive federal framework. Washington appears to sit in the pragmatic middle of this fast-moving regulatory landscape — using executive action and an expert task force to build guidelines, while watching experiments in states such as California and Colorado.

Seattle city leaders are also getting involved. Seattle Mayor Bruce Harrell last month announced a “responsible AI plan” that provides guidelines for Seattle’s use of artificial intelligence and its support of the AI tech sector as an economic driver.

(GeekWire Photo / Taylor Soper)

Ferguson said he’s aware of how AI can “really revolutionize our economy and state in so many ways,” from healthcare to education to wildfire detection.

But he also flagged his concerns — both as a policymaker and parent. The governor, who has 17-year-old twins, said he worries about the technology’s impact on young people, referencing reports of teen suicides linked to AI chatbots.

Despite those concerns, Ferguson maintained an upbeat tone during his remarks at Seattle AI Week, citing the region’s technical talent and economic opportunity from the technology.

He noted that the state, amid a $16 billion budget shortfall this year, kept $300,000 in funding for the AI House, the new waterfront startup hub that hosted Monday’s event.

“There is no better place anywhere in the United States for this innovation than right here in the Northwest,” he said.

Related: A tale of two Seattles in the age of AI: Harsh realities and new hope for the tech community

Helion gives behind-the-scenes tour of secretive 60-foot fusion prototype as it races to deployment

Stacks of pallets containing power units that deliver massive pulses of energy to Helion’s Polaris fusion generator. (Helion Photo)

EVERETT, Wash. — In an industrial stretch of Everett is a boxy, windowless building called Ursa. Inside that building is a vault built from concrete blocks up to 5 feet thick with an additional layer of radiation-absorbing plastic. Within that vault is Polaris, a machine that could change the world.

Helion Energy is trying to replicate the physics that fuel the sun and the stars — hence the celestial naming theme — to provide nearly limitless power on earth through fusion reactions.

The company recently invited a small group of journalists to visit its headquarters and see Polaris, which is the seventh iteration of its fusion generator and the prototype for a commercial facility called Orion that broke ground this summer in Malaga in Central Washington.

David Kirtley, Helion CEO, at the Malaga, Wash., site where the company broke ground this summer on its planned commercial fusion plant. (LinkedIn Photo)

Few people outside of Helion have been provided such access; photographs were not allowed.

“We run these systems right now at 100 million degrees, about 10 times the temperature of the sun, and compress them to high pressure… the same pressure as the bottom of the Marianas Trench,” said Helion CEO and co-founder David Kirtley, referencing the deepest part of the ocean.

Polaris and its vault occupy a relatively small footprint inside of Ursa. The majority of the space is filled with 2,500 power units. They’re configured into 4-foot-by-4-foot pallets, lined up in rows and stacked seven high. The units are packed with capacitors that are charged from the grid to provide super high intensity pulses of electricity — 100 gigawatts of peak power — that create the temperatures and pressure needed for fusion reactions.

All of that energy is carried through miles and miles of coaxial cables filled with copper, aluminum and custom-metal alloys. End-to-end, the cables would stretch across Washington state and back again — roughly 720 miles. They flow in thick, black bundles from the pallets into the vault. They curl on the floor in giant heaps before connecting to the tubular-shaped, 60-foot-long Polaris generator.

The ultimate goal is for the generator to force lightweight ions to fuse, creating a super hot plasma that expands, pushing on a magnetic field that surrounds it. The energy created by that expansion is directly captured and carried back to the capacitors to recharge them so the process can be repeated over and over again.

And the small amount of extra power that’s produced by fusion goes into the electrical grid for others to use — or at least that’s the plan for the future.

‘Worth being aggressive’

Helion is building fusion generators that smash together deuterium and helium-3 isotopes in super hot, super high pressure conditions to produce power. (Helion Illustration)

Helion is a contender in a global race to generate fusion power for a rapidly escalating demand for electricity, driven in part by data centers and AI. No one so far has been able to make and capture enough energy from fusion to commercialize the process, but dozens of companies — including three other competitors in the Pacific Northwest — are trying.

The company aims to begin producing energy at the Malaga site by 2028, power that Microsoft has agreed to purchase. If it hits this extremely ambitious target — and many are highly skeptical — it could be the world’s first company to do so.

“There is a level of risk, of being aggressive with program development, new technology and timelines,” Kirtley said. “But I think it’s worth it. Fusion is the same process that happens in the stars. It has the promise of very low cost electricity that’s clean and safe and base load and always on. And so it’s worth being aggressive.”

Some in the sector worry that Helion will miss the mark and cast doubt on a sector that is working hard to prove itself. At a June event, the head of R&D for fusion competitor Zap Energy questioned Helion’s deadline.

“I don’t see a commercial application in the next few years happening,” said Ben Levitt. “There is a lot of complicated science and engineering still to be discovered and to be applied.”

Others are willing to take the bet. Helion has raised more than $1 billion from investors that include SoftBank, Lightspeed Venture Partners and Sam Altman, OpenAI’s CEO and co-founder and the longtime chair of Helion’s board of directors. The company can unlock an additional $1.8 billion if it hits Polaris milestones.

The generator has been operating since December, running all day, five days a week, creating fusion, Kirtley said.

Energy without ignition

A section of Trenta, Helion’s sixth fusion generator prototype, which is no longer in service. (GeekWire File Photo / Lisa Stiffler)

Helion is highly cautious — some would say too cautious — in sharing details on its progress. Helion officials say they must hold their tech close to the vest as Chinese competitors have stolen pieces of their intellectual property; critics say the secrecy makes it difficult for the scientific community to verify their likelihood of success in a very risky, highly technical field.

In August, Kirtley shared an online post about Helion’s power-producing strategy, which upends the conventional approach.

Most efforts are trying to achieve ignition in their fusion generators, which is a condition where the reactions produce more power than is required for fusion to occur. This feat was first accomplished at a national lab in California in 2022 — but it still didn’t produce enough energy to put electricity on the grid.

Helion is not aiming for ignition but rather for a system that is so efficient it can capture enough energy from fusion without reaching that state.

Kirtley compares the strategy for producing power to regenerative braking in electric vehicles. Simply put, an EV’s battery gets the car moving, and regenerative braking by the driver puts energy back into the battery to help it run longer. In the fusion generator, the capacitors provide that initial power, and the fusion reaction resupplies the energy and a little bit more.

“We can recover electricity at high efficiency,” Kirtley said. Compared to other commercial fusion approaches, “we require a lot less fusion. Fusion is the hard part. My goal, ironically, is to do the minimum amount of fusion that we can deliver a product to the customer and generate electricity.”
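
A rough, illustrative energy balance shows why recovery efficiency matters so much (the numbers below are assumptions for the sake of the arithmetic, not Helion's figures): if a fraction of the energy in each pulse can be recaptured, the fusion yield needed just to break even falls off sharply as that fraction approaches one.

```python
def breakeven_fusion_fraction(eta: float) -> float:
    """Fusion energy needed per pulse, as a fraction of the input pulse
    energy, for recovered energy alone to power the next pulse.

    Recovered energy per pulse = eta * (E_in + E_fusion).
    Break-even when eta * (E_in + E_fusion) = E_in,
    i.e. E_fusion / E_in = (1 - eta) / eta.
    """
    return (1 - eta) / eta


# Assumed recovery efficiencies, purely for illustration.
for eta in (0.50, 0.80, 0.95):
    needed = breakeven_fusion_fraction(eta)
    print(f"recovery efficiency {eta:.0%}: fusion yield needed is about {needed:.0%} of input energy")
```

At 50% recovery the fusion reaction would have to return as much energy as was put in, but at 95% recovery only a few percent is needed, which is the sense in which a highly efficient machine "requires a lot less fusion."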

The glow from a super hot plasma generated inside Polaris, Helion’s seventh fusion prototype device. (Helion Photo)

Google Business Profiles What’s happening feature expands

Google has expanded the What’s happening feature within Google Business Profiles to multi-location restaurants and bars in the United States, United Kingdom, Canada, Australia, and New Zealand. Previously, it was available only for single-location restaurants.

The What’s happening feature launched back in May as a way for some businesses to highlight events, deals, and specials prominently at the top of your Google Business Profile. Now, Google is bringing it to more businesses.

What Google said. Google’s Lisa Landsman wrote on LinkedIn:

How do you promote your “Taco Tuesday” in Toledo and your “Happy Hour” in Houston… right when locals are searching for a place to go?

I’m excited to share that the Google Business Profile feature highlighting what’s happening at your business, such as timely events, specials and deals, has now rolled out for multi-location restaurants & bars across the US, UK, CA, AU & NZ! (It was previously only available for single-location restaurants)

This is a great option for driving real-time foot traffic. It automatically surfaces the unique specials, live music, or events you’re already promoting at a specific location, catching customers at the exact moment they’re deciding where to eat or grab a cocktail.

What it looks like. Here is a screenshot of this feature:

More details. Google’s Lisa Landsman added, “We’ve already seen excellent results from testing and look forward to hearing how this works for you!”

Availability. This feature is only available for restaurants & bars. Google said it hopes to expand to more categories soon. It is also only available in the United States, United Kingdom, Canada, Australia, and New Zealand.

The initial launch was for single-location Food and Drink businesses in the U.S., UK, Australia, Canada, and New Zealand. It is now available for multi-location restaurants, not just single-location restaurants.

Why we care. If you manage restaurants and/or bars, this may be a new way to get more attention and visitors to your business from Google Search. Now, if you manage multi-location restaurants or bars, you can leverage this feature.

LLM optimization in 2026: Tracking, visibility, and what’s next for AI discovery


Marketing, technology, and business leaders today are asking an important question: how do you optimize for large language models (LLMs) like ChatGPT, Gemini, and Claude? 

LLM optimization is taking shape as a new discipline focused on how brands surface in AI-generated results and what can be measured today. 

For decision makers, the challenge is separating signal from noise – identifying the technologies worth tracking and the efforts that lead to tangible outcomes.

The discussion comes down to two core areas – and the timeline and work required to act on them:

  • Tracking and monitoring your brand’s presence in LLMs.
  • Improving visibility and performance within them.

Tracking: The foundation of LLM optimization

Just as SEO evolved through better tracking and measurement, LLM optimization will only mature once visibility becomes measurable. 

We’re still in a pre-Semrush/Moz/Ahrefs era for LLMs. 

Tracking is the foundation of identifying what truly works and building strategies that drive brand growth. 

Without it, everyone is shooting in the dark, hoping great content alone will deliver results.

The core challenges are threefold:

  • LLMs don’t publish query frequency or “search volume” equivalents.
  • Their responses vary subtly (or not so subtly) even for identical queries, due to probabilistic decoding and prompt context.
  • They depend on hidden contextual features (user history, session state, embeddings) that are opaque to external observers.

Why LLM queries are different

Traditional search behavior is repetitive – millions of identical phrases drive stable volume metrics. LLM interactions are conversational and variable. 

People rephrase questions in different ways, often within a single session. That makes pattern recognition harder with small datasets but feasible at scale. 

These structural differences explain why LLM visibility demands a different measurement and tracking approach than traditional SEO or marketing analytics.

The leading method uses a polling-based model inspired by election forecasting.

The polling-based model for measuring visibility

A representative sample of 250–500 high-intent queries is defined for your brand or category, functioning as your population proxy. 

These queries are run daily or weekly to capture repeated samples from the underlying distribution of LLM responses.

Competitive mentions and citations metrics

Tracking tools record when your brand and competitors appear as citations (linked sources) or mentions (text references), enabling share of voice calculations across all competitors. 

Over time, aggregate sampling produces statistically stable estimates of your brand visibility within LLM-generated content.
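
To make the polling analogy concrete, here is a minimal sketch of how share of voice can be computed from repeated samples. The brand names, responses, and the simple substring match are all illustrative assumptions, not part of any specific tracking tool:

```python
from collections import defaultdict

# Hypothetical sampled responses: each entry is the text of one LLM answer
# to one of the ~250-500 tracked queries. Brand names are made up.
responses = [
    "For project tracking, many teams choose AcmePM or PlanForge...",
    "Popular options include PlanForge, AcmePM, and TaskNinja...",
    "AcmePM is often recommended for small teams...",
]
brands = ["AcmePM", "PlanForge", "TaskNinja"]

def share_of_voice(responses, brands):
    """Fraction of sampled responses that mention each brand at least once."""
    counts = defaultdict(int)
    for text in responses:
        for brand in brands:
            if brand.lower() in text.lower():
                counts[brand] += 1
    return {brand: counts[brand] / len(responses) for brand in brands}

print(share_of_voice(responses, brands))
# approx. {'AcmePM': 1.0, 'PlanForge': 0.67, 'TaskNinja': 0.33}
```

Run at a consistent cadence, this same calculation over daily or weekly samples is what turns noisy individual answers into a trackable trend line.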

Early tools providing this capability include:

  • Profound.
  • Conductor.
  • OpenForge.

Consistent sampling at scale transforms apparent randomness into interpretable signals. 

Over time, aggregate sampling provides a stable estimate of your brand’s visibility in LLM-generated responses – much like how political polls deliver reliable forecasts despite individual variations.

Building a multi-faceted tracking framework

While share of voice paints a picture of your presence in the LLM landscape, it doesn’t tell the complete story. 

Just as keyword rankings show visibility but not clicks, LLM presence doesn’t automatically translate to user engagement. 

Brands need to understand how people interact with their content to build a compelling business case.

Because no single tool captures the entire picture, the best current approach layers multiple tracking signals:

  • Share of voice (SOV) tracking: Measure how often your brand appears as mentions and citations across a consistent set of high-value queries. This provides a benchmark to track over time and compare against competitors.
  • Referral tracking in GA4: Set up custom dimensions to identify traffic originating from LLMs. While attribution remains limited today, this data helps detect when direct referrals are increasing and signals growing LLM influence (see the sketch after this list).
  • Branded homepage traffic in Google Search Console: Many users discover brands through LLM responses, then search directly in Google to validate or learn more. This two-step discovery pattern is critical to monitor. When branded homepage traffic increases alongside rising LLM presence, it signals a strong causal connection between LLM visibility and user behavior. This metric captures the downstream impact of your LLM optimization efforts.
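
Here is a minimal sketch of the kind of referral classification behind the GA4 point above. It works on exported referrer hostnames rather than any particular analytics API, and the domain list is an illustrative assumption you should verify against the referrers appearing in your own reports:

```python
# Illustrative list of AI-assistant referrer domains; confirm against your own data.
AI_REFERRER_DOMAINS = {
    "chatgpt.com",
    "chat.openai.com",
    "perplexity.ai",
    "gemini.google.com",
    "copilot.microsoft.com",
}

def is_llm_referral(hostname: str) -> bool:
    """Return True if a referral hostname belongs to a known AI assistant."""
    hostname = hostname.lower().removeprefix("www.")
    return any(hostname == d or hostname.endswith("." + d) for d in AI_REFERRER_DOMAINS)

# Example: a handful of referrer hostnames pulled from a traffic export.
referrers = ["www.chatgpt.com", "perplexity.ai", "news.ycombinator.com"]
llm_share = sum(is_llm_referral(h) for h in referrers) / len(referrers)
print(f"LLM referral share: {llm_share:.0%}")  # LLM referral share: 67%
```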

Nobody has complete visibility into LLM impact on their business today, but these methods cover all the bases you can currently measure.

Be wary of any vendor or consultant promising complete visibility. That simply isn’t possible yet.

Understanding these limitations is just as important as implementing the tracking itself.

Because no perfect models exist yet, treat current tracking data as directional – useful for decisions, but not definitive.

Why mentions matter more than citations

Dig deeper: In GEO, brand mentions do what links alone can’t

Estimating LLM ‘search volume’

Measuring LLM impact is one thing. Identifying which queries and topics matter most is another.

Compared to SEO or PPC, marketers have far less visibility. While no direct search volume exists, new tools and methods are beginning to close the gap.

The key shift is moving from tracking individual queries – which vary widely – to analyzing broader themes and topics. 

The real question becomes: which areas is your site missing, and where should your content strategy focus?

To approximate relative volume, consider three approaches:

Correlate with SEO search volume

Start with your top-performing SEO keywords. 

If a keyword drives organic traffic and has commercial intent, similar questions are likely being asked within LLMs. Use this as your baseline.

Layer in industry adoption of AI

Estimate what percentage of your target audience uses LLMs for research or purchasing decisions:

  • High AI-adoption industries: Assume 20-25% of users leverage LLMs for decision-making.
  • Slower-moving industries: Start with 5-10%.

Apply these percentages to your existing SEO keyword volume. For example, a keyword with 25,000 monthly searches could translate to 1,250-6,250 LLM-based queries in your category.
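
As a quick sanity check on that arithmetic, here is a minimal sketch of the back-of-envelope estimate described above (the adoption percentages are the stated assumptions, not measured values):

```python
def estimate_llm_queries(monthly_seo_volume: int, adoption_low: float, adoption_high: float) -> tuple[int, int]:
    """Back-of-envelope range of monthly LLM-based queries implied by an SEO keyword's volume."""
    return (round(monthly_seo_volume * adoption_low),
            round(monthly_seo_volume * adoption_high))

# 5% is the low end for slower-moving industries, 25% the high end for
# high AI-adoption industries, per the assumptions above.
print(estimate_llm_queries(25_000, 0.05, 0.25))  # (1250, 6250)
```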

Use emerging inferential tools

New platforms are beginning to track query data through API-level monitoring and machine learning models. 

Accuracy isn’t perfect yet, but these tools are improving quickly. Expect major advancements in inferential LLM query modeling within the next year or two.

Optimizing for LLM visibility

The technologies that help companies identify what to improve are evolving quickly. 

While still imperfect, they’re beginning to form a framework that parallels early SEO development, where better tracking and data gradually turned intuition into science.

Optimization breaks down into two main questions:

  • What content should you create or update, and should you focus on quality content, entities, schema, FAQs, or something else?
  • How should you align these insights with broader brand and SEO strategies?

Identify what content to create or update

One of the most effective ways to assess your current position is to take a representative sample of high-intent queries that people might ask an LLM and see how your brand shows up relative to competitors. This is where the Share of Voice tracking tools we discussed earlier become invaluable.

These same tools can help answer your optimization questions:

  • Track who is being cited or mentioned for each query, revealing competitive positioning.
  • Identify which queries your competitors appear for that you don’t, highlighting content gaps.
  • Show which of your own queries you appear for and which specific assets are being cited, pinpointing what’s working.

From this data, several key insights emerge:

  • Thematic visibility gaps: By analyzing trends across many queries, you can identify where your brand underperforms in LLM responses. This paints a clear picture of areas needing attention. For example, you’re strong in SEO but not in PPC content. 
  • Third-party resource mapping: These tools also reveal which external resources LLMs reference most frequently. This helps you build a list of high-value third-party sites that contribute to visibility, guiding outreach or brand mention strategies. 
  • Blind spot identification: When cross-referenced with SEO performance, these insights highlight blind spots – topics or sources where your brand’s credibility and representation could improve.

Understand the overlap between SEO and LLM optimization

LLMs may be reshaping discovery, but SEO remains the foundation of digital visibility.

Across five competitive categories, brands ranking on Google’s first page appeared in ChatGPT answers 62% of the time – a clear but incomplete overlap between search and AI results.

That correlation isn’t accidental. 

Many retrieval-augmented generation (RAG) systems pull data from search results and expand it with additional context. 

The more often your content appears in those results, the more likely it is to be cited by LLMs.

Brands with the strongest share of voice in LLM responses are typically those that invested in SEO first. 

Strong technical health, structured data, and authority signals remain the bedrock for AI visibility.

What this means for marketers:

  • Don’t over-focus on LLMs at the expense of SEO. AI systems still rely on clean, crawlable content and strong E-E-A-T signals.
  • Keep growing organic visibility through high-authority backlinks and consistent, high-quality content.
  • Use LLM tracking as a complementary lens to understand new research behaviors, not a replacement for SEO fundamentals.

Redefine on-page and off-page strategies for LLMs

Just as SEO has both on-page and off-page elements, LLM optimization follows the same logic – but with different tactics and priorities.

Off-page: The new link building

Most industries show a consistent pattern in the types of resources LLMs cite:

  • Wikipedia is a frequent reference point, making a verified presence there valuable.
  • Reddit often appears as a trusted source of user discussion.
  • Review websites and “best-of” guides are commonly used to inform LLM outputs.

Citation patterns across ChatGPT, Gemini, Perplexity, and Google’s AI Overviews show consistent trends, though each engine favors different sources.

This means that traditional link acquisition strategies, such as guest posts, PR placements, or brand mentions in review content, will likely evolve. 

Instead of chasing links anywhere, brands should increasingly target:

  • Pages already being cited by LLMs in their category.
  • Reviews or guides that evaluate their product category.
  • Articles where branded mentions reinforce entity associations.

The core principle holds: brands gain the most visibility by appearing in sources LLMs already trust – and identifying those sources requires consistent tracking.

On-page: What your own content reveals

The same technologies that analyze third-party mentions can also reveal which first-party assets (content on your own website) are being cited by LLMs. 

This provides valuable insight into what type of content performs well in your space.

For example, these tools can identify:

  • What types of competitor content are being cited (case studies, FAQs, research articles, etc.).
  • Where your competitors show up but you don’t.
  • Which of your own pages exist but are not being cited.

From there, three key opportunities emerge:

  • Missing content: Competitors are cited because they cover topics you haven’t addressed. This represents a content gap to fill.
  • Underperforming content: You have relevant content, but it isn’t being referenced. Optimization – improving structure, clarity, or authority – may be needed.
  • Content enhancement opportunities: Some pages only require inserting specific Q&A sections or adding better-formatted information rather than full rewrites.

Leverage emerging technologies to turn insights into action

The next major evolution in LLM optimization will likely come from tools that connect insight to action.

Early solutions already use vector embeddings of your website content to compare it against LLM queries and responses. This allows you to:

  • Detect where your coverage is weak.
  • See how well your content semantically aligns with real LLM answers.
  • Identify where small adjustments could yield large visibility gains.
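
To illustrate the kind of comparison these tools perform, here is a minimal sketch that scores how well each page covers each tracked query using cosine similarity over embeddings. The embedding vectors are assumed to come from whatever model you already use; the page paths, queries, and 0.6 threshold are all illustrative:

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def coverage_report(page_embeddings: dict[str, np.ndarray],
                    query_embeddings: dict[str, np.ndarray],
                    threshold: float = 0.6) -> dict[str, str]:
    """For each tracked query, report the best-matching page and flag weak coverage."""
    report = {}
    for query, q_vec in query_embeddings.items():
        best_page, best_score = max(
            ((page, cosine(q_vec, p_vec)) for page, p_vec in page_embeddings.items()),
            key=lambda item: item[1],
        )
        status = "ok" if best_score >= threshold else "weak coverage"
        report[query] = f"{best_page} (similarity {best_score:.2f}, {status})"
    return report

# Toy demo with random vectors standing in for real embeddings.
rng = np.random.default_rng(0)
pages = {f"/blog/post-{i}": rng.normal(size=8) for i in range(3)}
queries = {"best llm visibility tools": rng.normal(size=8)}
print(coverage_report(pages, queries))
```

Queries that score only "weak coverage" against every page are candidates for the missing-content and enhancement work described above.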

Current tools mostly generate outlines or recommendations.

The next frontier is automation – systems that turn data into actionable content aligned with business goals.

Timeline and expected results

While comprehensive LLM visibility typically builds over 6-12 months, early results can emerge faster than traditional SEO. 

The advantage: LLMs can incorporate new content within days rather than waiting months for Google’s crawl and ranking cycles. 

However, the fundamentals remain unchanged.

Quality content creation, securing third-party mentions, and building authority still require sustained effort and resources. 

Think of LLM optimization as having a faster feedback loop than SEO, but requiring the same strategic commitment to content excellence and relationship building that has always driven digital visibility.

From SEO foundations to LLM visibility

LLM traffic remains small compared to traditional search, but it’s growing fast.

A major shift in resources would be premature, but ignoring LLMs would be shortsighted. 

The smartest path is balance: maintain focus on SEO while layering in LLM strategies that address new ranking mechanisms.

Like early SEO, LLM optimization is still imperfect and experimental – but full of opportunity. 

Brands that begin tracking citations, analyzing third-party mentions, and aligning SEO with LLM visibility now will gain a measurable advantage as these systems mature.

In short:

  • Identify the third-party sources most often cited in your niche and analyze patterns across AI engines.
  • Map competitor visibility for key LLM queries using tracking tools.
  • Audit which of your own pages are cited (or not) – high Google rankings don’t guarantee LLM inclusion.
  • Continue strong SEO practices while expanding into LLM tracking – the two work best as complementary layers.

Approach LLM optimization as both research and brand-building.

Don’t abandon proven SEO fundamentals. Rather, extend them to how AI systems discover, interpret, and cite information.

A tale of two Seattles in the age of AI: Harsh realities and new hope for the tech community

The opening panel at Seattle AI Week 2025, from left: Randa Minkarah, WTIA chief operating executive; Joe Nguyen, Washington commerce director; Rep. Cindy Ryu; Nathan Lambert, Allen Institute for AI; and Brittany Jarnot, Salesforce. (GeekWire Photo / Taylor Soper)

Seattle is looking to celebrate and accelerate its leadership in artificial intelligence at the very moment the first wave of the AI economy is crashing down on the region’s tech workforce.

That contrast was hard to miss Monday evening at the opening reception for Seattle AI Week 2025 at Pier 70. On stage, panels offered a healthy dose of optimism about building the AI future. In the crowd, buzz about Amazon’s impending layoffs brought the reality of the moment back to earth.

A region that rose with Microsoft and then Amazon is now dealing with the consequences of Big Tech’s AI-era restructuring. Companies that hired by the thousands are now thinning their ranks in the name of efficiency and focus — a dose of corporate realism for the local tech economy.

The double-edged nature of this shift is not lost on Washington Gov. Bob Ferguson.

“AI, and the future of AI, and what that means for our state and the world — each day I do this job, the more that moves up in my mind in terms of the challenges and the opportunities we have,” Ferguson told the AI Week crowd. He touted Washington’s concentration of AI jobs, saying his goal is to maximize the benefits of AI while minimizing its downsides.

Gov. Bob Ferguson addresses the AI Week opening reception. (GeekWire Photo / Todd Bishop)

Seattle AI Week, led by the Washington Technology Industry Association, was started last year after a Forbes list of the nation’s top 50 AI startups included none from Seattle, said the WTIA’s Nick Ellingson, opening this year’s event. That didn’t seem right. Was it a messaging problem?

“A bunch of us got together and said, let’s talk about all the cool things happening around AI in Seattle, and let’s expand the tent beyond just tech things that are happening,” Ellingson explained.

So maybe that’s the best measuring stick: how many startups will this latest shakeout spark, and how can the Seattle region’s startup and tech leaders make it happen? Can the region become less dependent on the whims of the Microsoft and Amazon C-suites in the process? 

“Washington has so much opportunity. It’s one of the few capitals of AI in the world,” said WTIA’s Arry Yu in her opening remarks. “People talk about China, people talk about Silicon Valley — there are a few contenders, but really, it’s here in Seattle. … The future is built on data, on powerful technology, but also on community. That’s what makes this place different.”

And yet, “AI is a sleepy scene in Seattle, where people work at their companies, but there’s very little activity and cross-pollinating outside of this,” said Nathan Lambert, senior research scientist with the Allen Institute for AI, during the opening panel discussion.

No, we don’t want to become San Francisco or Silicon Valley, Lambert added. But that doesn’t mean the region can’t cherry-pick some of the ingredients that put Bay Area tech on top.

Whether laid-off tech workers will start their own companies is a common question after layoffs like this. In the Seattle region at least, that outcome has been more fantasy than reality. 

This is where AI could change things, if not with the fabled one-person unicorn then with a bigger wave of new companies born of this employment downturn. Who knows, maybe one will even land on that elusive Forbes AI 50 list. (Hey, a region can dream!)

But as the new AI reality unfolds in the regional workforce, maybe the best question to ask is whether Seattle’s next big thing can come from its own backyard again.

Related: Ferguson’s AI balancing act: Washington governor wants to harness innovation while minimizing harms

Microsoft gets 27% stake in OpenAI, and a $250B Azure commitment

Sam Altman and OpenAI announced a new deal with Microsoft, setting revised terms for future AI development. (GeekWire File Photo / Todd Bishop)

Microsoft and OpenAI announced the long-awaited details of their new partnership agreement Tuesday morning — with concessions on both sides that keep the companies aligned but not in lockstep as they move into their next phases of AI development.

Under the arrangement, Microsoft gets a 27% equity stake in OpenAI’s new for-profit entity, the OpenAI Group PBC (Public Benefit Corporation), a stake valued at approximately $135 billion. That’s a decrease from 32.5% equity but not a bad return on an investment of $13.8 billion.

At the same time, OpenAI has contracted to purchase an incremental $250 billion in Microsoft Azure cloud services. However, in a significant concession in return for that certainty, Microsoft will no longer have a “right of first refusal” on new OpenAI cloud workloads.

Microsoft, meanwhile, will retain its intellectual property rights to OpenAI models and products through 2032, an extension of the timeframe that existed previously. 

A key provision of the new agreement centers on Artificial General Intelligence (AGI), with any declaration of AGI by OpenAI now subject to verification by an independent expert panel. This was a sticking point in the earlier partnership agreement, with an ambiguous definition of AGI potentially triggering new provisions of the prior arrangement. 

Microsoft and OpenAI had previously announced a tentative agreement without providing details. More aspects of the deal are disclosed in a joint blog post from the companies.

Shares of Microsoft are up 2% in early trading after the announcement. The company reports earnings Wednesday afternoon, and some analysts have said the uncertainty over the OpenAI arrangement has been impacting Microsoft’s stock. 

Why Early Threat Detection Is a Must for Long-Term Business Growth

In cybersecurity, speed isn’t just a win — it’s a multiplier. The faster you learn about emerging threats, the faster you adapt your defenses, the less damage you suffer, and the more confidently your business keeps scaling. Early threat detection isn’t about preventing a breach someday: it’s about protecting the revenue you’re supposed to earn every day. Companies that treat cybersecurity as a […]

AMD swoops in to help as John Carmack slams Nvidia's $4,000 DGX Spark, says it doesn't hit performance claims, overheats, and maxes out at 100W power draw — developer forums inundated with crashing and shutdown reports

Nvidia’s DGX Spark, the company’s new $4,000 developer box powered by the Grace Blackwell GB10 superchip, is under fire after questions were raised about real-world performance and power draw.

OpenAI calls on U.S. to build 100 gigawatts of additional power-generating capacity per year, increase equivalent to 100 nuclear reactors yearly — says electricity is a 'strategic asset' in AI race against China

OpenAI has called on the US to build out more power-generating infrastructure, claiming that it is needed to help provide the backbone for the AI race the US is now in with China. With enormous infrastructure projects planned, it wants the US to build an additional 100 gigawatts of new energy capacity every year.

AMD CEO on new $1 billion AI supercomputer partnership with the Department of Energy

“We are super excited to announce a new partnership with the Department of Energy,” stated Lisa Su, Chair and CEO of AMD, during a CNBC interview. This monumental $1 billion collaboration will usher in the development of two advanced supercomputers, designed to tackle some of the most complex scientific challenges facing humanity. The partnership signifies […]

Bitcoin Miners’ AI Pivot: A Strategic Masterclass in Energy and Compute

The burgeoning demand for artificial intelligence, a computational arms race among hyperscalers, has illuminated a critical bottleneck: access to reliable, scalable power. This very challenge, as discussed by CleanSpark CEO Matthew Schultz with CNBC’s Jordan Smith, is precisely where Bitcoin miners like CleanSpark find their strategic advantage. Their conversation unveils a nuanced pivot, not merely […]

OpenAI Recapitalization Reshapes AI Landscape with Microsoft at the Helm

The recent finalization of OpenAI’s recapitalization plan marks a pivotal moment in the trajectory of artificial intelligence, not just for the involved parties but for the entire tech ecosystem. On CNBC, David Faber broke down the intricate details of this agreement, joined by Jim Cramer, who offered his characteristic sharp market commentary. Their discussion illuminated […]

Wild Moose Emerges from Stealth with $7 Million Seed Round to Redefine Site Reliability Engineering with AI

Wild Moose, the AI-powered Site Reliability Engineering (SRE) platform acting as a first responder for production incidents, today announced its emergence from stealth with $7 million in seed funding. The round was led by iAngels, with participation from Y Combinator, F2 Venture Capital, Maverick Ventures, and others. The company is also backed by a distinguished […]

Gemini for Education: Google’s AI Dominates Higher Ed

Google's Gemini for Education is rapidly integrating into higher education, offering no-cost AI tools to over 1000 institutions and 10 million students.

Securitize IPO to bring tokenization to Nasdaq at $1.25B

The Securitize IPO is a bellwether moment, creating the first publicly-traded company focused purely on the infrastructure for tokenizing real-world assets.

FriendliAI Expands Ultra-Fast AI Inference Platform with Nebius AI Cloud Integration

Enterprises can now deploy large-scale AI inference with FriendliAI’s optimized stack on Nebius AI infrastructure, combining top performance with cost efficiency.

Server DRAM Pricing Jumps 50%, Only 70% of Orders Getting Filled

Server memory has become the scarcest commodity in tech. According to DigiTimes, Samsung and SK Hynix quietly circulated fourth-quarter contract appendices that retroactively increased RDIMM prices by 40-50%. Even hyperscalers that signed agreements in August must now pay the new rate or risk losing their queue position. The two Korean manufacturers simultaneously reduced confirmed allocations by 30%, pushing Tier-1 U.S. and Chinese cloud order books to an effective 70% fill rate and eliminating the safety stock most buyers believed they had secured. Module manufacturers such as Kingston and ADATA now pay $13 for 16 GB DDR5 chips that cost $7 six weeks ago, an increase large enough to erase entire gross margins.

Smaller OEMs and channel distributors have been told to expect 35-40% fulfillment through the first quarter of 2026, forcing them either to gamble on the spot market or idle production lines. Even the older DDR4, now reduced to 20% of global DRAM output, is affected. Switches, routers, and set-top boxes that still use DDR4 are suddenly facing very long lead times because no fabrication plant wants to allocate wafers to trailing nodes. Analysts at TrendForce now forecast that the DRAM shortfall will outlast the 2026 hyperscaler build-out, meaning the industry's next relief valve may not be new capacity but a potential demand contraction, an outcome no manufacturer is willing to budget for.

(PR) Corsair Releases New MP700 PRO XT Gen 5 M.2 SSD and Adds 4 TB Version of MP700 MICRO M.2 2242 SSD

Corsair today announced two new PCIe 5.0 NVMe M.2 SSDs that extend high‑end performance across multiple form factors: the MP700 PRO XT, built for uncompromising Gen 5 speed, and the MP700 MICRO, a power‑efficient M.2 2242 drive sized for thin‑and‑light devices and space‑constrained builds. Utilizing a next-generation controller with cutting-edge internals, the Corsair MP700 PRO XT pushes PCIe Gen 5 storage to its limits. Featuring sequential read speeds up to 14,900 MB/s and sequential write speeds up to 14,500 MB/s, it offers incredible boot, load, and transfer times. Its power-efficient design ensures top-tier performance with lower power draw and less heat.

With support for Microsoft DirectStorage, this SSD can communicate directly with a GPU for unbeatable gaming speed, enabling snappier game loads and smoother in‑game transitions. Thanks to the versatility of Corsair SSD Toolbox software and a five-year warranty, the MP700 PRO XT will deliver cutting-edge storage performance for years to come for performance desktop and laptop platforms. The drive is initially available in 1 TB, 2 TB, and 4 TB capacities, with additional capacity options expected to become available in early 2026.

(PR) Corsair and CD Projekt RED Celebrate 10 Years of The Witcher 3: Wild Hunt with Limited Edition Keyboard, Mouse, and Mousepad

Corsair, maker of award-winning gaming peripherals, today revealed a collaboration with CD Projekt RED celebrating the 10th anniversary of The Witcher 3: Wild Hunt. The Corsair x The Witcher 3: Wild Hunt 10th Anniversary Collection commemorates a decade of killing monsters with a limited run of officially licensed gaming peripherals. Fans can relive their monster slaying adventures with a new set of gear, thoughtfully recrafted with iconic touches from the game.

"Fans worldwide have been spellbound by Geralt of Rivia's quest to find Ciri, and we're thrilled to help them complete their own quest to find the ideal gear to celebrate 10 years of this legendary game," said Tobias Brinkmann, Vice President and General Manager of Gaming Peripherals at Corsair. "We're proud to have the opportunity to work with CD Projekt RED to bring this collaboration to life and to let fans showcase their love for this iconic game with our high-performance peripherals featuring a captivating new design."

(PR) Kensington Announces Flagship Finger-Operated Trackball for Creative Professionals

Kensington, a worldwide leader of desktop computing and mobility solutions for IT, business, and home office professionals, today announced the Expert Mouse TB800 EQ Trackball, a next-generation, finger-operated ergonomic trackball that delivers premium performance and precise control for creative professionals and consumers.

A recipient of a prestigious 2025 iF Design Award, the TB800 enhances user comfort and productivity through advanced customization options, an adjustable scroll ring, exceptional tracking, an ambidextrous design, and seamless wireless connectivity. Featuring an innovative symmetrical design that is ideal for use by left- and right-handed users, the TB800 incorporates a slight angle that enables users to operate the trackball with their hands in a natural resting position that maximizes comfort and potentially reduces fatigue and repetitive strain injuries.

(PR) LG, SK Enmove and GRC Sign MOU to Advance Liquid Immersion Cooling Solutions

LG Electronics (LG) today announced the signing of a memorandum of understanding (MOU) with SK Enmove (SKEN) and Green Revolution Cooling (GRC) to jointly develop and expand next-generation liquid immersion cooling solutions optimized for artificial intelligence data centers (AIDCs). Under the agreement, the three companies will collaborate to explore new business opportunities, conduct joint marketing and deliver proof of concept (PoC) demonstrations for integrated solutions and business models in liquid immersion cooling for AIDCs.

SKEN, a leading company in advanced lubricating base oils and premium lubricants, provides next-generation thermal management solutions, including immersion cooling fluids, and is strengthening its global partnerships to foster an immersion cooling ecosystem. In 2022, SKEN became the first company in Korea to develop immersion cooling fluids. Through an equity investment in GRC, SKEN is also leading the way in innovation in the future cooling market, jointly developing data center immersion cooling systems with the company.

US cuts a $1 billion deal with AMD to build two new AI Supercomputers

AMD will power two new AI supercomputers in the US, using MI355X and MI430X accelerators. The US Department of Energy has announced a $1 billion deal under which AMD will deliver two next-generation supercomputers to the Oak Ridge National Laboratory (ORNL). These systems are designed to expand the US’s leadership in artificial intelligence (AI) and […]

The post US cuts a $1 billion deal with AMD to build two new AI Supercomputers appeared first on OC3D.

The Infinite Game Of Building Companies

By Jeff Seibert

I’ve been building products and companies my entire career — Increo, Box, Crashlytics, Twitter and now, Digits — and I’ve had the privilege of speaking with some of the sharpest minds in venture and entrepreneurship along the way.

One recent conversation with a legendary investor really crystallized for me a set of truths about startups: what success really is, why some founders thrive while others burn out, and how to navigate the inevitable chaos of building something from nothing.

Here are some of the lessons I’ve internalized from years of building, observing and learning.

Success has no finish line

Jeff Seibert is the founder and CEO of Digits

In the startup world, we talk a lot about IPOs, acquisitions and valuations. But those are milestones, not destinations.

The companies that endure don’t “win” and stop — they keep creating, adapting and pushing forward. They’re playing an infinite game, where the only goal is to remain in the game.

When you’re building something truly generative — driven by a purpose greater than yourself — there’s no point at which you can say “done.” If your company has a natural stopping point, you may be building the wrong thing.

You don’t choose the work — the work chooses you

The best founders I’ve met — and the best moments I’ve had as a founder — come from an almost irrational pull toward solving a specific problem I myself experienced.

You may want to start a company, but if you have to talk yourself into your idea, it probably won’t survive contact with reality. The founders who succeed are often the ones who can’t not work on their thing.

Starting a company shouldn’t be a career move — it should be the last possible option after every other path fails to scratch the itch.

The real killer: founder fatigue

Most companies don’t die because of one bad decision or one tough competitor. They die because the founders run out of energy.

Fatigue erodes vision, motivation and creativity. Protecting your own drive — keeping it clean and focused — may be the single most important survival skill you have.

That means staying close to the product, protecting time for customer work, and avoiding the slow drift into managing around problems instead of solving them.

Customer > competitor

It’s easy to get caught up in competitor moves, investor chatter or market gossip. But the most important question is always: Are we delivering joy to the customer?

If you’re losing focus, sign up for your own product as a brand-new user. Feel the friction. Fix it. Repeat.

At Digits, we run our own signup and core flows every week. It’s uncomfortable — it surfaces flaws we’d rather not see — but it keeps us anchored to the only metric that matters: customer delight.

Boards should ask questions, not give answers

Over the years, I’ve learned the most effective boards aren’t presentation theaters — they’re discussion rooms.

The best structure I’ve seen:

  • No slides;
  • A narrative pre-read sent in advance; and
  • A deep dive into one essential question.

Good directors help you widen your perspective. They don’t hand you a to-do list. Rather, they help you see the problem in a way that makes the answer obvious.

Twitter: lessons from a phenomenon

When I think back to my time at Twitter, the most enduring lesson is that not all companies are built top-down. Some — like Twitter — are shaped more by their users than their executives.

Features like @mentions, hashtags and retweets didn’t come from a product roadmap — they came from the community.

That’s messy, but it’s also powerful. Sometimes your job isn’t to control the phenomenon, rather it’s to keep it healthy without smothering what made it magical in the first place.

Why now is a great time to start

If you’re building today, you have an advantage over the so-called “unicorn zombies” that raised massive rounds pre-AI and are now locked into defending old business models.

Fresh founders can design from scratch for the new reality; there’s no legacy to protect, no sacred cows to defend.

The macro environment? Irrelevant. The only timing that matters is when the problem calls you so strongly that not working on it feels impossible.

If there’s one takeaway from all of this, it’s that success is continuing. The real prize is the ability to keep playing, keep serving and keep creating.

If you’re standing at the edge, wondering if you should start — start. Take one step. See if it grows. And if it does, welcome to the infinite game.


 Jeff Seibert is the founder and CEO of Digits, the world’s first AI-native accounting platform. He previously served as Twitter’s head of consumer product and starred in the Emmy Award-winning Netflix documentary “The Social Dilemma.”

Illustration: Dom Guzman

Crunchbase Sector Snapshot: Cleantech Isn’t Having A Great Year

While startup investment has been climbing lately, not all industries are partaking in the gains.

Cleantech is one of the spaces that’s been mostly left out. Overall funding to the space is down this year, despite some pockets of bullishness in areas like fusion and battery recycling.

The broad trend: Cleantech- and sustainability-related startup investment has been on a downward trajectory for several years now. And so far, 2025 is on track to be another down year.

On the bright side, however, there’s been some pickup in recent months, boosted by big rounds for companies in energy storage, fusion and other cleantech subsectors.

The numbers: Investors put an estimated $20 billion into seed- through growth-stage funding to companies in cleantech, EV and sustainability-related categories so far this year.

That puts 2025 funding on track to come in well below last year’s levels, which were already at a multiyear low.

Still, quarter by quarter, the pattern looks more encouraging. Investment hit a low point in Q1 of this year and recovered some in the subsequent two quarters. The current quarter is also off to a strong start.

Noteworthy recent rounds

The largest cleantech-related round of the year closed this month. Base Power, a provider of residential battery backup systems and electricity plans, raised $1 billion in Series C funding. The Austin, Texas-based company says its systems allow energy providers to more efficiently harness renewable power.

The second-largest round was Commonwealth Fusion Systems’ $863 million Series B2 financing. The Devens, Massachusetts-based company says it is moving closer to being the first in the world to commercialize fusion power.

For a bigger-picture view, below we put together a list of 10 of the year’s largest cleantech- and sustainability-related financings.

The broad takeaway: Startups innovating for an era of rising power consumption

Not to over-generalize, but if there was one big takeaway from recent cleantech and sustainability startup funding, it would be that founders and investors recognize that these are times of ever-escalating energy demand. They’re planning accordingly, looking to tap new sources of power, fusion in particular, as well as better utilize and scale existing clean energy sources.


Illustration: Dom Guzman

Elon Musk launches Grokipedia, an AI-written rival to Wikipedia


Elon Musk has introduced Grokipedia, a digital encyclopedia that relies on artificial intelligence rather than human editors to compile and update entries. The new platform, developed through his artificial intelligence company xAI, marks his most direct challenge yet to Wikipedia, a site written and curated by human volunteers that he...


Cascadia’s AI paradox: A world-leading opportunity threatened by rising costs and a talent crunch

The downtown Seattle skyline. (GeekWire Photo / Lisa Stiffler)

A new report exploring the potential for the Pacific Northwest to stake its claim as the global leader in responsible AI offers a paradoxical view. The Cascadia region, which includes Seattle, Portland and Vancouver, B.C., is described as a proven, promising player in the sphere — but with significant risks that threaten its success.

“We created companies that transformed global commerce,” writes former Gov. Chris Gregoire in a foreword to the document. “Now we have the chance to add another chapter — one where Cascadia becomes the world’s standard-bearer for innovation that uplifts both people and planet.”

The Cascadia Innovation Corridor, which Gregoire chairs, released the report this morning as it kicks off its two-day conference. The economic advocacy group’s eighth annual event is being held in Seattle.

The study is built on an analysis by the Boston Consulting Group that ranks Cascadia’s three metro areas against 15 comparable regions in the U.S. and Canada for their economic competitiveness, including livability, workforce, and business and innovation climate. Seattle came in fourth behind Boston, Austin and Raleigh, while Portland ranked 13th and Vancouver 14th.

Over the past decade, the three metro areas’ gross domestic product and populations have both grown significantly, and their combined economies approach the 18th largest in the world.

Cascadia’s strengths, the report explains, include tech engines such as cloud giants Microsoft and Amazon in Washington, silicon chip manufacturing in Oregon, and quantum innovation in Vancouver, as well as academic excellence from the University of Washington, University of British Columbia and Oregon State University.

But as time goes on and as business and civic leaders aim for the prize of AI dominance, cracks in the system are increasingly troubling.

  • Business costs are rising and there are mounting regulatory concerns — but it’s a tricky picture. Seattle, for example, often turns to B&O and headcount taxes to cover costs, while the state struggles to balance budgets in the absence of an income tax.
  • Housing affordability is continuing to decline for many residents in these metro areas.
  • Skilled tech workers are leaving Portland, in particular, and Seattle relies heavily on foreign workers receiving H-1B visas, which are less certain under the Trump administration.
  • The clean, affordable energy that was once abundant in the Pacific Northwest is decreasingly available as droughts reduce river flows that drive hydropower dams and electricity demand increases with rapid data center growth.

The report notes that multiple regions around the U.S. and Canada have created AI-focused hubs with hundreds of millions of dollars in public and private funding to bolster their hold on the sector.

New Jersey has a half-billion dollar “AI Moonshot” program including tax incentives and public-worker AI training programs; New York’s “Empire AI Consortium” has an AI computing and training center at the University at Buffalo and startup support programs; and California has a public-private task force to increase AI adoption within government services and connect tech leaders with state agencies.

For its part, Seattle Mayor Bruce Harrell announced a “responsible AI plan” this fall that provides guidelines for the municipality’s use of artificial intelligence and its support of the AI tech sector as an economic driver, which includes the earlier launches of the startup-focused AI House and Foundations.

But what the region really needs to succeed is a collaborative effort tapping all of the metro areas’ assets.

“For Cascadia, the lesson is clear: without a coordinated strategy that links our strengths in cloud computing, semiconductors, and research, we risk falling behind,” states the Cascadia Innovation Corridor report. “Acting together, we can position Cascadia not just to keep pace, but to lead.”

Apple iPhone 18 To Use A Simplified Camera Control Button, iPhone 20 To Feature Haptics Instead Of Mechanical Buttons


With the iPhone 17 lineup now in the hands of consumers, the legendary rumor mill, which typically revolves around Apple's new products, is naturally shifting its focus towards next year's lineup: the iPhone 18, as well as the much-anticipated iPhone 20, which is due in 2027 and would commemorate 20 years since the first iPhone launched all the way back in 2007. Now, a new rumor suggests that Apple is transitioning towards simplified buttons in stages. The iPhone 18 lineup is likely to adopt a less complicated mechanical button for camera control, which will be replaced entirely by solid-state buttons […]

Read full article at https://wccftech.com/apple-iphone-18-to-use-a-simplified-camera-control-button-iphone-20-to-feature-haptics-instead-of-mechanical-buttons/

Death Stranding 2: On the Beach Features The Most Interesting Implementation of PS5’s Power Saver Mode


Death Stranding 2: On the Beach now supports the PlayStation 5's power saver mode, and its implementation is among the most interesting to date, according to a new technical analysis. In the latest episode of their weekly podcast, the tech experts at Digital Foundry examined how the two entries in the Kojima Productions series support Power Saver Mode, a newly introduced operating mode for the PlayStation 5 console that cuts CPU resources in half, halves the memory bandwidth, and reduces CPU and GPU clocks to reduce the system's power consumption. While the implementation in Death Stranding: Director's Cut was not […]

Read full article at https://wccftech.com/death-stranding-2-on-the-beach-most-interesting-power-saver-mode/

MSI Intros GeForce RTX 5050 INSPIRE ITX And OC Cards, Measuring Just 147mm


The INSPIRE series RTX 5050 cards are probably the smallest RTX 5050 editions, offering a single-fan design and weighing just 551 grams.

MSI Launches Small Form-Factor RTX 5050 INSPIRE ITX and OC GPUs, Boasting Dual-Slot Thickness

MSI has officially launched two new GeForce RTX 5050 cards in the INSPIRE series. These are probably the smallest RTX 5050 cards on the market, boasting a dual-slot design and a single-fan cooler to ensure compatibility with very small ITX cases. Apart from MSI, PNY also has a similarly compact GeForce RTX 5050, which measures just 147mm. The INSPIRE ITX RTX 5050 cards […]

Read full article at https://wccftech.com/msi-intros-geforce-rtx-5050-inspire-itx-and-oc-cards-measuring-just-147mm/

EA is Pushing Employees To Use AI For Everything, Including Producing Code Requiring Manual Fixing


EA is pushing its employees to use AI for basically every task, but the results can be flawed, resulting in more work for developers. Business Insider recently talked with current EA staff, who confirmed that the company's leadership has spent the past year or so pushing its 15,000 employees to use AI for virtually every task, from producing code and concept art for games to advising managers on how to speak to staff about certain topics, including pay or promotions. The AI tools used to produce code are among those creating the most issues for developers. It is […]

Read full article at https://wccftech.com/ea-is-pushing-employees-to-use-ai-for-everything-including-producing-code-requiring-manual-fixing/

Intel CEO Says U.S. Government Stake Was a “Deliberate” Move to Drive a Comeback, Comparing It to How Taiwan Supports TSMC


Intel's CEO, Lip-Bu Tan, has discussed the stake taken by the US government in the company, claiming that it was a necessary step to ensure that the American chipmaker could compete with Taiwan's TSMC.

Intel's CEO Also Tells Specifics About His Meeting With President Trump, Calling It a Massive Success

Well, the interest from the Trump administration in Intel was indeed a surprise for many of us, but for CEO Lip-Bu Tan, this initiative was "good to have", as he claims that it is similar to how Taiwan supports TSMC or South Korea backs the likes of Samsung Foundry. In […]

Read full article at https://wccftech.com/intel-ceo-says-us-government-stake-was-a-deliberate-move/

OneXfly Apex Handheld Launched: AMD Ryzen AI Max+ 395 With Liquid Cooling, 128 GB RAM, 85Wh Battery, $1200-$2250


OnexPlayer has officially launched its flagship handheld, the OneXfly Apex, with a liquid-cooled AMD Ryzen AI MAX+ 395 SoC.

AMD Ryzen AI MAX+ 395 Gets Liquid-Cooled Inside A Handheld With OneXPlayer's OneXfly Apex

The OneXfly Apex handheld was teased last month and is positioned to be a flagship device featuring the AMD Ryzen AI MAX+ 395 SoC. This SoC has already been featured in other handhelds such as GPD Win 5 and Ayaneo Next 2. Now, OneXPlayer is rolling out its own high-end handheld, offering a nice upgrade vs the Ryzen AI 300 "Strix Point" stack. Just to recap the […]

Read full article at https://wccftech.com/onexfly-apex-handheld-launch-amd-ryzen-ai-max-395-liquid-cooling-128-gb-85wh-battery/

RPCS3 Removes AMD RX 400/500 And NVIDIA GTX 900/1000 Series From Recommended GPU Requirements


The popular PS3 emulator has updated its latest GPU recommendation list to AMD's RDNA and NVIDIA's Turing series.

RPCS3 Announces Updated GPU Requirements for the Emulator; Recommends At Least AMD RX 5000 or NVIDIA RTX 2000 Series

RPCS3 has just announced the new recommended GPU requirements for its popular PS3 emulator, which comes as a result of major GPU manufacturers ending support for some of their older generation GPU series. RPCS3 announced on X that it no longer lists the AMD RX 400 and NVIDIA GTX 900 series GPUs as recommended. The newer GPU recommendations now start with […]

Read full article at https://wccftech.com/rpcs3-removes-amd-rx-400-500-and-nvidia-gtx-900-1000-series-from-recommended-gpu-requirements/

Apple has Already Started Shortlisting Suppliers For The M6 iPad Pro’s Vapor Chamber


A vapor chamber will make a significant difference to the overall temperatures of the M6 iPad Pro, with Apple previously reported to bring this cooling upgrade to its flagship tablet lineup. The California-based giant is often known to commence product development several months in advance, and according to the latest report, Apple is already in talks with two suppliers that could manufacture this crucial component.

The M6 iPad Pro’s vapor chamber is reported to be provided by a Chinese and Taiwanese manufacturer

Considering that the M6 iPad Pro launch will materialize approximately 18 months after the M5 iPad Pro’s inception, […]

Read full article at https://wccftech.com/apple-shortlisting-m6-ipad-pro-vapor-chamber-suppliers/

PayPal’s agentic commerce play shows why flexibility, not standards, will define the next e-commerce wave

Enterprises looking to sell goods and services online are waiting for the backbone of agentic commerce to be hashed out, but PayPal is hoping its new features will bridge the gap.

The payments company is launching a discoverability solution that allows enterprises to make their products available on any chat platform, regardless of the model or agent payment protocol. 

PayPal, which is a participant in Google’s Agent Payments Protocol (AP2), found that it can leverage its relationship with merchants and enterprises to help pave the way for an easier transition into agentic commerce and offer flexibility that will benefit the ecosystem. 

Michelle Gill, PayPal's GM for small business and financial services, told VentureBeat that AI-powered shopping will continue to grow, so enterprises and brands must begin laying the groundwork early. 

“We think that merchants who've historically sold through web stores, particularly in the e-commerce space, are really going to need a way to get active on all of these large language models (LLMs),” Gill said. “The challenge is that no one really knows how fast all of this is going to move. We’re trying to help merchants think through how to do all of this as low-touch as possible while using the infrastructure they already have without doing a bazillion integrations.”

She added that AI shopping would also bring about “a resurgence from consumers trying to ensure their investment is protected.”

PayPal partnered with website builder Wix, Cymbio, Commerce and Shopware to bring products to chat platforms like Perplexity.

Agent-powered shopping 

PayPal’s Agentic Commerce Services include two features. The first is Agent Ready, which would allow existing PayPal merchants to accept payments on AI platforms. The second is Shop Sync, which will enable companies’ product data to be discoverable through different AI chat interfaces. It takes a company’s catalog information and plugs its inventory and fulfillment data into chat platforms. 

Gill said the data goes into a central repository where AI models can ingest the information. 

Right now, companies can access Shop Sync; Agent Ready is coming in 2026. 

Gill said Agentic Commerce Services is a one-to-many solution that would be helpful right now, as different LLMs scrape different data sources to surface information. 

Other benefits include:

  • Fast integration with current and future partners;

  • More product discovery over the traditional search, browse and cart experiences;

  • Preserved customer insights and relationships, where the brand continues to have control over its records and communications with customers. 

Right now, the service is only available through Perplexity, but Gill said more platforms will be added soon. 

Fragmented AI platforms 

Agentic commerce is still very much in the early stages. AI agents are just beginning to get better at reading a browser. While platforms like ChatGPT, Gemini and Perplexity can now surface products and services based on user queries, people cannot technically buy things from chat (yet).

There’s a race right now to create a standard to enable agents to transact on behalf of users. Other than Google’s AP2, OpenAI and Stripe have the Agentic Commerce Protocol (ACP), and Visa recently launched its Trusted Agent Protocol.

Beyond enabling a trust layer for agents to transact, enterprises struggle with fragmentation in agentic commerce. Different chat platforms use different models, which also interpret information in slightly different ways. Gill said PayPal learned that when it comes to working with merchants, flexibility is critical. 

“How do you decide if you're going to spend your time integrating with Google, Microsoft, ChatGPT or Perplexity?" Gill noted. "And each one of them right now has a different protocol, a different catalog, config, a different everything. That is a lot of time to make a bet as to where you should spend your time." 

How to balance speed and credibility in AI-assisted content creation


AI tools can help teams move faster than ever – but speed alone isn’t a strategy.

As more marketers rely on LLMs to help create and optimize content, credibility becomes the true differentiator. 

And as AI systems decide which information to trust, quality signals like accuracy, expertise, and authority matter more than ever.

It’s not just what you write but how you structure it. AI-driven search rewards clear answers, strong organization, and content it can easily interpret.

This article highlights key strategies for smarter AI workflows – from governance and training to editorial oversight – so your content remains accurate, authoritative, and unmistakably human.

Create an AI usage policy

More than half of marketers are using AI for creative endeavors like content creation, IAB reports.

Still, AI policies are not always the norm. 

Your organization will benefit from clear boundaries and expectations. Creating policies for AI use ensures consistency and accountability.

Only 7% of companies using genAI in marketing have a full-blown governance framework, according to SAS.

However, 63% invest in creating policies that govern how generative AI is used across the organization. 

Source: “Marketers and GenAI: Diving Into the Shallow End,” SAS

Even a simple, one-page policy can prevent major mistakes and unify efforts across teams that may be doing things differently.

As Cathy McPhillips, chief growth officer at the Marketing Artificial Intelligence Institute, puts it:

  • “If one team uses ChatGPT while others work with Jasper or Writer, for instance, governance decisions can become very fragmented and challenging to manage. You’d need to keep track of who’s using which tools, what data they’re inputting, and what guidance they’ll need to follow to protect your brand’s intellectual property.” 

So drafting an internal policy sets expectations for AI use in the organization (or at least the creative teams).

When creating a policy, consider the following guidelines: 

  • What the review process for AI-created content looks like. 
  • When and how to disclose AI involvement in content creation. 
  • How to protect proprietary information (not uploading confidential or client information into AI tools).
  • Which AI tools are approved for use, and how to request access to new ones.
  • How to log or report problems.

Logically, the policy will evolve as the technology and regulations change. 

Keep content anchored in people-first principles

It can be easy to fall into the trap of believing AI-generated content is good because it reads well. 

LLMs are great at predicting the next best sentence and making it sound convincing. 

But reviewing each sentence, paragraph, and the overall structure with a critical eye is absolutely necessary.

Think: Would an expert say it like that? Would you normally write like that? Does it offer the depth of human experience that it should?

“People-first content,” as Google puts it, is really just thinking about the end user and whether what you are putting into the world is adding value. 

Any LLM can create mediocre content, and any marketer can publish it. And that’s the problem. 

People-first content aligns with Google’s E-E-A-T framework, which outlines the characteristics of high-quality, trustworthy content.

E-E-A-T isn’t a novel idea, but it’s increasingly relevant in a world where AI systems need to determine if your content is good enough to be included in search.

According to evidence presented in U.S. v. Google LLC, quality remains central to ranking:

  • “RankEmbed and its later iteration RankEmbedBERT are ranking models that rely on two main sources of data: [redacted]% of 70 days of search logs plus scores generated by human raters and used by Google to measure the quality of organic search results.” 
Source: U.S. v. Google LLC court documentation

It suggests that the same quality factors reflected in E-E-A-T likely influence how AI systems assess which pages are trustworthy enough to ground their answers.

So what does E-E-A-T look like practically when working with AI content? You can:

  • Review Google’s list of questions related to quality content: Keep these in mind before and after content creation.
  • Demonstrate firsthand experience through personal insights, examples, and practical guidance: Weave these insights into AI output to add a human touch.
  • Use reliable sources and data to substantiate claims: If you’re using LLMs for research, fact-check in real time to ensure the best sources. 
  • Insert authoritative quotes either from internal stakeholders or external subject matter experts: Quoting internal folks builds brand credibility while external sources lend authority to the piece.
  • Create detailed author bios: Include:
    • Relevant qualifications, certifications, awards, and experience.
    • Links to social media, academic papers (if relevant), or other authoritative works.
  • Add schema markup to articles to clarify the content further: Schema can clarify content in a way that AI-powered search can better understand (see the JSON-LD sketch below).
  • Become the go-to resource on the topic: Create a depth and breadth of material on the website that’s organized in a search-friendly, user-friendly manner. You can learn more in my article on organizing content for AI search.
Source: “Creating helpful, reliable, people-first content,” Google Search Central
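As referenced in the schema markup bullet above, structured data is one of the more mechanical wins. Here is a minimal sketch of generating Article JSON-LD with Python; every value is a placeholder, and real output should be validated with Google’s Rich Results Test before shipping:

```python
# Minimal Article JSON-LD generator. Values are placeholders; the
# schema.org "Article" type and its properties are standard.
import json

def article_jsonld(headline, author_name, author_url, date_published, publisher):
    data = {
        "@context": "https://schema.org",
        "@type": "Article",
        "headline": headline,
        "author": {"@type": "Person", "name": author_name, "url": author_url},
        "datePublished": date_published,
        "publisher": {"@type": "Organization", "name": publisher},
    }
    return f'<script type="application/ld+json">{json.dumps(data, indent=2)}</script>'

print(article_jsonld(
    headline="How to balance speed and credibility in AI-assisted content creation",
    author_name="Jane Example",
    author_url="https://example.com/authors/jane",
    date_published="2025-10-28",
    publisher="Example Media",
))
```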

Dig deeper: Writing people-first content: A process and template

Train the LLM 

LLMs are trained on vast amounts of data – but they’re not trained on your data. 

Put in the work to train the LLM, and you can get better results and more efficient workflows. 

Here are some ideas.

Maintain a living style guide

If you already have a corporate style guide, great – you can use that to train the model. If not, create a simple one-pager that covers things like:

  • Audience personas.
  • Voice traits that matter.
  • Reading level, if applicable.
  • The do’s and don’ts of phrases and language to use. 
  • Formatting rules such as SEO-friendly headers, sentence length, paragraph length, bulleted list guidelines, etc. 

You can refresh this as needed and use it to further train the model over time. 
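One convenient way to keep the one-pager “living” is to store it as structured data, so the same file can be pasted into prompts or fed to a custom GPT. A minimal sketch, with every value an illustrative placeholder to adapt:

```python
# Illustrative style-guide-as-data; all values are placeholders to adapt.
STYLE_GUIDE = {
    "personas": ["in-house SEO lead", "content marketing manager"],
    "voice": ["plainspoken", "practical", "no hype"],
    "reading_level": "8th grade",
    "avoid": ["game-changing", "unlock", "in today's fast-paced world"],
    "formatting": {
        "headers": "H2/H3 phrased in the language users actually search",
        "sentence_length": "under 25 words on average",
        "paragraphs": "1-3 sentences",
        "lists": "bullets for steps and checklists",
    },
}

def style_guide_prompt(guide: dict) -> str:
    """Flatten the style guide into a block that can sit at the top of a prompt."""
    lines = [
        f"Write for: {', '.join(guide['personas'])}.",
        f"Voice: {', '.join(guide['voice'])}. Reading level: {guide['reading_level']}.",
        f"Never use: {', '.join(guide['avoid'])}.",
    ]
    lines += [f"{k.replace('_', ' ').title()}: {v}" for k, v in guide["formatting"].items()]
    return "\n".join(lines)

print(style_guide_prompt(STYLE_GUIDE))
```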

Build a prompt kit  

Put together a packet of instructions that prompts the LLM. Here are some ideas to start with: 

  • The style guide
    • This covers everything from the audience personas to the voice style and formatting.
    • If you’re training a custom GPT, you don’t need to do this every time, but it may need tweaking over time. 
  • A content brief template
    • This can be an editable document that’s filled in for each content project and includes things like:
      • The goal of the content.
      • The specific audience.
      • The style of the content (news, listicle, feature article, how-to).
      • The role (who the LLM is writing as).
      • The desired action or outcome.
  • Content examples
    • Upload a handful of the best content examples you have to train the LLM. This can be past articles, marketing materials, transcripts from videos, and more. 
    • If you create a custom GPT, you’ll do this at the outset, but additional examples of content may be uploaded, depending on the topic. 
  • Sources
    • Train the model on the preferred third-party sources of information you want it to pull from, in addition to its own research. 
    • For example, if you want it to source certain publications in your industry, compile a list and upload it to the prompt.  
    • As an additional layer, prompt the model to automatically include any third-party sources after every paragraph to make fact-checking easier on the fly.
  • SEO prompts
    • Consider building SEO into the structure of the content from the outset.  
    • Early observations of Google’s AI Mode suggest that clearly structured, well-sourced content is more likely to be referenced in AI-generated results.

With that in mind, you can put together a prompt checklist that includes:

  • Crafting a direct answer in the first one to two sentences, then expanding with context.
  • Covering the main question, but also potential subquestions (“fan-out” queries) that the system may generate (for example, questions related to comparisons, pros/cons, alternatives, etc.).
  • Chunking content into many subsections, with each subsection answering a potential fan-out query to completion.
  • Being an expert source of information in each individual section of the page, meaning it’s a passage that can stand on its own.
  • Providing clear citations and semantic richness (synonyms, related entities) throughout. 
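Taken together, the kit can be assembled into a single prompt per project. Here is a minimal sketch of stitching the pieces (style guide block, brief, SEO checklist, examples) together; the brief fields and checklist wording are examples, not a canonical template:

```python
# Illustrative prompt assembly; brief fields, checklist wording, and
# placeholder strings are examples to adapt, not a canonical template.
SEO_CHECKLIST = [
    "Open with a direct one-to-two sentence answer, then expand with context.",
    "Cover the main question plus likely fan-out queries (comparisons, pros/cons, alternatives).",
    "Break content into subsections that each answer one fan-out query and stand alone.",
    "Cite a source for every claim and use related terms and entities naturally.",
]

def build_prompt(style_block: str, brief: dict, examples: list) -> str:
    """Combine the style guide, content brief, SEO checklist, and examples into one prompt."""
    parts = [
        "STYLE GUIDE:\n" + style_block,
        "CONTENT BRIEF:\n" + "\n".join(f"- {k}: {v}" for k, v in brief.items()),
        "SEO CHECKLIST:\n" + "\n".join(f"- {item}" for item in SEO_CHECKLIST),
        "REFERENCE EXAMPLES:\n" + "\n---\n".join(examples),
    ]
    return "\n\n".join(parts)

brief = {
    "goal": "Drive newsletter signups from organic search",
    "audience": "In-house content marketers new to AI workflows",
    "format": "how-to article",
    "role": "Senior content strategist at a B2B SaaS company",
    "desired_outcome": "Reader adopts an AI usage policy this quarter",
}
print(build_prompt("(paste the style guide one-pager here)", brief,
                   ["(paste one or two best past articles here)"]))
```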

Dig deeper: Advanced AI prompt engineering strategies for SEO

Create custom GPTs or explore RAG 

A custom GPT is a personalized version of ChatGPT that’s trained on your materials so it can better create in your brand voice and follow brand rules. 

It mostly remembers tone and format, but that doesn’t guarantee the accuracy of output beyond what’s uploaded.

Some companies are exploring RAG (retrieval-augmented generation) to further train LLMs on the company’s own knowledge base. 

RAG connects an LLM to a private knowledge base, retrieving relevant documents at query time so the model can ground its responses in approved information.

While custom GPTs are easy, no-code setups, RAG implementation is more technical – but there are companies/technologies out there that can make it easier to implement. 

That’s why GPTs tend to work best for small or medium-scale projects or for non-technical teams focused on maintaining brand consistency.

Create a custom GPT in ChatGPT

RAG, on the other hand, is an option for enterprise-level content generation in industries where accuracy is critical and information changes frequently.
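To make the distinction concrete, here is a stripped-down sketch of the retrieval half of RAG, assuming the sentence-transformers library and a tiny in-memory index; production setups would swap in a vector database and the LLM of your choice, and the documents and model name below are placeholders:

```python
# Toy RAG retrieval: embed documents, embed the query, and ground the prompt
# in the top matches. Assumes sentence-transformers is installed; the model
# name is a common public checkpoint, swap in whatever your stack uses.
from sentence_transformers import SentenceTransformer, util

docs = [
    "Our 2025 pricing: the Pro plan is $49 per user per month, billed annually.",
    "Brand voice: plainspoken, practical, no hype.",
    "Refund policy: full refund within 30 days of purchase.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
doc_vecs = model.encode(docs, convert_to_tensor=True)

def grounded_prompt(question: str, k: int = 2) -> str:
    """Retrieve the k most relevant documents and build a grounded prompt."""
    q_vec = model.encode(question, convert_to_tensor=True)
    hits = util.semantic_search(q_vec, doc_vecs, top_k=k)[0]
    context = "\n".join(docs[h["corpus_id"]] for h in hits)
    return (f"Answer using only the context below. If it isn't covered, say so.\n\n"
            f"Context:\n{context}\n\nQuestion: {question}")

print(grounded_prompt("What does the Pro plan cost?"))
# The returned string is what you would send to the LLM of your choice.
```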

Run an automated self-review

Create parameters so the model can self-assess the content before further editorial review. You can create a checklist of things to prompt it.

For example:

  • “Is the advice helpful, original, people-first?” (Perhaps using Google’s list of questions from its helpful content guidance.) 
  • “Is the tone and voice completely aligned with the style guide?” 
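One low-effort way to wire this up is to feed the finished draft back to the model with the checklist as a rubric before any human review. A minimal sketch using the OpenAI Python SDK; the model name and rubric wording are placeholders:

```python
# Minimal automated self-review pass; the model name and rubric are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

RUBRIC = [
    "Is the advice helpful, original, and people-first?",
    "Is the tone and voice completely aligned with the style guide?",
    "Is every statistic, quote, and date backed by a citation?",
]

def self_review(draft: str) -> str:
    """Ask the model to grade the draft against the rubric before editors see it."""
    prompt = (
        "Review the draft below against each rubric item. For each item, answer "
        "PASS or FAIL with a one-sentence reason, then list specific fixes.\n\n"
        "Rubric:\n" + "\n".join(f"- {q}" for q in RUBRIC) + "\n\nDraft:\n" + draft
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

# print(self_review(open("draft.md").read()))
```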

Have an established editing process 

Even the best AI workflow still depends on trained editors and fact-checkers. This human layer of quality assurance protects accuracy, tone, and credibility.

Editorial training

About 33% of content writers and 24% of marketing managers added AI skills to their LinkedIn profiles in 2024.

Writers and editors need to continue to upskill in the coming year, and, according to the Microsoft 2025 annual Work Trend Index, AI skilling is the top priority.  

Source: Microsoft 2025 Work Trend Index Annual Report

Professional training creates baseline knowledge so your team gets up to speed faster and can confidently handle outputs consistently.

This includes training on how to effectively use LLMs and how to best create and edit AI content.

In addition, training content teams on SEO helps them build best practices into prompts and drafts.

Editorial procedures

Ground your AI-assisted content creation in editorial best practices to ensure the highest quality. 

This might include:

  • Identifying the parts of the content creation workflow that are best suited for LLM assistance.
  • Conducting an editorial meeting to sign off on topics and outlines. 
  • Drafting the content.
  • Performing the structural edit for clarity and flow, then copyediting for grammar and punctuation.
  • Getting sign-off from stakeholders.  
AI editorial process

The AI editing checklist

Build a checklist to use during the review process for quality assurance. Here are some ideas to get you started:

  • Every claim, statistic, quote, or date is accompanied by a citation for fact-checking accuracy.
  • All facts are traceable to credible, approved sources.
  • Outdated statistics (more than two years old) are replaced with fresh insights. 
  • Draft meets the style guide’s voice guidelines and tone definitions. 
  • Content adds valuable, expert insights rather than being vague or generic.
  • For thought leadership, ensure the author’s perspective is woven throughout.
  • Draft is run through the AI detector, aiming for a conservative percentage of 5% or less AI. 
  • Draft aligns with brand values and meets internal publication standards.
  • Final draft includes explicit disclosure of AI involvement when required (client-facing/regulatory).

Grounding AI content in trust and intent

AI is transforming how we create, but it doesn’t change why we create.

Every policy, workflow, and prompt should ultimately support one mission: to deliver accurate, helpful, and human-centered content that strengthens your brand’s authority and improves your visibility in search. 

Dig deeper: An AI-assisted content process that outperforms human-only copy

Is Your Google Workspace as Secure as You Think it is?

The New Reality for Lean Security Teams

If you’re the first security or IT hire at a fast-growing startup, you’ve likely inherited a mandate that’s both simple and maddeningly complex: secure the business without slowing it down. Most organizations using Google Workspace start with an environment built for collaboration, not resilience. Shared drives, permissive settings, and constant

Chrome Zero-Day Exploited to Deliver Italian Memento Labs' LeetAgent Spyware

The zero-day exploitation of a now-patched security flaw in Google Chrome led to the distribution of an espionage-related tool from Italian information technology and services provider Memento Labs, according to new findings from Kaspersky. The vulnerability in question is CVE-2025-2783 (CVSS score: 8.3), a case of sandbox escape which the company disclosed in March 2025 as having come under

RTX 4090 laptop GPU gets 20% performance boost after shunt mod, beats the mobile RTX 5090, on average — reduced resistance boosts power to 240W

A user on Reddit shunt-modded the RTX 4090 laptop GPU in their Zephyrus M16, which led to a 20% bump in performance compared to stock while even beating the RTX 5090 mobile on average. This was achieved by stacking a resistor atop the existing shunt, tricking the GPU into drawing far more power than it reports.
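For context on why a stacked resistor works: the GPU estimates its power draw from the voltage across a tiny shunt resistor, so paralleling a second resistor lowers that resistance and makes the GPU under-report power. A rough back-of-the-envelope, assuming an identical resistor was stacked (which halves the sense resistance):

```python
# Back-of-the-envelope shunt-mod math. Assumes an identical resistor is
# stacked in parallel with the stock shunt; values are illustrative.
r_stock = 1.0                                       # stock shunt resistance (relative units)
r_mod = (r_stock * r_stock) / (r_stock + r_stock)   # parallel combination -> 0.5

actual_power_w = 240                                # what the modded card reportedly draws
reported_power_w = actual_power_w * (r_mod / r_stock)
print(f"GPU thinks it is drawing ~{reported_power_w:.0f} W")  # ~120 W is what the limiter sees
```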

China builds brain-mimicking AI server the size of a mini-fridge, claims 90% power reduction — BI Explorer 1 packs in 1,152 CPU cores and 4.8TB of memory, runs on a household power outlet

China's GDIIST research institute has announced the development and upcoming release of the BIE-1, an AI supercomputer inspired by the operation of the human brain. This neuromorphic machine is one of the first standalone, non-rack-based brain-inspired computers we've ever seen.

Scientists claim you can't see the difference between 1440p and 8K at 10 feet in new study on the limits of the human eye — would still be an improvement on the previously-touted upper limit of 60 pixels per degree

Researchers at the University of Cambridge and Meta Reality Labs have conducted a new study on just how many pixels the human eye can take in at certain distances, and determined it's fewer than we might think. They claim in their results that it means most humans wouldn't be able to tell the difference between 1440p and 4K on a 50-inch screen at 10 feet distance.
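The claim is easy to sanity-check with standard pixels-per-degree arithmetic for a 50-inch 16:9 screen viewed from 10 feet, with 60 pixels per degree being the previously-touted acuity limit:

```python
# Pixels-per-degree for a 50-inch 16:9 screen viewed from 10 feet.
import math

diagonal_in = 50.0
width_in = diagonal_in * 16 / math.hypot(16, 9)   # ~43.6 in wide
distance_in = 10 * 12                             # 120 in

view_deg = math.degrees(2 * math.atan(width_in / (2 * distance_in)))  # ~20.6 degrees

for name, horiz_px in [("1440p", 2560), ("4K", 3840), ("8K", 7680)]:
    print(f"{name}: {horiz_px / view_deg:.0f} pixels per degree")
# All three land well above the classic 60 PPD threshold at this distance,
# which is consistent with the study's finding that the differences are invisible.
```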

Beyond LLMs: Crafting Robust AI with Multi-Method Agentic Architectures


“Large language models have well-known issues and constraints. And so if you want to solve complex problems, you’re going to want to adopt what’s called multi-method agentic AI, which combines large language models with other kinds of proven automation technologies so that you can build more adaptable, more transparent systems that are much more likely […]

The post Beyond LLMs: Crafting Robust AI with Multi-Method Agentic Architectures appeared first on StartupHub.ai.

Google Arts & Culture Elevates Virtual Travel with AI Tours


Google Arts & Culture is redefining virtual exploration with new AI tours for North Gyeongsang, South Korea, featuring interactive, Gemini-powered commentary.

The post Google Arts & Culture Elevates Virtual Travel with AI Tours appeared first on StartupHub.ai.

Nvidia’s AI Imperative: Beyond Moore’s Law, Network is the New Compute


Michael Kagan, CTO of Nvidia and co-founder of Mellanox, recently engaged in a candid discussion with Sonya Huang and Pat Grady at Sequoia’s Europe100 event, offering profound insights into Nvidia’s meteoric rise as the architect of AI infrastructure. His commentary illuminated the pivotal role of the Mellanox acquisition in transforming Nvidia from a mere chip […]

The post Nvidia’s AI Imperative: Beyond Moore’s Law, Network is the New Compute appeared first on StartupHub.ai.

Fedora Linux 43 Now Available For Download

It's Fedora 43 release day! This latest installment of Fedora Linux is now available for download, with Fedora Workstation 43 using the GNOME 49 desktop, the modern Linux 6.17 kernel powering the release, and many other exciting improvements and leading-edge software updates for this Red Hat-sponsored Linux distribution...

Windows 11 Will Start Memory Scans After BSOD to Prevent Future Issues

The land of Windows 11 is finally getting a feature most users will appreciate: a memory scan for issues after a blue screen of death (BSOD). "We're introducing a new feature that helps improve system reliability. If your PC experiences a bug check (unexpected restart), you may see a notification when signing in suggesting a quick memory scan," noted Windows Insider Program lead Amanda Langowski. Additionally, "If you choose to run it, the system will schedule a Windows Memory Diagnostic scan to run during your next reboot (taking 5 minutes or less on average) and then continue to Windows. If a memory issue is found and mitigated, you will see a notification post-reboot."

Microsoft notes that this first wave flags every bug check so it can watch how memory glitches turn into blue screens, and it will refine the targeting of these issues in later updates. At the moment the preview will not run on Arm64 PCs, machines that have Administrator Protection turned on, or any BitLocker setup that boots without Secure Boot enabled. Users who are part of the Windows Insider Dev and Beta channels will be able to access this feature. Windows 11 Insider Preview Build 26220.6982 (KB5067109) and Windows 11 Insider Preview Build 26120.6982 (KB5067109) are the first in the rollout, so Insiders can beta test the feature before it hits the main stable Windows 11 branch.

NVIDIA DGX Spark Reportedly Runs at Half the Power and Performance

NVIDIA's DGX Spark machine, designed as the ultimate AI box for local and fast AI prototyping, is reportedly operating at half the expected power and performance levels. John Carmack, founder of AGI-focused Keen Technologies and former CTO of Oculus VR, claims that the DGX Spark mini PC is not meeting its specified performance. NVIDIA rates the DGX Spark mini PC at 240 W of system power, but Carmack's benchmarks indicate it only draws about 100 W, effectively halving the power draw and performance. The DGX Spark's peak throughput is approximately 31 TeraFLOPS for FP32 and around 1,000 TOPS with NVIDIA's NVFP4 reduced-precision format. At BF16 dense compute, it is supposed to achieve 125 TeraFLOPS, but these targets are not being met. The measured compute is about 480 TeraFLOPS at FP4 and only about 60 TeraFLOPS at BF16.

After facing multiple delays, NVIDIA's DGX Spark has finally reached developers. However, many are reporting software and firmware issues on NVIDIA's end. There may also be thermal throttling problems, causing the chip to reduce frequency and voltage to prevent overheating. In some cases, the system has rebooted, potentially due to inadequate cooling. The GB10 SoC is rated for a 140 W TDP, and the 128 GB configuration of LPDDR5X could add several dozen additional watts. Therefore, a 100 W power draw doesn't seem feasible for the DGX Spark. It remains to be seen whether a software or firmware update will address these issues, or if NVIDIA will provide an aftermarket cooling solution for its $3,999 machine if it continues to overheat.

(PR) Samsung Launches New microSD Express Card P9 Express

Samsung Electronics today announced the launch of its new microSD Express card lineup, the P9 Express, designed to deliver next-gen gaming experiences and optimized for leading platforms, including the Nintendo Switch 2. Based on the PCIe interface and NVMe protocol, SD Express technology significantly enhances data transfer performance compared to UHS-I cards, making it ideal for environments that demand high-capacity processing and fast data transmission.

The P9 Express is especially valuable for hardcore console gamers who frequently enjoy a diverse range of games and Downloadable Content (DLC), often demanding additional storage capacity beyond the internal storage. To meet different gaming needs, it is available in both 256 GB and 512 GB options. It also provides an ideal solution for multiple users sharing a single console, where ample capacity is required for several different game installations, helping gamers overcome limited internal storage and enjoy a wide variety of titles without compromise. When used with a dedicated SD Express interface, the P9 Express delivers sequential read speeds up to four times faster than UHS-I, enabling creators and professionals to efficiently move large volumes of data from devices to PCs, laptops, or workstations.

(PR) ASUS Launches XA NB3I-E12 AI Server Built with NVIDIA HGX B300

ASUS today announced the shipment of the XA NB3I-E12 AI server, built on the NVIDIA HGX B300 platform. Delivering next-generation AI performance and reliability, XA NB3I-E12 gives enterprises and cloud-service providers (CSPs) early access to cutting-edge computing capabilities for the AI era. Accelerated by eight NVIDIA Blackwell Ultra GPUs and dual Intel Xeon 6 Scalable processors, ASUS XA NB3I-E12 is engineered for intensive AI workloads. With eight NVIDIA ConnectX-8 InfiniBand SuperNICs, five PCIe expansion slots, 32 DIMMs, 10 NVMe drives, and dual 10 Gb LAN, it transforms data into intelligence for real-world automation. The system is ideal for enterprises and CSPs running large language models (LLMs), research institutions and universities performing scientific computing, and the financial and automotive sectors focused on AI model training and inference.

(PR) Klevv Expands Urbane V RGB DDR5 Gaming/OC Memory Series With an All-New Jet Black Edition

KLEVV, the leading consumer memory and storage brand introduced by Essencore, proudly unveils a striking new colorway for its award-winning URBANE V RGB DDR5 Gaming/OC memory. The sleek new Jet Black Edition joins the popular Brilliant White, broadening the lineup and giving enthusiasts more freedom to personalize their setups without compromise.

The URBANE V RGB Gaming/OC series is designed with both style and function in mind, featuring a 2 mm-thick aluminium heatsink with refined, curved edges and precision linear grooves that ensure durability and efficient cooling. With a low-profile height of just 42.5 mm, the modules fit seamlessly into diverse builds while maintaining optimal thermal performance. A distinctive dual-beam RGB light guide delivers smooth, customizable illumination across 16 million colors, fully compatible with major motherboard lighting software. This proven design, recognized with the prestigious iF Design Award, is now available in an elegant Jet Black finish that complements today's modern gaming and creator setups.

Amazon confirms 14,000 corporate job cuts, says push for ‘efficiency gains’ will continue into 2026

Amazon CEO Andy Jassy has been pushing to reduce bureaucracy across the company. (GeekWire Photo / Todd Bishop)

Amazon confirmed Tuesday that it is cutting about 14,000 corporate jobs, citing a need to reduce bureaucracy and become more efficient in the new era of artificial intelligence.

In a message to employees, posted on the company’s website, Amazon human resources chief Beth Galetti signaled that additional cutbacks are expected to take place into 2026, while indicating that the company will also continue to hire in key strategic areas.

Reuters reported Monday that the number of layoffs could ultimately total as many as 30,000 people, which is still a possibility as the cutbacks continue into next year. At that scale, the overall number of job cuts could eventually be the largest in Amazon’s history, exceeding the 27,000 positions that the company eliminated in 2023 across multiple rounds of layoffs.

“This generation of AI is the most transformative technology we’ve seen since the Internet, and it’s enabling companies to innovate much faster than ever before,” wrote Galetti, senior vice president of People Experience and Technology.

The goal is “to be organized more leanly, with fewer layers and more ownership, to move as quickly as possible for our customers and business,” she explained.

Galetti wrote that the company is “shifting resources to ensure we’re investing in our biggest bets and what matters most to our customers’ current and future needs” — indicating that layoff decisions are based on whether teams and roles align with the company’s direction.

Amazon’s corporate workforce numbered around 350,000 people in early 2023, the last time the company provided a public number. At that scale, the initial reduction of 14,000 represents about 4% of Amazon’s corporate workforce. However, the number is a much smaller fraction of its overall workforce of 1.55 million people, which includes workers in its warehouses.

Cuts are expected across multiple regions and countries, but they are likely to hit hard in the Seattle region, home to the company’s first headquarters and its largest corporate workforce. The region has already felt the impact of major layoffs by Microsoft and others, as companies adjust to the uncertain economy and accelerate investments in AI-driven automation.

Many displaced tech workers here have found job searches slower and more competitive than in previous cycles in which the tech sector was more insulated than other industries.

The cuts at Amazon are the latest pullback after a pandemic-era hiring spree. They come two days before the company’s third quarter earnings report. Amazon and other cloud giants have been pouring billions into capital expenses to boost AI capacity. Cutting jobs is one way of showing operating-expense discipline to Wall Street.

In a memo to employees in June, Amazon CEO Andy Jassy wrote that he expected Amazon’s total corporate workforce to get smaller over time as a result of efficiency gains from AI.

Jassy took over as Amazon CEO from founder Jeff Bezos in mid-2021. In recent years he has been pushing to reduce management layers and eliminate bureaucracy inside the company, saying he wants Amazon to operate like the “world’s largest startup.” 

Bloomberg News reported this week that Jassy has told colleagues that parts of the company remain “unwieldy” despite the 2023 layoffs and other efforts to streamline operations. 

As part of its report, Reuters cited sources saying the magnitude of the cuts is also a result of Amazon’s strict return-to-office policy failing to cause enough employees to quit voluntarily. Amazon brought workers back five days a week earlier this year.

Impacted teams and people will be notified of the layoffs today, Galetti wrote.

Amazon is offering most impacted employees 90 days to find a new role internally, though the timing may vary based on local laws, according to the message. Those who do not find a new position at Amazon or choose to leave will be offered severance pay, outplacement services, health insurance benefits, and other forms of support.

Nvidia DGX Spark delivers half quoted performance for John Carmack

John Carmack reports performance issues with Nvidia’s DGX Spark AI system

John Carmack, id Software founder and former CTO of Oculus VR, has been testing an Nvidia DGX Spark AI system. So far, he is not impressed by the performance the system has delivered. His system appears to be maxing out at 100 watts, which […]

The post Nvidia DGX Spark delivers half quoted performance for John Carmack appeared first on OC3D.

Battlefield REDSEC is launching today – Here’s what you need to know

Battlefield is getting a free-to-play Battle Royale mode

EA has confirmed that Battlefield REDSEC will launch on October 28th at 3 PM GMT, a free-to-play Battle Royale game that debuts alongside Battlefield 6’s Season 1 content. Battlefield REDSEC acts as EA’s counter to Call of Duty: Warzone. Currently, exact details for the new game are […]

The post Battlefield REDSEC is launching today – Here’s what you need to know appeared first on OC3D.

The Matrix Creators Wanted Kojima to Make a Game Based on the IP, But Konami Refused

Hideo Kojima on the left beside Matrix code background with Neo on the right.

There's little doubt that The Matrix franchise is criminally underserved when it comes to videogame adaptations, despite being theoretically a perfect fit for the medium. In the 26 years since the original movie's theatrical debut, we only got two decent games: 2003's single player action/adventure game Enter the Matrix and 2005's MMORPG The Matrix Online. More recently, the interactive experience The Matrix Awakens was released in late 2021, but it was really just a tech demo for Unreal Engine 5 and a tease at the level of quality that gaming fans of the IP never really got to fully experience. […]

Read full article at https://wccftech.com/the-matrix-creators-wanted-kojima-make-a-game-on-the-ip-konami-refused/

Watch The NVIDIA GTC 2025, CEO Jensen Huang, Keynote Here: Live From Washington, US

NVIDIA GTC event in Washington, D.C. with dates October 27-29, 2025, displayed alongside the Washington Monument.

Today at GTC 2025, NVIDIA's CEO, Jensen Huang, will deliver the opening keynote live from Washington, US, for the first time.

NVIDIA GTC Comes To Washington, D.C., US: CEO Jensen Huang To Talk About Next Chapter of AI, Watch It Live Here

NVIDIA's GTC 2025 is just a few hours away, and while you might be wondering, didn't GTC already happen a few months back? Well, it should be mentioned that while GTC used to be a once-per-year affair in the past, the recent growth and success have turned NVIDIA's GTC into more of a quarterly event. As […]

Read full article at https://wccftech.com/watch-nvidia-gtc-2025-ceo-jensen-huang-keynote-live-washington-us/

Apple’s Next In-House 5G Modem – The C2, Will Use An Older Manufacturing Process From TSMC Next Year, Unlike The A20 & A20 Pro

Apple's C2 5G modem found in the iPhone 18 will be made on TSMC's N4 process

The iPhone 17 lineup is expected to be Apple’s last to ship with Qualcomm’s 5G modems as the company prepares its transition to ship all of its iPhone 18 models with the C2 baseband chip. This in-house solution was said to be in development shortly after the iPhone 16e was announced, and while we will witness its materialization in 2026, a new report states that, unlike other Apple chipsets like the A20 and A20 Pro, it will not leverage TSMC’s newest 2nm process, but a lithography that is a couple of generations old. The C2 5G modem will reportedly be mass […]

Read full article at https://wccftech.com/apple-c2-to-be-mass-produced-on-older-tsmc-process-says-report/

Bully Online Mod Promises to Let You Roam Rockstar’s Classic with Friends

Bully Online title screen with comic-style character in a cheerleader outfit marked with a 'B'.

A team of modders is working on Bully Online, a modification for the PC version of Bully: Scholarship Edition that promises to allow players to roam the grounds of Bullworth Academy and the nearby town with their friends. The Wii and Xbox 360 versions of Scholarship Edition did have a multiplayer mode, but it was limited to two players and only allowed them to face off in the class minigames. According to community creator SWEGTA, Bully Online promises much more, including free roam support, solo and group minigames, and even a role-playing system. They were able to add a 'fully […]

Read full article at https://wccftech.com/bully-online-mod-promises-let-you-roam-rockstars-classic-with-friends/

Loulan: The Cursed Sand Is a Chinese Hack ‘n’ Slash ARPG Where You Play as a Skeletal Warrior

Loulan: The Cursed Sand poster with a half-skeletal face and desert backdrop.

This morning, indie Chinese developer ChillyRoom unveiled Loulan: The Cursed Sand, one of the games funded through the PlayStation China Hero Project. The game is a hack 'n' slash action RPG viewed from a Diablo-like camera. The setting is the ancient Silk Road, in China's Western Regions. Loulan: The Cursed Sand tells the tragic love story of an exiled royal guard who returns to the titular fallen kingdom amidst the chaos of war in search of his beloved princess. Players will step into the game as the skeletal warrior known as 'The Cursed Sand', mastering the power of sand as […]

Read full article at https://wccftech.com/loulan-the-cursed-sand-chinese-hack-n-slash-arpg/

Galaxy Z TriFold, Samsung’s First Triple-Folding Smartphone, Gets A First Look Through A Series Of Images

Samsung's Galaxy Z TriFold gets pictured

Samsung looks to be all set to announce its first triple-folding smartphone, the Galaxy Z TriFold, and even though the device is expected to be limited to a few markets, it was high time that we saw smartphones gravitate to a new form factor. Just before the official announcement happens, a series of images provides a first look at the Galaxy Z TriFold, showing a dual-infolding structure that can transform into a large-screen tablet. The Galaxy Z TriFold was on display at the Samsung booth at the K-Tech Showcase, with one report stating that the prototype did not display any […]

Read full article at https://wccftech.com/samsung-galaxy-z-trifold-first-look-image-gallery/

I tested Ulefone's latest rugged phone - and found the Armor 29 Ultra is as refined as these devices get

Featuring a powerful CPU/GPU combo, a bright, high-resolution AMOLED screen, quad cameras, a powerful camping light, and a huge-capacity battery, this rugged smartphone is about as refined as they come. It's designed for those who need a mobile device that can withstand the elements and go days, if not weeks, between charges.

On-Policy Distillation LLMs Redefine Post-Training Efficiency


On-policy distillation LLMs from Thinking Machines Lab offer a highly efficient and cost-effective method for post-training specialized smaller models, combining direct learning with dense feedback.

The post On-Policy Distillation LLMs Redefine Post-Training Efficiency appeared first on StartupHub.ai.

Tensormesh exits stealth with $4.5M to slash AI inference caching costs


Tensormesh's AI inference caching technology eliminates redundant computation, promising to make enterprise AI cheaper and faster to run at scale.

The post Tensormesh exits stealth with $4.5M to slash AI inference caching costs appeared first on StartupHub.ai.

Halo: Campaign Evolved Lead Responds to Criticisms of AI Use

News recently broke in a Rolling Stone interview that Halo Studios had relied on AI in the development of Halo: Campaign Evolved, with the game director, Greg Hermann, commenting on "how integrated AI is becoming" in the "tooling" of the studio's development pipeline. Following this and other comments that implied AI was being heavily used in the remake of Halo: Combat Evolved, fans started assuming that varying degrees of the creative workload involved in the development of Campaign Evolved were being handled by generative AI. This comes after EA and Krafton both announced an increase in their reliance on generative AI in both the game development process and overall corporate management processes.

More recently, however, an Xbox representative responded directly to Rolling Stone, clarifying that "There is no mandate to use generative AI in our game development, and that includes Halo: Campaign Evolved," contrary to the situation facing many EA workers, who have allegedly been under pressure to use AI tools for over a year. This response echoed the game director's prior insistence that generative AI is being used merely as a tool by the developers to improve general workflows, and that game development "really is about that creative spark that comes from people." Mentions of generative AI in video games are only becoming more frequent, and many online discussions surrounding AI indicate that it is fuelling an erosion of trust in game studios and developers.

Filing: Meta’s AI layoffs hit Washington offices in Bellevue, Seattle, Redmond

Meta’s office at Dexter Station in Seattle. (Meta Photo)

Meta plans to lay off more than 100 employees in Washington state as part of a broader round of cuts within its artificial intelligence division.

A new filing with the state’s Employment Security Department shows 101 employees impacted, including 48 in Bellevue, 23 in Seattle, and four in Redmond, along with 23 remote workers based in Washington.

The filing lists dozens of affected roles across Meta’s AI research and infrastructure units, including software engineers, AI researchers, and data scientists. Meta product managers, privacy specialists, and compliance analysts were also affected.

Meta is cutting around 600 positions in its AI unit, Axios reported last week. The company is investing heavily in AI and wants to create a “more agile operation,” according to an internal memo cited by Axios. Meta has just under 3,000 roles within its superintelligence lab, CNBC reported.

The separations at Meta in Washington take effect Dec. 22, according to the Worker Adjustment and Retraining Notification (WARN) notice filed Oct. 22.

Meta employs thousands of people across multiple offices in the Seattle region, one of its largest engineering hubs outside Menlo Park.

The latest reductions mark another contraction for Meta’s Pacific Northwest footprint following multiple rounds of layoffs over the past several years.

The company’s rapid expansion in Seattle over the past decade made it one of the emblems of the region’s tech boom, coinciding with Microsoft’s resurgence and Amazon’s rise.

Among the Bay Area titans, Google was one of the first to establish a Seattle-area engineering office, way back in 2004. However, it was Facebook’s decision to open its own outpost across from Pike Place Market in 2010 that really got the attention of its Silicon Valley tech brethren.

In the decade that followed, out-of-town companies set up more than 130 engineering centers in the region.

The Meta Open Arts maker space in Block 16 in Bellevue’s Spring District. (GeekWire File Photo / Kurt Schlosser)

However, more recently Meta has made moves to trim its Seattle-area footprint.

Apple earlier this year took over a building previously occupied by Meta in Seattle’s South Lake Union neighborhood, near Amazon’s headquarters. CoStar reported in April that Meta listed its other Arbor Blocks building for sublease.

Meta previously gobbled up much of the planned office space at the Spring District, a sprawling development northeast of downtown Bellevue, including a building that was originally going to be a new REI headquarters. But it has subleased some of the space since then to companies such as Snowflake, which recently took an entire building from Meta at the Spring District.

Meta’s office in Redmond, near Microsoft’s headquarters, is focused on its mixed reality development.

GeekWire has reached out to the company for an updated Seattle-area headcount.

Meta’s cuts come amid reported layoffs at Amazon that could impact up to 30,000 workers.

Tech companies have laid off more than 128,000 employees this year, according to Layoffs.fyi. Last year, companies cut nearly 153,000 positions.


SideWinder Adopts New ClickOnce-Based Attack Chain Targeting South Asian Diplomats

A European embassy located in the Indian capital of New Delhi, as well as multiple organizations in Sri Lanka, Pakistan, and Bangladesh, have emerged as the target of a new campaign orchestrated by a threat actor known as SideWinder in September 2025. The activity "reveals a notable evolution in SideWinder's TTPs, particularly the adoption of a novel PDF and ClickOnce-based infection chain, in

Stray Appears To Be November's First PlayStation Plus Free Monthly Game

Streaming services like Xbox Game Pass and PlayStation Plus provide gamers with a wealth of games to play with the obvious drawback that you don't get to keep the games for an extended period of time. PlayStation Plus's free monthly games, however, skip this caveat and allow players to keep the game as long as it's added to their library during the month the game is featured. According to supposed leaks by DeaLabs, November's monthly free game for PS Plus subscribers will be Stray, and it will be available to claim from November 4. After that, as long as players have a PS Plus subscription, they will be able to play Stray. Stray was previously available to play via a PS Plus subscription, but it was subsequently removed around the game's Xbox launch.

Stray is a single-player indie adventure game that originally launched in 2022 for the PS5, PS4, and PC via Steam, later launching on the Xbox Series X|S, macOS, and Nintendo Switch. The game follows an orange cat as it explores an underground cyberpunk city occupied exclusively by robots in an effort to find its way back to the surface. It largely features puzzle-platformer mechanics, with a particular focus on environmental puzzles. The game will apparently be the flagship title for the month of November on PS Plus, but there will be two other as-yet unknown free games joining Stray at the same time.

QuietNet is like a filter for the internet: block ads, trackers, and threats for every device, all in one place


Quietnet blocks ads, tracking, and harmful websites before they even reach your phone, laptop, or TV — no apps needed, and it works for every device in your home or office.

We make the internet faster, safer, and more private for families and small businesses — without the noise. We're not backed by big tech or VC money. We're privacy-focused, bootstrapped, and already seeing people pay for peace and quiet online. QuietNet is built by people who care — no ads, no tracking, just a cleaner, safer internet for your family or team.


Battlefield 6's long-awaited battle royale mode officially drops October 28 — "RedSec" will be free to play across PC and console

EA announced a battle royale mode was in development for Battlefield 6 a while ago, and now we know what it'll be called. RedSec releases tomorrow, alongside the game's first season update, and it will be completely free to play. More details will be revealed tomorrow, but we already know that RedSec will feature an insta-kill zone system, far less lenient than any other game.
