Today — 1 November 2025

ChatGPT as a soccer advisor: Seattle Reign FC uses AI to develop winning defensive strategy

1 November 2025 at 21:28
Seattle Reign FC head coach Laura Harvey. (Reign FC Photo)

Generative AI has made its way onto the professional soccer field.

Laura Harvey, head coach of Seattle Reign FC, said this week that ChatGPT helped her come up with a new defensive strategy.

Speaking on the Soccerish Podcast, Harvey said she was curious if ChatGPT could answer questions about soccer. So she started prompting OpenAI’s chatbot with questions about her team.

At first, she asked broad questions like, “what is Seattle Reign’s identity?” She didn’t really love the answer.

But then she asked: “What formation should you play to beat NWSL teams?”

The chatbot listed every team in the professional women’s soccer league, each with a suggested formation. For two of the teams, it recommended a “back five,” a defensive setup that uses five players in the backline.

Harvey said she wasn’t super familiar with the strategy and had not used it as a coach.

She took the AI suggestion to her staff and did a deep dive on the potential change.

“We liked it,” Harvey said. “And it worked — we won the game.”

Harvey, a three-time NWSL Coach of the Year, didn’t reveal the opponent but said they were “really good.” Now the team uses the formation as an option during matches. The Reign have improved since last season and are ranked fourth in the NWSL heading into the playoffs.

It’s a fascinating example of using AI as a tactical consultant, combining human expertise and intuition with machine suggestions.

“It didn’t tell you how to play it, what to do in it or any of that stuff,” Harvey said on the podcast. “But it was like, ‘This is what we would say to do.’ And I was like, ‘Hmm, interesting.’ And that was what spurred me to look into it. So then I really looked into it.”

Across industries, professionals are treating tools such as ChatGPT as sounding boards — running ideas by them, exploring scenarios, or pressure-testing strategies before making decisions.

OpenAI’s own research this year found that people increasingly rely on ChatGPT “as an advisor rather than only for task completion.”

“ChatGPT likely improves worker output by providing decision support, which is especially important in knowledge-intensive jobs where productivity is increasing in the quality of decision-making,” according to the research.

Seattle’s tech paradox: Amazon’s layoffs collide with the AI boom — or is it a bubble?

1 November 2025 at 19:36
Image created by Google Gemini based on the audio of this week’s GeekWire Podcast.

This week on the GeekWire Podcast: Why is Amazon laying off 14,000 people in the middle of an AI boom — and is it really a boom at all? We dig into the contradiction at the heart of Seattle’s tech scene, discussing Amazon CEO Andy Jassy’s “world’s largest startup” rationale and what it says about the company’s culture and strategy. And we debate whether AI progress represents true transformation or the familiar signs of a tech bubble in the making.

Then we examine the vision of Cascadia high-speed rail — the ambitious plan to connect Portland, Seattle, and Vancouver, B.C., by bullet train. Is it the regional infrastructure needed to power the Pacific Northwest’s next chapter, or an expensive dream looking for a purpose?

With GeekWire co-founders John Cook and Todd Bishop

Related headlines from the week

Amazon layoffs

Amazon earnings

Microsoft Azure, earnings and OpenAI

Seattle-Portland-Vancouver

Subscribe to GeekWire in Apple Podcasts, Spotify, or wherever you listen.

Google’s AI co-scientist just solved a biological mystery that took humans a decade

1 November 2025 at 16:00

Can artificial intelligence function as a partner in scientific discovery, capable of generating novel, testable hypotheses that rival those of human experts? Two recent studies highlight how a specialized AI developed by Google not only identified drug candidates that showed significant anti-fibrotic activity in a laboratory model of chronic liver disease but also independently deduced a complex mechanism of bacterial gene transfer that had taken human scientists years to solve.

The process of scientific discovery has traditionally relied on human ingenuity, combining deep expertise with creative insight to formulate new questions and design experiments. However, the sheer volume of published research makes it challenging for any single scientist to connect disparate ideas across different fields. A new wave of artificial intelligence tools aims to address this challenge by augmenting, and accelerating, human-led research.

One such tool is Google’s AI co-scientist, which its developers hope will significantly alter the landscape of biomedical research. Recent studies published in Advanced Science and Cell provide early evidence of this potential, showing the system’s ability to not only sift through vast datasets but also to engage in a reasoning process that can lead to high-impact discoveries.

Google’s AI Co-scientist: A Multi-Agent System for Discovery

Google’s AI co-scientist is a multi-agent system built upon the Gemini 2.0 large language model, designed to mirror the iterative process of the scientific method. It operates not as a single entity, but as a team of specialized AI agents working together. This structure is intended to help scientists generate new research ideas, create detailed proposals, and plan experiments.

The system operates through a “scientist-in-the-loop” model, where human experts can provide initial research goals, offer feedback, and guide the AI’s exploration using natural language. The specialized agents each handle a distinct part of the scientific reasoning process. The Generation Agent acts as a brainstormer, exploring scientific literature and engaging in simulated debates to produce initial ideas. The Reflection Agent serves as a peer reviewer, critically assessing these ideas for quality, novelty, and plausibility.

Other agents contribute to refining the output. The Ranking Agent runs an Elo-based tournament, similar to chess rankings, to prioritize the most promising hypotheses. The Evolution Agent works to improve top-ranked ideas by combining concepts or thinking in unconventional ways. A Meta-review Agent synthesizes all the feedback to improve the performance of the other agents over time. This collaborative, self-improving cycle is designed to produce increasingly novel and high-quality scientific insights.
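The article describes the Ranking Agent's tournament only at a high level. As an illustration, a standard Elo update applied to competing hypotheses might look like the sketch below; the function name, starting rating, and K-factor are assumptions for illustration, not details from Google's system.

```python
def elo_update(r_a, r_b, outcome, k=32):
    """Update hypothesis A's rating after a pairwise comparison.

    outcome: 1.0 if A is judged better, 0.0 if worse, 0.5 for a tie.
    """
    expected = 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))
    return r_a + k * (outcome - expected)

# Two hypotheses start at an arbitrary baseline rating of 1200.
r1 = r2 = 1200.0
# Hypothesis 1 wins a simulated head-to-head review.
r1_new = elo_update(r1, r2, 1.0)
r2_new = elo_update(r2, r1, 0.0)
```

Repeated pairwise matchups of this kind let the system rank many hypotheses without ever scoring them on an absolute scale, which is the same reason the scheme works for chess players.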

AI Pinpoints New Drug Candidates for Liver Fibrosis

In the study published in Advanced Science, researchers partnered with Google to explore new ways of treating liver fibrosis, a progressive condition marked by excessive scarring in the liver. Current treatment options are extremely limited, in part because existing models for studying the disease do not accurately replicate how fibrosis develops in the human liver. These limitations have hindered drug development for years.

To address this gap, the research team asked the AI co-scientist to generate new, testable hypotheses for treating liver fibrosis. Specifically, they tasked the AI with exploring how epigenomic mechanisms—chemical changes that influence gene activity without altering the DNA sequence—might be targeted to reduce or reverse fibrosis.

“For the data used in the paper, we provided a single prompt and received a response from AI co-scientist, which are shown in supplemental data file 1,” explained Gary Peltz, a professor at Stanford University School of Medicine. “The prompt was carefully prepared, providing the area (epigenomic effects in liver fibrosis) and experimental methods (use of our hepatic organoids) to focus on. However, in most cases, it is important to iteratively engage with an AI in order to better define the question and enable it to provide a more complete answer.”

The AI system scanned the scientific literature and proposed that three classes of epigenomic regulators could be promising targets for anti-fibrotic therapy: histone deacetylases (HDACs), DNA methyltransferase 1 (DNMT1), and bromodomain protein 4 (BRD4). It also outlined experimental techniques for testing these ideas, such as single-cell RNA sequencing to track how the drugs might affect different cell populations. The researchers incorporated these suggestions into their experimental design.

To test the AI’s proposals, the team used a laboratory system based on human hepatic organoids—three-dimensional cell cultures derived from stem cells that resemble key features of the human liver. These mini-organs contain a mix of liver cell types and can model fibrosis when exposed to fibrotic triggers like TGF-beta, a molecule known to promote scarring. The organoid system allowed researchers to assess not just whether a drug could reduce fibrosis, but also whether it would be toxic or promote regeneration of liver tissue.

The findings provided evidence that two of the drug classes proposed by AI (HDAC inhibitors and BRD4 inhibitors) showed strong anti-fibrotic effects. One of the tested compounds, Vorinostat, is an FDA-approved cancer drug. In the organoid model, it not only suppressed fibrosis but also appeared to stimulate the growth of healthy liver cells.

“Since I was working on the text for a grant submission in this area, I was surprised by the AI co-scientist output,” Peltz told PsyPost.

In particular, Peltz was struck by how little prior research had explored this potential. After checking PubMed, he found over 180,000 papers on liver fibrosis in general, but only seven that mentioned Vorinostat in this context. Of those, four turned out to be unrelated to fibrosis, and another only referenced the drug in a data table without actually testing it. That left just two studies directly investigating Vorinostat for liver fibrosis.

While the HDAC and BRD4 inhibitors showed promising effects, the third AI-recommended class, DNMT1 inhibitors, did not. One compound in this category was too toxic to the organoids to be considered viable for further study.

To evaluate the AI’s performance, Peltz also selected two additional drug targets for comparison based on existing literature. These were chosen precisely because they had more published support suggesting they might work against fibrosis.

But when tested in the same organoid system, the inhibitors targeting those well-supported pathways did not reduce fibrosis. This outcome suggested that the AI was able to surface potentially effective treatments that human researchers might have missed, despite extensive literature reviews.

Looking ahead, Peltz said his team is “developing additional data with our liver organoid system to determine if Vorinostat can be effective for reducing an established fibrosis, and we are talking with some organizations and drug companies about the potential for Vorinostat being tested as an anti-fibrotic agent.”

An AI Recapitulates a Decade-Long Discovery in Days

In a separate demonstration of its reasoning power, the AI co-scientist was challenged to solve a biological mystery that had taken a team at Imperial College London over a decade to unravel. The research, published in Cell, focused on a peculiar family of mobile genetic elements in bacteria known as capsid-forming phage-inducible chromosomal islands, or cf-PICIs.

Scientists were puzzled by how identical cf-PICIs were found in many different species of bacteria. This was unexpected because these elements rely on viruses called phages to spread, and phages typically have a very narrow host range, often infecting only a single species or strain. The human research team had already solved the puzzle through years of complex experiments, but their findings were not yet public.

They had discovered a novel mechanism they termed “tail piracy,” where cf-PICIs produce their own DNA-filled “heads” (capsids) but lack tails. These tailless particles are then released and can hijack tails from a wide variety of other phages infecting different bacterial species, creating chimeric infectious particles that can inject the cf-PICI’s genetic material into a new host.

To test the AI co-scientist, the researchers provided it only with publicly available information from before their discovery was made and posed the same question: how do identical cf-PICIs spread across different bacterial species?

The AI co-scientist generated five ranked hypotheses. Its top-ranked suggestion was that cf-PICIs achieve their broad host range through “capsid-tail interactions,” proposing that the cf-PICI heads could interact with a wide range of phage tails. This hypothesis almost perfectly mirrored the “tail piracy” mechanism the human team had spent years discovering.

The AI, unburdened by the researchers’ initial assumptions and biases from existing scientific models, arrived at the core of the discovery in a matter of days. When the researchers benchmarked this result, they found that other leading AI models were not able to produce the same correct hypothesis, suggesting a more advanced reasoning capability in the AI co-scientist system.

Limitations and the Path Forward

Despite these promising results, researchers involved in the work caution that significant limitations remain. The performance of the AI co-scientist has so far been evaluated on a small number of specific biological problems. More testing is needed to determine if this capability can be generalized across other scientific domains. The AI’s reasoning is also dependent on the quality and completeness of the publicly available data it analyzes, which may contain its own biases or gaps in knowledge.

Perhaps most importantly, human expertise remains essential. While an AI can generate a large volume of plausible hypotheses, it lacks the deep contextual judgment that comes from years of hands-on experience. An experienced scientist is still needed to evaluate which ideas are truly worth pursuing and to design the precise experiments required for validation. The challenge of how to prioritize AI-generated ideas is substantial, as traditional experimental pipelines are not fast or inexpensive enough to test every promising lead.

“Generally, AI output must be evaluated by people with knowledge in the area; and AI output is most valuable to those with domain-specific expertise because they are best positioned to assess it and to make use of it,” Peltz told PsyPost.

Nevertheless, these two studies provide evidence that AI systems are evolving from helpful assistants into true collaborative partners in the scientific process. By generating novel and experimentally verifiable hypotheses, tools like the AI co-scientist have the potential to supercharge human intuition and accelerate the pace of scientific and biomedical breakthroughs.

“I believe that AI will dramatically accelerate the pace of discovery for many biomedical areas and will soon be used to improve patient care,” Peltz said. “My lab is currently using it for genetic discovery and for drug re-purposing, but there are many other areas of bioscience that will soon be impacted. At present, I believe that AI co-scientist is the best in this area, but this is a rapidly advancing field.”

The study, “AI-Assisted Drug Re-Purposing for Human Liver Fibrosis,” was authored by Yuan Guan, Lu Cui, Jakkapong Inchai, Zhuoqing Fang, Jacky Law, Alberto Alonzo Garcia Brito, Annalisa Pawlosky, Juraj Gottweis, Alexander Daryin, Artiom Myaskovsky, Lakshmi Ramakrishnan, Anil Palepu, Kavita Kulkarni, Wei-Hung Weng, Zhuanfen Cheng, Vivek Natarajan, Alan Karthikesalingam, Keran Rong, Yunhan Xu, Tao Tu, and Gary Peltz.

The study, “Chimeric infective particles expand species boundaries in phage-inducible chromosomal island mobilization,” was authored by Lingchen He, Jonasz B. Patkowski, Jinlong Wang, Laura Miguel-Romero, Christopher H.S. Aylett, Alfred Fillol-Salom, Tiago R.D. Costa, and José R. Penadés.

The study, “AI mirrors experimental science to uncover a mechanism of gene transfer crucial to bacterial evolution,” was authored by José R. Penadés, Juraj Gottweis, Lingchen He, Jonasz B. Patkowski, Alexander Daryin, Wei-Hung Weng, Tao Tu, Anil Palepu, Artiom Myaskovsky, Annalisa Pawlosky, Vivek Natarajan, Alan Karthikesalingam, and Tiago R.D. Costa.

AI Spend Mania Signals Caution for Tech Investors

1 November 2025 at 01:45

The post AI Spend Mania Signals Caution for Tech Investors appeared first on StartupHub.ai.

The current surge in AI spending, while indicative of technological advancement, is increasingly resembling a “speculative mania” driven by abundant central bank liquidity rather than organic demand from the real economy. This provocative insight comes from Bob Elliott, CEO and CIO of Unlimited, who recently joined CNBC’s “Closing Bell Overtime” to discuss the roaring tech […]

The hidden debt behind the AI boom: How Meta and xAI are quietly raising billions to finance AI investments

1 November 2025 at 01:29

The race to dominate artificial intelligence is pushing tech giants into uncharted financial territory. Behind the scenes, a quiet revolution is reshaping how companies fund their massive AI ambitions. Instead of piling debt directly onto their books, firms like Meta […]

The post The hidden debt behind the AI boom: How Meta and xAI are quietly raising billions to finance AI investments first appeared on Tech Startups.

Nvidia CEO: AI has reached a ‘virtuous cycle’ and ushered in a new era of computing

31 October 2025 at 17:33

Artificial intelligence, according to Nvidia’s Jensen Huang, has entered a self-sustaining phase — a “virtuous cycle” that’s reshaping how the world builds and profits from technology. Speaking at the APEC CEO Summit in South Korea, Huang said the progress in […]

The post Nvidia CEO: AI has reached a ‘virtuous cycle’ and ushered in a new era of computing first appeared on Tech Startups.

Nvidia to invest up to $1 billion in Poolside, valuing the AI startup at $12 billion

31 October 2025 at 02:26

Nvidia, the trillion-dollar chipmaker fueling today’s AI boom, is reportedly preparing one of its biggest startup bets yet. According to Bloomberg, the company plans to invest as much as $1 billion in Poolside, a Paris-based AI startup building software that […]

The post Nvidia to invest up to $1 billion in Poolside, valuing the AI startup at $12 billion first appeared on Tech Startups.

Top 10 Startup and Tech Funding News – October 30, 2025

31 October 2025 at 02:07

It’s Thursday, October 30, 2025, and we’re back with the top startup and tech funding news stories making waves today. From billion-dollar AI bets to fintech, space tech, and deep-tech innovation, investors showed no signs of slowing down as capital […]

The post Top 10 Startup and Tech Funding News – October 30, 2025 first appeared on Tech Startups.

The Digital Afterlife: AI’s Disruption of Death and Legacy

1 November 2025 at 00:16

The post The Digital Afterlife: AI’s Disruption of Death and Legacy appeared first on StartupHub.ai.

The ancient human yearning for immortality is rapidly colliding with the cutting edge of artificial intelligence, redefining not just what it means to live, but what it means to be dead. In a recent WIRED “Incognito Mode” segment, host Andrew Couts delved into the burgeoning “death tech” industry, exploring the fantastical promises of cryogenics alongside […]

Apple’s Insular AI Strategy Raises Analyst Concerns

31 October 2025 at 21:16

The post Apple’s Insular AI Strategy Raises Analyst Concerns appeared first on StartupHub.ai.

Needham senior internet and media analyst Laura Martin delivered a pointed critique of Apple’s generative AI strategy during a recent appearance on CNBC’s ‘Money Movers’. While acknowledging that Apple has finally articulated an AI narrative, Martin contends the company is “four quarters late” to the party, and its approach lacks the expansive, economy-retooling vision demonstrated […]

The secret to sustainable AI may have been in our brains all along

31 October 2025 at 22:00

Researchers have developed a new method for training artificial intelligence that dramatically improves its speed and energy efficiency by mimicking the structured wiring of the human brain. The approach, detailed in the journal Neurocomputing, creates AI models that can match or even exceed the accuracy of conventional networks while using a small fraction of the computational resources.

The study was motivated by a growing challenge in the field of artificial intelligence: sustainability. Modern AI systems, such as the large language models that power generative AI, have become enormous. They are built with billions of connections, and training them can require vast amounts of electricity and cost tens of millions of dollars. As these models continue to expand, their financial and environmental costs are becoming a significant concern.

“Training many of today’s popular large AI models can consume over a million kilowatt-hours of electricity, which is equivalent to the annual use of more than a hundred US homes, and cost tens of millions of dollars,” said Roman Bauer, a senior lecturer at the University of Surrey and a supervisor on the project. “That simply isn’t sustainable at the rate AI continues to grow. Our work shows that intelligent systems can be built far more efficiently, cutting energy demands without sacrificing performance.”

To find a more efficient design, the research team looked to the human brain. While many artificial neural networks are “dense,” meaning every neuron in one layer is connected to every neuron in the next, the brain operates differently. Its connectivity is highly sparse and structured. For instance, in the visual system, neurons in the retina form localized and orderly connections to process information, creating what are known as topographical maps. This design is exceptionally efficient, avoiding the need for redundant wiring. The brain also refines its connections during development, pruning away unnecessary pathways to optimize its structure.

Inspired by these biological principles, the researchers developed a new framework called Topographical Sparse Mapping, or TSM. Instead of building a dense network, TSM configures the input layer of an artificial neural network with a sparse, structured pattern from the very beginning. Each input feature, such as a pixel in an image, is connected to only one neuron in the following layer in an organized, sequential manner. This method immediately reduces the number of connections, known as parameters, which the model must manage.
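The study's exact wiring scheme is not reproduced in this article, but the core idea, one sequentially assigned connection per input feature, can be sketched roughly as follows. The modulo assignment and the MNIST-sized layer dimensions are illustrative assumptions, not the authors' code.

```python
def topographic_mask(n_inputs, n_hidden):
    """Sparse input-layer mask in the spirit of TSM: each input feature
    connects to exactly one hidden neuron, assigned in sequential order.
    Returns a set of (input_index, hidden_index) connections."""
    return {(i, i % n_hidden) for i in range(n_inputs)}

# A dense layer mapping 784 MNIST pixels to 128 hidden units needs
# 784 * 128 weights; the topographic mapping keeps only one per pixel.
dense = 784 * 128
sparse = len(topographic_mask(784, 128))
```

Even before any pruning, this reduces the input layer from 100,352 parameters to 784, which is where most of the framework's headline efficiency comes from.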

The team then developed an enhanced version of the framework, named Enhanced Topographical Sparse Mapping, or ETSM. This version introduces a second brain-inspired process. After the network trains for a short period, it undergoes a dynamic pruning stage. During this phase, the model identifies and removes the least important connections throughout its layers, based on their magnitude. This process is analogous to the synaptic pruning that occurs in the brain as it learns and matures, resulting in an even leaner and more refined network.

To evaluate their approach, the scientists built and trained a type of network known as a multilayer perceptron. They tested its ability to perform image classification tasks using several standard benchmark datasets, including MNIST, Fashion-MNIST, CIFAR-10, and CIFAR-100. This setup allowed for a direct comparison of the TSM and ETSM models against both conventional dense networks and other leading techniques designed to create sparse, efficient AI.

The results showed a remarkable balance of efficiency and performance. The ETSM model was able to achieve extreme levels of sparsity, in some cases removing up to 99 percent of the connections found in a standard network. Despite this massive reduction in complexity, the sparse models performed just as well as, and sometimes better than, their dense counterparts. For the more difficult CIFAR-100 dataset, the ETSM model achieved a 14 percent improvement in accuracy over the next best sparse method while using far fewer connections.

“The brain achieves remarkable efficiency through its structure, with each neuron forming connections that are spatially well-organised,” said Mohsen Kamelian Rad, a PhD student at the University of Surrey and the study’s lead author. “When we mirror this topographical design, we can train AI systems that learn faster, use less energy and perform just as accurately. It’s a new way of thinking about neural networks, built on the same biological principles that make natural intelligence so effective.”

The efficiency gains were substantial. Because the network starts with a sparse structure and does not require complex phases of adding back connections, it trains much more quickly. The researchers’ analysis of computational costs revealed that their method consumed less than one percent of the energy and used significantly less memory than a conventional dense model. This combination of speed, low energy use, and high accuracy sets it apart from many existing methods that often trade performance for efficiency.

A key part of the investigation was to confirm the importance of the orderly, topographical wiring. The team compared their models to networks that had a similar number of sparse connections but were arranged randomly. The results demonstrated that the brain-inspired topographical structure consistently produced more stable training and higher accuracy, indicating that the specific pattern of connectivity is a vital component of its success.

The researchers acknowledge that their current framework applies the topographical mapping only to the model’s input layer. A potential direction for future work is to extend this structured design to deeper layers within the network, which could lead to even greater gains in efficiency. The team is also exploring how the approach could be applied to other AI architectures, such as the large models used for natural language processing, where the efficiency improvements could have a profound impact.

The study, “Topographical sparse mapping: A neuro-inspired sparse training framework for deep learning models,” was authored by Mohsen Kamelian Rad, Ferrante Neri, Sotiris Moschoyiannis, and Roman Bauer.

Yesterday — 31 October 2025

The Week’s 10 Biggest Funding Rounds: AI, Fintech And E-Commerce In The Lead

31 October 2025 at 21:15

Want to keep track of the largest startup funding deals in 2025 with our curated list of $100 million-plus venture deals to U.S.-based companies? Check out The Crunchbase Megadeals Board.

This is a weekly feature that runs down the week’s top 10 announced funding rounds in the U.S. Check out last week’s biggest funding rounds here.

The week’s largest funding rounds confirmed that we’re still very much in the AI era. This included the biggest deal, a $350 million Series C for AI hiring startup Mercor, along with good-sized financings for legal tech unicorn Harvey, shopping platform Whatnot, and email security provider Sublime Security.

1. Mercor, $350M, AI hiring: San Francisco-based Mercor, a provider of AI-enabled tools for hiring, secured $350 million in Series C funding at a $10 billion valuation. Felicis[1] led the financing, which included participation by Robinhood Ventures, General Catalyst and Benchmark.

2. (tied) SavvyMoney, $225M, fintech: SavvyMoney, which offers tools for financial services providers to embed features like credit scores and personalized offers into their consumer offerings, announced a $225 million investment co-led by PSG Equity and Canapi Ventures. Founded in 2009, the Dublin, California, company currently works with more than 1,500 financial institution customers.

2. (tied) Whatnot, $225M, e-commerce: Whatnot, a live shopping platform and marketplace, has closed a $225 million Series F round, more than doubling its valuation to $11.5 billion in less than 10 months. DST Global and CapitalG co-led the financing, which brings the Los Angeles-based company’s total raised to about $968 million since its 2019 inception.

4. (tied) Sublime Security, $150M, cybersecurity: Sublime Security, a developer of agentic AI tools for email security, raised $150 million in a Series C round led by Georgian. The financing brings total funding to date for the 6-year-old Washington, D.C.-based company to around $240 million, per Crunchbase data.

4. (tied) Harvey, $150M, legal tech: Harvey, developer of an AI-enabled platform for legal professionals, closed on a fresh $150 million, bringing total reported funding to date to $1 billion. Andreessen Horowitz led the latest round, which reportedly set an $8 billion valuation for the 3-year-old, San Francisco-based company.

6. (tied) Human Interest, $100M, finance: Human Interest, a San Francisco-based startup that helps small businesses offer 401(k) plans to their employees, raised more than $100 million at a $3 billion valuation, Axios reports. That valuation is up from the $1.3 billion the company was last valued at in 2024. Previous investors Baillie Gifford, BlackRock, Marshall Wace, Morgan Stanley and TPG again backed the company. 

6. (tied) Substrate, $100M, semiconductors: Substrate, a San Francisco-based startup seeking to build semiconductor factories with new laser-based technology, raised $100 million from Founders Fund, General Catalyst, IQT and others.

8. Zag Bio, $80M, biotech: Cambridge, Massachusetts-based Zag Bio, a developer of thymus-targeted medicines, announced its public launch with $80 million in financing, including a recently closed Series A round. Polaris Partners founded and incubated the startup and co-led the Series A financing with the JDRF T1D Fund.

9. ConductorOne, $79M, identity security: ConductorOne, an identity security startup building an AI platform geared for human, non-human and AI identities, landed $79 million in a Series B financing led by Greycroft. The 4-year-old Portland, Oregon-based company says it saw 400% revenue growth last year.

10. Blueprint, $60M, personal care: Blueprint, a Los Angeles-based brand that markets supplements, skin and hair care products, and foods geared to promote well-being and longevity, raised $60 million from a long list of venture and celebrity investors including Paris Hilton, Cameron Winklevoss, Tyler Winklevoss and Logan Paul.

Methodology

We tracked the largest announced rounds in the Crunchbase database that were raised by U.S.-based companies for the period of Oct. 25-31. Although most announced rounds are represented in the database, there could be a small time lag as some rounds are reported late in the week.

Illustration: Dom Guzman


  1. Felicis Ventures is an investor in Crunchbase. They have no say in our editorial process. For more, head here.

ASEAN’s Quest for Culturally Intelligent AI

31 October 2025 at 19:48

The post ASEAN’s Quest for Culturally Intelligent AI appeared first on StartupHub.ai.

The global surge of artificial intelligence presents both unprecedented opportunities and profound challenges for the diverse nations of ASEAN. At the recent Bloomberg Business Summit at ASEAN in Kuala Lumpur, a panel featuring Khairul Anwar, Founder & CEO of Pandai; Ilaria Chan, Chairperson of Tech For Good Institute and Group Advisor for Tech & Social […]

ASEAN’s Digital Destiny: Blockchain, AI, and the Global South’s Opportunity

31 October 2025 at 18:47

The post ASEAN’s Digital Destiny: Blockchain, AI, and the Global South’s Opportunity appeared first on StartupHub.ai.

The tectonic plates of technology shift every couple of decades, fundamentally altering how humanity interacts with data and systems. Dato’ Fadzli Shah, Co-Founder of Zetrix, addressed attendees at the Bloomberg Business Summit at ASEAN in Kuala Lumpur, outlining his vision for how the region can strengthen its digital economy and competitiveness through the advancement of […]


Survey: Two-thirds of AI-native startups let AI write most of their code

31 October 2025 at 17:30
(Photo by Radowan Nakif Rehan on Unsplash)

[Editor’s Note: This guest post is by Marcelo Calbucci, a longtime Seattle tech and startup community leader.]

This month, I ran a survey with early-stage founders from Seattle-based Foundations about their use of AI tools and agents. There were some surprises in the data — and not in the direction you’d expect — and trends that are worth talking about.

The sample represents 22 startups with one to five software engineers each, for a total of 42 people. What makes this cohort valuable to understand is that they are AI-native startups, founded at a time when AI can code. This gives us a glimpse into the future of tech companies.

The first question I asked on the survey was about the percentage of production code being written by AI. I wrote this question explicitly to exclude unit tests, scripts, documents, and other artifacts that are not related to the core value proposition of a business. If you know one thing about AI coding, it is that it generates large volumes of unit tests, readme files, and scripts. None of that relates to the code that delivers the value to the customer.

Here’s the surprising fact: out of the 22, four startups (18%) said AI is writing 100% of their code. That’s mind-blowing! It doesn’t mean these folks are not reviewing and re-prompting the AI to refine the code. However, it means they aren’t typing code in an IDE. There are 11 startups (50%) where AI is writing 80-99% of the code. Adding the four where AI writes everything, 68% of startups have AI write over 80% of the production code. On the other side of the spectrum, three startups (13.6%) said that AI is writing less than 50% of their code.
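The shares above follow directly from the raw counts. As a quick sketch (using only the counts reported in the survey):

```python
# Recompute the reported cohort shares from the survey's raw counts.
TOTAL_STARTUPS = 22

ai_writes_all = 4       # AI writes 100% of production code
ai_writes_80_99 = 11    # AI writes 80-99% of production code
ai_writes_under_50 = 3  # AI writes less than 50% of production code

def share(count, total=TOTAL_STARTUPS):
    """Return the percentage of the cohort, rounded to one decimal place."""
    return round(100 * count / total, 1)

print(share(ai_writes_all))                    # 18.2 -> reported as 18%
print(share(ai_writes_all + ai_writes_80_99))  # 68.2 -> reported as 68%
print(share(ai_writes_under_50))               # 13.6
```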

Choose your weapons

From the news that Cursor gets in the press, you’d think usage for this cohort is close to 100%. In our sample, out of 42 programmers from 22 unique startups, “only” 23 of them (54.7%) use Cursor. On average, Cursor programmers spent $113.63/person in September. The most popular tool, though, is Claude Code, with 64.3% of programmers using it and spending $167.41/person in September. Claude is the preferred tool for startups, with 16 of the 22 (72.7%) using it.

After Claude and Cursor, there is a big cliff, with OpenAI Codex coming in a distant third place with seven of the startups using it, representing 12 of the 42 programmers. On average, expenses with OpenAI Codex came in at $48.49/person in September. The fourth and fifth places went to GitHub Copilot and Gemini CLI by Google, with 9.52% and 4.76% of programmers using them, respectively.

On average, each software engineer spent $182.55 on the top five AI tools mentioned above, with some startups spending over $400/person.

Founders also mentioned they use a variety of tools to create production code, including Lovable, Devplan, Mentat, Factory.ai, Jetbrains Junie, Warp, and Figma.

Roadblocks

When asked about what’s preventing more use of AI for coding, the number one complaint was the quality of the code. Another hindrance to faster adoption is the learning curve to get the agent to do what you want.

In terms of frustration, this group raises three key issues. First, the quality of the output, requiring considerable rework. Second, a mismatch between expectation and reality based on what everyone is hearing. Lastly, the most common frustration — and I definitely empathize with this one — is managing the context and dealing with large code bases.

What’s next?

In the survey, I asked about their intention to continue using AI tools and agents to assist with product development. The survey asked the founders whether they intended to add, remove, increase, or decrease usage of each tool. The biggest winner, by far, was Codex, with nine startups (40.9%) saying they aren't using it yet but plan to use it in Q4. Once I normalize the data to account for those Q4 expectations, Claude will maintain its leadership, but Codex will match it in number of startups. Cursor and GitHub Copilot will trend slightly lower, each with one startup saying it will stop using the tool. Finally, the Gemini CLI might see a small increase in adoption, with three startups saying they will give it a try in Q4.

Unlike many other aspects of software engineering, such as choosing a cloud provider, a language, or a database, AI tools and agents are not a zero-sum market. In this survey, 68.2% of startups used more than one AI tool to assist in production code development. Based on their stated intentions, that number will grow to 86.4% in Q4.

Beyond the Magnificent Seven: Unearthing AI’s Hidden Investment Gems

31 October 2025 at 16:16

The post Beyond the Magnificent Seven: Unearthing AI’s Hidden Investment Gems appeared first on StartupHub.ai.

The current fervor around artificial intelligence has led to unprecedented market concentration, particularly within the “Magnificent 7” tech giants. Yet, as Daniel Kim, Portfolio Manager of the Saturna International Fund, recently articulated on CNBC’s Worldwide Exchange, the most compelling opportunities in AI may lie precisely where fewer eyes are looking. Kim, speaking with interviewer Frank […]


The Non-Humanoid Robot Startups Are Rising Too

31 October 2025 at 15:00

Despite our acclimatization to the forward march of technology, many of us remain vaguely creeped out by the concept of humanoid robots.

Sure, it’d be wonderful to have autonomous machines adept at cleaning the house, harvesting and preparing food, running warehouses and performing a host of generally thankless and burdensome jobs. But must they look like us too?

For many startups, the answer to this question is “no.”

While humanoid robot startups like Figure and Apptronik have drawn headlines in recent months for big funding deals and flashy prototypes, an array of companies working on less-anthropomorphic designs have also secured considerable investment. These include four-legged models, AI-enabled appendages and skilled swimmers.

The non-humanoid bot startups getting funded

To illustrate, we used Crunchbase data to assemble a sample list of 26 companies in the non-humanoid robot startup sector that have raised rounds in the past few quarters. It’s a varied lot, with focus areas ranging from farming to pool cleaning to massaging.

Bots around town

The list also features a mix of consumer-facing and industrial use cases, and we figured we’d start by highlighting the first category. It’s not that these bots are necessarily more useful, but rather that being out in public makes them a bit more fun to contemplate.

If recently funded startups have their way, some of the bots we see in action could be taking on more of the everyday drudgery currently shouldered by humans.

Cleaning is one of the big areas. China-based Narwal Robotics, which closed a $100 million Series E in April, makes robot vacuums and mops and touts its “AI adaptive hot water mop washing,” LiDAR navigation and embedded dirt sensor. San Francisco-based The Bot Co., meanwhile, has raised $300 million since last year to iterate its vision of robots for household chores but has not yet released a prototype.

Pool-cleaning, an area already long-dominated by autonomous machines, is also set for an AI era upgrade, with two China-based companies pulling in rounds of $140 million each this year. Xingmai Innovation, which closed its round in September, markets its $3,000 Beatbot model as the “world’s first AI-powered 5-in-1 robotic pool cleaner.” Rival Aiper charges $1,700 for its Scuba Max Pro, which features smart pool mapping and a dedicated app.

And for those who need some pampering after a long day of not cleaning the pool, massage bot startup Aescape offers another spending option. The New York-based company secured $83 million in March to expand its customizable, “fully autonomous, AI-driven massage” offering.

Bots behind the scenes

While we may enjoy gawking at the still-unusual sight of a bot in public making a latte or delivering a restaurant meal, the bulk of funded companies in the non-humanoid bot space are working on models that will do their work behind the scenes.

Surgical robots have long been one of the more heavily funded areas, and this holds true for recent investment as well. The largest fundraiser on our list, U.K.-based CMR Surgical, developer of a soft tissue surgical robot, has secured $1.1 billion in known funding to date, including a $200 million April financing. Israel-based ForSight Robotics, developer of a robotic platform for ophthalmic surgery, is also scaling up, closing a $125 million Series B in June.

On the industrial front, Swiss startup Anybotics has raised more than $150 million to develop a four-legged bot optimized for inspections, capable of climbing stairs and avoiding obstacles.

And Flexiv, which closed a $100 million Series C this summer, is working on appendage-like, AI-enabled robots that can be adapted for multiple industries.

Agtech also emerged as a favored area for investment. Ecorobotix, based in Switzerland, has raised a couple hundred million for precision crop spraying, while Seattle-based Carbon Robotics is working on technology to kill weeds with lasers.

Won’t mistake it for a human

Of all the above-mentioned startups, none appear to be working on anything that could be remotely confused for a human, even from a distance. This seems logical, considering that so many jobs people have historically done don’t seem ideally suited to our particular form.

If all goes well with these non-humanoid robot startups, perhaps it would leave us humans free to spend more time doing the activities that do seem optimally suited to our form. Sitting on the couch would be high on this author’s list, though I’m sure others could find many more productive pursuits.


Illustration: Dom Guzman

AI’s Relentless March: Efficiency, Autonomy, and Economic Reshaping

31 October 2025 at 00:46

The post AI’s Relentless March: Efficiency, Autonomy, and Economic Reshaping appeared first on StartupHub.ai.

The accelerating integration of artificial intelligence into daily life and industrial infrastructure is no longer a distant vision but a tangible reality, as evidenced by the rapid-fire developments discussed in Matthew Berman’s latest Forward Future AI news briefing. From the nascent stages of consumer robotics to revolutionary computing paradigms, the AI landscape is undergoing a […]


XPO’s AI-Driven Efficiency in a Soft Freight Market

31 October 2025 at 00:15

The post XPO’s AI-Driven Efficiency in a Soft Freight Market appeared first on StartupHub.ai.

In an era where artificial intelligence often conjures images of job displacement, XPO CEO Mario Harik offers a refreshingly pragmatic perspective: AI, for his logistics giant, is fundamentally about efficiency and optimization, not headcount reduction. This insight anchored a recent interview on CNBC’s Worldwide Exchange with anchor Frank Holland, where Harik detailed XPO’s latest earnings […]


Amazon’s Anthropic investment boosts its quarterly profits by $9.5B

31 October 2025 at 02:46
Amazon just opened Project Rainier, one of the world’s largest AI compute clusters, in partnership with Anthropic.

Amazon’s third-quarter profits rose 38% to $21.2 billion, but a big part of the jump had nothing to do with its core businesses of selling goods or cloud services.

The company reported a $9.5 billion pre-tax gain from its investment in the AI startup Anthropic, which was included in Amazon’s non-operating income for the quarter.

The windfall wasn’t the result of a sale or cash transaction, but rather accounting rules. After Anthropic raised new funding in September at a $183 billion valuation, Amazon was required to revalue its equity stake to reflect the higher market price, a process known as a “mark-to-market” adjustment.

To put the $9.5 billion paper gain in perspective, the Amazon Web Services cloud business — historically Amazon’s primary profit engine — generated $11.4 billion in quarterly operating profits.

At the same time, Amazon is spending big on its AI infrastructure buildout for Anthropic and others. The company just opened an $11 billion AI data center complex, dubbed Project Rainier, where Anthropic’s Claude models run on hundreds of thousands of Amazon’s Trainium 2 chips.

Amazon is going head-to-head against Microsoft, which just re-upped its partnership with ChatGPT maker OpenAI; and Google, which reported record cloud revenue for its recent quarter, driven by AI. The AI infrastructure race is fueling a big surge in capital spending for all three cloud giants.

Amazon spent $35.1 billion on property and equipment in the third quarter, up 55% from a year earlier.

Andy Jassy, the Amazon CEO, sought to reassure Wall Street that the big outlay will be worth it.

“You’re going to see us continue to be very aggressive investing in capacity, because we see the demand,” Jassy said on the company’s conference call. “As fast as we’re adding capacity right now, we’re monetizing it. It’s still quite early, and represents an unusual opportunity for customers and AWS.”

The cash for new data centers doesn’t hit the bottom line immediately, but it comes into play as depreciation and amortization costs are recorded on the income statement over time.

And in that way, the spending is starting to impact AWS results: sales rose 20% to $33 billion in the quarter, yet operating income increased only 9.6% to $11.4 billion. The gap indicates that Amazon’s heavy AI investments are compressing profit margins in the near term, even as the company bets on the infrastructure build-out to expand its business significantly over time.
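The margin compression can be checked by backing out the year-ago figures from the reported growth rates. A rough back-of-the-envelope sketch, assuming the quoted 20% and 9.6% growth rates and the rounded dollar figures:

```python
# Back out year-ago AWS figures from this quarter's numbers and growth rates,
# then compare operating margins (dollar amounts in billions).
sales_now, sales_growth = 33.0, 0.20
income_now, income_growth = 11.4, 0.096

sales_prior = sales_now / (1 + sales_growth)     # roughly $27.5B
income_prior = income_now / (1 + income_growth)  # roughly $10.4B

margin_now = income_now / sales_now      # about 34.5%
margin_prior = income_prior / sales_prior  # about 37.8%

print(f"operating margin now: {margin_now:.1%}, a year ago: {margin_prior:.1%}")
```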

Those investments are also weighing on cash generation: Amazon’s free cash flow dropped 69% over the past year to $14.8 billion, reflecting the massive outlays for data centers and infrastructure.

Amazon has invested and committed a total of $8 billion in Anthropic, initially structured as convertible notes. A portion of that investment converted to equity with Anthropic’s prior funding round in March.

New study shows that a robot’s feedback can shape human relationships

31 October 2025 at 00:00

A new study has found that a robot’s feedback during a collaborative task can influence the feeling of closeness between the human participants. The research, published in Computers in Human Behavior, indicates that this effect changes depending on the robot’s appearance and how it communicates.

As robots become more integrated into workplaces and homes, they are often designed to assist with decision-making. While much research has focused on how robots affect the quality of a group’s decisions, less is known about how a robot’s presence might alter the personal relationships between the humans on the team. The researchers sought to understand this dynamic by exploring how a robot’s agreement or disagreement impacts the sense of interpersonal connection people feel.

“Given the rise of large language models in recent years, we believe robots of different forms will soon be equipped with non-scripted verbal language to help people make decisions in various contexts. We conducted our research to call for careful consideration and control over the precise behaviors robots should use to provide feedback in the future,” said study author Ting-Han Lin, a computer science PhD student at the University of Chicago.

The investigation centered on two established psychological ideas. One, known as Balance Theory, suggests that people feel more positive toward one another when they are treated similarly by a third party, even if that treatment is negative. The other concept, the Influence of Negative Affect, proposes that a negative tone or criticism can damage the general atmosphere of an interaction and harm relationships.

To test these ideas, the researchers conducted two separate experiments, each involving pairs of participants who did not know each other. In both experiments, the pairs worked together to answer a series of eight personal questions, such as “What is the most important factor contributing to a life well-lived?” For each question, participants first gave their own individual answers before discussing and agreeing on a joint response.

A robot was present to mediate the task. After each person gave their initial answer, the robot would provide feedback. This feedback varied in two ways. First was its positivity, meaning the robot would either agree or disagree with the person’s statement. Second was its treatment of the pair, meaning the robot would either treat both people equally (agreeing with both or disagreeing with both) or unequally (agreeing with one and disagreeing with the other).

The first experiment involved 172 participants interacting with a highly human-like robot named NAO. This robot could speak, use gestures like nodding or shaking its head, and employed artificial intelligence to summarize a person’s response before giving its feedback. Its verbal disagreements were designed to grow in intensity, beginning with mild phrases and ending with statements like, “I am fundamentally opposed with your viewpoint.”

The results from this experiment showed that the positivity of the robot’s feedback had a strong effect on the participants’ relationship. When the NAO robot gave positive feedback, the two human participants reported feeling closer to each other. When the robot consistently gave negative feedback, the participants felt more distant from one another.

“A robot’s feedback to two people in a decision-making task can shape their closeness,” Lin told PsyPost.

This outcome supports the theory regarding the influence of negative affect. The robot’s consistent negativity seemed to create a less pleasant social environment, which in turn reduced the feeling of connection between the two people. The robot’s treatment of the pair, whether equal or unequal, did not appear to be the primary factor shaping their closeness in this context. Participants also rated the human-like robot as warmer and more competent when it was positive, though they found it more discomforting when it treated them unequally.

The second experiment involved 150 participants and a robot with a very low degree of human-like features. This robot resembled a simple, articulated lamp and could not speak. It communicated its feedback exclusively through minimal gestures, such as nodding for agreement or shaking its head from side to side for disagreement.

With this less-human robot, the findings were quite different. The main factor influencing interpersonal closeness was the robot’s treatment of the pair. When the robot treated both participants equally, they reported feeling closer to each other, regardless of whether the feedback was positive or negative. Unequal treatment, where the robot agreed with one person and disagreed with the other, led to a greater sense of distance between them.

This result aligns well with Balance Theory. The shared experience of being treated the same by the robot, either through mutual agreement or mutual disagreement, seemed to create a bond. The researchers also noted a surprising finding. When the lamp-like robot disagreed with both participants, they felt even closer than when it agreed with both, suggesting that the robot became a “common enemy” that united them.

“Heider’s Balance Theory dominates when a low anthropomorphism robot is present,” Lin said.

The researchers propose that the different outcomes are likely due to the intensity of the feedback delivered by each robot. The human-like NAO robot’s use of personalized speech and strong verbal disagreement was potent enough to create a negative atmosphere that overshadowed other social dynamics. Its criticism was taken more seriously, and its negativity was powerful enough to harm the human-human connection.

“The influence of negative affect prevails when a high anthropomorphism robot exists,” Lin said.

In contrast, the simple, non-verbal gestures of the lamp-like robot were not as intense. Because its disagreement was less personal and less powerful, it did not poison the overall interaction. This allowed the more subtle effects of balanced versus imbalanced treatment to become the main influence on the participants’ relationship. Interviews with participants supported this idea, as people interacting with the machine-like robot often noted that they did not take its opinions as seriously.

Across both experiments, the robot’s feedback did not significantly alter how the final joint decisions were made. Participants tended to incorporate each other’s ideas fairly evenly, regardless of the robot’s expressed opinion. This suggests the robot’s influence was more on the social and emotional level than on the practical outcome of the decision-making task.

The study has some limitations, including the fact that the two experiments were conducted in different countries with different participant populations. The first experiment used a diverse group of museum visitors in the United States, while the second involved university students in Israel. Future research could explore these dynamics in more varied contexts.

The study, “The impact of a robot’s agreement (or disagreement) on human-human interpersonal closeness in a two-person decision-making task,” was authored by Ting-Han Lin, Yuval Rubin Kopelman, Madeline Busse, Sarah Sebo, and Hadas Erel.

Meta loses 12% market cap after $70B AI spending plan sparks investor doubts despite strong quarter

30 October 2025 at 18:25

Meta Platforms’ stock took a sharp hit on Thursday, sliding more than 12% as investors grew uneasy about CEO Mark Zuckerberg’s massive new spending plans for artificial intelligence. The drop wiped out tens of billions in market value in a […]

The post Meta loses 12% market cap after $70B AI spending plan sparks investor doubts despite strong quarter first appeared on Tech Startups.

Top 10 Startup and Tech Funding News – October 29, 2025

30 October 2025 at 03:09

It’s Wednesday, October 29, 2025, and we’re back with the top startup and tech funding news stories making waves today. From billion-dollar AI rounds to breakthroughs in biotech, climate tech, and fintech infrastructure, investors continue to back transformative ideas across […]

The post Top 10 Startup and Tech Funding News – October 29, 2025 first appeared on Tech Startups.


Inside the UW Allen School: Six ‘grand challenges’ shaping the future of computer science

30 October 2025 at 20:42
Magdalena Balazinska, director of the UW Allen School of Computer Science & Engineering, opens the school’s annual research showcase Wednesday in Seattle. (GeekWire Photo / Todd Bishop)

The University of Washington’s Paul G. Allen School of Computer Science & Engineering is reframing what it means for its research to change the world.

In unveiling six “Grand Challenges” at its annual Research Showcase and Open House in Seattle on Wednesday, the Allen School’s leaders described a blueprint for technology that protects privacy, supports mental health, broadens accessibility, earns public trust, and sustains people and the planet.

The idea is to “organize ourselves into some more specific grand challenges that we can tackle together to have an even greater impact,” said Magdalena Balazinska, director of the Allen School and a UW computer science professor, opening the school’s annual Research Showcase and Open House.

Here are the six grand challenges:

  • Anticipate and address security, privacy, and safety issues as tech permeates society.
  • Make high-quality cognitive and mental health support available to all.
  • Design technology to be accessible at its inception — not as an add-on.
  • Design AI in a way that is transparent and equally beneficial to all.
  • Build systems that can be trusted to do exactly what we want them to do, every time.
  • Create technologies that sustain people and the planet.

Balazinska explained that the list draws on the strengths and interests of its faculty, who now number more than 90, including 74 on the tenure track.

With total enrollment of about 2,900 students, last year the Allen School graduated more than 600 undergrads, 150 master’s students, and 50 Ph.D. students.

The Allen School has grown so large that subfields like systems and NLP (natural language processing) risk becoming isolated “mini departments,” said Shwetak Patel, a University of Washington computer science professor. The Grand Challenges initiative emerged as a bottom-up effort to reconnect these groups around shared, human-centered problems. 

Patel said the initiative also encourages collaborations on campus beyond the computer science school, citing examples like fetal heart rate monitoring with UW Medicine.

A serial entrepreneur and 2011 MacArthur Fellow, Patel recalled that when he joined UW 18 years ago, his applied and entrepreneurial focus was seen as unconventional. Now it’s central to the school’s direction. The grand challenges initiative is “music to my ears,” Patel said.

In tackling these challenges, the Allen School has a unique advantage over many other computer science schools. Eighteen faculty members currently hold what’s known as “concurrent engagements” — formally splitting time between the Allen School and companies and organizations such as Google, Meta, Microsoft, and the Allen Institute for AI (Ai2).

University of Washington computer science professor Shwetak Patel at the Paul G. Allen School’s annual research showcase and open house. (GeekWire Photo / Taylor Soper)

This is a “superpower” for the Allen School, said Patel, who has a concurrent engagement at Google. These arrangements, he explained, give faculty and students access to data, computing resources, and real-world challenges by working directly with companies developing the most advanced AI systems.

“A lot of the problems we’re trying to solve, you cannot solve them just at the university,” Patel said, pointing to examples such as open-source foundation models and AI for mental-health research that depend on large-scale resources unavailable in academia alone.

These roles can also stretch professors thin. “When somebody’s split, there’s only so much mental energy you can put into the university,” Patel said. Many of those faculty members teach just one or two courses a year, requiring the school to rely more on lecturers and teaching faculty.

Still, he said, the benefits outweigh the costs. “I’d rather have 50% of somebody than 0% of somebody, and we’ll make it work,” he said. “That’s been our strategy.”

The Madrona Prize, an annual award presented at the event by the Seattle-based venture capital firm, went to a project called “Enhancing Personalized Multi-Turn Dialogue with Curiosity Reward.” The system makes AI chatbots more personal by giving them a “curiosity reward,” motivating the AI to actively learn about a user’s traits during a conversation to create more personalized interactions.

On the subject of industry collaborations, the lead researcher on the prize-winning project, UW Ph.D. student Yanming Wan, conducted the research while working as an intern at Google DeepMind. (See full list of winners and runners-up below.)

At the evening poster session, graduate students filled the rooms to showcase their latest projects — including new advances in artificial intelligence for speech, language, and accessibility.

DopFone: Doppler-based fetal heart rate monitoring using commodity smartphones

Poojita Garg, a second-year PhD student.

DopFone transforms phones into fetal heart rate monitors. It uses the phone speaker to transmit a continuous sine wave and uses the microphone to record the reflections. It then processes the audio recordings to estimate fetal heart rate. It aims to be an alternative to doppler ultrasounds that require trained staff, which aren’t practical for frequent remote use.

“The major impact would be in the rural, remote and low-resource settings where access to such maternity care is less — also called maternity care deserts,” said Poojita Garg, a second-year PhD student.

CourseSLM: A Chatbot Tool for Supporting Instructors and Classroom Learning

Marquiese Garrett, a sophomore at the UW.

This custom-built chatbot is designed to help students stay focused and build real understanding rather than relying on quick shortcuts. The system uses built-in guardrails to keep learners on task and counter the distractions and over-dependence that can come with general large language models.

Running locally on school devices, the chatbot helps protect student data and ensures access even without Wi-Fi.

“We’re focused on making sure students have access to technology, and know how to use it properly and safely,” said Marquiese Garrett, a sophomore at the UW.

Efficient serving of SpeechLMs with VoxServe

Keisuke Kamahori, a third-year PhD student at the Allen School.

VoxServe makes speech-language models run more efficiently. It uses a standardized abstraction layer and interface that allows many different models to run through a single system. Its key innovation is a custom scheduling algorithm that optimizes performance depending on the use case.

The approach makes speech-based AI systems faster, cheaper, and easier to deploy, paving the way for real-time voice assistants and other next-gen speech applications.

“I thought it would be beneficial if we can provide this sort of open-source system that people can use,” said Keisuke Kamahori, third-year Ph.D. student at the Allen School.

ConvFill: Model collaboration for responsive conversational voice agents

Zachary Englhardt (left), a fourth-year PhD student, and Vidya Srinivas, a third-year PhD student.

ConvFill is a lightweight conversational model designed to reduce the delay in voice-based large language models. The system responds quickly with short, initial answers, then fills in more detailed information as larger models complete their processing.

By combining small and large models in this way, ConvFill delivers faster responses while conserving tokens and improving efficiency — an important step toward more natural, low-latency conversational AI.

“This is an exciting way to think about how we can combine systems together to get the best of both worlds,” said Zachary Englhardt, a third-year Ph.D. student. “It’s an exciting way to look at problems.”

ConsumerBench: Benchmarking generative AI on end-user devices

Yile Gu, a third-year PhD student at the Allen School.

Running generative AI locally — on laptops, phones, or other personal hardware — introduces new system-level challenges in fairness, efficiency, and scheduling.

ConsumerBench is a benchmarking framework that tests how well generative AI applications perform on consumer hardware when multiple AI models run at the same time. The open-source tool helps researchers identify bottlenecks and improve performance on consumer devices.

There are a number of benefits to running models locally: “There are privacy purposes — a user can ask for questions related to email or private content, and they can do it efficiently and accurately,” said Yile Gu, a third-year Ph.D. student at the Allen School.

Designing chatbots for sensitive health contexts: Lessons from contraceptive care in Kenyan pharmacies

Lisa Orii, a fifth-year Ph.D. student at the Allen School.

The project aims to improve contraceptive access and guidance for adolescent girls and young women in Kenya by integrating low-fidelity chatbots into healthcare settings. The goal is to understand how chatbots can support private, informed conversations and work effectively within pharmacies.

“The fuel behind this whole project is that my team is really interested in improving health outcomes for vulnerable populations,” said Lisa Orii, a fifth-year Ph.D. student.

See more about the research showcase here. Here’s the list of winning projects.

Madrona Prize Winner: “Enhancing Personalized Multi-Turn Dialogue with Curiosity Reward” Yanming Wan, Jiaxing Wu, Marwa Abdulhai, Lior Shani, Natasha Jaques

Runner up: “VAMOS: A Hierarchical Vision-Language-Action Model for Capability-Modulated and Steerable Navigation” Mateo Guaman Castro, Sidharth Rajagopal, Daniel Gorbatov, Matt Schmittle, Rohan Baijal, Octi Zhang, Rosario Scalise, Sidharth Talia, Emma Romig, Celso de Melo, Byron Boots, Abhishek Gupta

Runner up: “Dynamic 6DOF VR reconstruction from monocular videos” Baback Elmieh, Steve Seitz, Ira Kemelmacher-Shlizerman, Brian Curless

People’s Choice: “MolmoAct” Jason Lee, Jiafei Duan, Haoquan Fang, Yuquan Deng, Shuo Liu, Boyang Li, Bohan Fang, Jieyu Zhang, Yi Ru Wang, Sangho Lee, Winson Han, Wilbert Pumacay, Angelica Wu, Rose Hendrix, Karen Farley, Eli VanderBilt, Ali Farhadi, Dieter Fox, Ranjay Krishna

Editor’s Note: The University of Washington underwrites GeekWire’s coverage of artificial intelligence. Content is under the sole discretion of the GeekWire editorial team. Learn more about underwritten content on GeekWire.

Exclusive: Founded By Uber Alumni, Archy Raises $20M To Put Dental Practices ‘On Autopilot’

30 October 2025 at 18:00

It was 2021 and Jonathan Rat was tired of seeing his wife, a dentist, struggle to maintain the tech stack at her practice.

Rat, who had served as a product manager at companies including Uber, Meta and SurveyMonkey, dug into the problem and discovered that “most of the software used in the industry” was more than 20 years old and still required physical servers onsite.

“Most lacked integration with other platforms, were slow and buggy, and impossible to train new employees on,” he recalls.

Archy Founders Benjamin Kolin and Jonathan Rat

So Rat teamed up with Benjamin Kolin, a former director of engineering at Uber, to start Archy, an AI-powered platform that aims “to put dental practices on autopilot.” The pair previously led the rebuilding of Uber’s payment platform that’s still in use today.

“I realized there was a massive need and opportunity for a modern, cloud-based software platform and set out to build that,” Rat told Crunchbase News. “I also realized bigger tech players have been building software for the larger healthcare market but overlooked the $500 billion dental industry.”

And now, Archy has just raised $20 million in Series B funding to help it grow even more, it told Crunchbase News exclusively. TCV led the financing, which also included participation from Bessemer Venture Partners, CRV, Entrée Capital and 25 practicing dentists who wrote checks as angel investors. The raise brings Archy’s total funding to date to $47 million, Rat said.

The company raised a $15 million Series A led by Entrée Capital almost exactly one year ago. Rat confirmed the Series B was an up round, but declined to disclose Archy’s valuation.

All-in-one tool

Archy claims to replace more than five existing tools to handle scheduling, charting, billing, imaging, insurance, payments, staffing, messaging and reporting “from one login.”

It is now building AI agents “to handle the busywork” such as checking eligibility, filing and following up on claims, writing notes, managing patient communications and scheduling, and “turning raw practice data into clear answers,” according to Rat.

The startup processes more than $100 million in payments annually across 45 states and has seen roughly 300% year-over-year growth, he said. It currently serves 2.5 million patients and has processed over 35 million X-rays through its platform.

The company claims that mid-sized dental practices report saving around 80 hours a month by using its technology, and are able to avoid “big hardware costs.” For example, Rat said that one practice saved about $50,000 in its first year of using Archy.

Dual-revenue model

San Jose, California-based Archy operates on a dual-revenue model that combines subscription-based fees with payment processing services, and offers tiered monthly subscription packages. In addition to its subscription fees, Archy serves as a merchant processor for its clients, generating revenue from a percentage of payment transactions processed through the platform.

“This hybrid approach allows us to remain aligned with our clients’ success while providing flexible options that scale with their business needs,” Rat told Crunchbase News.

The company plans to use its new capital to “hire aggressively” across its engineering, AI and go-to-market teams. Presently, it has 57 employees. It plans to expand internationally starting in 2026.

Austin Levitt, partner at TCV, told Crunchbase News via email that his firm had been looking for a way to invest in the dental space “for a long time” but didn’t find a company that was “appropriately tackling the root of the problem — the core PMS (practice management systems)” until it came across Archy.

He added: “We consistently heard that Archy was supremely easy to use, requiring almost no training in contrast to others, providing a seamless ‘iPhone-like’ experience, and reducing what took 10 clicks in other software to one or none in Archy.”


Illustration: Dom Guzman

Australia's police to use AI to decode criminals' emoji slang to curb online crime — "crimefluencers" will be decoded and translated for investigators

Australian police are looking to build an AI tool to detect and interpret emoji slang online, in an effort to curb crime by bad actors in hateful communities dubbed "crimefluencers." The AI would distinguish harmless lingo from coded messages to help police combat violent crime.

Mapping the Missing Green: AI Framework Enhances Urban Greening in Tokyo to Combat Heat and Improve Resilience

30 October 2025 at 16:41

As cities become denser and hotter, the challenge of integrating greenery into urban environments has grown increasingly urgent. In Tokyo, researchers from Chiba University have developed a data-driven framework that leverages artificial intelligence (AI) and spatial analysis to map vertical greenery across the city’s 23 wards. This cutting-edge study, published in September 2025, identifies where additional greenery, particularly vertical greening like green walls, could help mitigate urban heat and improve the city’s resilience to climate change. The framework offers a pioneering approach for cities worldwide to make urban greening more equitable and effective.

Vertical Greening: A Creative Solution to Urban Space Limitations

Tokyo faces a unique challenge: as one of the densest metropolitan areas in the world, it has limited space for traditional greenery like parks and large trees. To address this, vertical greening has become a promising solution. This involves placing vegetation on building façades, creating “green walls” that help cool down urban areas and improve air quality. However, until recently, there was no clear method to assess where this greenery was most needed. This gap led to the development of the data-driven framework by Chiba University, which provides the first comprehensive citywide map of vertical greenery in Tokyo.

AI-Driven Analysis of Tokyo’s Vertical Greening Landscape

The research team, led by Professor Katsunori Furuya from Chiba University, utilised artificial intelligence to analyse over 80,000 Google Street View images of Tokyo. By using a deep-learning model (YOLOv8), the researchers identified building façades featuring vertical greenery, such as green walls and balcony plants. This AI-powered analysis allowed the team to create a detailed spatial inventory of Tokyo’s existing vertical greening systems, providing insights into how they are distributed across the city. With this new data, the team was able to identify areas that are lacking greenery, helping urban planners better target greening efforts.

Introducing the Vertical Greening Demand Index (VGDI)

To make this data more actionable, the research team introduced a new metric: the Vertical Greening Demand Index (VGDI). This index helps evaluate where adding more vertical greening could have the greatest environmental impact. The VGDI takes into account factors such as land use, building density, surface temperature, and pedestrian exposure to heat. By combining these elements, the VGDI provides a clear picture of which areas in Tokyo would benefit most from additional vertical greening. This approach ensures that greening efforts are not only visually appealing but also effective in combating urban heat and improving the overall urban environment.
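As a rough illustration only, an index of this kind could be a weighted combination of normalized factors. The weights and linear form below are assumptions for the sketch; the study's actual VGDI formula is not reproduced here:

```python
def vgdi(land_use, building_density, surface_temp, pedestrian_exposure,
         weights=(0.2, 0.2, 0.3, 0.3)):
    """Toy Vertical Greening Demand Index: a weighted sum of factors,
    each normalized to [0, 1]. Weights are illustrative, not from the study."""
    factors = (land_use, building_density, surface_temp, pedestrian_exposure)
    return sum(w * f for w, f in zip(weights, factors))

# A block with high surface temperature and heavy pedestrian exposure
# scores higher demand than one that is merely dense.
hot_corridor = vgdi(0.3, 0.4, 0.9, 0.9)
dense_block = vgdi(0.8, 0.9, 0.3, 0.2)
```

Weighting heat and pedestrian exposure more heavily reflects the article's emphasis on cooling benefit and human comfort, so `hot_corridor` ranks above `dense_block` as a candidate greening zone.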

Uneven Distribution of Greenery Across Tokyo

The research findings revealed an uneven distribution of vertical greenery throughout Tokyo. Central commercial and residential areas had some vegetated façades, but many lower-income neighbourhoods and heat-prone zones had far fewer green walls. This highlighted a significant issue: urban greening was not being distributed equitably across the city. By pinpointing these discrepancies, the study stressed the importance of targeting areas where greenery could have the most substantial impact, particularly in underserved regions. The researchers identified “priority greening zones,” areas where adding vertical vegetation could reduce surface temperatures and improve the comfort of residents.

Data-Driven Planning for More Resilient Cities

Professor Furuya explained, “With data-driven planning, city authorities can target specific areas to enhance cooling, biodiversity, and overall urban resilience.” The research team’s framework presents a practical tool for policymakers and urban planners to address the growing challenges of climate change. By using the VGDI, cities can prioritise areas where vertical greening will be most effective in reducing urban heat island effects, improving air quality, and supporting biodiversity. This approach goes beyond just aesthetics—it is an essential part of building climate-resilient cities that can withstand rising temperatures and changing environmental conditions.

Implications for Global Urban Planning

The impact of this research extends beyond Tokyo. As cities around the world face similar challenges of rising temperatures and limited space for traditional greenery, this data-driven approach offers a scalable solution. Policymakers in dense urban areas globally can use indices like the VGDI to inform building regulations, urban renewal projects, and incentives for vertical greening. By adopting such frameworks, cities can ensure that their greening efforts are effective, equitable, and tailored to the unique needs of each neighbourhood. This will play a crucial role in mitigating climate change impacts in urban environments.

Towards Fair and Accessible Urban Greening

An important aspect of the study is its focus on accessibility and fairness in urban environmental planning. The data-driven framework allows for more transparent and equitable decision-making by clearly visualising areas lacking vertical greening. As cities worldwide work toward sustainability goals, this approach ensures that all residents, not just those in wealthier districts, can benefit from the positive impacts of urban greening. By highlighting areas of need, the study advocates for more inclusive and socially equitable urban planning practices, ensuring that urban greenery is distributed in a way that benefits all members of society.

Future Directions for AI and Urban Ecology

The success of this study marks an important step towards integrating artificial intelligence with urban ecology and planning. In the future, the researchers hope to refine the model by incorporating additional environmental parameters, such as air quality and energy savings, to provide even more detailed insights into the benefits of vertical greening. As AI and spatial analysis continue to evolve, they will play a central role in shaping how cities around the world adapt to the challenges posed by climate change. This research paves the way for smarter, more sustainable urban planning solutions.

Conclusion: A Smarter Approach to Greener Cities

As urban populations grow and climate change intensifies, creating more sustainable, livable cities is becoming an urgent priority. This study from Chiba University offers a powerful new tool for tackling one of the most pressing issues facing cities today: how to incorporate greenery into dense urban environments. By using artificial intelligence to map vertical greenery and assess where it is most needed, the researchers have provided a blueprint for cities worldwide to create greener, cooler, and more resilient urban spaces. With this data-driven approach, cities can move closer to achieving their sustainability goals while improving the quality of life for residents.

To see more news from Chiba University, visit www.cn.chiba-u.jp

The post Mapping the Missing Green: AI Framework Enhances Urban Greening in Tokyo to Combat Heat and Improve Resilience appeared first on Travel And Tour World.

Regulation As Alpha: Why The Smartest Startups Now Build Legal Strategy Into Their DNA

30 October 2025 at 15:00

Every founder knows the thrill of the moment: the first term sheet lands, the product is live, the market is opening up. But in 2025, there’s a new line in the sand: Did you clear the regulatory path before you scaled?

Today, it’s not enough to disrupt the market — you have to anticipate the rule-set that will govern it.

Investors are shifting gears. After a decade of “move fast and break things,” they’re asking: Who built the compliance engine before the crash? Because the truth is, regulation has become a form of alpha — a competitive advantage for startups that think of law not as a hurdle, but as a moat.

The new era of smart compliance

The startup landscape has changed. High-profile failures — from crypto exchanges to wild valuations in fintech and AI — taught us that the regulatory cost of growth can be massive. Today’s investors and founders alike expect legal strategy from day one, not as an afterthought.

Consider the RegTech market: One recent estimate projects it will swell to about $70.64 billion by 2030, growing at a compound annual rate of roughly 23%. Another forecast predicts growth to $70.8 billion by 2033. The message: Companies are no longer asking if they need compliance automation and legal-engineering infrastructure. They’re asking when they can monetize it.

So when a startup designs its product around KYC, AML, data-protection or licensing from the outset, it’s not just avoiding risk — it’s building a moat others will struggle to cross. For founders, regulation isn’t just the cost of entry anymore — it’s an edge at exit.

When the law becomes a moat

There are former unicorns, and there are regulation-ready unicorns. The difference hinges on when they built their compliance architecture, hired legal engineers and treated regulation as product.

Take payment infrastructure: Stripe built payment security and licensing into its model early. Its PCI Level 1 certification and multijurisdiction licenses (U.S. money-transmitter, EU/UK e-money) enabled it to integrate cleanly with Apple Pay, power Shopify’s native payments, and, per a 2023 announcement, expand its role processing payments for Amazon.

Or look at crypto: Coinbase built a licensure footprint early, publishing its U.S. money-transmitter licenses and securing New York’s BitLicense in 2017. Its 2021 SEC S-1 repeatedly frames regulatory compliance and licensing as fundamental to the business.

In insurtech, from the outset, Lemonade hired senior insurance veterans (e.g., former AIG executive Ty Sagalow) and, per its S-1 and subsequent filings, expanded licensure across the U.S., operationalizing the 50-state regulatory landscape rather than trying to route around it.

These examples show a pattern: When compliance is built in from the start, the cost of scaling drops and competitors face much higher entry bars. Regulation becomes a moat — not a burden.

The rise of ‘legal engineering’

Welcome to the era of the legal engineer. The traditional model (sign contract, then lawyer reads, then flagged risk) is being replaced by code, automation and internal teams who speak both product and law.

Startups such as Carta built cap-table software that includes “built-in tools and support to help with compliance year-round,” allowing it to embed governance and securities-law readiness into the product nature of equity management.

Plaid has publicly positioned itself for evolving “data use, access, and consumer permission” rules (e.g., Section 1033) by building features such as data transparency messaging and consent-capture into its API stack — indicating a clear regulatory-first posture in its product roadmap.

And what’s happening in AI? Founders are hiring general counsels on day one to forecast imminent regimes — privacy law (GDPR, CCPA), AI transparency bills, emerging algorithms-as-infrastructure regulation.

The startup battle isn’t simply product vs. product anymore — it’s regulatory architecture vs. regulatory architecture.

Reports back this up: One credible industry estimate shows the global compliance, governance and risk market is already around $80 billion and projected to reach $120 billion in the next five years. In short: Startups that solve compliance at scale are building infrastructure for everyone else to rent. That’s platform-level potential.

Investors are taking note

Regulation-ready startups aren’t just surviving — they’re attracting smarter capital. Venture funds now assess regulatory maturity, legal runway and governance readiness early on. A startup that can show it isn’t “waiting to deal with compliance” but designed it, has a valuation edge.

Crunchbase data shows global startup funding reached $91 billion in Q2 2025, up 11% year over year. While not all of that is focused on law or compliance, the trend signals that smart investors are digging deeper into risk assessment and governance. Legal tech funding is accelerating, too: The sector recently topped $2.4 billion in venture funding this year, an all-time high.

Funds are no longer only assessing TAM or go-to-market speed; they’re asking: “What’s the regulatory runway? Who owns risk? Who built the compliance pipeline?” Because in sectors like fintech, climate tech, health tech and AI, the fastest growth path is often the one that avoids the enforcement arm.

The future: law as competitive advantage

Let’s zoom out for a moment. We’re moving into a world where regulation isn’t a ceiling — it’s scaffolding. It defines markets, enables scaling and filters winners from pretenders. Founders who see law as a source of architecture, not as chewing-gum-on-the-shoe, will be the ones writing the playbook.

Think about AI: Startups that design for regulatory change (data-provenance, audit trails, rights management) are already positioning for the future.

Think about climate tech: Companies that can navigate evolving carbon-credit regimes or ESG disclosure laws are building invisible advantages.

Think about fintech: Those that mastered licensing, KYC/AML, consumer-data flows early are the backbone of infrastructure.

The next wave of unicorns won’t just have better tech — they’ll have fundamentally better legal DNA. They won’t just disrupt a market; they’ll help write the rules of the market before they scale.

Because in this new era, regulation isn’t a deadweight — it’s a launchpad.


Aron Solomon is the chief strategy officer for Amplify. He holds a law degree and has taught entrepreneurship at McGill University and the University of Pennsylvania, and was elected to Fastcase 50, recognizing the top 50 legal innovators in the world. His writing has been featured in Newsweek, The Hill, Fast Company, Fortune, Forbes, CBS News, CNBC, USA Today and many other publications. He was nominated for a Pulitzer Prize for his op-ed in The Independent exposing the NFL’s “race-norming” policies.


Illustration: Dom Guzman

Why Felicis’ Newest Partner Focuses On Community Building To Win AI Deals At Seed

30 October 2025 at 15:00

Feyza Haskaraman is joining Felicis Ventures[1] as a partner after several years at Menlo Ventures, Crunchbase News has exclusively learned.

In her new role, Haskaraman will focus on investing in “soon-to-break-out” AI infrastructure, cybersecurity, and applications companies for Felicis, an early-stage firm with $3.9 billion in assets under management.

During her time at Menlo, Haskaraman sourced investments in startups including Semgrep, Astrix, Abacus, Parade and CloudTrucks — zeroing in early on how AI is reshaping developer security and enterprise infrastructure.

Feyza Haskaraman of Felicis Ventures
Feyza Haskaraman, partner at Felicis Ventures

Haskaraman, an MIT graduate who was born in Turkey, brings an engineering background to her role as an investor. She previously worked as an engineer at various companies at different growth stages, including Analog Devices, Fitbit and Nucleus Scientific. She is also a former McKinsey & Co. consultant who advised multibillion-dollar technology companies and early-stage startups on strategy and operations. It was after working with startups at McKinsey that her interest in venture capital was piqued, and she joined Insight Partners.

Her decision to join Menlo Park, California-based Felicis stems from a shared interest with firm founder and managing partner Aydin Senkut in building communities, even in “unsexy” industries such as infrastructure and security, she said.

“Whether it’s connecting AI founders or bringing together technical and cybersecurity communities, the mission is the same: Believe in the best founders early and help them go the distance,” she told Crunchbase News.

Felicis is currently investing out of its 10th fund, a $900 million vehicle, its largest yet. More than 60% of its investments out of Fund 9 and 10 (so far) are seed stage; 94% are seed or Series A. In 83% of its investments, Felicis has led or co-led the round.

Nearly $3 of every $4 it has deployed has gone into AI-related companies, including n8n, Supabase, Mercor, Crusoe Energy Systems, Periodic Labs, Runway, Revel, Skild AI, Deep Infra, Browser Use, Evertune, Poolside, Letta and LMArena.

In an interview, Haskaraman shared more about her investment plans at Felicis, as well as why she thinks we’re in the “early innings” with AI. This interview has been edited for clarity and brevity.

Let’s talk more about community-building and why you think it’s so important. 

Over the past few years in the venture ecosystem, just providing the capital is not enough. You need to surround yourself with the best talent. You’re seeing one of the fiercest talent wars in terms of AI talent.

So one of the things that I’ve spent a lot of time on in my VC career is building a community, going back to my MIT roots, surrounding myself with founders, engineers and operators, and also going into specific domains, like cybersecurity — just building a network of CISOs that I communicate with regularly and really support them however I can, and then obviously get their take on the latest technology.

That type of community-building effort is something that Aydin and I will be debating as we shape strategy for Felicis as well.

Yes, Aydin (Felicis’ founder) has said that he thinks the next generation of enterprise investors aren’t just picking companies, they’re building ecosystems. Would you agree with that?

Yes, we’re fully aligned on that. First of all, it’s a way of sourcing. Being able to source the best founders involves surrounding yourself in a community of people. You get very close to them, and you want to be the first call when they decide to jump ship and start a business.

As early-connection investors, we want to invest in the founders as early as possible. So that’s why we want to immerse ourselves in these communities that provide prolific grounds for the technical founders that are coming in and building an AI.

You were investing in AI before the big boom took off. Would you say there’s too much hype around the space?

You are correct that there is a lot of euphoria around AI, but if you look at the overall landscape, we haven’t seen a technology that can have such a large impact.

And we’re already seeing the results: Enterprise buyers and consumers of these solutions, including myself and our team, are seeing immense productivity gains. I remain immensely optimistic about the future and investing in AI, and that’s what we are paid to do, and what I also enjoy as a former engineer.

Are there specific aspects of AI that have you particularly excited?

I personally feel we’re still very much at the early innings. It’s been three years since ChatGPT came out, and the model companies really pushed their products into our lives. But if you take a look at what’s happening now, we have agents that are coordinating and automating our work.

What are ways in which we should be securing agent architecture? And that is also evolving across the board, and if you think about another layer down, like the infrastructure to support these LLMs and agents, I have to ask “What do we need underneath?”

I think there’s a lot more that will come, and there’s a lot of hope for innovation that will happen both across the infrastructure layer, as well as agents. There’s also the issue of “can applications actually be enabled?” I go back to the importance of securing our interactions with the agents and making sure that they’re not abused and misused. It’s a great time to be investing in AI.

What stages are you primarily investing in at Felicis?

We try to go as early as possible. But obviously, given our fund’s size, we have flexibility to invest whenever we see the venture scale returns make sense. But the majority of our investments are seed.

It’s such a competitive investing environment right now. How do you stand out?

Ultimately, what founders value is how you will work with them, your references. They value how you show up in those tough times, how you surround them with talent, how you help them see around the corners. That matters a lot.

I believe that winning boils down to the prior founder experiences that you left, people who can speak highly of you and how you work. I tend to be a big hustler. So, there’s a lot more value-add that we want to make sure we bring to the table, even before investments. And then after the investment we can continue to bring that type of value to a company.

Are you investing outside of AI?

I’m investing in AI infrastructure, cybersecurity and AI-enabled apps. We are also on the verge of a big overhaul of the application layer: companies that we’ve seen prior to AI — that is all getting disrupted.

We’re seeing AI scribes in healthcare intake solutions, for example. We’re seeing code-generation solutions in developer stacks. We are looking at every single vertical, as well as horizontal application. I’m very interested in how all of these verticals’ application layers will get a different type of automation.

What’s your take on the market overall right now?

I feel like I lived three lifetimes in my investing career — just over the past few years. We as a VC community and tech ecosystem learned a lot, obviously, just in terms of what’s happening. We’re seeing new ingredients in the market, and that is AI, that did not exist during COVID.

Think about the fact that this is not a structural change in the market driven by the economy. This is truly a new technology. I would bucket those waves as separate.

I’m very grateful to be investing at this time. What a time to be investing, because AI is truly game-changing as a technology.

Clarification: The paragraph about Haskaraman’s investments at Menlo Ventures has been updated to more accurately reflect her role.


Illustration: Dom Guzman


  1. Felicis Ventures is an investor in Crunchbase. They have no say in our editorial process. For more, head here.

AI’s Second Inning: Decoding Big Tech’s Investment Spree and Market Reactions

30 October 2025 at 01:15

The post AI’s Second Inning: Decoding Big Tech’s Investment Spree and Market Reactions appeared first on StartupHub.ai.

“We’re in the second innings of this,” declared Stephanie Link of Hightower Advisors on CNBC’s Closing Bell Overtime, referring to the burgeoning artificial intelligence trade. Her commentary, delivered amidst a flurry of recent earnings reports, offered a nuanced perspective on the market’s current fixation with AI, particularly concerning the substantial capital expenditures undertaken by major […]


FII Institute, Accenture Launch AI Investment Report

The Future Investment Initiative (FII) Institute, in collaboration with Accenture, unveiled a research report on AI investment entitled “Rebalancing Intelligence: How the Next Wave of AI Investment is Set to Flow South.”

The report is being launched at FII9, a global conference at which the world’s investment agenda is set, convening the world’s most influential leaders.

After years of concentration in the Global North, investors now predict a significant rebalancing of AI capital flows towards emerging markets, a major shift in investor focus.

The new data show that 87% of global investors plan to increase AI investments in the Global South within the next two years, with India, Southeast Asia, and the Middle East identified as most likely beneficiaries.

The study surveyed 250 C-suite leaders from private equity firms (40%), venture capital firms (40%), and corporate venture units of large enterprises (20%) across 13 countries in the Global North. It also included 15 in-depth interviews with senior investors from leading PE, VC, and sovereign wealth funds.

Despite the Global South representing nearly half the world’s population and a quarter of global economic growth, it currently attracts only 28% of AI-related foreign direct investment, a fraction of the $548 billion invested globally over the past two years. There are just nine AI unicorns in the Global South, compared with 305 in the North.

AI dominates this year’s FII9 agenda, with over one third of panels and speakers exploring its potential. From tech and chip CEOs to sovereign funds, global investors and policymakers, FII9 is where the future of AI capital flows is discussed.

“OpenAI’s recent $1 trillion chip investment commitment shows the scale of transformation ahead,” said Richard Attias, CEO of FII Institute. “We must ensure this wave lifts all boats. Bridging the AI investment divide is an economic opportunity and a moral imperative. Innovation must be a driver of shared global prosperity.”

“We are excited to join FII in launching this insightful report, which provides a unique and timely opportunity for global business leaders to learn about the untapped potential of AI to unlock growth in the Global South,” said Julie Sweet, Chair and CEO, Accenture. “AI is much more than a technology—it’s a catalyst for reinvention—and investment in talent, infrastructure and local ecosystems across these regions will help ensure that AI becomes a force for shared prosperity and shape a future where innovation knows no borders.”

The report is the first major deliverable of AI Inclusive, an FII Institute initiative designed to accelerate AI growth in emerging markets by mobilizing investment, supporting startups, and deploying adaptable governance tools.


The post FII Institute, Accenture Launches AI Investment Report appeared first on My Startup World - Everything About the World of Startups!.

Powell Cautiously Monitors AI’s Impact on Jobs and a Bifurcated Economy

29 October 2025 at 23:45

The post Powell Cautiously Monitors AI’s Impact on Jobs and a Bifurcated Economy appeared first on StartupHub.ai.

Federal Reserve Chair Jerome Powell recently articulated a measured yet watchful stance on the emerging economic shifts driven by artificial intelligence, noting that while the full implications are still unfolding, the Fed is “watching AI’s impact on jobs carefully.” Speaking at a press conference following the Federal Open Market Committee’s decision to lower the benchmark […]


The State Of Startups In 7 Charts: These Sectors And Stages Are Down As AI Megarounds Dominate In 2025

29 October 2025 at 15:00

Venture funding has most definitely rebounded since the 2022 correction, but there’s a sharp divide between who’s getting funding and who’s not.

That was the overarching theme from our third-quarter market reports, which showed that global startup funding in Q3 totaled $97 billion, marking only the fourth quarter above $90 billion since Q3 2022.

Still, there are stark differences between the 2021 market peak and now, as contributing reporter Joanna Glasner noted in a couple of recent columns. Just as we saw four years ago, funding is frothy and often seems to be driven by investor FOMO. Some companies are even raising follow-on rounds at head-spinning speeds.

But the funding surge this time is also much, much more concentrated — namely in outsized rounds for AI companies.

With that, let’s take a look at the charts that illustrate the major private-market and startup funding themes as we head into the final quarter of 2025.

AI funding continues to drive venture growth

Nearly half — 46% — of startup funding globally in Q3 went to AI companies, Crunchbase data shows. Almost a third of that AI total went to a single company: Anthropic, which raised $13 billion last quarter.

Even with an astonishing $45 billion going to artificial intelligence startups in Q3, it was only the third-highest quarter on record for AI funding, with Q4 2024 and Q1 2025 each clocking in higher.
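As a back-of-envelope check on those shares (dollar amounts are the approximate figures reported above; this is just the arithmetic, not Crunchbase's methodology):

```python
# Q3 figures as reported in the article (approximate, in dollars).
total_q3_funding = 97e9   # global startup funding in Q3
ai_q3_funding = 45e9      # portion that went to AI companies
anthropic_round = 13e9    # Anthropic's raise last quarter

# "Nearly half" of global funding went to AI.
ai_share = ai_q3_funding / total_q3_funding
print(f"AI share of global funding: {ai_share:.0%}")            # 46%

# "Almost a third" of the AI total went to Anthropic alone.
anthropic_share = anthropic_round / ai_q3_funding
print(f"Anthropic share of AI funding: {anthropic_share:.0%}")  # 29%
```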

Megarounds gobble up lion’s share

It shouldn’t come as too much of a surprise that AI has also skewed investment heavily toward megarounds, which we define as funding deals of $100 million or more.

The percentage of overall funding going into such deals hit a record high this year, with an astonishing 60% of global and 70% of U.S. venture capital going to $100 million-plus rounds, per Crunchbase data.

Even with several months left in the year, it also seems plausible that the total dollars going into such deals will match or top what we saw in 2021, which marked a peak for startup funding not scaled before or since.

The difference? Back then, startup dollars were widely distributed, going to a whole host of sectors — from food tech to health tech to robotics — and to early-stage, late-stage and in-between companies alike.

Contrast that with recent quarters, when the LLM giants and other large, established, AI-centric companies are getting the largest slice of venture dollars.

Seed deals slide further

As megarounds have increased, seed deals have declined.

The number of seed deals has shown a steady downward trend in recent quarters, Crunchbase data shows, even as total dollars invested at the stage have stayed relatively steady. That indicates that while seed deals are growing larger, they're also harder to come by.

Early-stage funding has essentially flatlined, despite larger rounds to companies working on robotics, biotech, AI and other technologies.

The AI haves and have-nots

AI has enthralled investors for the past three years.

What are they less interested in? Old standbys like cybersecurity and biotech. Biotech investment as a share of overall funding recently hit a 20-year low. Crunchbase data shows that cybersecurity investment, while still relatively steady, also retreated somewhat in Q3 2025. That’s notable given that many cybersecurity companies are integrating AI into their offerings.

Still, other sectors that benefit heavily from AI-driven automation are seeing a surge in investment. Perhaps most notable is legal tech, which hit an all-time high last month on the back of large rounds for companies promising to automate much of the drudgery of the profession.

Among the other sectors buoyed by AI is human resources software (including AI-powered recruitment and hiring offerings).

Other data points of note

Other interesting points that emerged from our Q3 reports and recent coverage include:

Looking ahead

The increasing concentration of capital into a small cadre of large AI companies — not to mention the interconnectedness of those deals — raises some obvious questions. Are we in a bubble? And given that nearly half of venture capital in recent years has been tied up in AI, what happens to the startup ecosystem if or when it pops?


Illustration: Dom Guzman

DayZ Creator Says AI Fears Remind Him of People Worrying About Google & Wikipedia; ‘Regardless of What We Do, AI Is Here’

28 October 2025 at 22:30

Unbranded game controller with futuristic AI head wearing headphones beside portrait of Dean Hall

With each passing month, artificial intelligence creeps into more industries. That does not exclude the gaming industry, which has long used artificial intelligence to populate its virtual worlds. Still, the generative AI that is taking root everywhere offers much more power, and also much greater risk, compared to what gaming developers were used to. Big companies like Microsoft, Amazon, and EA are already laying off (or thinking about laying off) employees to invest further into artificial intelligence. What do the actual developers think about this artificial intelligence revolution? Their takes, as you would expect, are quite varied. The creator of […]

Read full article at https://wccftech.com/dayz-creator-says-ai-fears-remind-him-people-worrying-about-google-wikipedia-ai-is-here/

OpenAI and Microsoft sign agreement to restructure OpenAI into a public benefit corporation with Microsoft retaining 27% stake — non-profit 'Open AI Foundation' to oversee 'Open AI PBC'

OpenAI is restructuring into a public benefit corporation with Microsoft retaining a 27% stake in the new "OpenAI PBC," worth roughly $135 billion. OpenAI PBC will still be overseen by the non-profit OpenAI Inc., soon to be renamed OpenAI Foundation. Both companies remain intertwined until at least 2032 through major cloud computing contracts.

Microsoft is reportedly being less than truthful about its OpenAI dealings — hiding a $4.7 billion loss under "other expenses"

Microsoft is trying to keep its dealings with OpenAI under wraps, seemingly burying a $4.7 billion loss by the ChatGPT maker in its latest annual report for the fiscal year ending June 30, 2025.

A tale of two Seattles in the age of AI: Harsh realities and new hope for the tech community

28 October 2025 at 19:52
The opening panel at Seattle AI Week 2025, from left: Randa Minkarah, WTIA chief operating executive; Joe Nguyen, Washington commerce director; Rep. Cindy Ryu; Nathan Lambert, Allen Institute for AI; and Brittany Jarnot, Salesforce. (GeekWire Photo / Taylor Soper)

Seattle is looking to celebrate and accelerate its leadership in artificial intelligence at the very moment the first wave of the AI economy is crashing down on the region’s tech workforce.

That contrast was hard to miss Monday evening at the opening reception for Seattle AI Week 2025 at Pier 70. On stage, panels offered a healthy dose of optimism about building the AI future. In the crowd, buzz about Amazon’s impending layoffs brought the reality of the moment back to earth.

A region that rose with Microsoft and then Amazon is now dealing with the consequences of Big Tech’s AI-era restructuring. Companies that hired by the thousands are now thinning their ranks in the name of efficiency and focus — a dose of corporate realism for the local tech economy.

The double-edged nature of this shift is not lost on Washington Gov. Bob Ferguson.

“AI, and the future of AI, and what that means for our state and the world — each day I do this job, the more that moves up in my mind in terms of the challenges and the opportunities we have,” Ferguson told the AI Week crowd. He touted Washington’s concentration of AI jobs, saying his goal is to maximize the benefits of AI while minimizing its downsides.

Gov. Bob Ferguson addresses the AI Week opening reception. (GeekWire Photo / Todd Bishop)

Seattle AI Week, led by the Washington Technology Industry Association, was started last year after a Forbes list of the nation’s top 50 AI startups included none from Seattle, said the WTIA’s Nick Ellingson, opening this year’s event. That didn’t seem right. Was it a messaging problem?

“A bunch of us got together and said, let’s talk about all the cool things happening around AI in Seattle, and let’s expand the tent beyond just tech things that are happening,” Ellingson explained.

So maybe that’s the best measuring stick: how many startups will this latest shakeout spark, and how can the Seattle region’s startup and tech leaders make it happen? Can the region become less dependent on the whims of the Microsoft and Amazon C-suites in the process? 

“Washington has so much opportunity. It’s one of the few capitals of AI in the world,” said WTIA’s Arry Yu in her opening remarks. “People talk about China, people talk about Silicon Valley — there are a few contenders, but really, it’s here in Seattle. … The future is built on data, on powerful technology, but also on community. That’s what makes this place different.”

And yet, “AI is a sleepy scene in Seattle, where people work at their companies, but there’s very little activity and cross-pollinating outside of this,” said Nathan Lambert, senior research scientist with the Allen Institute for AI, during the opening panel discussion.

No, we don’t want to become San Francisco or Silicon Valley, Lambert added. But that doesn’t mean the region can’t cherry-pick some of the ingredients that put Bay Area tech on top.

Whether laid-off tech workers will start their own companies is a common question after layoffs like this. In the Seattle region at least, that outcome has been more fantasy than reality. 

This is where AI could change things, if not with the fabled one-person unicorn then with a bigger wave of new companies born of this employment downturn. Who knows, maybe one will even land on that elusive Forbes AI 50 list. (Hey, a region can dream!)

But as the new AI reality unfolds in the regional workforce, maybe the best question to ask is whether Seattle’s next big thing can come from its own backyard again.

Related: Ferguson’s AI balancing act: Washington governor wants to harness innovation while minimizing harms

Microsoft gets 27% stake in OpenAI, and a $250B Azure commitment

28 October 2025 at 19:49
Sam Altman and OpenAI announced a new deal with Microsoft, setting revised terms for future AI development. (GeekWire File Photo / Todd Bishop)

Microsoft and OpenAI announced the long-awaited details of their new partnership agreement Tuesday morning — with concessions on both sides that keep the companies aligned but not in lockstep as they move into their next phases of AI development.

Under the arrangement, Microsoft gets a 27% equity stake in OpenAI’s new for-profit entity, the OpenAI Group PBC (Public Benefit Corporation), a stake valued at approximately $135 billion. That’s a decrease from 32.5% equity but not a bad return on an investment of $13.8 billion.

At the same time, OpenAI has contracted to purchase an incremental $250 billion in Microsoft Azure cloud services. However, in a significant concession in return for that certainty, Microsoft will no longer have a “right of first refusal” on new OpenAI cloud workloads.

Microsoft, meanwhile, will retain its intellectual property rights to OpenAI models and products through 2032, an extension of the timeframe that existed previously. 

A key provision of the new agreement centers on Artificial General Intelligence (AGI), with any declaration of AGI by OpenAI now subject to verification by an independent expert panel. This was a sticking point in the earlier partnership agreement, with an ambiguous definition of AGI potentially triggering new provisions of the prior arrangement.

Microsoft and OpenAI had previously announced a tentative agreement without providing details. More aspects of the deal are disclosed in a joint blog post from the companies.

Shares of Microsoft are up 2% in early trading after the announcement. The company reports earnings Wednesday afternoon, and some analysts have said the uncertainty over the OpenAI arrangement has been impacting Microsoft’s stock. 

OpenAI calls on U.S. to build 100 gigawatts of additional power-generating capacity per year, increase equivalent to 100 nuclear reactors yearly — says electricity is a 'strategic asset' in AI race against China

28 October 2025 at 18:22
OpenAI has called on the US to build out more power-generating infrastructure, claiming that it is needed to help provide the backbone for the AI race the US is now in with China. With enormous infrastructure projects planned, it wants the US to build an additional 100 gigawatts of new energy capacity every year.

OpenAI's staggering mental health crisis revealed — Millions use ChatGPT like a therapist, but that's about to change

OpenAI has provided some internal numbers related to users who seek help from GPT-5, and they're a lot bigger than I was expecting. The AI firm has laid out details as to how it's combating the issue.

China builds brain-mimicking AI server the size of a mini-fridge, claims 90% power reduction — BI Explorer 1 packs in 1,152 CPU cores and 4.8TB of memory, runs on a household power outlet

28 October 2025 at 14:00
China's GDIIST research institute has announced the development and soon release of the BIE-1, an AI supercomputer inspired by the operation of the human brain. This neuromorphic computing tech is one of the first standalone, non-rack-based brain-based computers we've ever seen.

Qualcomm unveils AI200 and AI250 AI inference accelerators — Hexagon takes on AMD and Nvidia in the booming data center realm

Qualcomm has unveiled its AI200 and AI250 rack-scale AI inference solutions relying on data center-grade Hexagon NPUs with near-memory computing, micro-tile inferencing, and confidential computing support.

Seattle studio PSL encodes its playbook into Lev, an AI co-founder that helps turn ideas into companies

27 October 2025 at 20:09
(Lev screenshot)

Pioneer Square Labs has launched more than 40 tech startups and vetted 500-plus ideas since creating its studio a decade ago in Seattle.

Now it’s testing whether its company-building expertise and data on successful startup formulas can be codified into software — with help from the latest AI models.

PSL just unveiled Lev, a new project that aims to be an “AI co-founder” for early stage entrepreneurs.

Developed inside PSL and now rolling out publicly, Lev can evaluate ideas, score their potential, and help founders develop them into companies.

Lev grew out of an internal PSL tool that used PSL’s proprietary rubric to score startup ideas. The studio decided to turn it into a product after outside founders who tested early versions wanted access for themselves.

Here’s how it works:

  • Users start by entering an idea (along with any associated information/background) and selecting “venture” or “bootstrap.”
  • Lev walks founders through milestones from solution to customer discovery, go-to-market, and product build.
  • It can generate “assets” like interview scripts, outreach templates, competitive maps, pricing models, brand palettes, customer personas, landing pages, potential leads, and even product specs.

“We’re mapping a lot of the PSL process into it,” said T.A. McCann, managing director at PSL.

Lev’s structured workflow sets it apart from generic chatbots, said Shilpa Kannan, principal at PSL.

“The sequencing of these components as you go through the process is one of the biggest value-adds,” she said.

Lev joins a growing number of startups leveraging AI to act as an idea validation tool for early-stage founders, though its precise approach makes it stand out.

Pioneer Square Labs Managing Director T.A. McCann (left) and Principal Shilpa Kannan. (PSL Photos)

Upcoming features will add team-building and fundraising modules and let users trigger actions — such as sending emails or buying domains — directly from within the platform.

McCann envisions Lev eventually connecting to tools like Notion and HubSpot to serve as a “command center” for running a company — integrating tools, drafting investor updates, tracking competitors, and suggesting priorities. There are several competitors in this space offering different versions of “AI chief of staff” products.

On a broader level, Lev raises an existential question for PSL: what happens when a startup studio teaches an AI to do the things that make a startup studio valuable?

“In some ways, this is ‘Innovators Dilemma,’ and you have to cannibalize yourself before someone else does it,” McCann said, referencing Clayton Christensen’s concept of technology disruption.

PSL also sees Lev as a potential funnel for entrepreneurs it could work with in the future. And it’s a way to expand the studio’s reach beyond its focus on the Pacific Northwest.

“It’s scaling our knowledge in a way that we wouldn’t be able to do otherwise,” McCann said.

Kannan and Kevin Leneway, principal at PSL, wrote a blog post describing how PSL designed the backbone of Lev and how the firm generated its own startup ideas at higher volumes with lower cost.

“As we see more and more individuals become founders with the support of AI, we are incredibly excited for the potential increase in velocity and successful outcomes from methodologies like ours that focus on upfront ideation and validation,” they wrote.

Kannan told GeekWire that PSL is prioritizing founders’ privacy and intellectual property. “We are making intentional product and technical decisions to ensure Lev is designed from the ground up to safeguard ideas and founder data, including guardrails on data we collect and our team can access,” she said.

For now, PSL is targeting venture-scale founders — people in tech companies or accelerators with ambitions to build fast-growing startups. But McCann believes Lev could eventually empower solo operators running multiple micro-businesses.

Lev is currently free for one idea, $20 per month for up to five ideas, and $100 per month for 10 ideas and advanced features. It’s available on a waitlist basis.

Lev also offers a couple of fun tools to help boost its own marketing, including a founder "personality test" and an "idea matcher" that produces startup concepts based on your interests and experience.

Sure, Valuations Look High. But Here’s How Today Is Different From The Last Peak

27 October 2025 at 15:00

Correctly calling a market peak is a notoriously tricky endeavor.

Case in point: When tech stocks and startup funding hit their last cyclical peak four years ago, few knew it was the optimal time to stop making new deals and cash out liquid holdings.

This time around, quite a few market watchers are wondering if the tech stock and AI boom has reached bubble territory. And, as we explored in Friday’s column, there are plenty of similarities between current conditions and the 2021 peak.

Even so, by other measures we’re also in starkly different territory. The current boom is far more concentrated in AI and a handful of hot companies. The exit environment is also much quieter. And of course, the macro conditions don’t resemble 2021, which had the combined economic effects of the COVID pandemic and historically low interest rates.

Below, we look at four of the top reasons why this time is different.

No. 1: Funding is largely going into AI, while other areas aren’t seeing a boom

Four years ago, funding to most venture-backed sectors was sharply on the rise. That’s not the case this time around. While AI megarounds accumulate, funding to startups in myriad other sectors continues to languish.

Biotech is on track to capture the smallest percentage of U.S. venture investment on record this year. Cleantech investment looks poised to hit a multiyear low. And consumer products startups also remain out of vogue, alongside quite a few other sectors that once attracted big venture checks.

The emergence of AI haves and non-AI have-nots means that if we do see a correction, it could be limited in scope. Sectors that haven’t seen a boom by definition won’t see a post-boom crash. (Though further declines are possible.)

No. 2: The IPO market is not on fire

The new offering market was on fire in 2020 and 2021, with traditional IPOs, direct listings and SPAC mergers all flooding exchanges with new ticker symbols to track.

In recent quarters, by contrast, the IPO market has been alive, but not especially lively. We’ve seen a few large offerings, with CoreWeave, Figma and Circle among the standouts.

But overall, numbers are way down.

In 2021, there were hundreds of U.S. seed or venture-backed companies that debuted on NYSE or Nasdaq, per Crunchbase data. This year, there have been fewer than 50.

Meanwhile, the most prominent unicorns of the AI era, like OpenAI and Anthropic, remain private companies with no buzz about an imminent IPO. As such, they don’t see the day-to-day fluctuations typical of public companies. Any drop in valuation, if it happens, could play out slowly and quietly.

No. 3: Funding is concentrated among fewer companies

That brings us to our next point: In addition to spreading their largesse across fewer sectors, startup investors are also backing fewer companies.

This year, the percentage of startup funding going to megarounds of $100 million or more reached an all-time high in the U.S. and came close to a record global level. A single deal, OpenAI's $40 billion March financing, accounted for roughly a quarter of U.S. megaround funding.

At the same time, fewer startup financings are getting done. This past quarter, for instance, reported deal count hit the lowest level in years, even as investment rose.

No. 4: ZIRP era is long gone

The last peak occurred amid an unusual financial backdrop, with economies beginning to emerge from the depths of the COVID pandemic and ultra-low interest rates contributing to investors shouldering more risk in pursuit of returns.

This time around, the macro environment is in a far different place, with a "low fire, low hire" U.S. job market, AI disrupting or poised to disrupt a wide array of industries and occupations, a weaker dollar and a long list of other unusual drivers.

What both periods share in common, however, is the inexorable climb of big tech valuations, which brings us to our final thought.

Actually, maybe the similarities do exceed the differences

While the argument that this time it’s different is a familiar one, the usual plot lines do tend to repeat themselves. Valuations overshoot, and they come down. And then the cycle repeats.

We may not have reached the top of the current cycle. But it’s certainly looking a lot closer to peak than trough.


Illustration: Dom Guzman

What it’s like to wear Amazon’s new smart glasses for delivery drivers

27 October 2025 at 04:29
GeekWire’s Todd Bishop tries Amazon’s new smart delivery glasses in a simulated demo.

SAN FRANCISCO — Putting on Amazon’s new smart delivery glasses felt surprisingly natural from the start. Despite their high-tech components and slightly bulky design, they were immediately comfortable and barely heavier than my normal glasses.

Then a few lines of monochrome green text and a square target popped up in the right-hand lens — reminding me that these were not my regular frames. 

Occupying just a portion of my total field of view, the text showed an address and a sorting code: “YLO 339.” As I learned, “YLO” represented the yellow tote bag where the package would normally be found, and “339” was a special code on the package label.

My task: find the package with that code. Or more precisely, let the glasses find it.

Amazon image from a separate demo, showing the process of scanning packages with the new glasses.

As soon as I looked at the correct package label, the glasses recognized the code and scanned the label automatically. A checkmark appeared on a list of packages in the glasses.

Then an audio alert played from the glasses: “Dog on property.”

When all the packages were scanned, the tiny green display immediately switched to wayfinding mode. A simple map appeared, showing my location as a dot, and the delivery destination marked with pins. In this simulation, there were two pins, indicating two stops. 

After putting the package on the doorstep, it was time for proof of delivery. Instead of reaching for a phone, I looked at the package and pressed a button once on the small controller unit — the "compute puck" — on my harness. The glasses captured a photo.

With that, my simulated delivery was done, without ever touching a handheld device.

In my very limited experience, the biggest concern I had was the potential to be distracted — focusing my attention on the text in front of my eyes rather than the world around me. I understand now why the display automatically turns off when a van is in motion. 

But when I mentioned that concern to the Amazon leaders guiding me through the demo, they pointed out that the alternative is looking down at a device. With the glasses, your gaze is up and largely unobstructed, theoretically making it much easier to notice possible hazards. 

That simplicity, along with the fact that they're not intended for public release, is a key difference between Amazon's utilitarian design and other augmented reality devices such as the Meta Ray-Bans, Apple Vision Pro, and Magic Leap, which aim to more fully enhance or overlay the user's environment.

One driver’s experience

KC Pangan, who delivers Amazon packages in San Francisco and was featured in Amazon’s demo video, said wearing the glasses has become so natural that he barely notices them. 

Pangan has been part of an Amazon study for the past two months. On the rare occasions when he switches back to the old handheld device, he finds himself thinking, “Oh, this thing again.”

“The best thing about them is being hands-free,” Pangan said in a conversation on the sidelines of the Amazon Delivering the Future event, where the glasses were unveiled last week.

Without needing to look down at a handheld device, he can keep his eyes up and stay alert for potential hazards. With another hand free, he can maintain the all-important three points of contact when climbing in or out of a vehicle, and more easily carry packages and open gates.

The glasses, he said, “do practically everything for me” — taking photos, helping him know where to walk, and showing his location relative to his van. 

While Amazon emphasizes safety and driver experience as the primary goals, early tests hint at efficiency gains, as well. In initial tests, Amazon has seen up to 30 minutes of time savings per shift, although execs cautioned that the results are preliminary and could change with wider testing.

KC Pangan, an Amazon delivery driver in San Francisco who has been part of a pilot program for the new glasses. (GeekWire Photo / Todd Bishop)

Regulators, legislators and employees have raised red flags over new technology pushing Amazon fulfillment and delivery workers to the limits of human capacity and safety. Amazon disputes this premise, and calls the new glasses part of a larger effort to use technology to improve safety.

Using the glasses will be fully optional for both its Delivery Service Partners (DSPs) and their drivers, even when they’re fully rolled out, according to the company. The system also includes privacy features, such as a hardware button that allows drivers to turn off all sensors.

For those who use them, the company says it plans to provide the devices at no cost. 

Despite the way it may look to the public, Amazon doesn’t directly employ the drivers who deliver its packages in Amazon-branded vans and uniforms. Instead, it contracts with DSPs, ostensibly independent companies that hire drivers and manage package deliveries from inside Amazon facilities. 

This arrangement has periodically sparked friction, and even lawsuits, as questions have come up over DSP autonomy and accountability.

With the introduction of smart glasses and other tech initiatives, including a soon-to-be-expanded training program, Amazon is deepening its involvement with DSPs and their drivers — potentially raising more questions about who truly controls the delivery workforce.

From ‘moonshot’ to reality

The smart glasses, still in their prototype phase, trace their origins to a brainstorming session about five years ago, said Beryl Tomay, Amazon’s vice president of transportation.

Each year, the team brainstorms big ideas for the company’s delivery system. During one of those sessions, a question emerged: What if drivers didn’t have to interact with any technology at all?  

“The moonshot idea we came up with was, what if there was no technology that the driver had to interact with — and they could just follow the physical process of delivering a package from the van to the doorstep?” Tomay said in an interview. “How do we make that happen so they don’t have to use a phone or any kind of tech that they have to fiddle with?”

Beryl Tomay, Amazon’s vice president of transportation, introduces the smart glasses at Amazon’s Delivering the Future event. (GeekWire Photo / Todd Bishop)

That question led the team to experiment with different approaches before settling on glasses. It seemed kind of crazy at first, Tomay said, but they soon realized the potential to improve safety and the driver experience. Early trials with delivery drivers confirmed the theory.

“The hands-free aspect of it was just kind of magical,” she said, summing up the reaction from early users.

The project has already been tested with hundreds of delivery drivers across more than a dozen DSPs. Amazon plans to expand those trials in the coming months, with a larger test scheduled for November. The goal is to collect more feedback before deciding when the technology will be ready for wider deployment.

Typically, Amazon would have kept a new hardware project secret until later in its development. But Reuters reported on the existence of the project nearly a year ago. (The glasses were reportedly code-named "Amelia," but they were announced without a name.) Announcing them publicly also lets Amazon get more delivery partners involved, gather input, and make improvements.

Future versions may also expand the system’s capabilities, using sensors and data to automatically recognize potential hazards such as uneven walkways.

How the technology works

Amazon’s smart glasses are part of a system that also includes a small wearable computer and a battery, integrated with Amazon’s delivery software and vehicle systems.

The lenses are photochromatic, darkening automatically in bright sunlight, and can be fitted with prescription inserts. Two cameras — one centered, one on the left — support functions such as package scanning and photo capture for proof of delivery. 

A built-in flashlight switches on automatically in dim conditions, while onboard sensors help the system orient to the driver’s movement and surroundings.

Amazon executive Viraj Chatterjee and driver KC Pangan demonstrate the smart glasses.

The glasses connect by a magnetic wire to a small controller unit, or “compute puck,” worn on the chest of a heat-resistant harness. The controller houses the device’s AI models, manages the visual display, and handles functions such as taking a delivery photo. It also includes a dedicated emergency button that connects drivers directly to Amazon’s emergency support systems.

On the opposite side of the chest, a swappable battery keeps the system balanced and running for a full route. Both components are designed for all-day comfort — the result, Tomay said, of extensive testing with drivers to ensure that wearing the gear feels natural when they’re moving around.

Connectivity runs through the driver’s official Amazon delivery phone via Bluetooth, and through the vehicle itself using a platform called “Fleet Edge” — a network of sensors and onboard computing modules that link the van’s status to the glasses. 

This connection allows the glasses to know precisely when to activate, when to shut down, and when to sync data. When a van is put in park, the display automatically activates, showing details such as addresses, navigation cues, and package information. When the vehicle starts moving again, the display turns off — a deliberate safety measure so drivers never see visual data while driving.
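The park/drive gating described above can be sketched as a small event-driven state machine. This is a hypothetical illustration — the class and event names are assumptions, not Amazon's actual software:

```python
# Minimal sketch of the display-gating safety rule described above.
# All names (VehicleState, GlassesDisplay) are hypothetical, not Amazon's API.

from enum import Enum
from typing import Optional


class VehicleState(Enum):
    PARKED = "parked"
    MOVING = "moving"


class GlassesDisplay:
    def __init__(self):
        self.active = False
        self.content = None

    def on_vehicle_state(self, state: VehicleState,
                         stop_info: Optional[dict] = None):
        """Activate the heads-up display only when the van is in park."""
        if state == VehicleState.PARKED:
            self.active = True
            # Show delivery details for the current stop.
            self.content = {
                "address": stop_info.get("address") if stop_info else None,
                "packages": stop_info.get("packages", []) if stop_info else [],
            }
        else:
            # Safety rule: never show visual data while the van is moving.
            self.active = False
            self.content = None


display = GlassesDisplay()
display.on_vehicle_state(VehicleState.PARKED,
                         {"address": "123 Main St", "packages": ["A1"]})
assert display.active
display.on_vehicle_state(VehicleState.MOVING)
assert not display.active
```

The key design point, per Amazon's description, is that the display state is driven entirely by the vehicle's transmission state rather than by anything the driver toggles.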

Data gathered by the glasses plays a role in Amazon’s broader mapping efforts. Imagery and sensor data feed into “Project Wellspring,” a system that uses AI to better model the physical world. This helps Amazon refine maps, identify the safest parking spots, pinpoint building entrances, and optimize walking routes for future deliveries.

Amazon says the data collection is done with privacy in mind. In addition to the driver-controlled sensor shut-off button, any imagery collected is processed to “blur or remove personally identifiable information” such as faces and license plates before being stored or used.

The implications go beyond routing and navigation. Conceivably, the same data could also lay the groundwork for greater automation in Amazon’s delivery network over time.

Testing the delivery training

In addition to trying the glasses during the event at Amazon’s Delivery Station in Milpitas, Calif., I experienced firsthand just how difficult the job of delivering packages can be. 

GeekWire’s Todd Bishop uses an Amazon training program that teaches drivers to walk safely on slippery surfaces.
  • Strapped into a harness for a slip-and-fall demo, I learned how easily a driver can lose footing on slick surfaces if not careful to walk properly. 
  • I tried a VR training device that highlighted hidden hazards like pets sleeping under tires and taught me how to navigate complex intersections safely.
  • My turn in the company’s Rivian van simulator proved humbling. Despite my best efforts, I ran red lights and managed to crash onto virtual sidewalks.
GeekWire’s Todd Bishop after a highly unsuccessful attempt to use Amazon’s driving simulator.

The simulator, known as the Enhanced Vehicle Operation Learning Virtual Experience (EVOLVE), has been launched at Amazon facilities in Colorado, Maryland, and Florida, and Amazon says it will be available at 40 sites by the end of 2026. 

It’s part of what’s known as the Integrated Last Mile Driver Academy (iLMDA), a program available at 65 sites currently, which Amazon says it plans to expand to more than 95 delivery stations across North America by the end of 2026.

“Drivers are autonomous on the road, and the amount of variables that they interact with on a given day are countless,” said Anthony Mason, Amazon’s director of delivery training and programs, who walked me through the training demos. One goal of the training, he said, is to give drivers a toolkit to pull from when they face challenging situations.

Suffice it to say, this is not the job for me. But if Amazon’s smart glasses live up to the company’s expectations, they might be a step forward for the drivers doing the real work.

AI chatbots often violate ethical standards in mental health contexts

27 October 2025 at 00:00

A new study suggests that popular large language models like ChatGPT can systematically breach established ethical guidelines for mental health care, even when specifically prompted to use accepted therapeutic techniques. The research, which will be presented at the AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society, provides evidence that these AI systems may pose risks to individuals who turn to them for mental health support.

The motivation for this research stems from the rapidly growing trend of people using publicly available AI chatbots for advice on mental health issues. While these systems can offer immediate and accessible conversational support, their alignment with the professional standards that govern human therapists has remained largely unexamined. Researchers from Brown University sought to bridge this gap by creating a systematic way to evaluate the ethical performance of these models in a therapeutic context. They collaborated with mental health practitioners to ensure their analysis was grounded in the real-world principles that guide safe and effective psychotherapy.

To conduct their investigation, the researchers first developed a comprehensive framework outlining 15 distinct ethical risks. This framework was informed by the ethical codes of professional organizations, including the American Psychological Association, translating core therapeutic principles into measurable behaviors for an AI. The team then designed a series of simulated conversations between a user and a large language model, or LLM, which is an AI system trained on vast amounts of text to generate human-like conversation. In these simulations, the AI was instructed to act as a counselor employing evidence-based psychotherapeutic methods.

The simulated scenarios were designed to present the AI with common and challenging mental health situations. These included users expressing feelings of worthlessness, anxiety about social situations, and even statements that could indicate a crisis, such as thoughts of self-harm. By analyzing the AI’s responses across these varied prompts, the researchers could map its behavior directly onto their practitioner-informed framework of ethical risks. This allowed for a detailed assessment of when and how the models tended to deviate from professional standards.

The study’s findings indicate that the large language models frequently engaged in behaviors that would be considered ethical violations for a human therapist. One of the most significant areas of concern was in the handling of crisis situations. When a simulated user expressed thoughts of self-harm, the AI models often failed to respond appropriately. Instead of prioritizing safety and providing direct access to crisis resources, some models offered generic advice or conversational platitudes that did not address the severity of the situation.

Another pattern observed was the reinforcement of negative beliefs. In psychotherapy, a practitioner is trained to help a person identify and gently challenge distorted or unhelpful thought patterns, such as believing one is a complete failure after a single mistake. The study found that the AIs, in an attempt to be agreeable and supportive, would sometimes validate these negative self-assessments. This behavior can inadvertently strengthen a user’s harmful beliefs about themselves or their circumstances, which is counterproductive to therapeutic goals.

The research also points to the issue of what the authors term a “false sense of empathy.” While the AI models are proficient at generating text that sounds empathetic, this is a simulation of emotion, not a genuine understanding of the user’s experience. This can create a misleading dynamic where a user may form an attachment to the AI or develop a dependency based on this perceived empathy. Such a one-sided relationship lacks the authentic human connection and accountability that are foundational to effective therapy.

Beyond these specific examples, the broader framework developed by the researchers suggests other potential ethical pitfalls. These include issues of competence, where an AI might provide advice on a topic for which it has no genuine expertise or training, unlike a licensed therapist who must practice within their scope. Similarly, the nature of data privacy and confidentiality is fundamentally different with an AI. Conversations with a chatbot may be recorded and used for model training, a practice that is in direct conflict with the strict confidentiality standards of human-centered therapy.

The study suggests that these ethical violations are not necessarily flaws to be fixed with simple tweaks but may be inherent to the current architecture of large language models. These systems are designed to predict the next most probable word in a sequence, creating coherent and contextually relevant text. They do not possess a true understanding of psychological principles, ethical reasoning, or the potential real-world impact of their words. Their programming prioritizes a helpful and plausible response, which in a therapeutic setting can lead to behaviors that are ethically inappropriate.

The researchers acknowledge certain limitations to their work. The study relied on simulated interactions, which may not fully capture the complexity and unpredictability of conversations with real individuals seeking help. Additionally, the field of artificial intelligence is evolving rapidly, and newer versions of these models may behave differently than the ones tested. The specific prompts used by the research team also shape the AI’s responses, and different user inputs could yield different results.

For future research, the team calls for the development of new standards specifically designed for AI-based mental health tools. They suggest that the current ethical and legal frameworks for human therapists are not sufficient for governing these technologies. New guidelines would need to be created to address the unique challenges posed by AI, from data privacy and algorithmic bias to the management of user dependency and crisis situations.

In their paper, the researchers state, “we call on future work to create ethical, educational, and legal standards for LLM counselors—standards that are reflective of the quality and rigor of care required for human-facilitated psychotherapy.” The study ultimately contributes to a growing body of evidence suggesting that while AI may have a future role in mental health, its current application requires a cautious and well-regulated approach to ensure user safety and well-being.

The study, “How LLM Counselors Violate Ethical Standards in Mental Health Practice: A Practitioner-Informed Framework,” was authored by Zainab Iftikhar, Amy Xiao, Sean Ransom, Jeff Huang, and Harini Suresh.

Amazon and the media: Inside the disconnect on AI, robots and jobs

24 October 2025 at 21:51
Tye Brady, chief technologist for Amazon Robotics, introduces “Project Eluna,” an AI model that assists operations teams, during Amazon’s Delivering the Future event in Milpitas, Calif. (GeekWire Photo / Todd Bishop)

SAN FRANCISCO — Amazon showed off its latest robotics and AI systems this week, presenting a vision of automation that it says will make warehouse and delivery work safer and smarter. 

But the tech giant and some of the media at its Delivering the Future event were on different planets when it came to big questions about robots, jobs, and the future of human work. 

The backdrop: On Tuesday, a day before the event, The New York Times cited internal Amazon documents and interviews to report that the company plans to automate as much as 75% of its operations by 2033. According to the report, the robotics team expects automation to “flatten Amazon’s hiring curve over the next 10 years,” allowing it to avoid hiring more than 600,000 workers even as sales continue to grow.

In a statement cited in the article, Amazon said the documents were incomplete and did not represent the company’s overall hiring strategy.

On stage at the event, Tye Brady, chief technologist for Amazon Robotics, introduced the company’s newest systems — Blue Jay, a setup that coordinates multiple robotic arms to pick, stow, and consolidate items; and Project Eluna, an agentic AI model that acts as a digital assistant for operations teams.

Later, he addressed the reporters in the room: “When you write about Blue Jay or you write about Project Eluna … I hope you remember that the real headline is not about robots. The real headline is about people, and the future of work we’re building together.”

Amazon’s new “Blue Jay” robotic system uses multiple coordinated arms to pick, stow, and consolidate packages inside a fulfillment center — part of the company’s next generation of warehouse automation. (Amazon Photo)

He said the benefits for employees are clear: Blue Jay handles repetitive lifting, while Project Eluna helps identify safety issues before they happen. By automating routine tasks, he said, AI frees employees to focus on higher-value work, supported by Amazon training programs.

Brady coupled that message with a reminder that no company has created more U.S. jobs over the past decade than Amazon, noting its plan to hire 250,000 seasonal workers this year. 

His message to the company’s front-line employees: “These systems are not experiments. They’re real tools built for you, to make your job safer, smarter, and more rewarding.”

‘Menial, mundane, and repetitive’

Later, during a press conference, a reporter cited the New York Times report, asking Brady if he believes Amazon’s workforce could shrink on the scale the paper described based on the internal report.

Brady didn’t answer the question directly, but described the premise as speculation, saying it’s impossible to predict what will happen a decade from now. He pointed instead to the past 10 years of Amazon’s robotics investments, saying the company has created hundreds of thousands of new jobs — including entirely new job types — while also improving safety.

He said Amazon’s focus is on augmenting workers, not replacing them, by designing machines that make jobs easier and safer. The company, he added, will continue using collaborative robotics to help achieve its broader mission of offering customers the widest selection at the lowest cost.

In an interview with GeekWire after the press conference, Brady said he sees the role of robotics as removing the “menial, mundane, and repetitive” tasks from warehouse jobs while amplifying what humans do best — reasoning, judgment, and common sense. 

“Real leaders,” he added, “will lead with hope — hope that technology will do good for people.”

When asked whether the company’s goal was a “lights-out” warehouse with no people at all, Brady dismissed the idea. “There’s no such thing as 100 percent automation,” he said. “That doesn’t exist.” 

Tye Brady, chief technologist for Amazon Robotics, speaks about the company’s latest warehouse automation and AI initiatives during the Delivering the Future event. (GeekWire Photo / Todd Bishop)

Instead, he emphasized designing machines with real utility — ones that improve safety, increase efficiency, and create new types of technical jobs in the process.

When pressed on whether Amazon is replacing human hands with robotic ones, Brady pushed back: “People are much more than hands,” he said. “You perceive the environment. You understand the environment. You know when to put things together. Like, people got it going on. It’s not replacing a hand. That’s not the right way to think of it. It’s augmenting the human brain.”

Brady pointed to Amazon’s new Shreveport, La., fulfillment center as an example, saying the highly automated facility processes orders faster than previous generations while also adding about 2,500 new roles that didn’t exist before.

“That’s not a net job killer,” he said. “It’s creating more job efficiency — and more jobs in different pockets.”

The New York Times report offered a different view of Shreveport’s impact on employment. Describing it as Amazon’s “most advanced warehouse” and a “template for future robotic fulfillment centers,” the article said the facility uses about 1,000 robots. 

Citing internal documents, the Times reported that automation allowed Amazon to employ about 25% fewer workers last year than it would have without the new systems. As more robots are added next year, it added, the company expects the site to need roughly half as many workers as it would for similar volumes of items under previous methods.

Wall Street sees big savings

Analysts, meanwhile, are taking the potential impact seriously. A Morgan Stanley research note published Wednesday — the same day as Amazon’s event and in direct response to the Times report — said the newspaper’s projections align with the investment bank’s baseline analysis.

Rather than dismissing the report as speculative, Morgan Stanley’s Brian Nowak treated the article’s data points as credible. The analysts wrote that Amazon’s reported plan to build around 40 next-generation robotic warehouses by 2027 was “in line with our estimated slope of robotics warehouse deployment.”

More notably, Morgan Stanley put a multi-billion-dollar price tag on the efficiency gains. Its previous models estimated the rollout could generate $2 billion to $4 billion in annual savings by 2027. But using the Times’ figure — that Amazon expects to “avoid hiring 160,000+ U.S. warehouse employees by ’27” — the analysts recalculated that the savings could reach as much as $10 billion per year.
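The arithmetic behind that revised estimate is straightforward back-of-envelope math. The per-employee figure below is an illustrative assumption, not a number from Morgan Stanley or the Times:

```python
# Back-of-envelope check on the revised savings estimate.
# The fully loaded annual cost per warehouse employee is an assumed
# illustrative figure, not one cited by Morgan Stanley or the Times.

avoided_hires = 160_000        # hires avoided by 2027, per the Times figure
cost_per_employee = 62_500     # assumed fully loaded annual cost, USD

annual_savings = avoided_hires * cost_per_employee
print(f"${annual_savings / 1e9:.0f}B per year")  # → $10B per year
```

At any plausible fully loaded labor cost in the $50,000–$70,000 range, 160,000 avoided hires lands in the $8 billion to $11 billion band, consistent with the analysts' $10 billion figure.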

Back at the event, the specific language used by Amazon executives aligned closely with details in the Times report about the company’s internal communications strategy.

According to the Times, internal documents advised employees to avoid terms such as “automation” and “A.I.” and instead use collaborative language like “advanced technology” and “cobots” — short for collaborative robots — as part of a broader effort to “control the narrative” around automation and hiring.

On stage, Brady’s remarks closely mirrored that approach. He consistently framed Amazon’s robotics strategy as one of augmentation, not replacement, describing new systems as tools built for people.

In the follow-up interview, Brady said he disliked the term “artificial intelligence” altogether, preferring to refer to the technology simply as “machines.”

“Intelligence is ours,” he said. “Intelligence is very much a human thing.”

The Week’s 10 Biggest Funding Rounds: More AI Megarounds (Plus Some Other Stuff)

24 October 2025 at 19:48

Want to keep track of the largest startup funding deals in 2025 with our curated list of $100 million-plus venture deals to U.S.-based companies? Check out The Crunchbase Megadeals Board.

This is a weekly feature that runs down the week’s top 10 announced funding rounds in the U.S. Check out last week’s biggest funding rounds here.

This was another active week for large startup financings. AI data center developer Crusoe Energy Systems led with $1.38 billion in fresh financing, and several of the other megarounds went to AI-focused startups. Other standouts hailed from a diverse array of sectors, including battery recycling, biotech and even fire suppression.

1. Crusoe Energy Systems, $1.38B, AI data centers: Crusoe Energy Systems, a developer of AI data centers and infrastructure, raised $1.38 billion in a financing led by Valor Equity Partners and Mubadala Capital. The deal sets a $10 billion+ valuation for the Denver-based company.

2. Avride, $375M, autonomous vehicles: Avride, a developer of technology to power autonomous vehicles and delivery robots, announced that it secured commitments of up to $375 million backed by Uber and Nebius Group. The 8-year-old, Austin, Texas-based company said it plans to launch its first robotaxi service on Uber’s platform in Dallas this year.

3. Redwood Materials, $350M, battery recycling: Battery recycling company Redwood Materials closed a $350 million Series E round led by Eclipse Ventures with participation from new investors including Nvidia’s NVentures. Founded in 2017, the Carson City, Nevada-based company has raised over $2 billion in known equity funding to date.

4. Uniphore, $260M, agentic AI: Uniphore, developer of an AI platform for businesses to deploy agentic AI, closed on $260 million in a Series F round that included backing from Nvidia, AMD, Snowflake Ventures and Databricks Ventures. The round sets a $2.5 billion valuation for the Palo Alto, California-based company.

5. Sesame, $250M, voice AI and smart glasses: San Francisco-based Sesame, a developer of conversational AI technology and smart glasses, picked up $250 million in a Series B round led by Sequoia Capital. The startup is headed by former Oculus CEO and co-founder Brendan Iribe.

6. OpenEvidence, $200M, AI for medicine: OpenEvidence, developer of an AI tool for medical professionals that has been nicknamed the “ChatGPT for doctors,” reportedly raised $200 million in a GV-led round at a $6 billion valuation. Three months earlier, OpenEvidence pulled in $210 million at a $3.5 billion valuation.

7. Electra Therapeutics, $183M, biotech: Electra Therapeutics, a developer of therapies against novel targets for diseases in immunology and cancer, secured $183 million in a Series C round. Nextech Invest and EQT Life Sciences led the financing for the South San Francisco, California-based company.

8. LangChain, $125M, AI agents: LangChain, developer of a platform for engineering AI agents, picked up $125 million in fresh funding at a $1.25 billion valuation. IVP led the financing for the 3-year-old, San Francisco-based company.

9. ShopMy, $70M, brand marketing: New York-based ShopMy, a platform that connects brands and influencers, landed $70 million in a funding round led by Avenir. The financing sets a $1.5 billion valuation for the 5-year-old company.

10. Seneca, $60M, fire suppression: Seneca, a startup developing a fire suppression system that includes autonomous drones that help spot and put out fires, launched publicly with $60 million in initial funding. Caffeinated Capital and Convective Capital led the financing for the San Francisco-based company.

Methodology

We tracked the largest announced rounds in the Crunchbase database that were raised by U.S.-based companies for the period of Oct. 18-24. Although most announced rounds are represented in the database, there could be a small time lag as some rounds are reported late in the week.

Illustration: Dom Guzman

The Last Market Boom Ended 4 Years Ago. Here’s How Current Conditions Look Similar

24 October 2025 at 15:00

Nearly four years ago, the market hit a cyclical peak under conditions that in many ways look quite similar to what we’re seeing today.

Sky-high public tech valuations. Booming startup investment. Sharply rising valuations. And, a few cracks emerging on the new offering front.

Sure, there are quite a few differences in the investment environment, which we’ll explore in a follow-on piece. For this first installment, however, we are focusing on the commonalities, with an eye to the four highlighted above.

No. 1: Sky-high public tech valuations

First, both then and now, tech stocks hit unprecedented highs.

In mid-November 2021, the tech-heavy Nasdaq Composite index hit an all-time peak above 16,000. Gains stemmed largely from sharply rising tech share prices.

Today, the Nasdaq is hovering not far below a new all-time high of over 23,000. The five most valuable tech companies have a collective market cap of more than $16 trillion. Other hot companies, like AMD, Palantir Technologies and Broadcom, have soared to record heights this year.

While private startups don’t see day-to-day valuation gyrations like publicly traded companies, their investors do take cues from public markets. When public-market bullishness subsides, private up rounds tend to diminish as well.

No. 2: Booming startup investment

In late 2021, just like today, venture investment was going strong.

Last time, admittedly, it was much stronger. Global startup funding shattered all records in 2021, with more than $640 billion invested. That was nearly double year-earlier levels. Funding surged across a broad swath of startup sectors, with fintech in particular leading the gains.

For the first three quarters of this year, by contrast, global investment totaled a more modest $303 billion. However, that’s still on track for the highest tally in years. The core driver is, of course, voracious investor appetite for AI leaders, evidenced by OpenAI’s record-setting $40 billion financing in March.

The pace of unicorn creation is also picking up, which brings us to our next similarity.

No. 3: Up rounds and sharply rising valuations

At the last market peak, valuations for hot startups soared, driven in large part by heated competition among startup investors to get into pre-IPO rounds.

This time around, we’re also seeing sought-after startups raising follow-on rounds in quick succession, commonly at sharply escalated valuations. Per Crunchbase data, dozens of companies have scaled from Series A to Series C within just a couple of years, including several that took less than 12 months.

We’re also seeing prominent unicorns raising follow-on rounds at a rapid pace this year. Standouts include generative AI giants as well as hot startups in vertical AI, cybersecurity and defense tech.

No. 4: A few cracks emerging

During the 2021 market peak, even when the overall investment climate was buzzier than ever, we did see some worrisome developments and areas of declining valuations.

For that period, one of the earlier indicators was share-price deterioration for many of the initial companies to go public via SPAC. By late 2021, it had become clear that there were numerous “truly terrible performers” among the cohort, including well-known names such as WeWork, Metromile and Buzzfeed.

This time around, the new offerings market hasn’t been quite so active. But among those that did go public in recent months, performance has been decidedly mixed. Shares of Figma, one of the hottest IPOs in some time, are down more than 60% from the peak.

Online banking provider Chime and stablecoin platform Circle have shown similar declines.

At this point, these are still generously valued companies by many metrics. But it’s also worth noting the share price direction in recent months has been downward, not upward.

Next: Watch for more cracks

Looking ahead, one of the more reliable techniques for determining whether we are approaching a peak or are already past one is to look for more cracks in the investment picture. Are GenAI hotshots struggling to secure financing at desired valuations? Is the IPO pipeline still sluggish? Are public tech stocks no longer cresting ever-higher heights?

Cracks can take some time to emerge, but inevitably, they do.

Related reading:

Illustration: Dom Guzman

The Splendor And Misery Of ARR Growth

24 October 2025 at 15:00

By Alexander Lis

AI startups are raising capital at record speed. According to Crunchbase data, AI-related companies have already raised $118 billion globally in 2025. And, so far, traction looks impressive: AI startups are posting stellar revenue growth, and some are already clearing the $100 million ARR milestone.

While this growth is breathtaking, some analysts are beginning to question its sustainability. They warn that AI spending may soon reach a peak and that unprofitable tech companies could be hit hardest when the cycle turns. If that happens, many investors in AI will find themselves in a difficult position.

Predicting a bubble is rarely productive, but preparing for volatility is. It would be wise for both founders and investors to ensure that portfolio companies have enough resilience to withstand a potential market shock.

The key lies in assessing the durability of ARR. In a major downturn, the “growth game” quickly becomes a survival game. History suggests that while a few companies may continue to grow more slowly, the majority will struggle or disappear.

The question, then, is how to tell the difference between sustainable and hype-driven ARR.

What distinguishes durable ARR from hype?

Alexander Lis of Social Discovery Ventures

Several factors set true, sustainable revenue growth apart from hype.

The first is customer commitment. Sustainable revenue comes from multiyear contracts, repeat renewal cycles and budgeted spend within core IT or operating lines. When revenue depends on pilots, proofs of concept or amorphous “innovation” budgets, it can vanish when corporate priorities shift. A company that touts these short trials as ARR is really reporting momentum, not recurring income.

This is what investor Jamin Ball has called experimental recurring revenue.

Traditional software firms can thrive with monthly churn in the low single digits — think 5% to 7%. But many AI companies are seeing double that. This means they have to sprint just to stand still, constantly replacing users who move on to the next shiny tool.
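The "sprint just to stand still" effect compounds quickly, since monthly churn applies multiplicatively. A quick illustration of how much of a customer cohort survives a year at different churn rates:

```python
# How monthly churn compounds over a year for a cohort of customers.

def retained_after(months: int, monthly_churn: float) -> float:
    """Fraction of an initial cohort still active after `months` months."""
    return (1 - monthly_churn) ** months

for churn in (0.05, 0.07, 0.12):
    left = retained_after(12, churn)
    print(f"{churn:.0%} monthly churn -> {left:.0%} of the cohort left after a year")
```

At 5% monthly churn, roughly 54% of a cohort remains after 12 months; at double that rate, under a quarter does — every point of churn must be replaced by new sign-ups before any net growth shows up.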

Another differentiator? Integration and workflow depth. Durable ARR is embedded into the customer’s core workflows, data pipelines or multiple teams. Ripping it out would be costly and disruptive. Hype ARR, by contrast, lives on the surface — lightweight integrations, fast deployments and limited stakeholders. Without unique intellectual property or deep workflow integration, such products can be replaced with minimal friction.

And finally, real growth is defined by clear value-add. True ARR is backed by measurable ROI, well-defined outcomes and long-term customer roadmaps.

In contrast, hype ARR is driven by urgency (we need to show our shareholders our AI deployment ASAP), or undefined ROI. In those cases, customers don’t even know how to define success. They are testing, not committing.

Beyond ARR

It is important to put ARR traction in context. Investors and founders should focus on a broader set of indicators — conversion from pilots to long-term contracts, contract length and expansion, net revenue retention, and gross margin trajectory. These metrics reveal whether growth is sustainable.

It would also be helpful to assess the product’s real impact: efficiency uplift (more code, content, or customer conversations per employee-hour), accuracy improvement (e.g. for detecting bad actors), and higher conversion rates, among others. These metrics should exceed client expectations and outperform alternative tools. That’s what signals genuine value creation and a higher chance for experimental revenue to turn into durable ARR.

After all, AI may be changing how fast companies can form and grow, but it hasn’t suspended the basic laws of business.

For founders, the message is simple: Celebrate ARR if you so wish, but pair it with proof of retention, profitability and defensibility. For investors, resist the urge to chase every eye-popping run rate. The real competitive edge in this next phase of AI is stability, not spectacle.


Alexander Lis is the chief investment officer at Social Discovery Ventures. With 10-plus years of experience across public markets, VC, PE and real estate, he has managed a public markets portfolio that outperformed benchmarks, led early investments in Sumsub, Teachmint and Byrd, and achieved 20%-plus IRR by investing in distressed real estate across the U.S.

Illustration: Dom Guzman

Dell Technologies Capital On The Next Generation Of AI — And The Data Fueling It

23 October 2025 at 15:00

Editor’s note: This article is part of an ongoing series in which Crunchbase News interviews active investors in artificial intelligence. Read previous interviews with Foundation Capital, GV (formerly Google Ventures), Felicis, Battery Ventures, Bain Capital Ventures, Menlo Ventures, Scale Venture Partners, Costanoa, Citi Ventures, Sierra Ventures, Andrew Ng of AI Fund, and True Ventures, as well as highlights from more interviews done in 2023.

Fueled by AI, both Dell and its investment arm are on a hot streak this year.

The PC maker has seen demand for its server products surge with $20 billion in AI server shipments projected for fiscal 2026. At the same time, its investment arm, Dell Technologies Capital (DTC), has notched five exits — an IPO and four acquisitions — since June, an especially notable track record in a venture industry that has been challenged in recent years by a liquidity crunch.

Daniel Docter, managing director at Dell Technologies Capital

On the heels of that success, we recently spoke with Dell Technologies Capital managing director Daniel Docter and partner Elana Lian.

DTC was founded more than a decade ago and operates as a full-stack investor, backing everything from silicon to applications.

“One big part of our network is the Dell relationship, which is the leader in GPU servers,” said Docter. As a result, Dell is connected to all the major players in the AI space, he said.

Elana Lian, partner at Dell Technologies Capital

One of its earliest AI investments was during the machine-learning era in 2014 in a company called Moogsoft. Dell went on to acquire the alert remediation company in 2023.

DTC’s investment thesis was that the advent of machine learning was going to disrupt the tech industry. At that time, data had expanded to such a degree that new tools were required by the market to analyze data and find patterns, which informed the firm’s early investments in AI.

The investment team at DTC is largely composed of technical people, often engineers with electrical engineering ("double E") degrees.

Docter has an electrical engineering background, worked at Hughes Research Labs, now HRL Laboratories, and transitioned from engineering to business development. He joined the venture industry 25 years ago and spent more than a decade at Intel. He joined DTC in 2016 through Dell’s EMC acquisition.

Lian worked in semiconductors for a decade, joined Intel Capital in 2010, and then joined Dell Technologies Capital in 2024.

Data evolution

Docter believes we are in the fifth generation of AI, which becomes more powerful with every iteration.

“We’re seeing that AI is almost a data problem,” said Lian. “For AI to get better and better, there’s an uncapped ceiling where there’s high-quality data coming in.”

The team is meeting startups focused on training, inference, reasoning and continuous learning along with safety requirements. Data is core to these advancements.

Even the definition of data is changing. “It used to be a word, then it was a context, then it was a task or a rationale or a path. Then it’s reasoning,” said Docter. “Who knows what’s next?”

As AI improves, there is demand for frontier data and for specialized data in fields such as philosophy, physics, chemistry and business. Humans are in the loop as these capabilities expand, said Docter, which has informed some of the firm’s investments.

On deal flow

DTC is a financial investor, assessing a potential company on whether it is a good investment, rather than backing businesses based on Dell’s strategic goals.

Startup revenue is exceeding what was previously possible, Docter said: “I’ve been doing this for 25 years. I’ve never seen companies that have this type of revenue growth.”

The best deals are always hotly contested, he noted.

The question to ask when it comes to revenue, Docter said, is: “Is that an innovation CTO office budget? Or is that a VP of engineering budget?”

When assessing a potential portfolio investment the team asks: “Is revenue durable? Is there value in using this?”

The pace of investment also seems unprecedented. “We’ll meet with a company on a Tuesday for the first time and sometimes by Thursday, they have a term sheet that they’ve already signed,” he said.

The firm does not have a dedicated fund size, an advantage that lets it stay flexible about check size, the stage at which it commits, and how it invests over time.

DTC has invested $1.8 billion to date across 165 companies. It likes to invest early, at seed or Series A, with check sizes running from $2 million to $12 million, and leads or co-leads 80% of new deals. The firm makes around 15 to 16 new investments per year.

Once DTC has invested, it looks at how the firm can help portfolio companies sell to potential customers across Dell’s deal partner network.

This year, DTC has posted five exits, including Netskope’s IPO and four acquisitions: Rivos by Meta, SingleStore by Vector Capital, TheLoops by Industrial & Financial Systems and Regrello by Salesforce.¹

Notable AI investments

DTC is investing a little more actively than it has in the past, but remains disciplined, Docter said. The investment team is focused on complex enterprise use cases and challenges, following the Warren Buffett rule, which is to invest in what you know.

The firm invests at the silicon level because you “can be incredibly disruptive to the ecosystem,” said Docter.

The DTC portfolio companies we discussed include the following in areas ranging from silicon to applications.

Infrastructure and hardware layer:

  • AI chipmaker Rivos, which Meta plans to acquire for an undisclosed amount. (The deal is pending regulatory approval.)
  • SiMa.ai, which makes a chip for embedded edge use cases including in automobile, drone and robot technologies.
  • Runpod, an AI developer software layer with on-demand access to GPUs. It allows developers to play with AI and then scale it to production. The service has 500,000 developers, including 30,000 paying monthly, said Docter.
  • SuperAnnotate, a data annotation platform for enterprises with humans in the loop to build accurate data pipelines. Its customers include Databricks and the women’s health app Flo Health.

Applications:

  • Maven AGI provides customer support for complex and high-compliance enterprise use cases, a potentially massive market. Lian projects customer experience overall will be a trillion-dollar market.
  • Series Entertainment, a GenAI platform for game development that aims to reduce deployment timelines from eight months to two weeks.

What’s next?

A major area of interest for Lian is advancements in voice AI, the day-to-day human interaction with a machine.

It’s hard to imagine that the transformer architecture is the last and final architecture, said Docter. The firm has invested in companies building alternative architectures, including Cartesia, a leader in state-space models, which offer longer context windows; the company is building a new reasoning model on that architecture, initially focused on voice AI. DTC has also invested in Israeli-based AA-I Technologies, which is working on a new type of reasoning model architecture.

“Right now, the opportunity of AI is this big, but this ball keeps on exploding,” said Lian. “The contact area is getting bigger and bigger. And that’s the same for the data.”




  1. Salesforce Ventures is an investor in Crunchbase. They have no say in our editorial process. For more, head here.

Saudi-based Web3, AI startup Astra Nova nets $48.3m

18 October 2025 at 06:30
Astra Nova plans to enter new markets in the Middle East, Europe, and Asia, with partnerships including NEOM, Nvidia Inception, and Alibaba Cloud.
