Building AI for financial software requires a different playbook than consumer AI, and Intuit's latest QuickBooks release provides an example.
The company has announced Intuit Intelligence, a system that orchestrates specialized AI agents across its QuickBooks platform to handle tasks including sales tax compliance and payroll processing. The new agents join the existing accounting and project management agents (which have also been updated), along with a unified interface that lets users query data across QuickBooks, third-party systems and uploaded files using natural language.
The new development follows years of investment and improvement in Intuit's GenOS, allowing the company to build AI capabilities that reduce latency and improve accuracy.
But the real news isn't what Intuit built — it's how the company built it and why its design decisions will make AI more usable. The company's latest AI rollout represents an evolution built on hard-won lessons about what works and what doesn't when deploying AI in financial contexts.
What the company learned is sobering: Even when its accounting agent improved transaction categorization accuracy by 20 percentage points on average, the company still received complaints about errors.
"The use cases that we're trying to solve for customers include tax and finance; if you make a mistake in this world, you lose trust with customers in buckets and we only get it back in spoonfuls," Joe Preston, Intuit's VP of product and design, told VentureBeat.
The architecture of trust: Real data queries over generative responses
Intuit's technical strategy centers on a fundamental design decision. For financial queries and business intelligence, the system queries actual data, rather than generating responses through large language models (LLMs).
Also critically important: That data isn't all in one place. Intuit's technical implementation allows QuickBooks to ingest data from multiple distinct sources: native Intuit data, OAuth-connected third-party systems like Square for payments and user-uploaded files such as spreadsheets containing vendor pricing lists or marketing campaign data. This creates a unified data layer that AI agents can query reliably.
"We're actually querying your real data," Preston explained. "That's very different than if you were to just copy, paste out a spreadsheet or a PDF and paste into ChatGPT."
This architectural choice means that the Intuit Intelligence system functions more as an orchestration layer. It's a natural language interface to structured data operations. When a user asks about projected profitability or wants to run payroll, the system translates the natural language query into database operations against verified financial data.
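To make the distinction concrete, here is a minimal sketch of that orchestration pattern, assuming a toy data layer; every name in it (StructuredQuery, translate, run) is hypothetical and is not Intuit's GenOS API. The model's only job is to map a question onto a predefined query shape, and the numbers come from verified records rather than generated text.

```python
from dataclasses import dataclass

# Hypothetical illustration of an orchestration layer: the language model only
# maps a natural-language question onto a structured query against verified
# data. It never generates the financial figures itself.

@dataclass
class StructuredQuery:
    metric: str   # e.g. "projected_profitability"
    period: str   # e.g. "next_quarter"

def translate(question: str) -> StructuredQuery:
    # Stand-in for the LLM step that parses intent into a known query shape.
    if "profitability" in question.lower():
        return StructuredQuery(metric="projected_profitability", period="next_quarter")
    raise ValueError("unsupported question")

def run(query: StructuredQuery, ledger: dict) -> float:
    # The answer is read from the ledger (verified data), not composed by a model.
    return ledger[query.metric][query.period]

ledger = {"projected_profitability": {"next_quarter": 182_500.0}}
print(run(translate("What is my projected profitability next quarter?"), ledger))
```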
This matters because Intuit's internal research has uncovered widespread shadow AI usage. When surveyed, 25% of accountants using QuickBooks admitted they were already copying and pasting data into ChatGPT or Google Gemini for analysis.
Intuit's approach treats AI as a query translation and orchestration mechanism, not a content generator. This reduces the hallucination risk that has plagued AI deployments in financial contexts.
Explainability as a design requirement, not an afterthought
Beyond the technical architecture, Intuit has made explainability a core user experience across its AI agents. This goes beyond simply providing correct answers: It means showing users the reasoning behind automated decisions.
When Intuit's accounting agent categorizes a transaction, it doesn't just display the result; it shows the reasoning. This isn't marketing copy about explainable AI, it's actual UI displaying data points and logic.
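The pattern is simple to express in code. The sketch below is illustrative only, not Intuit's implementation: the categorization result carries the evidence used to reach it, so the interface can show the "why" next to the answer.

```python
from dataclasses import dataclass, field

# Illustrative sketch: a categorization result that carries its own evidence,
# so the UI can display the reasoning alongside the decision.

@dataclass
class CategorizedTransaction:
    description: str
    category: str
    confidence: float
    evidence: list = field(default_factory=list)  # data points surfaced to the user

txn = CategorizedTransaction(
    description="UBER TRIP 10/12",
    category="Travel",
    confidence=0.93,
    evidence=[
        "Vendor matched 14 prior transactions categorized as Travel",
        "Amount falls within the typical range for this vendor",
    ],
)

print(f"{txn.description} -> {txn.category} ({txn.confidence:.0%})")
for reason in txn.evidence:
    print(" -", reason)
```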
"It's about closing that trust loop and making sure customers understand the why," Alastair Simpson, Intuit's VP of design, told VentureBeat.
This becomes particularly critical when you consider Intuit's user research: While half of small businesses describe AI as helpful, nearly a quarter haven't used AI at all. The explanation layer serves both populations, building confidence for newcomers while giving experienced users the context to verify accuracy.
The design also enforces human control at critical decision points. This approach extends beyond the interface. Intuit connects users directly with human experts, embedded in the same workflows, when automation reaches its limits or when users want validation.
Navigating the transition from forms to conversations
One of Intuit's more interesting challenges involves managing a fundamental shift in user interfaces. Preston described it as having one foot in the past and one foot in the future.
"This isn't just Intuit, this is the market as a whole," said Preston. "Today we still have a lot of customers filling out forms and going through tables full of data. We're investing a lot into leaning in and questioning the ways that we do it across our products today, where you're basically just filling out, form after form, or table after table, because we see where the world is headed, which is really a different form of interacting with these products."
This creates a product design challenge: How do you serve users who are comfortable with traditional interfaces while gradually introducing conversational and agentic capabilities?
Intuit's approach has been to embed AI agents directly into existing workflows. This means not forcing users to adopt entirely new interaction patterns. The payments agent appears alongside invoicing workflows; the accounting agent enhances the existing reconciliation process rather than replacing it. This incremental approach lets users experience AI benefits without abandoning familiar processes.
What enterprise AI builders can learn from Intuit's approach
Intuit's experience deploying AI in financial contexts surfaces several principles that apply broadly to enterprise AI initiatives.
Architecture matters for trust: In domains where accuracy is critical, consider whether you need content generation or data query translation. Intuit's decision to treat AI as an orchestration and natural language interface layer, rather than a generative system, dramatically reduces hallucination risk.
Explainability must be designed in, not bolted on: Showing users why the AI made a decision isn't optional when trust is at stake. This requires deliberate UX design. It may constrain model choices.
User control preserves trust during accuracy improvements: Intuit's accounting agent improved categorization accuracy by 20 percentage points. Yet, maintaining user override capabilities was essential for adoption.
Transition gradually from familiar interfaces: Don't force users to abandon forms for conversations. Embed AI capabilities into existing workflows first. Let users experience benefits before asking them to change behavior.
Be honest about what's reactive versus proactive: Current AI agents primarily respond to prompts and automate defined tasks. True proactive intelligence that makes unprompted strategic recommendations remains an evolving capability.
Address workforce concerns with tooling, not just messaging: If AI is meant to augment rather than replace workers, provide workers with AI tools. Show them how to leverage the technology.
For enterprises navigating AI adoption, Intuit's journey offers a clear directive. The winning approach prioritizes trustworthiness over capability demonstrations. In domains where mistakes have real consequences, that means investing in accuracy, transparency and human oversight before pursuing conversational sophistication or autonomous action.
Simpson frames the challenge succinctly: "We didn't want it to be a bolted-on layer. We wanted customers to be in their natural workflow, and have agents doing work for customers, embedded in the workflow."
Many businesses are equipped with a network of intelligent eyes that span operations. These IP cameras and intelligent edge devices were once focused solely on ensuring the safety of employees, customers, and inventory. They have long proved to be essential tools for businesses, and while that still rings true, they're now emerging as powerful sources of business intelligence.
These cameras and edge devices have rapidly evolved into real-time data producers. IP cameras can now see and understand, and the accompanying artificial intelligence helps companies and decision-makers generate business intelligence, improve operational efficiency, and gain a competitive advantage.
By treating cameras as vision sensors and sources of operational insight, businesses can transform everyday visibility into measurable business value.
Intelligence on the edge
Network cameras have come a long way since Axis Communications first introduced this technology in 1996. Over time, innovations like the ARTPEC chip, the first chip purpose-built for IP video, helped enhance image quality, analytics, and encoding performance.
Today, these intelligent devices are powering a new generation of business intelligence and operational efficiency solutions via embedded AI. Actionable insights are now fed directly into intelligence platforms, ERP systems, and real-time dashboards, and the results are significant and far-reaching.
In manufacturing, intelligent cameras are detecting defects on the production line early, before an entire production run is compromised. In retail, these cameras can run software that maps customer journeys and optimizes product placement. In healthcare, these solutions help facilities enhance patient care while improving operational efficiency and reducing costs.
The combination of video and artificial intelligence has significantly expanded what cameras can do — transforming them into vital tools for improving business performance.
Proof in practice
Companies are creatively taking advantage of edge devices like AI-enabled cameras to improve business intelligence and operational efficiencies.
BMW has relied on intelligent IP cameras to optimize efficiency and product quality, with AI-driven video systems catching defects that are often invisible to the human eye. Or take Google Cloud’s shelf-checking AI technology, an innovative software that allows retailers to make instant restocking decisions using real-time data.
These technologies appeal to far more than retailers and vendors. The A.C. Camargo Cancer Center in Brazil uses network cameras to reduce theft, assure visitor and employee safety, and optimize patient flow. Drawing on this newfound business intelligence, the facility has saved more than $2 million in operational costs over two years, with those savings reinvested directly into patient care.
Urban projects can also benefit from edge devices and artificial intelligence. For example, Vanderbilt University turned to video analytics to study traffic flow, relying on AI to uncover the causes of phantom congestion and enable smarter traffic management. The findings will also benefit the local environment and the public, since they can be used to improve safety, air quality, and fuel efficiency.
Each case illustrates the same point: AI-powered cameras can fuel a tangible return on investment and crucial business intelligence, regardless of the industry.
Preparing for the next phase
The role of AI in video intelligence is still expanding, with several emerging trends driving greater advancements and impact in the years ahead:
Predictive operations: cameras that are capable of forecasting needs or risks through predictive analytics
Versatile analytics: systems that incorporate audio, thermal, and environmental sensors for more comprehensive and accurate insights
Technological collaboration: cameras that integrate with other intelligent edge devices to autonomously manage tasks
Sustainability initiatives: intelligent technologies that reduce energy use and support resource efficiency
Axis Communications helps advance these possibilities with open-source, scalable systems engineered to address both today’s challenges and tomorrow’s opportunities. By staying ahead of this ever-changing environment, Axis helps ensure that organizations continue to benefit from actionable business intelligence while maintaining the highest standards of security and safety.
Cameras have evolved beyond simple surveillance tools. They are strategic assets that inform operations, foster innovation, and enable future readiness. Business leaders who cling to traditional views of IP cameras and edge devices risk missing opportunities for efficiency and innovation. Those who embrace an AI-driven approach can expect not only stronger security but also better business outcomes.
Ultimately, the value of IP cameras and edge devices lies not in categories but in capabilities. In an era of rapidly evolving artificial intelligence, these unique technologies will become indispensable to overall business success.
About Axis Communications
Axis enables a smarter and safer world by improving security, safety, operational efficiency, and business intelligence. As a network technology company and industry leader, Axis offers video surveillance, access control, intercoms, and audio solutions. These are enhanced by intelligent analytics applications and supported by high-quality training.
Axis has around 5,000 dedicated employees in over 50 countries and collaborates with technology and system integration partners worldwide to deliver customer solutions. Axis was founded in 1984, and the headquarters are in Lund, Sweden.
Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. For more information, contact sales@venturebeat.com.
While the world's leading artificial intelligence companies race to build ever-larger models, betting billions that scale alone will unlock artificial general intelligence, a researcher at one of the industry's most secretive and valuable startups delivered a pointed challenge to that orthodoxy this week: The path forward isn't about training bigger — it's about learning better.
"I believe that the first superintelligence will be a superhuman learner," Rafael Rafailov, a reinforcement learning researcher at Thinking Machines Lab, told an audience at TED AI San Francisco on Tuesday. "It will be able to very efficiently figure out and adapt, propose its own theories, propose experiments, use the environment to verify that, get information, and iterate that process."
This breaks sharply with the approach pursued by OpenAI, Anthropic, Google DeepMind, and other leading laboratories, which have bet billions on scaling up model size, data, and compute to achieve increasingly sophisticated reasoning capabilities. Rafailov argues these companies have the strategy backwards: what's missing from today's most advanced AI systems isn't more scale — it's the ability to actually learn from experience.
"Learning is something an intelligent being does," Rafailov said, citing a quote he described as recently compelling. "Training is something that's being done to it."
The distinction cuts to the core of how AI systems improve — and whether the industry's current trajectory can deliver on its most ambitious promises. Rafailov's comments offer a rare window into the thinking at Thinking Machines Lab, the startup co-founded in February by former OpenAI chief technology officer Mira Murati that raised a record-breaking $2 billion in seed funding at a $12 billion valuation.
Why today's AI coding assistants forget everything they learned yesterday
To illustrate the problem with current AI systems, Rafailov offered a scenario familiar to anyone who has worked with today's most advanced coding assistants.
"If you use a coding agent, ask it to do something really difficult — to implement a feature, go read your code, try to understand your code, reason about your code, implement something, iterate — it might be successful," he explained. "And then come back the next day and ask it to implement the next feature, and it will do the same thing."
The issue, he argued, is that these systems don't internalize what they learn. "In a sense, for the models we have today, every day is their first day of the job," Rafailov said. "But an intelligent being should be able to internalize information. It should be able to adapt. It should be able to modify its behavior so every day it becomes better, every day it knows more, every day it works faster — the way a human you hire gets better at the job."
The duct tape problem: How current training methods teach AI to take shortcuts instead of solving problems
Rafailov pointed to a specific behavior in coding agents that reveals the deeper problem: their tendency to wrap uncertain code in try/except blocks — a programming construct that catches errors and allows a program to continue running.
"If you use coding agents, you might have observed a very annoying tendency of them to use try/except pass," he said. "And in general, that is basically just like duct tape to save the entire program from a single error."
Why do agents do this? "They do this because they understand that part of the code might not be right," Rafailov explained. "They understand there might be something wrong, that it might be risky. But under the limited constraint—they have a limited amount of time solving the problem, limited amount of interaction—they must only focus on their objective, which is implement this feature and solve this bug."
The result: "They're kicking the can down the road."
This behavior stems from training systems that optimize for immediate task completion. "The only thing that matters to our current generation is solving the task," he said. "And anything that's general, anything that's not related to just that one objective, is a waste of computation."
Why throwing more compute at AI won't create superintelligence, according to Thinking Machines researcher
Rafailov's most direct challenge to the industry came in his assertion that continued scaling won't be sufficient to reach AGI.
"I don't believe we're hitting any sort of saturation points," he clarified. "I think we're just at the beginning of the next paradigm—the scale of reinforcement learning, in which we move from teaching our models how to think, how to explore thinking space, into endowing them with the capability of general agents."
In other words, current approaches will produce increasingly capable systems that can interact with the world, browse the web, write code. "I believe a year or two from now, we'll look at our coding agents today, research agents or browsing agents, the way we look at summarization models or translation models from several years ago," he said.
But general agency, he argued, is not the same as general intelligence. "The much more interesting question is: Is that going to be AGI? And are we done — do we just need one more round of scaling, one more round of environments, one more round of RL, one more round of compute, and we're kind of done?"
His answer was unequivocal: "I don't believe this is the case. I believe that under our current paradigms, under any scale, we are not enough to deal with artificial general intelligence and artificial superintelligence. And I believe that under our current paradigms, our current models will lack one core capability, and that is learning."
Teaching AI like students, not calculators: The textbook approach to machine learning
To explain the alternative approach, Rafailov turned to an analogy from mathematics education.
"Think about how we train our current generation of reasoning models," he said. "We take a particular math problem, make it very hard, and try to solve it, rewarding the model for solving it. And that's it. Once that experience is done, the model submits a solution. Anything it discovers—any abstractions it learned, any theorems—we discard, and then we ask it to solve a new problem, and it has to come up with the same abstractions all over again."
That approach misunderstands how knowledge accumulates. "This is not how science or mathematics works," he said. "We build abstractions not necessarily because they solve our current problems, but because they're important. For example, we developed the field of topology to extend Euclidean geometry — not to solve a particular problem that Euclidean geometry couldn't handle, but because mathematicians and physicists understood these concepts were fundamentally important."
The solution: "Instead of giving our models a single problem, we might give them a textbook. Imagine a very advanced graduate-level textbook, and we ask our models to work through the first chapter, then the first exercise, the second exercise, the third, the fourth, then move to the second chapter, and so on—the way a real student might teach themselves a topic."
The objective would fundamentally change: "Instead of rewarding their success — how many problems they solved — we need to reward their progress, their ability to learn, and their ability to improve."
This approach, known as "meta-learning" or "learning to learn," has precedents in earlier AI systems. "Just like the ideas of scaling test-time compute and search and test-time exploration played out in the domain of games first" — in systems like DeepMind's AlphaGo — "the same is true for meta learning. We know that these ideas do work at a small scale, but we need to adapt them to the scale and the capability of foundation models."
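A toy sketch of that objective shift, with every class and number invented purely for illustration, might look like this: the reward is the learner's measured improvement over a chapter, not its raw score on any single exercise.

```python
import random

# Toy illustration of "reward progress, not success." Everything here is a stub.

class ToyLearner:
    def __init__(self):
        self.skill = 0.2                      # chance of solving an exercise

    def solve(self, exercise) -> bool:
        return random.random() < self.skill

    def study(self, exercises) -> None:
        # Practicing nudges skill upward -- a stand-in for real learning.
        self.skill = min(1.0, self.skill + 0.05 * len(exercises))

def score(learner, held_out) -> float:
    return sum(learner.solve(e) for e in held_out) / len(held_out)

def progress_reward(learner, chapter, held_out) -> float:
    before = score(learner, held_out)
    learner.study(chapter)
    after = score(learner, held_out)
    return after - before                     # the training signal: improvement

random.seed(0)
learner = ToyLearner()
for chapter in range(1, 4):
    print(f"chapter {chapter}: reward = {progress_reward(learner, range(5), range(20)):+.2f}")
```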
The missing ingredients for AI that truly learns aren't new architectures—they're better data and smarter objectives
When Rafailov addressed why current models lack this learning capability, he offered a surprisingly straightforward answer.
"Unfortunately, I think the answer is quite prosaic," he said. "I think we just don't have the right data, and we don't have the right objectives. I fundamentally believe a lot of the core architectural engineering design is in place."
Rather than arguing for entirely new model architectures, Rafailov suggested the path forward lies in redesigning the data distributions and reward structures used to train models.
"Learning, in of itself, is an algorithm," he explained. "It has inputs — the current state of the model. It has data and compute. You process it through some sort of structure, choose your favorite optimization algorithm, and you produce, hopefully, a stronger model."
The question: "If reasoning models are able to learn general reasoning algorithms, general search algorithms, and agent models are able to learn general agency, can the next generation of AI learn a learning algorithm itself?"
His answer: "I strongly believe that the answer to this question is yes."
The technical approach would involve creating training environments where "learning, adaptation, exploration, and self-improvement, as well as generalization, are necessary for success."
"I believe that under enough computational resources and with broad enough coverage, general purpose learning algorithms can emerge from large scale training," Rafailov said. "The way we train our models to reason in general over just math and code, and potentially act in general domains, we might be able to teach them how to learn efficiently across many different applications."
Forget god-like reasoners: The first superintelligence will be a master student
This vision leads to a fundamentally different conception of what artificial superintelligence might look like.
"I believe that if this is possible, that's the final missing piece to achieve truly efficient general intelligence," Rafailov said. "Now imagine such an intelligence with the core objective of exploring, learning, acquiring information, self-improving, equipped with general agency capability—the ability to understand and explore the external world, the ability to use computers, ability to do research, ability to manage and control robots."
Such a system would constitute artificial superintelligence. But not the kind often imagined in science fiction.
"I believe that intelligence is not going to be a single god model that's a god-level reasoner or a god-level mathematical problem solver," Rafailov said. "I believe that the first superintelligence will be a superhuman learner, and it will be able to very efficiently figure out and adapt, propose its own theories, propose experiments, use the environment to verify that, get information, and iterate that process."
This vision stands in contrast to OpenAI's emphasis on building increasingly powerful reasoning systems, or Anthropic's focus on "constitutional AI." Instead, Thinking Machines Lab appears to be betting that the path to superintelligence runs through systems that can continuously improve themselves through interaction with their environment.
The $12 billion bet on learning over scaling faces formidable challenges
Rafailov's appearance comes at a complex moment for Thinking Machines Lab. The company has assembled an impressive team of approximately 30 researchers from OpenAI, Google, Meta, and other leading labs. But it suffered a setback in early October when Andrew Tulloch, a co-founder and machine learning expert, departed to return to Meta after the company launched what The Wall Street Journal called a "full-scale raid" on the startup, approaching more than a dozen employees with compensation packages ranging from $200 million to $1.5 billion over multiple years.
Despite these pressures, Rafailov's comments suggest the company remains committed to its differentiated technical approach. The company launched its first product, Tinker, an API for fine-tuning open-source language models, in October. But Rafailov's talk suggests Tinker is just the foundation for a much more ambitious research agenda focused on meta-learning and self-improving systems.
"This is not easy. This is going to be very difficult," Rafailov acknowledged. "We'll need a lot of breakthroughs in memory and engineering and data and optimization, but I think it's fundamentally possible."
He concluded with a play on words: "The world is not enough, but we need the right experiences, and we need the right type of rewards for learning."
The question for Thinking Machines Lab — and the broader AI industry — is whether this vision can be realized, and on what timeline. Rafailov notably did not offer specific predictions about when such systems might emerge.
In an industry where executives routinely make bold predictions about AGI arriving within years or even months, that restraint is notable. It suggests either unusual scientific humility — or an acknowledgment that Thinking Machines Lab is pursuing a much longer, harder path than its competitors.
For now, the most revealing detail may be what Rafailov didn't say during his TED AI presentation. No timeline for when superhuman learners might emerge. No prediction about when the technical breakthroughs would arrive. Just a conviction that the capability was "fundamentally possible" — and that without it, all the scaling in the world won't be enough.
Solmate Infrastructure is making waves with a nearly 50% stock surge after announcing a new validator hub in the Middle East and strategic $SOL purchases, but could this be the start of a major expansion for the Solana ecosystem?
Solmate Infrastructure has gained attention with its major expansions and strategic $SOL purchases. The Nasdaq-listed company is backed by Cathie Wood's Ark Invest.
According to reports, its stock rose nearly 50% after it announced plans for a validator hub in the Middle East and an aggressive mergers and acquisitions strategy. Experts say this shows Solmate Infrastructure is becoming an important force in building the Solana ecosystem.
What is Solmate Infrastructure and What Does SOL Represent?
Solmate Infrastructure is a Nasdaq-listed company that holds Solana's native token, $SOL, in its treasury. It also builds real-world infrastructure, such as validators, to support its work. These validators help process transactions on the Solana network, and combining token holdings with this infrastructure lets the company improve the network's performance.
This approach helps the company play a bigger role in growing the Solana ecosystem. Solana (SOL) is the main cryptocurrency of the Solana blockchain, designed for fast, low-cost transactions.
Solmate Infrastructure uses its treasury of discounted $SOL to grow its operations and make strategic purchases. The company focuses on both real infrastructure and token holdings. Experts say this makes Solmate Infrastructure a unique player in the cryptocurrency market.
Why Did Solmate Infrastructure’s Stock Jump 50%?
The main reason for the stock surge was Solmate Infrastructure announcing a validator center in the Middle East. The company finished assembling its first validator hardware in a UAE data center. It also bought $SOL tokens at a 15% discount, including a $50 million purchase during the recent crypto downturn.
CEO Marco Santori said these purchases will help support validator operations and long-term growth. Investors reacted positively to the news, sending the company’s stock to an intraday high of $12.55.
This gives the company a market capitalization of around $754 million. Analysts say this shows strong confidence in Solmate Infrastructure’s expansion plans.
How is Solmate Infrastructure Expanding Its M&A Strategy?
Solmate Infrastructure has revealed a strong mergers and acquisitions plan focused on businesses in the Solana value chain. CEO Marco Santori said the goal is not to make quick profits. The company wants to buy businesses where its $SOL treasury can help them grow.
This strategy is aimed at building long-term value for both the company and the Solana ecosystem, and experts say it helps bring together important parts of that ecosystem.
It also increases confidence among investors, many of whom believe the approach will strengthen the company's long-term position in the market. The company is supporting its mergers and acquisitions plans with a $300 million PIPE financing round.
Ark Invest, the Solana Foundation, and UAE based Pulsar Group are providing this funding. The money will help the company expand and make key purchases. Experts see this as a sign of strong backing from important investors.
What Role Does Institutional Interest Play in Solmate Infrastructure’s Growth?
Institutional holdings in Solana are rising quickly, with 20 firms owning more than 20.3 million $SOL tokens worth about $3.86 billion. This shows growing trust in Solana based projects.
The company is benefiting from this trend, especially with Ark Invest holding an 11.5% stake. Experts say this support adds credibility and helps the company carry out its infrastructure and M&A plans.
How Does Solmate Infrastructure’s Validator Center Impact the Market?
The new validator center in the UAE is a significant milestone for the company. It allows the company to combine physical crypto infrastructure with its existing $SOL holdings, creating a stronger foundation for operations.
This validator network will not only improve the company’s ability to process transactions but also support the growth and stability of the wider Solana ecosystem.
Industry experts note that by pairing discounted SOL purchases with validator operations, the company can increase both financial returns and its influence within the Solana network over the long term.
Conclusion
Solmate Infrastructure has become a key player in the Solana ecosystem. It combines discounted $SOL purchases with a new validator hub in the Middle East. The company is focusing on buying key businesses to grow its Solana operations.
These actions are helping it build a strong foundation for the future. Investors are paying close attention to the company. The market’s response shows that many have confidence in the company’s strategy.
Analysts say its mix of building infrastructure and managing $SOL tokens is setting an example. Other institutional players may follow this model to enter the Solana ecosystem.
Glossary
Solmate Infrastructure: Company that builds Solana tools and holds $SOL.
SOL: Solana’s coin for fast and cheap payments.
Validator: Computer that checks blockchain transactions.
M&A: When a company buys or joins another to grow.
PIPE Financing: Private investment in public equity, where investors buy shares directly from a listed company to fund its growth.
Frequently Asked Questions About Solmate Infrastructure
Why did Solmate Infrastructure’s stock rise 50%?
The stock rose because the company started a Solana validator in the UAE and shared new M&A plans.
How does Solmate use its $SOL tokens?
Solmate uses $SOL tokens to run validators, grow its business, and buy Solana related companies.
What is Solmate’s M&A strategy?
Solmate buys Solana related companies to grow in the long term and make the Solana ecosystem stronger.
What is Solmate’s current market value?
After the stock rise, Solmate Infrastructure is worth about $754 million.
Will the stock rally continue?
Experts think the rally could continue if Solmate grows its validators and completes its M&A plans.
As 50,000 attendees descend on Salesforce's Dreamforce conference this week, the enterprise software giant is making its most aggressive bet yet on artificial intelligence agents, positioning itself as the antidote to what it calls an industry-wide "pilot purgatory" where 95% of enterprise AI projects never reach production.
The company on Monday launched Agentforce 360, a sweeping reimagination of its entire product portfolio designed to transform businesses into what it calls "agentic enterprises" — organizations where AI agents work alongside humans to handle up to 40% of work across sales, service, marketing, and operations.
"We are truly in the agentic AI era, and I think it's probably the biggest revolution, the biggest transition in technology I've ever experienced in my career," said Parker Harris, Salesforce's co-founder and chief technology officer, during a recent press briefing. "In the future, 40% of the work in the Fortune 1000 is probably going to be done by AI, and it's going to be humans and AI actually working together."
The announcement comes at a pivotal moment for Salesforce, which has deployed more than 12,000 AI agent implementations over the past year while building what Harris called a "$7 billion business" around its AI platform. Yet the launch also arrives amid unusual turbulence, as CEO Marc Benioff faces fierce backlash for recent comments supporting President Trump and suggesting National Guard troops should patrol San Francisco streets.
Why 95% of enterprise AI projects never launch
The stakes are enormous. While companies have rushed to experiment with AI following ChatGPT's emergence two years ago, most enterprise deployments have stalled before reaching production, according to recent MIT research that Salesforce executives cited extensively.
"Customers have invested a lot in AI, but they're not getting the value," said Srini Tallapragada, Salesforce's president and chief engineering and customer success officer. "95% of enterprise AI pilots fail before production. It's not because of lack of intent. People want to do this. Everybody understands the power of the technology. But why is it so hard?"
The answer, according to Tallapragada, is that AI tools remain disconnected from enterprise workflows, data, and governance systems. "You're writing prompts, prompts, you're getting frustrated because the context is not there," he said, describing what he called a "prompt doom loop."
Salesforce's solution is a deeply integrated platform connecting what it calls four ingredients: the Agentforce 360 agent platform, Data 360 for unified data access, Customer 360 apps containing business logic, and Slack as the "conversational interface" where humans and agents collaborate.
Slack becomes the front door to Salesforce
Perhaps the most significant strategic shift is the elevation of Slack — acquired by Salesforce in 2021 for $27.7 billion — as the primary interface for Salesforce itself. The company is effectively reimagining its traditional Lightning interface around Slack channels, where sales deals, service cases, and data insights will surface conversationally rather than through forms and dashboards.
"Imagine that you maybe don't log into Salesforce, you don't see Salesforce, but it's there. It's coming to you in Slack, because that's where you're getting your work done," Harris explained.
The strategy includes embedding Salesforce's Agentforce agents for sales, IT service, HR service, and analytics directly into Slack, alongside a completely rebuilt Slackbot that acts as a personal AI companion. The company is also launching "Channel Expert," an always-on agent that provides instant answers from channel conversations.
To enable third-party AI tools to access Slack's conversational data, Salesforce is releasing a Real-Time Search API and Model Context Protocol server. Partners including OpenAI, Anthropic, Google, Perplexity, Writer, Dropbox, Notion, and Cursor are building agents that will live natively in Slack.
"The best way to see the power of the platform is through the AI apps and agents already being built," Rob Seaman, a Salesforce executive, said during a technical briefing, citing examples of startups "achieving tens of thousands of customers that have it installed in 120 days or less."
Voice and IT service take aim at new markets
Beyond Slack integration, Salesforce announced major expansions into voice-based interactions and employee service. Agentforce Voice, now generally available, transforms traditional IVR systems into natural conversations that can update CRM records, trigger workflows, and seamlessly hand off to human agents.
The IT Service offering represents Salesforce's most direct challenge to ServiceNow, the market leader. Mudhu Sudhakar, who joined Salesforce two months ago as senior vice president for IT and HR Service, positioned the product as a fundamental reimagining of employee support.
"Legacy IT service management is very portals, forms, tickets focused, manual process," Sudhakar said. "What we had a few key tenets: conversation first and agent first, really focused on having a conversational experience for the people requesting the support and for the people providing the support."
The IT Service platform includes what Salesforce describes as 25+ specialized agents and 100+ pre-built workflows and connectors that can handle everything from password resets to complex incident management.
Early customers report dramatic efficiency gains
Customer results suggest the approach is gaining traction. Reddit reduced average support resolution time from 8.9 minutes to 1.4 minutes — an 84% improvement — while deflecting 46% of cases entirely to AI agents. "This efficiency has allowed us to provide on-demand help for complex tasks and boost advertiser satisfaction scores by 20%," said John Thompson, Reddit's VP of sales strategy and operations, in a statement.
Engine, a travel management company, reduced average handle time by 15%, saving over $2 million annually. OpenTable resolved 70% of restaurant and diner inquiries autonomously. And 1-800Accountant achieved a 90% case deflection rate during the critical tax week period.
Salesforce's own internal deployments may be most telling. Tallapragada's customer success organization now handles 1.8 million AI-powered conversations weekly, with metrics published at help.salesforce.com showing how many agents answer versus escalating to humans.
Even more significantly, Salesforce has deployed AI-powered sales development representatives to follow up on leads that would previously have gone uncontacted due to cost constraints. "Now, Agentforce has an SDR which is doing thousands of leads following up," Tallapragada explained. The company also increased proactive customer outreach by 40% by shifting staff from reactive support.
The trust layer problem enterprises can't ignore
Given enterprise concerns about AI reliability, Salesforce has invested heavily in what it calls the "trust layer" — audit trails, compliance checks, and observability tools that let organizations monitor agent behavior at scale.
"You should think of an agent as a human. Digital labor. You need to manage performance just like a human. And you need these audit trails," Tallapragada explained.
The company encountered this challenge firsthand when its own agent deployment scaled. "When we started at Agentforce at Salesforce, we would track every message, which is great until 1,000, 3,000," Tallapragada said. "Once you have a million chats, there's no human, we cannot do it."
The platform now includes "Agentforce Grid" for searching across millions of conversations to identify and fix problematic patterns. The company also introduced Agent Script, a new scripting language that allows developers to define precise guardrails and deterministic controls for agent behavior.
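Salesforce has not published Agent Script's syntax in this announcement, but the underlying idea of deterministic guardrails plus an audit trail can be sketched generically. The snippet below is a hypothetical illustration, not Salesforce code: a rule that does not depend on the model's judgment decides whether an agent's proposed action runs, and every decision is logged.

```python
from datetime import datetime, timezone

# Generic illustration (not Agent Script): a deterministic guardrail and an
# audit record wrapped around an action an agent proposes to take.

AUDIT_LOG = []

def guardrail(action: dict) -> bool:
    # Deterministic rules that do not depend on the model's judgment.
    if action["type"] == "refund" and action["amount"] > 500:
        return False  # beyond the agent's authority, so escalate to a human
    return True

def execute_with_audit(action: dict) -> str:
    allowed = guardrail(action)
    AUDIT_LOG.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "action": action,
        "allowed": allowed,
    })
    return "executed" if allowed else "escalated to human"

print(execute_with_audit({"type": "refund", "amount": 120}))   # executed
print(execute_with_audit({"type": "refund", "amount": 2500}))  # escalated to human
```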
Data infrastructure gets a major upgrade
Underlying the agent capabilities is significant infrastructure investment. Salesforce's Data 360 includes "Intelligent Context," which automatically extracts structured information from unstructured content like PDFs, diagrams, and flowcharts using what the company describes as "AI-powered unstructured data pipelines."
The company is also collaborating with Databricks, dbt Labs, and Snowflake on the "Universal Semantic Interchange," an attempt to standardize how different platforms define business metrics. The pending $8 billion acquisition of Informatica, expected to close soon, will expand metadata management capabilities across the enterprise.
The competitive landscape keeps intensifying
Salesforce's aggressive AI agent push comes as virtually every major enterprise software vendor pursues similar strategies. Microsoft has embedded Copilot across its product line, Google offers agent capabilities through Vertex AI and Gemini, and ServiceNow has launched its own agentic offerings.
When asked how Salesforce's announcement compared to OpenAI's recent releases, Tallapragada emphasized that customers will use multiple AI tools simultaneously. "Most of the time I'm seeing they're using OpenAI, they're using Gemini, they're using Anthropic, just like Salesforce, we use all three," he said.
The real differentiation, executives argued, lies not in the AI models but in the integration with business processes and data. Harris framed the competition in terms familiar from Salesforce's founding: "26 years ago, we just said, let's make Salesforce automation as easy as buying a book on Amazon.com. We're doing that same thing. We want to make agentic AI as easy as buying a book on Amazon."
The company's customer success stories are impressive, but they represent only a small slice of its customer base. With 150,000 Salesforce customers and one million Slack customers, the 12,000 Agentforce deployments represent roughly 8% penetration — strong for a one-year-old product line, but hardly ubiquitous.
The company's stock, down roughly 28% year to date with a Relative Strength rating of just 15, suggests investors remain skeptical. This week's Dreamforce demonstrations — and the months of customer deployments that follow — will begin to provide answers to whether Salesforce can finally move enterprise AI from pilots to production at scale, or whether the "$7 billion business" remains more aspiration than reality.
Data engineers should be working faster than ever. AI-powered tools promise to automate pipeline optimization, accelerate data integration and handle the repetitive grunt work that has defined the profession for decades.
Yet, according to a new survey of 400 senior technology executives by MIT Technology Review Insights in partnership with Snowflake, 77% say their data engineering teams' workloads are getting heavier, not lighter.
The culprit? The very AI tools meant to help are creating a new set of problems.
While 83% of organizations have already deployed AI-based data engineering tools, 45% cite integration complexity as a top challenge. Another 38% are struggling with tool sprawl and fragmentation.
"Many data engineers are using one tool to collect data, one tool to process data and another to run analytics on that data," Chris Child, VP of product for data engineering at Snowflake, told VentureBeat. "Using several tools along this data lifecycle introduces complexity, risk and increased infrastructure management, which data engineers can't afford to take on."
The result is a productivity paradox. AI tools are making individual tasks faster, but the proliferation of disconnected tools is making the overall system more complex to manage. For enterprises racing to deploy AI at scale, this fragmentation represents a critical bottleneck.
From SQL queries to LLM pipelines: The daily workflow shift
The survey found that data engineers spent an average of 19% of their time on AI projects two years ago. Today, that figure has jumped to 37%. Respondents expect it to hit 61% within two years.
But what does that shift actually look like in practice?
Child offered a concrete example. Previously, if the CFO of a company needed to make forecast predictions, they would tap the data engineering team to help build a system that correlates unstructured data like vendor contracts with structured data like revenue numbers into a static dashboard. Connecting these two worlds of different data types was extremely time-consuming and expensive, requiring lawyers to manually read through each document for key contract terms and upload that information into a database.
Today, that same workflow looks radically different.
"Data engineers can use a tool like Snowflake Openflow to seamlessly bring the unstructured PDF contracts living in a source like Box, together with the structured financial figures into a single platform like Snowflake, making the data accessible to LLMs," Child said. "What used to take hours of manual work is now near instantaneous."
The shift isn't just about speed. It's about the nature of the work itself.
Two years ago, a typical data engineer's day consisted of tuning clusters, writing SQL transformations and ensuring data readiness for human analysts. Today, that same engineer is more likely to be debugging LLM-powered transformation pipelines and setting up governance rules for AI model workflows.
"Data engineers' core skill isn't just coding," Child said. "It's orchestrating the data foundation and ensuring trust, context and governance so AI outputs are reliable."
The tool stack problem: When help becomes hindrance
Here's where enterprises are getting stuck.
The promise of AI-powered data tools is compelling: automate pipeline optimization, accelerate debugging, streamline integration. But in practice, many organizations are discovering that each new AI tool they add creates its own integration headaches.
The survey data bears this out. While AI has led to improvements in output quantity (74% report increases) and quality (77% report improvements), those gains are being offset by the operational overhead of managing disconnected tools.
"The other problem we're seeing is that AI tools often make it easy to build a prototype by stitching together several data sources with an out-of-the-box LLM," Child said. "But then when you want to take that into production, you realize that you don't have the data accessible and you don't know what governance you need, so it becomes difficult to roll the tool out to your users."
For technical decision-makers evaluating their data engineering stack right now, Child offered a clear framework.
"Teams should prioritize AI tools that accelerate productivity, while at the same time eliminate infrastructure and operational complexity," he said. "This allows engineers to move their focus away from managing the 'glue work' of data engineering and closer to business outcomes."
The agentic AI deployment window: 12 months to get it right
The survey revealed that 54% of organizations plan to deploy agentic AI, meaning autonomous agents that can make decisions and take actions without human intervention, within the next 12 months. Another 20% have already begun doing so.
For data engineering teams, agentic AI represents both an enormous opportunity and a significant risk. Done right, autonomous agents can handle repetitive tasks like detecting schema drift or debugging transformation errors. Done wrong, they can corrupt datasets or expose sensitive information.
"Data engineers must prioritize pipeline optimization and monitoring in order to truly deploy agentic AI at scale," Child said. "It's a low-risk, high-return starting point that allows agentic AI to safely automate repetitive tasks like detecting schema drift or debugging transformation errors when done correctly."
But Child was emphatic about the guardrails that must be in place first.
"Before organizations let agents near production data, two safeguards must be in place: strong governance and lineage tracking, and active human oversight," he said. "Agents must inherit fine-grained permissions and operate within an established governance framework."
The risks of skipping those steps are real. "Without proper lineage or access governance, an agent could unintentionally corrupt datasets or expose sensitive information," Child warned.
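Those two safeguards are easy to picture in code. The sketch below is a hypothetical, stripped-down illustration: an agent's write is checked against fine-grained permissions and recorded for lineage before anything touches a production table.

```python
# Hypothetical sketch of the safeguards described above: permission checks and
# lineage records applied before an agent writes to any table.

PERMISSIONS = {"pipeline_agent": {"staging.orders"}}  # agent -> tables it may write
LINEAGE = []

def agent_write(agent: str, table: str, rows: list) -> bool:
    if table not in PERMISSIONS.get(agent, set()):
        raise PermissionError(f"{agent} is not allowed to write to {table}")
    LINEAGE.append({"agent": agent, "table": table, "row_count": len(rows)})
    # ... the actual write would happen here ...
    return True

agent_write("pipeline_agent", "staging.orders", [{"id": 1}])
try:
    agent_write("pipeline_agent", "prod.customers", [{"id": 2}])
except PermissionError as err:
    print("blocked:", err)
print("lineage:", LINEAGE)
```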
The perception gap that's costing enterprises AI success
Perhaps the most striking finding in the survey is a disconnect at the C-suite level.
While 80% of chief data officers and 82% of chief AI officers consider data engineers integral to business success, only 55% of CIOs share that view.
"This shows that the data-forward leaders are seeing data engineering's strategic value, but we need to do more work to help the rest of the C-suite recognize that investing in a unified, scalable data foundation and the people helping drive this is an investment in AI success, not just IT operations," Child said.
That perception gap has real consequences.
Data engineers in the surveyed organizations are already influential in decisions about AI use-case feasibility (53% of respondents) and business units' use of AI models (56%). But if CIOs don't recognize data engineers as strategic partners, they're unlikely to give those teams the resources, authority or seat at the table they need to prevent the kinds of tool sprawl and integration problems the survey identified.
The gap appears to correlate with visibility. Chief data officers and chief AI officers work directly with data engineering teams daily and understand the complexity of what they're managing. CIOs, focused more broadly on infrastructure and operations, may not see the strategic architecture work that data engineers are increasingly doing.
This disconnect also shows up in how different executives rate the challenges facing data engineering teams. Chief AI officers are significantly more likely than CIOs to agree that data engineers' workloads are becoming increasingly heavy (93% vs. 75%). They're also more likely to recognize data engineers' influence on overall AI strategy.
What data engineers need to learn now
The survey identified three critical skills data engineers need to develop: AI expertise, business acumen and communication abilities.
For an enterprise with a 20-person data engineering team, that presents a practical challenge. Do you hire for these skills, train existing engineers or restructure the team? Child's answer suggested the priority should be business understanding.
"The most important skill right now is for data engineers to understand what is critical to their end business users and prioritize how they can make those questions easier and faster to answer," he said.
The lesson for enterprises: Business context matters more than adding technical certifications. Child stressed that when data engineers understand why they're performing certain tasks and what the business impact is, they can better anticipate customer needs and deliver value to the business more quickly.
"The organizations with data engineering teams that prioritize this business understanding will set themselves apart from competition," Child said.
For enterprises looking to lead in AI, the solution to the data engineering productivity crisis isn't more AI tools. The organizations that will move fastest are consolidating their tool stacks now, deploying governance infrastructure before agents go into production and elevating data engineers from support staff to strategic architects.
The window is narrow. With 54% planning agentic AI deployment within 12 months and data engineers expected to spend 61% of their time on AI projects within two years, teams that haven't addressed tool sprawl and governance gaps will find their AI initiatives stuck in permanent pilot mode.
China is on track to dominate consumer artificial intelligence applications and robotics manufacturing within years, but the United States will maintain its substantial lead in enterprise AI adoption and cutting-edge research, according to Kai-Fu Lee, one of the world's most prominent AI scientists and investors.
In a rare, unvarnished assessment delivered via video link from Beijing to the TED AI conference in San Francisco Tuesday, Lee — a former executive at Apple, Microsoft, and Google who now runs both a major venture capital firm and his own AI company — laid out a technology landscape splitting along geographic and economic lines, with profound implications for both commercial competition and national security.
"China's robotics has the advantage of having integrated AI into much lower costs, better supply chain and fast turnaround, so companies like Unitree are actually the farthest ahead in the world in terms of building affordable, embodied humanoid AI," Lee said, referring to a Chinese robotics manufacturer that has undercut Western competitors on price while advancing capabilities.
The comments, made to a room filled with Silicon Valley executives, investors, and researchers, represented one of the most detailed public assessments from Lee about the comparative strengths and weaknesses of the world's two AI superpowers — and suggested that the race for artificial intelligence leadership is becoming less a single contest than a series of parallel competitions with different winners.
Why venture capital is flowing in opposite directions in the U.S. and China
At the heart of Lee's analysis lies a fundamental difference in how capital flows in the two countries' innovation ecosystems. American venture capitalists, Lee said, are pouring money into generative AI companies building large language models and enterprise software, while Chinese investors are betting heavily on robotics and hardware.
"The VCs in the US don't fund robotics the way the VCs do in China," Lee said. "Just like the VCs in China don't fund generative AI the way the VCs do in the US."
This investment divergence reflects different economic incentives and market structures. In the United States, where companies have grown accustomed to paying for software subscriptions and where labor costs are high, enterprise AI tools that boost white-collar productivity command premium prices. In China, where software subscription models have historically struggled to gain traction but manufacturing dominates the economy, robotics offers a clearer path to commercialization.
The result, Lee suggested, is that each country is pulling ahead in different domains — and may continue to do so.
"China's got some challenges to overcome in getting a company funded as well as OpenAI or Anthropic," Lee acknowledged, referring to the leading American AI labs. "But I think U.S., on the flip side, will have trouble developing the investment interest and value creation in the robotics" sector.
Why American companies dominate enterprise AI while Chinese firms struggle with subscriptions
Lee was explicit about one area where the United States maintains what appears to be a durable advantage: getting businesses to actually adopt and pay for AI software.
"The enterprise adoption will clearly be led by the United States," Lee said. "The Chinese companies have not yet developed a habit of paying for software on a subscription."
This seemingly mundane difference in business culture — whether companies will pay monthly fees for software — has become a critical factor in the AI race. The explosion of spending on tools like GitHub Copilot, ChatGPT Enterprise, and other AI-powered productivity software has fueled American companies' ability to invest billions in further research and development.
Lee noted that China has historically overcome similar challenges in consumer technology by developing alternative business models. "In the early days of internet software, China was also well behind because people weren't willing to pay for software," he said. "But then advertising models, e-commerce models really propelled China forward."
Still, he suggested, someone will need to "find a new business model that isn't just pay per software per use or per month basis. That's going to not happen in China anytime soon."
The implication: American companies building enterprise AI tools have a window — perhaps a substantial one — where they can generate revenue and reinvest in R&D without facing serious Chinese competition in their core market.
How ByteDance, Alibaba and Tencent will outpace Meta and Google in consumer AI
Where Lee sees China pulling ahead decisively is in consumer-facing AI applications — the kind embedded in social media, e-commerce, and entertainment platforms that billions of people use daily.
"In terms of consumer usage, that's likely to happen," Lee said, referring to China matching or surpassing the United States in AI deployment. "The Chinese giants, like ByteDance and Alibaba and Tencent, will definitely move a lot faster than their equivalent in the United States, companies like Meta, YouTube and so on."
Lee pointed to a cultural advantage: Chinese technology companies have spent the past decade obsessively optimizing for user engagement and product-market fit in brutally competitive markets. "The Chinese giants really work tenaciously, and they have mastered the art of figuring out product market fit," he said. "Now they have to add technology to it. So that is inevitably going to happen."
This assessment aligns with recent industry observations. ByteDance's TikTok became the world's most downloaded app through sophisticated AI-driven content recommendation, and Chinese companies have pioneered AI-powered features in areas like live-streaming commerce and short-form video that Western companies later copied.
Lee also noted that China has already deployed AI more widely in certain domains. "There are a lot of areas where China has also done a great job, such as using computer vision, speech recognition, and translation more widely," he said.
The surprising open-source shift that has Chinese models beating Meta's Llama
Perhaps Lee's most striking data point concerned open-source AI development — an area where China appears to have seized leadership from American companies in a remarkably short time.
"The 10 highest rated open source [models] are from China," Lee said. "These companies have now eclipsed Meta's Llama, which used to be number one."
This represents a significant shift. Meta's Llama models were widely viewed as the gold standard for open-source large language models as recently as early 2024. But Chinese companies — including Lee's own firm, 01.AI, along with Alibaba, Baidu, and others — have released a flood of open-source models that, according to various benchmarks, now outperform their American counterparts.
The open-source question has become a flashpoint in AI development. Lee made an extensive case for why open-source models will prove essential to the technology's future, even as closed models from companies like OpenAI command higher prices and, often, superior performance.
"I think open source has a number of major advantages," Lee argued. With open-source models, "you can examine it, tune it, improve it. It's yours, and it's free, and it's important for building if you want to build an application or tune the model to do something specific."
He drew an analogy to operating systems: "People who work in operating systems loved Linux, and that's why its adoption went through the roof. And I think in the future, open source will also allow people to tune a sovereign model for a country, make it work better for a particular language."
Still, Lee predicted both approaches will coexist. "I don't think open source models will win," he said. "I think just like we have Apple, which is closed, but provides a somewhat better experience than Android... I think we're going to see more apps using open-source models, more engineers wanting to build open-source models, but I think more money will remain in the closed model."
Why China's manufacturing advantage makes the robotics race 'not over, but' nearly decided
On robotics, Lee's message was blunt: the combination of China's manufacturing prowess, lower costs, and aggressive investment has created an advantage that will be difficult for American companies to overcome.
When asked directly whether the robotics race was already over with China victorious, Lee hedged only slightly. "It's not over, but I think the U.S. is still capable of coming up with the best robotic research ideas," he said. "But the VCs in the U.S. don't fund robotics the way the VCs do in China."
The challenge is structural. Building robots requires not just software and AI, but hardware manufacturing at scale — precisely the kind of integrated supply chain and low-cost production that China has spent decades perfecting. While American labs at universities and companies like Boston Dynamics continue to produce impressive research prototypes, turning those prototypes into affordable commercial products requires the manufacturing ecosystem that China possesses.
Companies like Unitree have demonstrated this advantage concretely. The company's humanoid robots and quadrupedal robots cost a fraction of their American-made equivalents while offering comparable or superior capabilities — a price-to-performance ratio that could prove decisive in commercial markets.
What worries Lee most: not AGI, but the race itself
Despite his generally measured tone about China's AI development, Lee expressed concern about one area where he believes the global AI community faces real danger — not the far-future risk of superintelligent AI, but the near-term consequences of moving too fast.
When asked about AGI risks, Lee reframed the question. "I'm less afraid of AI becoming self-aware and causing danger for humans in the short term," he said, "but more worried about it being used by bad people to do terrible things, or by the AI race pushing people to work so hard, so fast and furious and move fast and break things that they build products that have problems and holes to be exploited."
He continued: "I'm very worried about that. In fact, I think some terrible event will happen that will be a wake up call from this sort of problem."
Lee's perspective carries unusual weight because of his unique vantage point across both Chinese and American AI development. Over a career spanning more than three decades, he has held senior positions at Apple, Microsoft, and Google, while also founding Sinovation Ventures, which has invested in more than 400 companies across both countries. His AI company, 01.AI, founded in 2023, has released several open-source models that rank among the most capable in the world.
For American companies and policymakers, Lee's analysis presents a complex strategic picture. The United States appears to have clear advantages in enterprise AI software, fundamental research, and computing infrastructure. But China is moving faster in consumer applications, manufacturing robotics at lower costs, and potentially pulling ahead in open-source model development.
The bifurcation suggests that rather than a single "winner" in AI, the world may be heading toward a technology landscape where different countries excel in different domains — with all the economic and geopolitical complications that implies.
As the TED AI conference continued Wednesday, Lee's assessment hung over subsequent discussions. His message seemed clear: the AI race is not one contest, but many — and the United States and China are each winning different races.
Standing in the conference hall afterward, one venture capitalist, who asked not to be named, summed up the mood in the room: "We're not competing with China anymore. We're competing on parallel tracks." Whether those tracks eventually converge — or diverge into entirely separate technology ecosystems — may be the defining question of the next decade.
Today with the City of The Dalles, Oregon, we’re celebrating the completion of a new aquifer storage and recovery (ASR) system — a new water infrastructure project set t…
AI models are only as good as the data they're trained on. That data generally needs to be labeled, curated and organized before models can learn from it in an effective way.
One of the big missing links in the AI ecosystem has been the availability of a large, high-quality open-source multimodal dataset. That changes today with the debut of the EMM-1 dataset, which comprises 1 billion data pairs and 100 million data groups across five modalities: text, image, video, audio and 3D point clouds. Multimodal datasets combine different types of data that AI systems can process together. This mirrors how humans perceive the world using multiple senses simultaneously. These datasets enable AI systems to make richer inferences by understanding relationships across data types, rather than processing each modality in isolation.
EMM-1 is developed by data labeling platform vendor Encord. The company's platform enables teams to curate, label and manage training data at scale using both automated and human-in-the-loop workflows. Alongside the new dataset, Encord developed the EBind training methodology, which prioritizes data quality over raw computational scale. The approach enabled a compact 1.8 billion parameter model to match the performance of models up to 17 times larger while slashing training time from days to hours on a single GPU rather than GPU clusters.
"The big trick for us was to really focus on the data and to make the data very, very high quality," Encord Co-Founder and CEO Eric Landau told VentureBeat in an exclusive interview. "We were able to get to the same level of performance as models 20 times larger, not because we were super clever on the architecture, but because we trained it with really good data overall."
The data quality advantage
Encord's dataset is 100 times larger than the next comparable multimodal dataset, according to Landau. It operates at petabyte scale with terabytes of raw data and over 1 million human annotations.
But scale alone doesn't explain the performance gains. The technical innovation centers on addressing what Landau calls an "under-appreciated" problem in AI training: data leakage between training and evaluation sets.
"The leakage problem was one which we spent a lot of time on," Landau explained. "In a lot of data sets, there is a kind of leakage between different subsets of the data. Leakage actually boosts your results. It makes your evaluations look better. But it's one thing that we were quite diligent about."
Data leakage occurs when information from test data inadvertently appears in training data, artificially inflating model performance metrics. Many benchmark datasets suffer from this contamination. Encord deployed hierarchical clustering techniques to ensure clean separation while maintaining representative distribution across data types. The company also used clustering to address bias and ensure diverse representation.
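Encord hasn't published its exact pipeline in this article, but the general technique is straightforward to illustrate: cluster near-duplicate or closely related samples first, then assign whole clusters, rather than individual items, to the train or evaluation side so related data never straddles the split. Below is a minimal Python sketch; the embeddings, distance threshold and split ratio are all illustrative assumptions, not Encord's actual values.

```python
# Minimal sketch of cluster-aware train/test splitting to avoid leakage.
# Assumes items are already embedded as vectors; the 0.1 cosine-distance
# cutoff and the 10% test fraction are illustrative, not Encord's settings.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def leakage_safe_split(embeddings: np.ndarray, test_fraction: float = 0.1, seed: int = 0):
    """Group near-duplicates via hierarchical clustering, then assign whole
    clusters to train or test so related items never straddle the split."""
    # Agglomerative clustering on cosine distance; 'average' linkage is one common choice.
    tree = linkage(embeddings, method="average", metric="cosine")
    # Cut the tree so items within ~0.1 cosine distance end up in the same cluster.
    cluster_ids = fcluster(tree, t=0.1, criterion="distance")

    # Shuffle clusters, then reserve a fraction of whole clusters for evaluation.
    rng = np.random.default_rng(seed)
    unique_clusters = rng.permutation(np.unique(cluster_ids))
    n_test_clusters = max(1, int(len(unique_clusters) * test_fraction))
    test_clusters = set(unique_clusters[:n_test_clusters])

    test_mask = np.isin(cluster_ids, list(test_clusters))
    return np.where(~test_mask)[0], np.where(test_mask)[0]  # train indices, test indices
```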
How EBind boosts efficiency
The data quality improvements work in tandem with an architectural approach designed for efficiency.
Encord's EBind extends the CLIP (Contrastive Language-Image Pre-training) approach (originally developed by OpenAI) from two modalities to five. CLIP learns to associate images and text in a shared representation space, enabling tasks like searching for images using text descriptions.
Where CLIP learns to associate images and text in a shared latent space, EBind does the same across images, text, audio, 3D point clouds and video.
The architectural choice prioritizes parameter efficiency. Rather than deploying separate specialized models for each modality pair, EBind uses a single base model with one encoder per modality.
"Other methodologies, what they do is they use a bunch of different models, and they route to the best model for embedding these pairs, so they tend to explode in the number of parameters," Landau said. "We found we could use a single base model and just train one encoder per modality, so keeping it very simple and very parameter efficient, if we fed that overall architecture really, really good data."
The resulting model rivals OmniBind, a much larger competing model in the multimodal space, but requires dramatically fewer computational resources for both training and inference. This makes EBind deployable in resource-constrained environments, including edge devices for robotics and autonomous systems.
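The article doesn't include EBind's internals, but the design Landau describes, one encoder per modality projecting into a shared embedding space and trained with a CLIP-style contrastive objective, can be sketched in a few dozen lines of PyTorch. Every dimension, encoder choice and loss pairing below is an illustrative placeholder, not EBind's actual configuration.

```python
# Illustrative sketch of a CLIP-style model extended to several modalities:
# one encoder per modality, a shared embedding space, and a symmetric
# contrastive (InfoNCE) loss on paired samples. All sizes are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ModalityEncoder(nn.Module):
    """Stand-in encoder: in practice this would wrap a pretrained backbone
    (text, image, audio, video or point-cloud model) plus a projection head."""
    def __init__(self, input_dim: int, embed_dim: int = 512):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(input_dim, 1024), nn.GELU(),
                                 nn.Linear(1024, embed_dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return F.normalize(self.net(x), dim=-1)  # unit-norm embeddings

class MultiModalContrastiveModel(nn.Module):
    def __init__(self, input_dims: dict[str, int], embed_dim: int = 512):
        super().__init__()
        # One lightweight encoder per modality, all projecting into one space.
        self.encoders = nn.ModuleDict({m: ModalityEncoder(d, embed_dim)
                                       for m, d in input_dims.items()})
        self.logit_scale = nn.Parameter(torch.tensor(2.659))  # learnable temperature

    def contrastive_loss(self, batch: dict[str, torch.Tensor],
                         pair: tuple[str, str]) -> torch.Tensor:
        """Symmetric InfoNCE over one modality pair; a training step would
        cycle over whichever pairs exist in each data group."""
        a, b = pair
        za, zb = self.encoders[a](batch[a]), self.encoders[b](batch[b])
        logits = self.logit_scale.exp() * za @ zb.t()
        targets = torch.arange(za.size(0), device=za.device)
        return (F.cross_entropy(logits, targets) +
                F.cross_entropy(logits.t(), targets)) / 2

# Example with assumed feature sizes for the five modalities EMM-1 covers.
model = MultiModalContrastiveModel({"text": 768, "image": 1024, "audio": 512,
                                    "video": 1024, "pointcloud": 256})
```

Keeping a single shared space is what makes one encoder per modality sufficient: every pairing reuses the same projection target, so parameters don't multiply with each new modality combination.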
The enterprise value of a multimodal dataset
Multimodal models enable enterprise use cases that span different data types.
Most organizations store different data types in separate systems: documents in content management platforms, audio recordings in communication tools, training videos in learning management systems and structured data in databases. Multimodal models can search and retrieve across all of these simultaneously.
"Enterprises have all different types of data. They don't just have documents. They have audio recordings, and they have training videos, and they have CSV files," Landau said. "Let's say you're a lawyer and you have a case file that has video evidence and also documents and recordings, and it's all scattered across a lot of silos of data. You can use EBind to pick all of the relevant data and bundle together to search and surface the right data much quicker than you would have before."
The same principle applies across verticals. Healthcare providers can link patient imaging data to clinical notes and diagnostic audio. Financial services firms can connect transaction records to compliance call recordings and customer communications. Manufacturing operations can tie equipment sensor data to maintenance video logs and inspection reports.
Beyond office environments, physical AI represents another frontier. Landau highlighted autonomous vehicles that benefit from both visual perception and audio cues like emergency sirens. In manufacturing and warehousing, robots that combine visual recognition with audio feedback and spatial awareness can operate more safely and effectively than vision-only systems.
Enterprise use case: Extending computer vision with multimodal context
Captur AI, an Encord customer, illustrates how companies are planning to use the dataset for specific business applications. The startup provides on-device image verification for mobile apps, validating photos in real-time for authenticity, compliance and quality before upload. The company works with shared mobility providers like Lime and delivery companies capturing billions of package photos.
Captur AI processes over 100 million images on-device and specializes in distilling models to 6-10 megabytes so they can run on smartphones without cloud connectivity. But CEO Charlotte Bax sees multimodal capabilities as critical for expanding into higher-value use cases.
"The market for us is massive. You submit photos for returns and retails. You submit photos to insurance companies for claims. You submit photos when you're listing something on eBay," Bax told VentureBeat in an exclusive interview. "Some of those use cases are very high risk or high value if something goes wrong, like insurance, the image only captures part of the context and audio can be an important signal."
Bax cited digital vehicle inspections as a prime example. When customers photograph vehicle damage for insurance claims, they often describe what happened verbally while capturing images. Audio context can significantly improve claim accuracy and reduce fraud.
"As you're doing that, oftentimes the customer is actually describing what's happened," Bax said. "A few of our potential prospects in InsurTech have asked us if we can actually do audio as well, because then that adds this additional bit of context for the user who's submitting the claim."
The challenge lies in maintaining Captur AI's core advantage: running models efficiently on-device rather than requiring cloud processing. The company plans to use Encord's dataset to train compact multimodal models that preserve real-time, offline capabilities while adding audio and sequential image context.
"The most important thing you can do is try and get as much context as possible," Bax said. "Can you get LLMs to be small enough to run on a device within the next three years, or can you run multimodal models on the device? Solving data quality before image upload is the interesting frontier."
What this means for enterprises
Encord's results challenge fundamental assumptions about AI development and suggest that the next competitive battleground may be data operations rather than infrastructure scale.
Multimodal datasets unlock new capabilities. The ability to train models that understand relationships across data types opens use cases that single-modality systems cannot address.
Data operations deserve equal investment with compute infrastructure. The 17x parameter efficiency gain from better data curation translates directly into lower training and inference costs. Organizations pouring resources into GPU clusters while treating data quality as an afterthought may be optimizing the wrong variable.
For enterprises building multimodal AI systems, Landau's assessment captures the strategic shift.
"We were able to get to the same level of performance as models much larger, not because we were super clever on the architecture, but because we trained it with really good data overall," he said.
Visa is introducing a new security framework designed to solve one of the thorniest problems emerging in artificial intelligence-powered commerce: how retailers can tell the difference between legitimate AI shopping assistants and the malicious bots that plague their websites.
The payments giant unveiled its Trusted Agent Protocol on Tuesday, establishing what it describes as foundational infrastructure for "agentic commerce" — a term for the rapidly growing practice of consumers delegating shopping tasks to AI agents that can search products, compare prices, and complete purchases autonomously.
The protocol enables merchants to cryptographically verify that an AI agent browsing their site is authorized and trustworthy, rather than a bot designed to scrape pricing data, test stolen credit cards, or carry out other fraudulent activities.
The launch comes as AI-driven traffic to U.S. retail websites has exploded by more than 4,700% over the past year, according to data from Adobe cited by Visa. That dramatic surge has created an acute challenge for merchants whose existing bot detection systems — designed to block automated traffic — now risk accidentally blocking legitimate AI shoppers along with bad actors.
"Merchants need additional tools that provide them with greater insight and transparency into agentic commerce activities to ensure they can participate safely," said Rubail Birwadker, Visa's Global Head of Growth, in an exclusive interview with VentureBeat. "Without common standards, potential risks include ecosystem fragmentation and the proliferation of closed loop models."
The stakes are substantial. While 85% of shoppers who have used AI to shop report improved experiences, merchants face the prospect of either turning away legitimate AI-powered customers or exposing themselves to sophisticated bot attacks. Visa's own data shows the company prevented $40 billion in fraudulent activity between October 2022 and September 2023, nearly double the previous year, much of it involving AI-powered enumeration attacks where bots systematically test combinations of card numbers until finding valid credentials.
Inside the cryptographic handshake: How Visa verifies AI shopping agents
Visa's Trusted Agent Protocol operates through what Birwadker describes as a "cryptographic trust handshake" between merchants and approved AI agents. The system works in three steps:
First, AI agents must be approved and onboarded through Visa's Intelligent Commerce program, where they undergo vetting to meet trust and reliability standards. Each approved agent receives a unique digital signature key — essentially a cryptographic credential that proves its identity.
When an approved agent visits a merchant's website, it creates a digital signature using its key and transmits three categories of information: Agent Intent (indicating the agent is trusted and intends to retrieve product details or make a purchase), Consumer Recognition (data showing whether the underlying consumer has an existing account with the merchant), and Payment Information (optional payment data to support checkout).
Merchants or their infrastructure providers, such as content delivery networks, then validate these digital signatures against Visa's registry of approved agents. "Upon proper validation of these fields, the merchant can confirm the signature is a trusted agent," Birwadker explained.
Crucially, Visa designed the protocol to require minimal changes to existing merchant infrastructure. Built on the HTTP Message Signature standard and aligned with Web Bot Auth, the protocol works with existing web infrastructure without requiring merchants to overhaul their checkout pages. "This is no-code functionality," Birwadker emphasized, though merchants may need to integrate with Visa's Developer Center to access the verification system.
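Visa hasn't published the full wire format here, but RFC 9421-style HTTP Message Signatures give a sense of what the "cryptographic trust handshake" involves: the agent signs selected request components with its private key, and the merchant validates the signature against a registry of approved agent keys. In the Python sketch below, the header names, key ID and registry are illustrative assumptions, not Visa's specification.

```python
# Hedged sketch of an RFC 9421-style HTTP Message Signature exchange of the
# kind the protocol is built on. Header names, key ID and the registry lookup
# are illustrative assumptions, not Visa's published spec.
import base64
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

def signature_base(method: str, path: str, headers: dict[str, str],
                   covered: list[str], params: str) -> bytes:
    """Build a simplified RFC 9421-style signature base over selected components."""
    lines = [f'"@method": {method}', f'"@path": {path}']
    lines += [f'"{h.lower()}": {headers[h]}' for h in covered]
    lines.append(f'"@signature-params": {params}')
    return "\n".join(lines).encode()

# --- Agent side: sign a request announcing intent and (hypothetical) consumer context.
agent_key = ed25519.Ed25519PrivateKey.generate()
headers = {"Agent-Intent": "retrieve-product-details",    # assumed header name
           "Consumer-Recognition": "returning-customer"}  # assumed header name
params = '("@method" "@path" "agent-intent" "consumer-recognition");keyid="agent-123"'
base = signature_base("GET", "/products/42", headers,
                      ["Agent-Intent", "Consumer-Recognition"], params)
signature = base64.b64encode(agent_key.sign(base)).decode()

# --- Merchant side: look up the agent's public key in a registry of approved
# agents (a local dict here stands in for Visa's registry) and verify.
approved_agents = {"agent-123": agent_key.public_key()}
try:
    approved_agents["agent-123"].verify(base64.b64decode(signature), base)
    print("Signature valid: request came from an approved agent.")
except (KeyError, InvalidSignature):
    print("Rejected: unknown agent or tampered request.")
```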
The race for AI commerce standards: Visa faces competition from Google, OpenAI, and Stripe
Visa developed the protocol in collaboration with Cloudflare, the web infrastructure and security company that already provides bot management services to millions of websites. The partnership reflects Visa's recognition that solving bot verification requires cooperation across the entire web stack, not just the payments layer.
"Trusted Agent Protocol supplements traditional bot management by providing merchants insights that enable agentic commerce," Birwadker said. "Agents are providing additional context they otherwise would not, including what it intends to do, who the underlying consumer is, and payment information."
The protocol arrives as multiple technology giants race to establish competing standards for AI commerce. Google recently introduced its Agent Protocol for Payments (AP2), while OpenAI and Stripe have discussed their own approaches to enabling AI agents to make purchases. Microsoft, Shopify, Adyen, Ant International, Checkout.com, Cybersource, Elavon, Fiserv, Nuvei, and Worldpay provided feedback during Trusted Agent Protocol's development, according to Visa.
When asked how Visa's protocol relates to these competing efforts, Birwadker struck a collaborative tone. "Both Google's AP2 and Visa's Trusted Agent Protocol are working toward the same goal of building trust in agent-initiated payments," he said. "We are engaged with Google, OpenAI, and Stripe and are looking to create compatibility across the ecosystem."
Visa says it is working with global standards bodies including the Internet Engineering Task Force (IETF), OpenID Foundation, and EMVCo to ensure the protocol can eventually become interoperable with other emerging standards. "While these specifications apply to the Visa network in this initial phase, enabling agents to safely and securely act on a consumer's behalf requires an open, ecosystem-wide approach," Birwadker noted.
Who pays when AI agents go rogue? Unanswered questions about liability and authorization
The protocol raises important questions about authorization and liability when AI agents make purchases on behalf of consumers. If an agent completes an unauthorized transaction — perhaps misunderstanding a user's intent or exceeding its delegated authority — who bears responsibility?
Birwadker emphasized that the protocol helps merchants "leverage this information to enable experiences tied to existing consumer relationships and more secure checkout," but he did not provide specific details about how disputes would be handled when agents make unauthorized purchases. Visa's existing fraud protection and chargeback systems would presumably apply, though the company has not yet published detailed guidance on agent-initiated transaction disputes.
The protocol also places Visa in the position of gatekeeper for the emerging agentic commerce ecosystem. Because Visa determines which AI agents get approved for the Intelligent Commerce program and receive cryptographic credentials, the company effectively controls which agents merchants can easily trust. "Agents are approved and onboarded through the Visa Intelligent Commerce program, ensuring they meet our standards for trust and reliability," Birwadker said, though he did not detail the specific criteria agents must meet or whether Visa charges fees for approval.
This gatekeeping role could prove contentious, particularly if Visa's approval process favors large technology companies over startups, or if the company faces pressure to block agents from competitors or politically controversial entities. Visa declined to provide details about how many agents it has approved so far or how long the vetting process typically takes.
Visa's legal battles and the long road to merchant adoption
The protocol launch comes at a complex moment for Visa, which continues to navigate significant legal and regulatory challenges even as its core business remains robust. The company's latest earnings report for the third quarter of fiscal year 2025 showed a 10% increase in net revenues to $9.2 billion, driven by resilient consumer spending and strong growth in cross-border transaction volume. For the full fiscal year ending September 30, 2024, Visa processed 289 billion transactions, with a total payments volume of $15.2 trillion.
However, the company's legal headwinds have intensified. In July 2025, a federal judge rejected a landmark $30 billion settlement that Visa and Mastercard had reached with merchants over long-disputed credit card swipe fees, sending the parties back to the negotiating table and extending the long-running legal battle.
Simultaneously, Visa remains under investigation by the Department of Justice over its rules for routing debit card transactions, with regulators scrutinizing whether the company's practices unlawfully limit merchant choice and stifle competition. These domestic challenges are mirrored abroad, where European regulators have continued their own antitrust investigations into the fee structures of both Visa and its primary competitor, Mastercard.
Against this backdrop of regulatory pressure, Birwadker acknowledged that adoption of the Trusted Agent Protocol will take time. "As agentic commerce continues to rise, we recognize that consumer trust is still in its early stages," he said. "That's why our focus through 2025 is on building foundational credibility and demonstrating real-world value."
The protocol is available immediately in Visa's Developer Center and on GitHub, with agent onboarding already active and merchant integration resources available. But Birwadker declined to provide specific targets for how many merchants might adopt the protocol by the end of 2026. "Adoption is aligned with the momentum we're already seeing," he said. "The launch of our protocol marks another big step — it's not just a technical milestone, but a signal that the industry is beginning to unify."
Industry analysts say merchant adoption will likely depend on how quickly agentic commerce grows as a percentage of overall e-commerce. While AI-driven traffic has surged dramatically, much of that consists of agents browsing and researching rather than completing purchases. If AI agents begin accounting for a significant share of completed transactions, merchants will face stronger incentives to adopt verification systems like Visa's protocol.
From fraud detection to AI gatekeeping: Visa's $10 billion bet on artificial intelligence
Visa's move reflects broader strategic bets on AI across the financial services industry. The company has invested $10 billion in technology over the past five years to reduce fraud and increase network security, with AI and machine learning central to those efforts. Visa's fraud detection system analyzes over 500 different attributes for each transaction, using AI models to assign real-time risk scores to the 300 billion annual transactions flowing through its network.
"Every single one of those transactions has been processed by AI," James Mirfin, Visa's global head of risk and identity solutions, said in a July 2024 CNBC interview discussing the company's fraud prevention efforts. "If you see a new type of fraud happening, our model will see that, it will catch it, it will score those transactions as high risk and then our customers can decide not to approve those transactions."
The company has also moved aggressively into new payment territories beyond its core card business. In January 2025, Visa partnered with Elon Musk's X (formerly Twitter) to provide the infrastructure for a digital wallet and peer-to-peer payment service called the X Money Account, competing with services like Venmo and Zelle. That deal marked Visa's first major partnership in the social media payments space and reflected the company's recognition that payment flows are increasingly happening outside traditional e-commerce channels.
The agentic commerce protocol represents an extension of this strategy — an attempt to ensure Visa remains central to payment flows even as the mechanics of shopping shift from direct human interaction to AI intermediation. Jack Forestell, Visa's Chief Product & Strategy Officer, framed the protocol in expansive terms: "We believe the entire payments ecosystem has a responsibility to ensure sellers trust AI agents with the same confidence they place in their most valued customers and networks."
The coming battle for control of AI shopping
The real test for Visa's protocol won't be technical — it will be political. As AI agents become a larger force in retail, whoever controls the verification infrastructure controls access to hundreds of billions of dollars in commerce. Visa's position as gatekeeper gives it enormous leverage, but also makes it a target.
Merchants chafing under Visa's existing fee structure and facing multiple antitrust investigations may resist ceding even more power to the payments giant. Competitors like Google and OpenAI, each with their own ambitions in commerce, have little incentive to let Visa dictate standards. Regulators already scrutinizing Visa's market dominance will surely examine whether its agent approval process unfairly advantages certain players.
And there's a deeper question lurking beneath the technical specifications and corporate partnerships: In an economy increasingly mediated by AI, who decides which algorithms get to spend our money? Visa is making an aggressive bid to be that arbiter, wrapping its answer in the language of security and interoperability. Whether merchants, consumers, and regulators accept that proposition will determine not just the fate of the Trusted Agent Protocol, but the structure of AI-powered commerce itself.
For now, Visa is moving forward with the confidence of a company that has weathered disruption before. But in the emerging world of agentic commerce, being too trusted might prove just as dangerous as not being trusted enough.