100 things to try with the new Gemini for Home voice assistant
Learn more about the new Gemini for Home voice assistant, now rolling out in early access.
Pomelli, our newest experiment from Google Labs and DeepMind, is here to help you create on-brand marketing content.
An overview of the new course from The Google School for Leaders that helps all managers build high-performing teams.
Here’s how students and faculty at higher education institutions are using Gemini. 
Google has expanded the What’s happening feature within Google Business Profiles to multi-location restaurants and bars in the United States, United Kingdom, Canada, Australia, and New Zealand. Previously, it was available only to single-location restaurants.
The What’s happening feature launched back in May as a way for some businesses to highlight events, deals, and specials prominently at the top of their Google Business Profile. Now, Google is extending it to multi-location businesses.
What Google said. Google’s Lisa Landsman wrote on LinkedIn:
How do you promote your “Taco Tuesday” in Toledo and your “Happy Hour” in Houston… right when locals are searching for a place to go?
I’m excited to share that the Google Business Profile feature highlighting what’s happening at your business, such as timely events, specials and deals, has now rolled out for multi-location restaurants & bars across the US, UK, CA, AU & NZ! (It was previously only available for single-location restaurants)
This is a great option for driving real-time foot traffic. It automatically surfaces the unique specials, live music, or events you’re already promoting at a specific location, catching customers at the exact moment they’re deciding where to eat or grab a cocktail.
What it looks like. Here is a screenshot of this feature:

More details. Google’s Lisa Landsman added, “We’ve already seen excellent results from testing and look forward to hearing how this works for you!”
Availability. This feature is only available for restaurants & bars. Google said it hopes to expand to more categories soon. It is also only available in the United States, United Kingdom, Canada, Australia, and New Zealand.
The initial launch was for single-location food and drink businesses in the U.S., UK, Australia, Canada, and New Zealand; this update extends the feature to multi-location restaurants and bars.
Why we care. If you manage restaurants and/or bars, this may be a new way to get more attention and visitors to your business from Google Search. Now, if you manage multi-location restaurants or bars, you can leverage this feature.
Marketing, technology, and business leaders today are asking an important question: how do you optimize for large language models (LLMs) like ChatGPT, Gemini, and Claude?
LLM optimization is taking shape as a new discipline focused on how brands surface in AI-generated results and what can be measured today.
For decision makers, the challenge is separating signal from noise – identifying the technologies worth tracking and the efforts that lead to tangible outcomes.
The discussion comes down to two core areas – and the timeline and work required to act on them:
Just as SEO evolved through better tracking and measurement, LLM optimization will only mature once visibility becomes measurable.
We’re still in a pre-Semrush/Moz/Ahrefs era for LLMs.
Tracking is the foundation of identifying what truly works and building strategies that drive brand growth.
Without it, everyone is shooting in the dark, hoping great content alone will deliver results.
The core challenges are threefold:
Why LLM queries are different
Traditional search behavior is repetitive – millions of identical phrases drive stable volume metrics. LLM interactions are conversational and variable.
People rephrase questions in different ways, often within a single session. That makes pattern recognition harder with small datasets but feasible at scale.
These structural differences explain why LLM visibility demands a different measurement model.
This variability requires a different tracking approach than traditional SEO or marketing analytics.
The leading method uses a polling-based model inspired by election forecasting.
A representative sample of 250–500 high-intent queries is defined for your brand or category, functioning as your population proxy.
These queries are run daily or weekly to capture repeated samples from the underlying distribution of LLM responses.

Tracking tools record when your brand and competitors appear as citations (linked sources) or mentions (text references), enabling share of voice calculations across all competitors.
Over time, aggregate sampling produces statistically stable estimates of your brand visibility within LLM-generated content.
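To make the share of voice math concrete, here is a minimal sketch in Python. The query list, response records, and brand names are hypothetical placeholders; real tracking tools add far more nuance (entity resolution, answer position, sentiment).

```python
from collections import Counter

# Hypothetical sample: each record is one LLM response to one tracked query, with the
# brands that appeared as citations (linked sources) or mentions (text references).
sampled_responses = [
    {"query": "best crm for small business", "citations": ["acme.com"], "mentions": ["Acme", "RivalSoft"]},
    {"query": "best crm for small business", "citations": ["rivalsoft.com"], "mentions": ["RivalSoft"]},
    {"query": "crm with a free tier", "citations": [], "mentions": ["Acme"]},
]

brands = {"Acme": "acme.com", "RivalSoft": "rivalsoft.com"}

def share_of_voice(responses, brands):
    """Share of sampled responses in which each brand appears, as a citation or a mention."""
    appearances = Counter()
    for response in responses:
        for brand, domain in brands.items():
            if brand in response["mentions"] or domain in response["citations"]:
                appearances[brand] += 1
    return {brand: appearances[brand] / len(responses) for brand in brands}

print(share_of_voice(sampled_responses, brands))  # e.g. {'Acme': 0.67, 'RivalSoft': 0.67}
```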
Early tools providing this capability include:

Consistent sampling at scale transforms apparent randomness into interpretable signals – much like political polls deliver reliable forecasts despite individual variation.
While share of voice paints a picture of your presence in the LLM landscape, it doesn’t tell the complete story.
Just as keyword rankings show visibility but not clicks, LLM presence doesn’t automatically translate to user engagement.
Brands need to understand how people interact with their content to build a compelling business case.
Because no single tool captures the entire picture, the best current approach layers multiple tracking signals:
Nobody has complete visibility into LLM impact on their business today, but these methods cover all the bases you can currently measure.
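One commonly discussed layer is referral analytics. As a hedged illustration (the referrer domains below are assumptions for the sketch and will change as platforms evolve), you can flag site visits whose referrer suggests an LLM chat surface:

```python
# Illustrative only: chat-assistant referrer domains change over time and vary by platform.
LLM_REFERRER_DOMAINS = (
    "chatgpt.com", "chat.openai.com", "perplexity.ai",
    "gemini.google.com", "copilot.microsoft.com",
)

def is_llm_referral(referrer: str) -> bool:
    """Rough check of whether a visit's referrer looks like an LLM chat surface."""
    return any(domain in referrer for domain in LLM_REFERRER_DOMAINS)

visits = [
    {"referrer": "https://chatgpt.com/", "landing_page": "/pricing"},
    {"referrer": "https://www.google.com/", "landing_page": "/blog/guide"},
]
llm_visits = [v for v in visits if is_llm_referral(v["referrer"])]
print(f"{len(llm_visits)} of {len(visits)} sampled visits look LLM-referred")
```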
Be wary of any vendor or consultant promising complete visibility. That simply isn’t possible yet.
Understanding these limitations is just as important as implementing the tracking itself.
Because no perfect models exist yet, treat current tracking data as directional – useful for decisions, but not definitive.

Dig deeper: In GEO, brand mentions do what links alone can’t
Measuring LLM impact is one thing. Identifying which queries and topics matter most is another.
Compared to SEO or PPC, marketers have far less visibility. While no direct search volume exists, new tools and methods are beginning to close the gap.
The key shift is moving from tracking individual queries – which vary widely – to analyzing broader themes and topics.
The real question becomes: which areas is your site missing, and where should your content strategy focus?
To approximate relative volume, consider three approaches:
Correlate with SEO search volume
Start with your top-performing SEO keywords.
If a keyword drives organic traffic and has commercial intent, similar questions are likely being asked within LLMs. Use this as your baseline.
Layer in industry adoption of AI
Estimate what percentage of your target audience uses LLMs for research or purchasing decisions:
Apply these percentages to your existing SEO keyword volume. For example, a keyword with 25,000 monthly searches could translate to 1,250-6,250 LLM-based queries in your category.
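A minimal sketch of that back-of-envelope math, assuming the 5%-25% adoption range implied by the example above:

```python
def estimate_llm_queries(monthly_search_volume: int,
                         adoption_low: float = 0.05,
                         adoption_high: float = 0.25) -> tuple[int, int]:
    """Apply an assumed LLM-adoption range to SEO search volume for a rough query estimate."""
    return (round(monthly_search_volume * adoption_low),
            round(monthly_search_volume * adoption_high))

low, high = estimate_llm_queries(25_000)
print(f"~{low:,} to {high:,} LLM-based queries per month")  # ~1,250 to 6,250
```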
Using emerging inferential tools
New platforms are beginning to track query data through API-level monitoring and machine learning models.
Accuracy isn’t perfect yet, but these tools are improving quickly. Expect major advancements in inferential LLM query modeling within the next year or two.
The technologies that help companies identify what to improve are evolving quickly.
While still imperfect, they’re beginning to form a framework that parallels early SEO development, where better tracking and data gradually turned intuition into science.
Optimization breaks down into two main questions:
One of the most effective ways to assess your current position is to take a representative sample of high-intent queries that people might ask an LLM and see how your brand shows up relative to competitors. This is where the Share of Voice tracking tools we discussed earlier become invaluable.
These same tools can help answer your optimization questions:


From this data, several key insights emerge:
LLMs may be reshaping discovery, but SEO remains the foundation of digital visibility.
Across five competitive categories, brands ranking on Google’s first page appeared in ChatGPT answers 62% of the time – a clear but incomplete overlap between search and AI results.
That correlation isn’t accidental.
Many retrieval-augmented generation (RAG) systems pull data from search results and expand it with additional context.
The more often your content appears in those results, the more likely it is to be cited by LLMs.
Brands with the strongest share of voice in LLM responses are typically those that invested in SEO first.
Strong technical health, structured data, and authority signals remain the bedrock for AI visibility.
What this means for marketers:
Just as SEO has both on-page and off-page elements, LLM optimization follows the same logic – but with different tactics and priorities.
Off-page: The new link building
Most industries show a consistent pattern in the types of resources LLMs cite:
Citation patterns across ChatGPT, Gemini, Perplexity, and Google’s AI Overviews show consistent trends, though each engine favors different sources.
This means that traditional link acquisition strategies – guest posts, PR placements, or brand mentions in review content – will likely evolve.
Instead of chasing links anywhere, brands should increasingly target:
The core principle holds: brands gain the most visibility by appearing in sources LLMs already trust – and identifying those sources requires consistent tracking.
On-page: What your own content reveals
The same technologies that analyze third-party mentions can also reveal which first-party assets – content on your own website – are being cited by LLMs.
This provides valuable insight into what type of content performs well in your space.
For example, these tools can identify:
From there, three key opportunities emerge:
The next major evolution in LLM optimization will likely come from tools that connect insight to action.
Early solutions already use vector embeddings of your website content to compare it against LLM queries and responses. This allows you to:
Current tools mostly generate outlines or recommendations.
The next frontier is automation – systems that turn data into actionable content aligned with business goals.
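The embedding-comparison idea itself is straightforward. Here is a minimal sketch; embed() is a stand-in for whatever embedding model you use, and the pages and queries are hypothetical:

```python
import hashlib
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder: swap in a real embedding model (sentence-transformers, a hosted API, etc.).
    The stand-in vector below exists only so the sketch runs end to end."""
    seed = int.from_bytes(hashlib.md5(text.encode()).digest()[:4], "big")
    return np.random.default_rng(seed).random(384)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical site pages and tracked LLM queries.
site_pages = {
    "/pricing": "Plans and pricing for our analytics platform...",
    "/blog/integration-guide": "How to connect the platform to your data warehouse...",
}
tracked_queries = [
    "how much does an analytics platform cost",
    "best way to clean customer data",
]

# For each tracked query, find the closest page; a low best score suggests a content gap.
for query in tracked_queries:
    q_vec = embed(query)
    best_page, best_score = max(
        ((page, cosine(q_vec, embed(text))) for page, text in site_pages.items()),
        key=lambda pair: pair[1],
    )
    print(f"{query!r}: closest page {best_page} (similarity {best_score:.2f})")
```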
While comprehensive LLM visibility typically builds over 6-12 months, early results can emerge faster than traditional SEO.
The advantage: LLMs can incorporate new content within days rather than waiting months for Google’s crawl and ranking cycles.
However, the fundamentals remain unchanged.
Quality content creation, securing third-party mentions, and building authority still require sustained effort and resources.
Think of LLM optimization as having a faster feedback loop than SEO, but requiring the same strategic commitment to content excellence and relationship building that has always driven digital visibility.
LLM traffic remains small compared to traditional search, but it’s growing fast.
A major shift in resources would be premature, but ignoring LLMs would be shortsighted.
The smartest path is balance: maintain focus on SEO while layering in LLM strategies that address new ranking mechanisms.
Like early SEO, LLM optimization is still imperfect and experimental – but full of opportunity.
Brands that begin tracking citations, analyzing third-party mentions, and aligning SEO with LLM visibility now will gain a measurable advantage as these systems mature.
In short:
Approach LLM optimization as both research and brand-building.
Don’t abandon proven SEO fundamentals. Rather, extend them to how AI systems discover, interpret, and cite information.

Seattle is looking to celebrate and accelerate its leadership in artificial intelligence at the very moment the first wave of the AI economy is crashing down on the region’s tech workforce.
That contrast was hard to miss Monday evening at the opening reception for Seattle AI Week 2025 at Pier 70. On stage, panels offered a healthy dose of optimism about building the AI future. In the crowd, buzz about Amazon’s impending layoffs brought the reality of the moment back to earth.
A region that rose with Microsoft and then Amazon is now dealing with the consequences of Big Tech’s AI-era restructuring. Companies that hired by the thousands are now thinning their ranks in the name of efficiency and focus — a dose of corporate realism for the local tech economy.
The double-edged nature of this shift is not lost on Washington Gov. Bob Ferguson.
“AI, and the future of AI, and what that means for our state and the world — each day I do this job, the more that moves up in my mind in terms of the challenges and the opportunities we have,” Ferguson told the AI Week crowd. He touted Washington’s concentration of AI jobs, saying his goal is to maximize the benefits of AI while minimizing its downsides.

Seattle AI Week, led by the Washington Technology Industry Association, was started last year after a Forbes list of the nation’s top 50 AI startups included none from Seattle, said the WTIA’s Nick Ellingson, opening this year’s event. That didn’t seem right. Was it a messaging problem?
“A bunch of us got together and said, let’s talk about all the cool things happening around AI in Seattle, and let’s expand the tent beyond just tech things that are happening,” Ellingson explained.
So maybe that’s the best measuring stick: how many startups will this latest shakeout spark, and how can the Seattle region’s startup and tech leaders make it happen? Can the region become less dependent on the whims of the Microsoft and Amazon C-suites in the process?
“Washington has so much opportunity. It’s one of the few capitals of AI in the world,” said WTIA’s Arry Yu in her opening remarks. “People talk about China, people talk about Silicon Valley — there are a few contenders, but really, it’s here in Seattle. … The future is built on data, on powerful technology, but also on community. That’s what makes this place different.”
And yet, “AI is a sleepy scene in Seattle, where people work at their companies, but there’s very little activity and cross-pollinating outside of this,” said Nathan Lambert, senior research scientist with the Allen Institute for AI, during the opening panel discussion.
No, we don’t want to become San Francisco or Silicon Valley, Lambert added. But that doesn’t mean the region can’t cherry-pick some of the ingredients that put Bay Area tech on top.
Whether laid-off tech workers will start their own companies is a common question after layoffs like this. In the Seattle region at least, that outcome has been more fantasy than reality.
This is where AI could change things, if not with the fabled one-person unicorn then with a bigger wave of new companies born of this employment downturn. Who knows, maybe one will even land on that elusive Forbes AI 50 list. (Hey, a region can dream!)
But as the new AI reality unfolds in the regional workforce, maybe the best question to ask is whether Seattle’s next big thing can come from its own backyard again.

Microsoft and OpenAI announced the long-awaited details of their new partnership agreement Tuesday morning — with concessions on both sides that keep the companies aligned but not in lockstep as they move into their next phases of AI development.
Under the arrangement, Microsoft gets a 27% equity stake in OpenAI’s new for-profit entity, the OpenAI Group PBC (Public Benefit Corporation), a stake valued at approximately $135 billion. That’s a decrease from 32.5% equity but not a bad return on an investment of $13.8 billion.
At the same time, OpenAI has contracted to purchase an incremental $250 billion in Microsoft Azure cloud services. However, in a significant concession in return for that certainty, Microsoft will no longer have a “right of first refusal” on new OpenAI cloud workloads.
Microsoft, meanwhile, will retain its intellectual property rights to OpenAI models and products through 2032, an extension of the timeframe that existed previously.
A key provision of the new agreement centers on Artificial General Intelligence (AGI), with any declaration of AGI by OpenAI now subject to verification by an independent expert panel. This was a sticking point in the earlier partnership agreement, with an ambiguous definition of AGI potentially triggering new provisions of the prior arrangement.
Microsoft and OpenAI had previously announced a tentative agreement without providing details. More aspects of the deal are disclosed in a joint blog post from the companies.
Shares of Microsoft are up 2% in early trading after the announcement. The company reports earnings Wednesday afternoon, and some analysts have said the uncertainty over the OpenAI arrangement has been impacting Microsoft’s stock.

The post AMD CEO on new $1 billion AI supercomputer partnership with the Department of Energy appeared first on StartupHub.ai.
“We are super excited to announce a new partnership with the Department of Energy,” stated Lisa Su, Chair and CEO of AMD, during a CNBC interview. This monumental $1 billion collaboration will usher in the development of two advanced supercomputers, designed to tackle some of the most complex scientific challenges facing humanity. The partnership signifies […]
The post Bitcoin Miners’ AI Pivot: A Strategic Masterclass in Energy and Compute appeared first on StartupHub.ai.
The burgeoning demand for artificial intelligence, a computational arms race among hyperscalers, has illuminated a critical bottleneck: access to reliable, scalable power. This very challenge, as discussed by CleanSpark CEO Matthew Schultz with CNBC’s Jordan Smith, is precisely where Bitcoin miners like CleanSpark find their strategic advantage. Their conversation unveils a nuanced pivot, not merely […]
The post OpenAI Recapitalization Reshapes AI Landscape with Microsoft at the Helm appeared first on StartupHub.ai.
The recent finalization of OpenAI’s recapitalization plan marks a pivotal moment in the trajectory of artificial intelligence, not just for the involved parties but for the entire tech ecosystem. On CNBC, David Faber broke down the intricate details of this agreement, joined by Jim Cramer, who offered his characteristic sharp market commentary. Their discussion illuminated […]
The post Wild Moose Emerges from Stealth with $7 Million Seed Round to Redefine Site Reliability Engineering with AI appeared first on StartupHub.ai.
Wild Moose, the AI-powered Site Reliability Engineering (SRE) platform acting as a first responder for production incidents, today announced its emergence from stealth with $7 million in seed funding. The round was led by iAngels, with participation from Y Combinator, F2 Venture Capital, Maverick Ventures, and others. The company is also backed by a distinguished […]
The post Gemini for Education: Google’s AI Dominates Higher Ed appeared first on StartupHub.ai.
Google's Gemini for Education is rapidly integrating into higher education, offering no-cost AI tools to over 1000 institutions and 10 million students.
The post Securitize IPO to bring tokenization to Nasdaq at $1.25B appeared first on StartupHub.ai.
The Securitize IPO is a bellwether moment, creating the first publicly-traded company focused purely on the infrastructure for tokenizing real-world assets.
The post FriendliAI Expands Ultra-Fast AI Inference Platform with Nebius AI Cloud Integration appeared first on StartupHub.ai.
Enterprises can now deploy large-scale AI inference with FriendliAI’s optimized stack on Nebius AI infrastructure, combining top performance with cost efficiency.
Seems like we've been on a northern lights roll lately, haven't we? There's one happening again tonight, and NOAA is even promising that the spectacular aurora display will be visible as far south as New York and Wisconsin. Forecasts say that a significant burst of solar activity colliding with Earth’s magnetic field will be creating the
After kids, teens, and even some adults are finished knocking on your door in hopes of scoring pieces of candy (or full-size candy bars, if you're really looking to impress), it won't be long until the Black Friday and Cyber Monday sales season is in full force. If you want to get a jump on the mad rush and are in the market for a Fire TV,
Cybersecurity experts are sounding the alarm over a new Android Trojan dubbed Herodotus, which is designed to deliberately slow down its own malicious activity to mimic the casual, imperfect behavior of a human user. Such behavior allows the malware to slip past a generation of security systems built to flag more rapid, robotic actions of
Windows Insiders are getting early access to an upcoming Windows 11 feature that is intended to improve system reliability by scanning system memory following a system crash. The optional feature is related to those frustrating BSOD errors that, aside from letting you know that your PC tripped over itself and bonked its head (as was the case
Samsung is expanding its range of storage card solutions with the release of the P9 Express, a microSD Express memory card with enough speed to keep up with gaming handhelds like Nintendo's Switch 2. While not exclusively aimed at the Switch 2, Nintendo's newest console is a big reason why we're starting to see more microSD Express memory
AMD will power two new AI supercomputers in the US, using MI355X and MI430X accelerators
The US Department of Energy has announced a $1 billion deal under which AMD will deliver two next-generation supercomputers to the Oak Ridge National Laboratory (ORNL). These systems are designed to expand the US’s leadership in artificial intelligence (AI) and […]
The post US cuts a $1 billion deal with AMD to build two new AI Supercomputers appeared first on OC3D.
By Jeff Seibert
I’ve been building products and companies my entire career — Increo, Box, Crashlytics, Twitter and now, Digits — and I’ve had the privilege of speaking with some of the sharpest minds in venture and entrepreneurship along the way.
One recent conversation with a legendary investor really crystallized for me a set of truths about startups: what success really is, why some founders thrive while others burn out, and how to navigate the inevitable chaos of building something from nothing.
Here are some of the lessons I’ve internalized from years of building, observing and learning.

In the startup world, we talk a lot about IPOs, acquisitions and valuations. But those are milestones, not destinations.
The companies that endure don’t “win” and stop — they keep creating, adapting and pushing forward. They’re playing an infinite game, where the only goal is to remain in the game.
When you’re building something truly generative — driven by a purpose greater than yourself — there’s no point at which you can say “done.” If your company has a natural stopping point, you may be building the wrong thing.
The best founders I’ve met — and the best moments I’ve had as a founder — come from an almost irrational pull toward solving a specific problem I myself experienced.
You may want to start a company, but if you have to talk yourself into your idea, it probably won’t survive contact with reality. The founders who succeed are often the ones who can’t not work on their thing.
Starting a company shouldn’t be a career move — it should be the last possible option after every other path fails to scratch the itch.
Most companies don’t die because of one bad decision or one tough competitor. They die because the founders run out of energy.
Fatigue erodes vision, motivation and creativity. Protecting your own drive — keeping it clean and focused — may be the single most important survival skill you have.
That means staying close to the product, protecting time for customer work, and avoiding the slow drift into managing around problems instead of solving them.
It’s easy to get caught up in competitor moves, investor chatter or market gossip. But the most important question is always: Are we delivering joy to the customer?
If you’re losing focus, sign up for your own product as a brand-new user. Feel the friction. Fix it. Repeat.
At Digits, we run our own signup and core flows every week. It’s uncomfortable — it surfaces flaws we’d rather not see — but it keeps us anchored to the only metric that matters: customer delight.
Over the years, I’ve learned the most effective boards aren’t presentation theaters — they’re discussion rooms.
The best structure I’ve seen:
Good directors help you widen your perspective. They don’t hand you a to-do list. Rather, they help you see the problem in a way that makes the answer obvious.
When I think back to my time at Twitter, the most enduring lesson is that not all companies are built top-down. Some — like Twitter — are shaped more by their users than their executives.
Features like @mentions, hashtags and retweets didn’t come from a product roadmap — they came from the community.
That’s messy, but it’s also powerful. Sometimes your job isn’t to control the phenomenon, rather it’s to keep it healthy without smothering what made it magical in the first place.
If you’re building today, you have an advantage over the so-called “unicorn zombies” that raised massive rounds pre-AI and are now locked into defending old business models.
Fresh founders can design from scratch for the new reality; there’s no legacy to protect, no sacred cows to defend.
The macro environment? Irrelevant. The only timing that matters is when the problem calls you so strongly that not working on it feels impossible.
If there’s one takeaway from all of this, it’s that success is continuing. The real prize is the ability to keep playing, keep serving and keep creating.
If you’re standing at the edge, wondering if you should start — start. Take one step. See if it grows. And if it does, welcome to the infinite game.
Jeff Seibert is the founder and CEO of Digits, the world’s first AI-native accounting platform. He previously served as Twitter‘s head of consumer product and starred in the Emmy Award-winning Netflix documentary “The Social Dilemma.”
Illustration: Dom Guzman
While startup investment has been climbing lately, not all industries are partaking in the gains.
Cleantech is one of the spaces that’s been mostly left out. Overall funding to the space is down this year, despite some pockets of bullishness in areas like fusion and battery recycling.
The broad trend: Cleantech- and sustainability-related startup investment has been on a downward trajectory for several years now. And so far, 2025 is on track to be another down year.
On the bright side, however, there’s been some pickup in recent months, boosted by big rounds for companies in energy storage, fusion and other cleantech subsectors.
The numbers: Investors put an estimated $20 billion into seed- through growth-stage funding to companies in cleantech, EV and sustainability-related categories so far this year.
That puts 2025 funding on track to come in well below last year’s levels, which were already at a multiyear low.
Still, quarter by quarter, the pattern looks more encouraging. Investment hit a low point in Q1 of this year and recovered some in the subsequent two quarters. The current quarter is also off to a strong start.
Noteworthy recent rounds
The largest cleantech-related round of the year closed this month. Base Power, a provider of residential battery backup systems and electricity plans, raised $1 billion in Series C funding. The Austin, Texas-based company says its systems allow energy providers to more efficiently harness renewable power.
The second-largest round was Commonwealth Fusion Systems’ $863 million Series B2 financing. The Devens, Massachusetts-based company says it is moving closer to being the first in the world to commercialize fusion power.
For a bigger-picture view, below we put together a list of 10 of the year’s largest cleantech- and sustainability-related financings.
The broad takeaway: Startups innovating for an era of rising power consumption
Not to over-generalize, but if there was one big takeaway from recent cleantech and sustainability startup funding, it would be that founders and investors recognize that these are times of ever-escalating energy demand. They’re planning accordingly, looking to tap new sources of power, fusion in particular, as well as better utilize and scale existing clean energy sources.
Illustration: Dom Guzman

A new report exploring the potential for the Pacific Northwest to stake its claim as the global leader in responsible AI offers a paradoxical view. The Cascadia region, which includes Seattle, Portland and Vancouver, B.C., is described as a proven, promising player in the sphere — but with significant risks that threaten its success.
“We created companies that transformed global commerce,” writes former Gov. Chris Gregoire in a foreword to the document. “Now we have the chance to add another chapter — one where Cascadia becomes the world’s standard-bearer for innovation that uplifts both people and planet.”
The Cascadia Innovation Corridor, which Gregoire chairs, released the report this morning as it kicks off its two-day conference. The economic advocacy group’s eighth annual event is being held in Seattle.
The study is built on an analysis by the Boston Consulting Group that ranks Cascadia’s three metro areas against 15 comparable regions in the U.S. and Canada for their economic competitiveness, including livability, workforce, and business and innovation climate. Seattle came in fourth behind Boston, Austin and Raleigh, while Portland ranked 13th and Vancouver 14th.
Over the past decade, the region’s gross domestic product and populations have both grown significantly, and when combined, their economies approach the 18th largest in the world.
Cascadia’s strengths, the report explains, include tech engines such as cloud giants Microsoft and Amazon in Washington, silicon chip manufacturing in Oregon, and quantum innovation in Vancouver, as well as academic excellence from the University of Washington, University of British Columbia and Oregon State University.
But as time goes on and as business and civic leaders aim for the prize of AI dominance, cracks in the system are increasingly troubling.
The report notes that multiple regions around the U.S. and Canada have created AI-focused hubs with hundreds of millions of dollars in public and private funding to bolster their hold on the sector.
New Jersey has a half-billion dollar “AI Moonshot” program including tax incentives and public-worker AI training programs; New York’s “Empire AI Consortium” has an AI computing training center at the University of Buffalo and startup supports; and California has a public-private task force to increase AI adoption within government services and connect tech leaders with state agencies.
For its part, Seattle Mayor Bruce Harrell announced a “responsible AI plan” this fall that provides guidelines for the municipality’s use of artificial intelligence and its support of the AI tech sector as an economic driver, which includes the earlier launches of the startup-focused AI House and Foundations.
But what the region really needs to succeed is a collaborative effort tapping all of the metro areas’ assets.
“For Cascadia, the lesson is clear: without a coordinated strategy that links our strengths in cloud computing, semiconductors, and research, we risk falling behind,” states the Cascadia Innovation Corridor report. “Acting together, we can position Cascadia not just to keep pace, but to lead.”
With the iPhone 17 lineup now in the hands of consumers, the rumor mill that typically revolves around Apple's new products is naturally pivoting toward next year's iPhone 18 lineup as well as the much-anticipated iPhone 20, which is due in 2027 and would commemorate 20 years since the first iPhone launched all the way back in 2007. Now, a new rumor suggests that Apple is moving to simplified buttons in stages, with the iPhone 18 lineup adopting a less complicated mechanical button for camera control, which will be replaced entirely by solid-state buttons in the iPhone […]
Read full article at https://wccftech.com/apple-iphone-18-to-use-a-simplified-camera-control-button-iphone-20-to-feature-haptics-instead-of-mechanical-buttons/

Death Stranding 2: On the Beach now supports the PlayStation 5's power saver mode, and its implementation is among the most interesting to date, according to a new technical analysis. In the latest episode of their weekly podcast, the tech experts at Digital Foundry examined how the two entries in the Kojima Productions series support Power Saver Mode, a newly introduced operating mode for the PlayStation 5 console that cuts CPU resources in half, halves the memory bandwidth, and reduces CPU and GPU clocks to reduce the system's power consumption. While the implementation in Death Stranding: Director's Cut was not […]
Read full article at https://wccftech.com/death-stranding-2-on-the-beach-most-interesting-power-saver-mode/

The INSPIRE series RTX 5050 cards are probably the smallest RTX 5050 editions, offering a single-fan design and weighing just 551 grams.
MSI Launches Small Form-Factor RTX 5050 INSPIRE ITX and OC GPUs, Boasting Dual-Slot Thickness
MSI has officially launched two new GeForce RTX 5050 cards in the INSPIRE series. These are probably the smallest RTX 5050 cards on the market, boasting a dual-slot design and a single-fan cooler to ensure compatibility with very small ITX cases. Apart from MSI, PNY also has a similarly compact GeForce RTX 5050, which measures just 147mm. The INSPIRE ITX RTX 5050 cards […]
Read full article at https://wccftech.com/msi-intros-geforce-rtx-5050-inspire-itx-and-oc-cards-measuring-just-147mm/

EA is pushing its employees to use AI for basically every task, but the results can be flawed, resulting in more work for developers. Business Insider recently talked with current EA staff, who confirmed that the company's leadership has spent the past year or so pushing its 15,000 employees to use AI for virtually every task, from producing code and concept art for games to advising managers on how to speak to staff about a number of topics, including pay or promotions. The AI tools used to produce code are among those creating the most issues for developers. It is […]
Read full article at https://wccftech.com/ea-is-pushing-employees-to-use-ai-for-everything-including-producing-code-requiring-manual-fixing/

Intel's CEO, Lip-Bu Tan, has discussed the stake taken by the US government in the company, claiming that it was a necessary step to ensure that the American chipmaker could compete with Taiwan's TSMC.
Intel's CEO Also Tells Specifics About His Meeting With President Trump, Calling It a Massive Success
Well, the interest from the Trump administration in Intel was indeed a surprise for many of us, but for CEO Lip-Bu Tan, this initiative was "good to have", as he claims that it is similar to how Taiwan supports TSMC or South Korea backs the likes of Samsung Foundry. In […]
Read full article at https://wccftech.com/intel-ceo-says-us-government-stake-was-a-deliberate-move/

OnexPlayer has officially launched its flagship handheld, the OneXfly Apex, with a liquid-cooled AMD Ryzen AI MAX+ 395 SoC.
AMD Ryzen AI MAX+ 395 Gets Liquid-Cooled Inside A Handheld With OneXPlayer's OneXfly Apex
The OneXfly Apex handheld was teased last month and is positioned to be a flagship device featuring the AMD Ryzen AI MAX+ 395 SoC. This SoC has already been featured in other handhelds such as GPD Win 5 and Ayaneo Next 2. Now, OneXPlayer is rolling out its own high-end handheld, offering a nice upgrade vs the Ryzen AI 300 "Strix Point" stack. Just to recap the […]
Read full article at https://wccftech.com/onexfly-apex-handheld-launch-amd-ryzen-ai-max-395-liquid-cooling-128-gb-85wh-battery/

The popular PS3 emulator has updated its latest GPU recommendation list to AMD's RDNA and NVIDIA's Turing series.
RPCS3 Announces Updated GPU Requirements for the Emulator; Recommends At Least AMD RX 5000 or NVIDIA RTX 2000 Series
RPCS3 has just announced the new recommended GPU requirements for its popular PS3 emulator, which comes as a result of major GPU manufacturers ending support for some of its older generation GPU series. RPCS3 announced on X that it no longer recommends the AMD RX 400 and NVIDIA GTX 900 series GPUs. The newer GPU recommendations now start with […]
Read full article at https://wccftech.com/rpcs3-removes-amd-rx-400-500-and-nvidia-gtx-900-1000-series-from-recommended-gpu-requirements/

A vapor chamber will make a significant difference to the overall temperatures of the M6 iPad Pro, with Apple previously reported to bring this cooling upgrade to its flagship tablet lineup. The California-based giant is often known to commence product development several months in advance, and according to the latest report, Apple is already in talks with two suppliers that could manufacture this crucial component.
The M6 iPad Pro’s vapor chamber is reported to be provided by a Chinese and Taiwanese manufacturer
Considering that the M6 iPad Pro launch will materialize approximately 18 months after the M5 iPad Pro’s inception, […]
Read full article at https://wccftech.com/apple-shortlisting-m6-ipad-pro-vapor-chamber-suppliers/

While enterprises looking to sell goods and services online wait for the backbone of agentic commerce to be hashed out, PayPal is hoping its new features will bridge the gap.
The payments company is launching a discoverability solution that allows enterprises to make their products available on any chat platform, regardless of the model or agent payment protocol.
PayPal, which is one of the participants for Google’s Agent Payments Protocol (AP2), found that it can leverage its relationship with merchants and enterprises to help pave the way for an easier transition into agentic commerce and offer the kind of flexibility they learned will benefit the ecosystem.
Michelle Gill, PayPal general manager for small business and financial services, told VentureBeat that AI-powered shopping will continue to grow, so enterprises and brands need to start laying the groundwork early.
“We think that merchants who've historically sold through web stores, particularly in the e-commerce space, are really going to need a way to get active on all of these large language models,” Gill said. “The challenge is that no one really knows how fast all of this is going to move. The issue that we’re trying to help merchants think through is how to do all of this as low-touch as possible while using the infrastructure you already have without doing a bazillion integrations.”
She added AI shopping would also bring about “a resurgence from consumers trying to ensure their investment is protected.”
PayPal partnered with website builder Wix, Cymbio, Commerce and Shopware to bring products to chat platforms like Perplexity.
PayPal’s Agentic Commerce Services include two features. The first is Agent Ready, which would allow existing PayPal merchants to accept payments on AI platforms. The second is called Shop Sync, which will enable companies’ product data to be discoverable through different AI chat interfaces. It takes a company’s catalog information and plugs its inventory and fulfillment data into chat platforms.
Gill said the data goes into a central repository where AI models can ingest the information.
Right now, companies can access Shop Sync, with Agent Ready coming in 2026.
Gill said Agentic Commerce Services is a one-to-many solution that should be helpful right now, as different LLMs scrape different data sources to surface information.
Other benefits include:
Fast integration with current and future partners
More product discovery over the traditional search, browse and cart experiences
Preserved customer insights and relationships where the brand continues to have control over their records and communications with customers.
Right now, the service is only available through Perplexity, but Gill said more platforms will be added soon.
Agentic commerce is still very much in the early stages. AI agents are just beginning to get better at reading a browser. While platforms like ChatGPT, Gemini and Perplexity can now surface products and services based on user queries, people cannot technically buy things from chat yet.
There’s a race right now to create a standard to enable agents to transact on behalf of users and pay for items. Other than Google’s AP2, OpenAI and Stripe have the Agentic Commerce Protocol (ACP) and Visa launched its Trusted Agent Protocol.
Other than enabling a trust layer for agents to transact, another issue enterprises face with agentic commerce is fragmentation. Different chat platforms use different models which also interpret information in slightly different ways. Gill said PayPal learned that when it comes to working with merchants, flexibility is important.
“How do you decide if you're going to spend your time integrating with Google, Microsoft, ChatGPT or Perplexity? And each one of them right now has a different protocol, a different catalog, config, a different everything. That is a lot of time to make a bet as to like where you should spend your time,” Gill said.

Google Arts & Culture is launching a new collection to honor the rich heritage of the province of North Gyeongsang in South Korea. 
AI tools can help teams move faster than ever – but speed alone isn’t a strategy.
As more marketers rely on LLMs to help create and optimize content, credibility becomes the true differentiator.
And as AI systems decide which information to trust, quality signals like accuracy, expertise, and authority matter more than ever.
It’s not just what you write but how you structure it. AI-driven search rewards clear answers, strong organization, and content it can easily interpret.
This article highlights key strategies for smarter AI workflows – from governance and training to editorial oversight – so your content remains accurate, authoritative, and unmistakably human.
More than half of marketers are using AI for creative endeavors like content creation, IAB reports.
Still, AI policies are not always the norm.
Your organization will benefit from clear boundaries and expectations. Creating policies for AI use ensures consistency and accountability.
Only 7% of companies using genAI in marketing have a full-blown governance framework, according to SAS.
However, 63% invest in creating policies that govern how generative AI is used across the organization.

Even a simple, one-page policy can prevent major mistakes and unify efforts across teams that may be doing things differently.
As Cathy McPhillips, chief growth officer at the Marketing Artificial Intelligence Institute, puts it:
So drafting an internal policy sets expectations for AI use in the organization (or at least the creative teams).
When creating a policy, consider the following guidelines:
Logically, the policy will evolve as the technology and regulations change.
It can be easy to fall into the trap of believing AI-generated content is good because it reads well.
LLMs are great at predicting the next best sentence and making it sound convincing.
But reviewing each sentence, paragraph, and the overall structure with a critical eye is absolutely necessary.
Think: Would an expert say it like that? Would you normally write like that? Does it offer the depth of human experience that it should?
“People-first content,” as Google puts it, is really just thinking about the end user and whether what you are putting into the world is adding value.
Any LLM can create mediocre content, and any marketer can publish it. And that’s the problem.
People-first content aligns with Google’s E-E-A-T framework, which outlines the characteristics of high-quality, trustworthy content.
E-E-A-T isn’t a novel idea, but it’s increasingly relevant in a world where AI systems need to determine if your content is good enough to be included in search.
According to evidence in U.S. v. Google LLC, we see quality remains central to ranking:

It suggests that the same quality factors reflected in E-E-A-T likely influence how AI systems assess which pages are trustworthy enough to ground their answers.
So what does E-E-A-T look like practically when working with AI content? You can:

Dig deeper: Writing people-first content: A process and template
LLMs are trained on vast amounts of data – but they’re not trained on your data.
Put in the work to train the LLM, and you can get better results and more efficient workflows.
Here are some ideas.
If you already have a corporate style guide, great – you can use that to train the model. If not, create a simple one-pager that covers things like:
You can refresh this as needed and use it to further train the model over time.
Put together a packet of instructions that prompts the LLM. Here are some ideas to start with:
With that in mind, you can put together a prompt checklist that includes:
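The specific checklist items will vary by team, but as a hedged illustration, a reusable prompt packet might be assembled like the sketch below. The brand details and checklist items are placeholders for this example, not recommendations from the article.

```python
# Hypothetical prompt packet: assemble brand rules, audience, and task into one reusable prompt.
BRAND_VOICE = "Plainspoken, practical, no hype. Short sentences. American English."
AUDIENCE = "Marketing managers at mid-sized B2B companies."
CHECKLIST = [
    "Cite a source for every statistic.",
    "Flag any claim you are not certain about instead of guessing.",
    "Use second person and active voice.",
]

def build_prompt(task: str, source_notes: str) -> str:
    checklist = "\n".join(f"- {item}" for item in CHECKLIST)
    return (
        f"Brand voice: {BRAND_VOICE}\n"
        f"Audience: {AUDIENCE}\n"
        f"Follow this checklist:\n{checklist}\n\n"
        f"Task: {task}\n"
        f"Source notes:\n{source_notes}"
    )

print(build_prompt("Draft a 200-word intro on AI content governance.", "Internal policy doc, v2 summary..."))
```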
Dig deeper: Advanced AI prompt engineering strategies for SEO
A custom GPT is a personalized version of ChatGPT that’s trained on your materials so it can better create in your brand voice and follow brand rules.
It mostly remembers tone and format, but that doesn’t guarantee the accuracy of output beyond what’s uploaded.
Some companies are exploring RAG (retrieval-augmented generation) to further train LLMs on the company’s own knowledge base.
RAG connects an LLM to a private knowledge base, retrieving relevant documents at query time so the model can ground its responses in approved information.
While custom GPTs are easy, no-code setups, RAG implementation is more technical – but there are companies/technologies out there that can make it easier to implement.
That’s why GPTs tend to work best for small or medium-scale projects or for non-technical teams focused on maintaining brand consistency.

RAG, on the other hand, is an option for enterprise-level content generation in industries where accuracy is critical and information changes frequently.
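For orientation, here is a minimal sketch of the RAG pattern described above. The embed() and generate() functions are placeholders standing in for your embedding model and LLM API; they are not a specific vendor's interface.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder for your embedding model."""
    raise NotImplementedError

def generate(prompt: str) -> str:
    """Placeholder for your LLM call (e.g., a chat-completion request)."""
    raise NotImplementedError

def answer_with_rag(question: str, knowledge_base: dict[str, str], top_k: int = 3) -> str:
    # 1. Embed the question and each approved document.
    q_vec = embed(question)
    scored = []
    for doc_id, text in knowledge_base.items():
        d_vec = embed(text)
        similarity = float(q_vec @ d_vec / (np.linalg.norm(q_vec) * np.linalg.norm(d_vec)))
        scored.append((similarity, doc_id, text))

    # 2. Retrieve the most relevant documents at query time.
    top_docs = sorted(scored, key=lambda item: item[0], reverse=True)[:top_k]
    context = "\n\n".join(text for _, _, text in top_docs)

    # 3. Ground the answer in approved information only.
    prompt = (
        "Answer using ONLY the context below. If the context is insufficient, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return generate(prompt)
```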
Create parameters so the model can self-assess the content before further editorial review. You can create a checklist of things to prompt it.
For example:
Even the best AI workflow still depends on trained editors and fact-checkers. This human layer of quality assurance protects accuracy, tone, and credibility.
About 33% of content writers and 24% of marketing managers added AI skills to their LinkedIn profiles in 2024.
Writers and editors need to continue to upskill in the coming year, and, according to the Microsoft 2025 annual Work Trend Index, AI skilling is the top priority.

Professional training creates baseline knowledge so your team gets up to speed faster and can confidently handle outputs consistently.
This includes training on how to effectively use LLMs and how to best create and edit AI content.
In addition, training content teams on SEO helps them build best practices into prompts and drafts.
Ground your AI-assisted content creation in editorial best practices to ensure the highest quality.
This might include:

Build a checklist to use during the review process for quality assurance. Here are some ideas to get you started:
AI is transforming how we create, but it doesn’t change why we create.
Every policy, workflow, and prompt should ultimately support one mission: to deliver accurate, helpful, and human-centered content that strengthens your brand’s authority and improves your visibility in search.
Dig deeper: An AI-assisted content process that outperforms human-only copy


Google's John Mueller said case sensitivity matters and SEOs shouldn't just hope it works.
The post Google’s Advice On Canonicals: They’re Case Sensitive appeared first on Search Engine Journal.
The post Beyond LLMs: Crafting Robust AI with Multi-Method Agentic Architectures appeared first on StartupHub.ai.
“Large language models have well-known issues and constraints. And so if you want to solve complex problems, you’re going to want to adopt what’s called multi-method agentic AI, which combines large language models with other kinds of proven automation technologies so that you can build more adaptable, more transparent systems that are much more likely […]
The post Google Arts & Culture Elevates Virtual Travel with AI Tours appeared first on StartupHub.ai.
Google Arts & Culture is redefining virtual exploration with new AI tours for North Gyeongsang, South Korea, featuring interactive, Gemini-powered commentary.
The post Nvidia’s AI Imperative: Beyond Moore’s Law, Network is the New Compute appeared first on StartupHub.ai.
Michael Kagan, CTO of Nvidia and co-founder of Mellanox, recently engaged in a candid discussion with Sonya Huang and Pat Grady at Sequoia’s Europe100 event, offering profound insights into Nvidia’s meteoric rise as the architect of AI infrastructure. His commentary illuminated the pivotal role of the Mellanox acquisition in transforming Nvidia from a mere chip […]

Amazon confirmed Tuesday that it is cutting about 14,000 corporate jobs, citing a need to reduce bureaucracy and become more efficient in the new era of artificial intelligence.
In a message to employees, posted on the company’s website, Amazon human resources chief Beth Galetti signaled that the cutbacks are expected to continue into 2026, while indicating that the company will also continue to hire in key strategic areas.
Reuters reported Monday that the number of layoffs could ultimately total as many as 30,000 people, which is still a possibility as the cutbacks continue into next year. At that scale, the overall number of job cuts could eventually be the largest in Amazon’s history, exceeding the 27,000 positions that the company eliminated in 2023 across multiple rounds of layoffs.
“This generation of AI is the most transformative technology we’ve seen since the Internet, and it’s enabling companies to innovate much faster than ever before,” wrote Galetti, senior vice president of People Experience and Technology. Amazon needs “to be organized more leanly, with fewer layers and more ownership, to move as quickly as possible for our customers and business,” she explained.
Amazon’s corporate workforce numbered around 350,000 people in early 2023, the last time the company provided a public number. At that scale, the initial reduction of 14,000 represents about 4% of Amazon’s corporate workforce. However, the number is a much smaller fraction of its overall workforce of 1.55 million people, which includes workers in its warehouses.
Although the cuts are expected to be global, they are likely to hit especially hard in the Seattle region, home to the company’s first headquarters and its largest corporate workforce. The tech hub has already felt the impact of major layoffs by Microsoft and many other companies in recent months.

The cuts come two days before Amazon’s third quarter earnings report. Amazon and other cloud giants have been pouring billions into capital expenses to boost AI capacity. Cutting jobs is one way of showing operating-expense discipline to Wall Street.
In a memo to employees in June, Amazon CEO Andy Jassy wrote that he expected Amazon’s total corporate workforce to get smaller over time as a result of efficiency gains from AI.
Jassy took over as Amazon CEO from founder Jeff Bezos in mid-2021. In recent years he has been pushing to reduce management layers and eliminate bureaucracy inside the company, saying he wants Amazon to operate like the “world’s largest startup.”
Bloomberg News reported this week that Jassy has told colleagues that parts of the company remain “unwieldy” despite the 2023 layoffs and other efforts to streamline operations.
Reuters cited sources saying the magnitude of the cuts is also a result of Amazon’s strict return-to-office policy failing to cause enough employees to quit voluntarily. Amazon brought workers back five days a week earlier this year.
Impacted teams and people will be notified of the layoffs today, Galetti wrote.
Amazon is offering most impacted employees 90 days to find a new role internally, though the timing may vary based on local laws, according to the message. Those who do not find a new position at Amazon or choose to leave will be offered severance pay, outplacement services, health insurance benefits, and other forms of support.
John Carmack reports performance issues with Nvidia’s DGX Spark AI system
John Carmack, id Software founder and former CTO of Oculus VR, has been testing an Nvidia DGX Spark AI system. So far, he is not impressed by the performance the system has delivered. His system appears to be maxing out at 100 watts, which […]
The post Nvidia DGX Spark delivers half quoted performance for John Carmack appeared first on OC3D.
Battlefield is getting a free-to-play Battle Royale mode
EA has confirmed that Battlefield REDSEC will launch on October 28th at 3 PM GMT, a free-to-play Battle Royale game that debuts alongside Battlefield 6’s Season 1 content. Battlefield REDSEC acts as EA’s counter to Call of Duty: Warzone. Currently, exact details for the new game are […]
The post Battlefield REDSEC is launching today – Here’s what you need to know appeared first on OC3D.
There's little doubt that The Matrix franchise is criminally underserved when it comes to videogame adaptations, despite being theoretically a perfect fit for the medium. In the 26 years since the original movie's theatrical debut, we only got two decent games: 2003's single player action/adventure game Enter the Matrix and 2005's MMORPG The Matrix Online. More recently, the interactive experience The Matrix Awakens was released in late 2021, but it was really just a tech demo for Unreal Engine 5 and a tease at the level of quality that gaming fans of the IP never really got to fully experience. […]
Read full article at https://wccftech.com/the-matrix-creators-wanted-kojima-make-a-game-on-the-ip-konami-refused/

Today at GTC 2025, NVIDIA's CEO, Jensen Huang, will deliver the opening keynote live from Washington, D.C., for the first time.
NVIDIA GTC Comes To Washington, D.C., US: CEO Jensen Huang To Talk About Next Chapter of AI, Watch It Live Here
NVIDIA's GTC 2025 is just a few hours away, and while you might be wondering, didn't GTC already happen a few months back? Well, it should be mentioned that while GTC used to be a one-time per annum affair in the past, the recent growth and success have turned NVIDIA's GTC into more of a quarterly event. As […]
Read full article at https://wccftech.com/watch-nvidia-gtc-2025-ceo-jensen-huang-keynote-live-washington-us/

The iPhone 17 lineup is expected to be Apple’s last to ship with Qualcomm’s 5G modems as the company prepares its transition to ship all of its iPhone 18 models with the C2 baseband chip. This in-house solution was said to be in development shortly after the iPhone 16e was announced, and while we will witness its materialization in 2026, a new report states that, unlike other Apple chipsets like the A20 and A20 Pro, it will not leverage TSMC’s newest 2nm process, but a lithography that is a couple of generations old. The C2 5G modem will reportedly be mass […]
Read full article at https://wccftech.com/apple-c2-to-be-mass-produced-on-older-tsmc-process-says-report/

A team of modders is working on Bully Online, a modification for the PC version of Bully: Scholarship Edition that promises to allow players to roam the grounds of Bullworth Academy and the nearby town with their friends. The Wii and Xbox 360 versions of Scholarship Edition did have a multiplayer mode, but it was limited to two players and only allowed them to face off in the class minigames. According to community creator SWEGTA, Bully Online promises much more, including free roam support, solo and group minigames, and even a role-playing system. They were able to add a 'fully […]
Read full article at https://wccftech.com/bully-online-mod-promises-let-you-roam-rockstars-classic-with-friends/

This morning, indie Chinese developer ChillyRoom unveiled Loulan: The Cursed Sand, one of the games funded through the PlayStation China Hero Project. The game is a hack 'n' slash action RPG viewed from a Diablo-like camera. The setting is the ancient Silk Road, in China's Western Regions. Loulan: The Cursed Sand tells the tragic love story of an exiled royal guard who returns to the titular fallen kingdom amidst the chaos of war in search of his beloved princess. Players will step into the game as the skeletal warrior known as 'The Cursed Sand', mastering the power of sand as […]
Read full article at https://wccftech.com/loulan-the-cursed-sand-chinese-hack-n-slash-arpg/

Samsung looks to be all set to announce its first triple-folding smartphone, the Galaxy Z TriFold, and even though the device is expected to be limited to a few markets, it was high time that we saw smartphones gravitate to a new form factor. Just before the official announcement happens, a series of images provides a first look at the Galaxy Z TriFold, showing a dual-infolding structure that can transform into a large-screen tablet. The Galaxy Z TriFold was on display at the Samsung booth at the K-Tech Showcase, with one report stating that the prototype did not display any […]
Read full article at https://wccftech.com/samsung-galaxy-z-trifold-first-look-image-gallery/

The most powerful way to build professional mobile apps.
The workspace for music creators
Turn your AI tool into a personal email marketing assistant
Your LinkedIn post studio from sources to impact
Private voice reminders for Mac with no cloud or accounts
Organize & edit local files with AI from your browser
Know What You Ship. Secure What You Depend On.
Compare API models by benchmarks, cost & capabilities
Secure file uploads for Intercom, Crisp & Zendesk
Personal health coach built with Gemini
Elon's answer to Wikipedia
The first add-on to manage Google Sheets tabs efficiently
Automatically adds short poems to your photos, for free.
Vibe Your Agentic Workflows
The post On-Policy Distillation LLMs Redefine Post-Training Efficiency appeared first on StartupHub.ai.
On-policy distillation LLMs from Thinking Machines Lab offer a highly efficient and cost-effective method for post-training specialized smaller models, combining direct learning with dense feedback.
The post Tensormesh exits stealth with $4.5M to slash AI inference caching costs appeared first on StartupHub.ai.
Tensormesh's AI inference caching technology eliminates redundant computation, promising to make enterprise AI cheaper and faster to run at scale.

Meta plans to lay off more than 100 employees in Washington state as part of a broader round of cuts within its artificial intelligence division.
A new filing with the state’s Employment Security Department shows 101 employees impacted, including 48 in Bellevue, 23 in Seattle, and four in Redmond, along with 23 remote workers based in Washington.
The filing lists dozens of affected roles across Meta’s AI research and infrastructure units, including software engineers, AI researchers, and data scientists. Meta product managers, privacy specialists, and compliance analysts were also affected.
Meta is cutting around 600 positions in its AI unit, Axios reported last week. The company is investing heavily in AI and wants to create a “more agile operation,” according to an internal memo cited by Axios. Meta has just under 3,000 roles within its superintelligence lab, CNBC reported.
The separations at Meta in Washington take effect Dec. 22, according to the Worker Adjustment and Retraining Notification (WARN) notice filed Oct. 22.
Meta employs thousands of people across multiple offices in the Seattle region, one of its largest engineering hubs outside Menlo Park.
The latest reductions mark another contraction for Meta’s Pacific Northwest footprint following multiple rounds of layoffs over the past several years.
The company’s rapid expansion in Seattle over the past decade made it one of the emblems of the region’s tech boom, coinciding with Microsoft’s resurgence and Amazon’s rise.
Among the Bay Area titans, Google was one of the first to establish a Seattle-area engineering office, back in 2004. However, it was Facebook’s decision to open its own outpost across from Pike Place Market in 2010 that really got the attention of its Silicon Valley brethren.
In the decade that followed, out-of-town companies set up more than 130 engineering centers in the region.

However, more recently Meta has made moves to trim its Seattle-area footprint.
Apple earlier this year took over a building previously occupied by Meta in Seattle’s South Lake Union neighborhood, near Amazon’s headquarters. CoStar reported in April that Meta listed its other Arbor Blocks building for sublease.
Meta previously gobbled up much of the planned office space at the Spring District, a sprawling development northeast of downtown Bellevue, including a building that was originally going to be a new REI headquarters. But it has subleased some of the space since then to companies such as Snowflake, which recently took an entire building from Meta at the Spring District.
Meta’s office in Redmond, near Microsoft’s headquarters, is focused on its mixed reality development.
GeekWire has reached out to the company for an updated Seattle-area headcount.
Meta’s cuts come amid reported layoffs at Amazon that could impact up to 30,000 workers.
Tech companies have laid off more than 128,000 employees this year, according to Layoffs.fyi. Last year, companies cut nearly 153,000 positions.

QuietNet blocks ads, tracking, and harmful websites before they even reach your phone, laptop, or TV — no apps needed, and it works for every device in your home or office.
We make the internet faster, safer, and more private for families and small businesses — without the noise. We're not backed by big tech or VC money. We're privacy-focused, bootstrapped, and already seeing people pay for peace and quiet online. QuietNet is built by people who care — no ads, no tracking, just a cleaner, safer internet for your family or team.
AMD gains $3 billion by divesting from ZT Systems’ manufacturing business – retains key talent AMD has confirmed that it has officially divested from ZT Systems’ manufacturing business, selling it to Sanmina for $3 billion. This recoups most of AMD’s acquisition costs from its ZT Systems (ZTS) purchase earlier this year, and secures AMD a […]
Voxtara is your personal AI speech coach that helps you become a more confident and effective public speaker. Whether you're preparing for a big presentation, teaching a class, or pitching to investors, Voxtara provides instant, actionable feedback to help you improve.
Key Features:
• AI-Powered Analysis: Get comprehensive feedback on clarity, pacing, confidence, engagement, and body language
• Video Recording: Record practice sessions up to 5 minutes
• Deep-Dive Reports: Detailed analysis
• Progress Tracking: Watch your speaking skills improve over time with detailed analytics
• Practice Reminders: Set custom reminders
• Session History: Review past performances
These quick tips could help you get more traction with your email outreach.
With cyber scams on the increase, TikTok's looking to help raise awareness among its user community.
YouTube launched Ask Studio, an AI assistant in YouTube Studio that analyzes channel data to surface comment insights, performance analysis, and content ideas.
OpenAI is telling companies that “relationship building” with AI has limits. Emotional dependence on ChatGPT is considered a safety risk, with new guardrails in place.
The post Edge AI: The Key to Sustainable AI Energy Efficiency appeared first on StartupHub.ai.
Arm and SCSP's new paper highlights edge computing as the strategic imperative for achieving AI energy efficiency and securing U.S. competitiveness.
The post America’s AI Future: AMD Powers U.S. Sovereign AI appeared first on StartupHub.ai.
AMD and the DOE are launching Lux and Discovery supercomputers at ORNL, a $1 billion investment to establish secure U.S. Sovereign AI infrastructure.
The post ASEAN’s AI Ambition: Infrastructure, Innovation, and Tailored Governance appeared first on StartupHub.ai.
“Infrastructure is destiny,” declared James Hairston, Head of International Policy & Partnerships for Asia, Africa, & Latin America at OpenAI, encapsulating the strategic imperative facing Southeast Asia in the burgeoning age of artificial intelligence. This powerful statement set the stage for a compelling discussion at the Bloomberg Business Summit at ASEAN in Kuala Lumpur, where […]
Microsoft’s efforts to include its Copilot AI in as many of its services and products as possible have landed the company in some hot water. The Australian Competition and Consumer Commission (ACCC) is taking the company to court, accusing the software giant of misleading customers with its integration of Copilot into Microsoft 365 plans.
This
Retroid is making waves in the handheld gaming space—a year after the September release of the Retroid Pocket 5, Retroid is following up with both a proper Retroid Pocket 6 and a Retroid Pocket G2, the latter of which serves as a souped-up refresh of Pocket 5 with the same external shell. Both handhelds are targeting a similar $200-$300 USD
Since the bombshell announcement two days ago that a new Halo installment is coming to PlayStation, the wider public seems convinced that Xbox's battle in the console war is over. And now, comments from Xbox Game Studios head Matt Booty, declaring TikTok a bigger competitor to Xbox than PlayStation, seem to lend credence to that. There's
OCCT 15 delivers a major update with a new storage test modeled after CrystalDiskMark, plus enhanced GPU diagnostics through an improved 3D adaptive test that detects errors more precisely and adds a coil-whine detection feature.
So, I stumbled upon this interesting portable monitor, which is unusually large to be called "portable", but considering there are users who would want something that can be useful for both travel and regular use, the UMax 24 looked interesting. I have reviewed a few UPERFECT portable monitors, including the Dual-Stack UStation Delta Max, which is one of the best options for work and gaming. However, UMax's big 24.5-inch screen size makes it an interesting option for daily usage if you are considering a versatile option for your desktop and travel. Personally, I wanted to see if I could replace […]
Read full article at https://wccftech.com/review/uperfect-umax-24-portable-monitor-review/

Apple will eventually introduce the M5 MacBook Air in a few months now that it has officially started selling the M5 MacBook Pro, but you will not immediately see the discounts on the company’s newer portable Macs, making the M4 MacBook Air models a more attractive proposition. Why? Because Amazon has slashed both the 13-inch and 15-inch versions by $200, and best of all, you can configure these machines up to 24GB unified RAM and a 512GB SSD. The base model starts from $799, making it an instant steal. The M5 MacBook Air will likely adopt the same ‘fanless’ cooling […]
Read full article at https://wccftech.com/do-not-wait-for-the-m5-macbook-air-because-amazon-offers-200-off-on-m4-macbook-air/

Halo: Campaign Evolved marked the official confirmation of something that felt like it would never happen just a few short years ago: Halo officially coming to PlayStation. Following that announcement, GameStop, the retailer initially known for selling physical video games and now better known as a glorified Funko Pop and merch store, called the 'Console Wars' over. Of all entities, the White House, which would normally have more important things to post about, responded with an AI-generated image of President Trump in Spartan armor with the caption "Power to the players." GameStop's original post comes […]
Read full article at https://wccftech.com/gamestop-console-wars-over-halo-on-ps-white-house-trump-ai-spartan-armor/

Gearbox founder Randy Pitchford was recently interviewed by Shacknews alongside a few other colleagues to discuss the making of Borderlands 4, the studio's latest game, which was released on September 12. The video runs for 73 minutes, and right toward the end, Pitchford goes into exactly what is needed to create a videogame as big as Borderlands 4. Interestingly, he then adds that the gaming industry as a whole is just getting started and 'figuring out' videogames, which haven't yet had their 'Citizen Kane' moment. To make a game like Borderlands 4 takes a big investment. It's a massive, […]
Read full article at https://wccftech.com/borderlands-boss-gaming-hasnt-even-produced-single-masterpiece-yet/

Qualcomm has announced its latest AI chips, which are designed to scale up to a purpose-built rack-level AI inference solution, but interestingly, they employ mobile memory onboard. Qualcomm's New AI Chips Take a 'Daring' Pivot Away From HBM To Target Efficient Inferencing Workloads Qualcomm has come a long way from being a mobile-focused firm, and in recent years, the San Diego chipmaker has expanded into new segments, including consumer computing and AI infrastructure. Now, the firm has announced its newest AI200 and AI250 chip solutions, which are reportedly designed for rack-scale configurations. This not only marks the entry of a […]
Read full article at https://wccftech.com/qualcomm-new-ai-rack-scale-solution-actually-uses-lpddr-mobile-memory-onboard/

Watch out, DeepSeek and Qwen! There's a new king of open source large language models (LLMs), especially when it comes to something enterprises are increasingly valuing: agentic tool use — that is, the ability to go off and use other software capabilities like web search or bespoke applications — without much human guidance.
That model is none other than MiniMax-M2, the latest LLM from the Chinese startup of the same name. And in a big win for enterprises globally, the model is available under a permissive, enterprise-friendly MIT License, meaning developers are free to take, deploy, retrain, and use it however they see fit — even for commercial purposes. It can be found on Hugging Face, GitHub and ModelScope, as well as through MiniMax's API. It also supports the OpenAI and Anthropic API standards, making it easy for customers of those proprietary AI providers to switch their workloads over to MiniMax's API if they want.
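Because the model exposes an OpenAI-compatible endpoint, switching can be as simple as repointing an existing client. Here is a minimal sketch; the base URL and model identifier are placeholders, so check MiniMax's documentation for the real values:

```python
# Minimal sketch: calling MiniMax-M2 through an OpenAI-compatible client.
# The endpoint URL and model name below are placeholders, not confirmed values.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.minimax.example/v1",  # placeholder endpoint
    api_key="YOUR_MINIMAX_API_KEY",
)

resp = client.chat.completions.create(
    model="MiniMax-M2",  # assumed model identifier
    messages=[{"role": "user", "content": "Summarize this repo's failing tests."}],
)
print(resp.choices[0].message.content)
```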
According to independent evaluations by Artificial Analysis, a third-party generative AI model benchmarking and research organization, M2 now ranks first among all open-weight systems worldwide on the Intelligence Index—a composite measure of reasoning, coding, and task-execution performance.
In agentic benchmarks that measure how well a model can plan, execute, and use external tools—skills that power coding assistants and autonomous agents—MiniMax’s own reported results, following the Artificial Analysis methodology, show τ²-Bench 77.2, BrowseComp 44.0, and FinSearchComp-global 65.5.
These scores place it at or near the level of top proprietary systems like GPT-5 (thinking) and Claude Sonnet 4.5, making MiniMax-M2 the highest-performing open model yet released for real-world agentic and tool-calling tasks.
Built around an efficient Mixture-of-Experts (MoE) architecture, MiniMax-M2 delivers high-end capability for agentic and developer workflows while remaining practical for enterprise deployment.
For technical decision-makers, the release marks an important turning point for open models in business settings. MiniMax-M2 combines frontier-level reasoning with a manageable activation footprint—just 10 billion active parameters out of 230 billion total.
This design enables enterprises to operate advanced reasoning and automation workloads on fewer GPUs, achieving near-state-of-the-art results without the infrastructure demands or licensing costs associated with proprietary frontier systems.
Artificial Analysis’ data show that MiniMax-M2’s strengths go beyond raw intelligence scores. The model leads or closely trails top proprietary systems such as GPT-5 (thinking) and Claude Sonnet 4.5 across benchmarks for end-to-end coding, reasoning, and agentic tool use.
Its performance in τ²-Bench, SWE-Bench, and BrowseComp indicates particular advantages for organizations that depend on AI systems capable of planning, executing, and verifying complex workflows—key functions for agentic and developer tools inside enterprise environments.
As LLM engineer Pierre-Carl Langlais aka Alexander Doria posted on X: "MiniMax [is] making a case for mastering the technology end-to-end to get actual agentic automation."
MiniMax-M2’s technical architecture is a sparse Mixture-of-Experts model with 230 billion total parameters and 10 billion active per inference.
This configuration significantly reduces latency and compute requirements while maintaining broad general intelligence.
The design allows for responsive agent loops—compile–run–test or browse–retrieve–cite cycles—that execute faster and more predictably than denser models.
For enterprise technology teams, this means easier scaling, lower cloud costs, and reduced deployment friction. According to Artificial Analysis, the model can be served efficiently on as few as four NVIDIA H100 GPUs at FP8 precision, a setup well within reach for mid-size organizations or departmental AI clusters.
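That four-GPU figure is easy to sanity-check with back-of-the-envelope arithmetic: at FP8, one byte per parameter, 230 billion parameters is roughly 230 GB of weights, which fits within the combined 320 GB of four 80 GB H100s while leaving headroom for KV cache and activations. A quick sketch of that arithmetic (an illustration, not Artificial Analysis' methodology):

```python
# Back-of-the-envelope check of the "four H100s at FP8" claim.
# Only the parameter count comes from the article; the rest are assumptions.
total_params = 230e9          # total parameters
bytes_per_param_fp8 = 1       # FP8 weight storage
h100_memory_gb = 80           # per-GPU HBM
gpus = 4

weights_gb = total_params * bytes_per_param_fp8 / 1e9
capacity_gb = h100_memory_gb * gpus
print(f"Weights: ~{weights_gb:.0f} GB vs capacity: {capacity_gb} GB "
      f"(headroom for KV cache/activations: ~{capacity_gb - weights_gb:.0f} GB)")
```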
MiniMax’s benchmark suite highlights strong real-world performance across developer and agent environments. The figure below, released with the model, compares MiniMax-M2 (in red) with several leading proprietary and open models, including GPT-5 (thinking), Claude Sonnet 4.5, Gemini 2.5 Pro, and DeepSeek-V3.2.
MiniMax-M2 achieves top or near-top performance in many categories:
SWE-bench Verified: 69.4 — close to GPT-5’s 74.9
ArtifactsBench: 66.8 — above Claude Sonnet 4.5 and DeepSeek-V3.2
τ²-Bench: 77.2 — approaching GPT-5’s 80.1
GAIA (text only): 75.7 — surpassing DeepSeek-V3.2
BrowseComp: 44.0 — notably stronger than other open models
FinSearchComp-global: 65.5 — best among tested open-weight systems
These results show MiniMax-M2’s capability in executing complex, tool-augmented tasks across multiple languages and environments—skills increasingly relevant for automated support, R&D, and data analysis inside enterprises.
The model’s overall intelligence profile is confirmed in the latest Artificial Analysis Intelligence Index v3.0, which aggregates performance across ten reasoning benchmarks including MMLU-Pro, GPQA Diamond, AIME 2025, IFBench, and τ²-Bench Telecom.
MiniMax-M2 scored 61 points, ranking as the highest open-weight model globally and following closely behind GPT-5 (high) and Grok 4.
Artificial Analysis highlighted the model’s balance between technical accuracy, reasoning depth, and applied intelligence across domains. For enterprise users, this consistency indicates a reliable model foundation suitable for integration into software engineering, customer support, or knowledge automation systems.
MiniMax engineered M2 for end-to-end developer workflows, enabling multi-file code edits, automated testing, and regression repair directly within integrated development environments or CI/CD pipelines.
The model also excels in agentic planning—handling tasks that combine web search, command execution, and API calls while maintaining reasoning traceability.
These capabilities make MiniMax-M2 especially valuable for enterprises exploring autonomous developer agents, data analysis assistants, or AI-augmented operational tools.
Benchmarks such as Terminal-Bench and BrowseComp demonstrate the model’s ability to adapt to incomplete data and recover gracefully from intermediate errors, improving reliability in production settings.
A distinctive aspect of MiniMax-M2 is its interleaved thinking format, which maintains visible reasoning traces between <think>...</think> tags.
This enables the model to plan and verify steps across multiple exchanges, a critical feature for agentic reasoning. MiniMax advises retaining these segments when passing conversation history to preserve the model’s logic and continuity.
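In practice, that guidance amounts to keeping the assistant's full output, reasoning trace included, in the messages array on subsequent calls. A minimal sketch, assuming a generic chat-completions message format (field names may differ in MiniMax's API):

```python
# Sketch: preserve interleaved <think>...</think> traces in conversation history,
# per MiniMax's guidance. The message schema is a generic chat format, not the
# confirmed MiniMax payload.
history = []

def add_turn(role: str, content: str) -> None:
    history.append({"role": role, "content": content})

add_turn("user", "Find the failing unit test and propose a fix.")
# Keep the full assistant message, including the reasoning trace, so the model
# can refer back to its own plan on later turns. Do not strip the <think> block.
add_turn(
    "assistant",
    "<think>First list the test files, then run pytest on the failing module."
    "</think>I'll start by locating the failing test.",
)
add_turn("user", "OK, the failure is in test_parser.py.")

# `history` is then passed as the messages array on the next API call.
```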
The company also provides a Tool Calling Guide on Hugging Face, detailing how developers can connect external tools and APIs via structured XML-style calls.
This functionality allows MiniMax-M2 to serve as the reasoning core for larger agent frameworks, executing dynamic tasks such as search, retrieval, and computation through external functions.
Enterprises can access the model through the MiniMax Open Platform API and MiniMax Agent interface (a web chat similar to ChatGPT), both currently free for a limited time.
MiniMax recommends SGLang and vLLM for efficient serving, each offering day-one support for the model’s unique interleaved reasoning and tool-calling structure.
Deployment guides and parameter configurations are available through MiniMax’s documentation.
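For teams evaluating local serving, a rough vLLM sketch might look like the following; the Hugging Face model ID, FP8 quantization setting, and four-way tensor parallelism are assumptions drawn from the figures above and should be verified against MiniMax's deployment guide:

```python
# Minimal vLLM serving sketch for MiniMax-M2 (assumed model ID and settings).
from vllm import LLM, SamplingParams

llm = LLM(
    model="MiniMaxAI/MiniMax-M2",   # assumed Hugging Face model ID
    tensor_parallel_size=4,          # four GPUs, per the article's figure
    quantization="fp8",              # FP8 serving, per the article's figure
)

params = SamplingParams(temperature=0.7, max_tokens=256)
outputs = llm.generate(["Plan the steps to reproduce and fix a flaky test."], params)
print(outputs[0].outputs[0].text)
```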
As Artificial Analysis noted, MiniMax’s API pricing is set at $0.30 per million input tokens and $1.20 per million output tokens, among the most competitive in the open-model ecosystem.
| Provider | Model | Input $/1M | Output $/1M | Notes |
| --- | --- | --- | --- | --- |
| MiniMax | M2 | $0.30 | $1.20 | Listed under “Chat Completion v2” for M2. |
| OpenAI | Flagship model | $1.25 | $10.00 | Flagship model pricing on OpenAI’s API pricing page. |
| OpenAI | Cheaper tier | $0.25 | $2.00 | Cheaper tier for well-defined tasks. |
| Anthropic | — | $3.00 | $15.00 | Anthropic’s current per-MTok list; long-context (>200K input) uses a premium tier. |
| Google | Gemini | $0.30 | $2.50 | Prices include “thinking tokens”; page also lists cheaper Flash-Lite and 2.0 tiers. |
| xAI | “Fast” tier | $0.20 | $0.50 | xAI also lists Grok-4 at $3 / $15. |
| DeepSeek | — | $0.28 | $0.42 | Cache-hit input is $0.028; table shows per-model details. |
| Qwen (Alibaba) | — | from $0.022 | from $0.216 | Tiered by input size (≤128K, ≤256K, ≤1M tokens); listed “Input price / Output price per 1M”. |
| Cohere | — | $2.50 | $10.00 | First-party pricing page also lists Command R ($0.50 / $1.50) and others. |
Notes & caveats (for readers):
Prices are USD per million tokens and can change; check linked pages for updates and region/endpoint nuances (e.g., Anthropic long-context >200K input, Google Live API variants, cache discounts).
Vendors may bill extra for server-side tools (web search, code execution) or offer batch/context-cache discounts.
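As a rough illustration of how those rates compound, here is the arithmetic for a hypothetical workload of 10 million input and 2 million output tokens per month, using three rows from the table above:

```python
# Worked example using the per-million-token rates in the table above.
# The 10M-in / 2M-out monthly workload is an arbitrary illustration.
rates = {                        # (input $/1M, output $/1M), from the table
    "MiniMax M2": (0.30, 1.20),
    "OpenAI flagship": (1.25, 10.00),
    "Anthropic": (3.00, 15.00),
}
input_m, output_m = 10, 2        # millions of tokens per month (assumed)

for name, (inp, outp) in rates.items():
    cost = input_m * inp + output_m * outp
    print(f"{name}: ${cost:.2f}/month")
# MiniMax M2: $5.40, OpenAI flagship: $32.50, Anthropic: $60.00
```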
While the model produces longer, more explicit reasoning traces, its sparse activation and optimized compute design help maintain a favorable cost-performance balance—an advantage for teams deploying interactive agents or high-volume automation systems.
MiniMax has quickly become one of the most closely watched names in China’s fast-rising AI sector.
Backed by Alibaba and Tencent, the company moved from relative obscurity to international recognition within a year—first through breakthroughs in AI video generation, then through a series of open-weight large language models (LLMs) aimed squarely at developers and enterprises.
The company first captured global attention in late 2024 with its AI video generation tool, “video-01,” which demonstrated the ability to create dynamic, cinematic scenes in seconds. VentureBeat described how the model’s launch sparked widespread interest after online creators began sharing lifelike, AI-generated footage—most memorably, a viral clip of a Star Wars lightsaber duel that drew millions of views in under two days.
CEO Yan Junjie emphasized that the system outperformed leading Western tools in generating human movement and expression, an area where video AIs often struggle. The product, later commercialized through MiniMax’s Hailuo platform, showcased the startup’s technical confidence and creative reach, helping to establish China as a serious contender in generative video technology.
By early 2025, MiniMax had turned its attention to long-context language modeling, unveiling the MiniMax-01 series, including MiniMax-Text-01 and MiniMax-VL-01. These open-weight models introduced an unprecedented 4-million-token context window, doubling the reach of Google’s Gemini 1.5 Pro and dwarfing OpenAI’s GPT-4o by more than twentyfold.
The company continued its rapid cadence with the MiniMax-M1 release in June 2025, a model focused on long-context reasoning and reinforcement learning efficiency. M1 extended context capacity to 1 million tokens and introduced a hybrid Mixture-of-Experts design trained using a custom reinforcement-learning algorithm known as CISPO. Remarkably, VentureBeat reported that MiniMax trained M1 at a total cost of about $534,700, roughly one-tenth of DeepSeek’s R1 and far below the multimillion-dollar budgets typical for frontier-scale models.
For enterprises and technical teams, MiniMax’s trajectory signals the arrival of a new generation of cost-efficient, open-weight models designed for real-world deployment. Its open licensing—ranging from Apache 2.0 to MIT—gives businesses freedom to customize, self-host, and fine-tune without vendor lock-in or compliance restrictions.
Features such as structured function calling, long-context retention, and high-efficiency attention architectures directly address the needs of engineering groups managing multi-step reasoning systems and data-intensive pipelines.
As MiniMax continues to expand its lineup, the company has emerged as a key global innovator in open-weight AI, combining ambitious research with pragmatic engineering.
The release of MiniMax-M2 reinforces the growing leadership of Chinese AI research groups in open-weight model development.
Following earlier contributions from DeepSeek, Alibaba’s Qwen series, and Moonshot AI, MiniMax’s entry continues the trend toward open, efficient systems designed for real-world use.
Artificial Analysis observed that MiniMax-M2 exemplifies a broader shift in focus toward agentic capability and reinforcement-learning refinement, prioritizing controllable reasoning and real utility over raw model size.
For enterprises, this means access to a state-of-the-art open model that can be audited, fine-tuned, and deployed internally with full transparency.
By pairing strong benchmark performance with open licensing and efficient scaling, MiniMax positions MiniMax-M2 as a practical foundation for intelligent systems that think, act, and assist with traceable logic — making it one of the most enterprise-ready open AI models available today.

Anthropic is making its most aggressive push yet into the trillion-dollar financial services industry, unveiling a suite of tools that embed its Claude AI assistant directly into Microsoft Excel and connect it to real-time market data from some of the world's most influential financial information providers.
The San Francisco-based AI startup announced Monday it is releasing Claude for Excel, allowing financial analysts to interact with the AI system directly within their spreadsheets — the quintessential tool of modern finance. Beyond Excel, select Claude models are also being made available in Microsoft Copilot Studio and Researcher agent, expanding the integration across Microsoft's enterprise AI ecosystem. The integration marks a significant escalation in Anthropic's campaign to position itself as the AI platform of choice for banks, asset managers, and insurance companies, markets where precision and regulatory compliance matter far more than creative flair.
The expansion comes just three months after Anthropic launched its Financial Analysis Solution in July, and it signals the company's determination to capture market share in an industry projected to spend $97 billion on AI by 2027, up from $35 billion in 2023.
More importantly, it positions Anthropic to compete directly with Microsoft — ironically, its partner in this Excel integration — which has its own Copilot AI assistant embedded across its Office suite, and with OpenAI, which counts Microsoft as its largest investor.
The decision to build directly into Excel is hardly accidental. Excel remains the lingua franca of finance, the digital workspace where analysts spend countless hours constructing financial models, running valuations, and stress-testing assumptions. By embedding Claude into this environment, Anthropic is meeting financial professionals exactly where they work rather than asking them to toggle between applications.
Claude for Excel lets users work with the AI in a sidebar, where it can read, analyze, modify, and create Excel workbooks. It provides full transparency about the actions it takes by tracking and explaining changes and letting users navigate directly to referenced cells.
This transparency feature addresses one of the most persistent anxieties around AI in finance: the "black box" problem. When billions of dollars ride on a financial model's output, analysts need to understand not just the answer but how the AI arrived at it. By showing its work at the cell level, Anthropic is attempting to build the trust necessary for widespread adoption in an industry where careers and fortunes can turn on a misplaced decimal point.
The technical implementation is sophisticated. Claude can discuss how spreadsheets work, modify them while preserving formula dependencies — a notoriously complex task — debug cell formulas, populate templates with new data, or build entirely new spreadsheets from scratch. This isn't merely a chatbot that answers questions about your data; it's a collaborative tool that can actively manipulate the models that drive investment decisions worth trillions of dollars.
Perhaps more significant than the Excel integration is Anthropic's expansion of its connector ecosystem, which now links Claude to live market data and proprietary research from financial information giants. The company added six major new data partnerships spanning the entire spectrum of financial information that professional investors rely upon.
Aiera now provides Claude with real-time earnings call transcripts and summaries of investor events like shareholder meetings, presentations, and conferences. The Aiera connector also enables a data feed from Third Bridge, which gives Claude access to a library of insights interviews, company intelligence, and industry analysis from experts and former executives. Chronograph gives private equity investors operational and financial information for portfolio monitoring and conducting due diligence, including performance metrics, valuations, and fund-level data.
Egnyte enables Claude to securely search permitted data for internal data rooms, investment documents, and approved financial models while maintaining governed access controls. LSEG, the London Stock Exchange Group, connects Claude to live market data including fixed income pricing, equities, foreign exchange rates, macroeconomic indicators, and analysts' estimates of other important financial metrics. Moody's provides access to proprietary credit ratings, research, and company data covering ownership, financials, and news on more than 600 million public and private companies, supporting work and research in compliance, credit analysis, and business development. MT Newswires provides Claude with access to the latest global multi-asset class news on financial markets and economies.
These partnerships amount to a land grab for the informational infrastructure that powers modern finance. Previously announced in July, Anthropic had already secured integrations with S&P Capital IQ, Daloopa, Morningstar, FactSet, PitchBook, Snowflake, and Databricks. Together, these connectors give Claude access to virtually every category of financial data an analyst might need: fundamental company data, market prices, credit assessments, private company intelligence, alternative data, and breaking news.
This matters because the quality of AI outputs depends entirely on the quality of inputs. Generic large language models trained on public internet data simply cannot compete with systems that have direct pipelines to Bloomberg-quality financial information. By securing these partnerships, Anthropic is building moats around its financial services offering that competitors will find difficult to replicate.
The strategic calculus here is clear: Anthropic is betting that domain-specific AI systems with privileged access to proprietary data will outcompete general-purpose AI assistants. It's a direct challenge to the "one AI to rule them all" approach favored by some competitors.
The third pillar of Anthropic's announcement involves six new "Agent Skills" — pre-configured workflows for common financial tasks. These skills are Anthropic's attempt to productize the workflows of entry-level and mid-level financial analysts, professionals who spend their days building models, processing due diligence documents, and writing research reports. Anthropic has designed skills specifically to automate these time-consuming tasks.
The new skills include building discounted cash flow models complete with full free cash flow projections, weighted average cost of capital calculations, scenario toggles, and sensitivity tables. There's comparable company analysis featuring valuation multiples and operating metrics that can be easily refreshed with updated data. Claude can now process data room documents into Excel spreadsheets populated with financial information, customer lists, and contract terms. It can create company teasers and profiles for pitch books and buyer lists, perform earnings analyses that use quarterly transcripts and financials to extract important metrics, guidance changes, and management commentary, and produce initiating coverage reports with industry analysis, company deep dives, and valuation frameworks.
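To ground what the DCF skill is automating, the core calculation is just discounting projected free cash flows plus a terminal value back to present value. The sketch below uses made-up figures and is a generic illustration, not Anthropic's implementation:

```python
# Generic illustration of the arithmetic behind a DCF model: discount projected
# free cash flows and a Gordon-growth terminal value to present value.
# All figures are invented for demonstration.
def dcf_value(fcf_projections, wacc, terminal_growth):
    pv = sum(fcf / (1 + wacc) ** t for t, fcf in enumerate(fcf_projections, start=1))
    terminal = fcf_projections[-1] * (1 + terminal_growth) / (wacc - terminal_growth)
    pv_terminal = terminal / (1 + wacc) ** len(fcf_projections)
    return pv + pv_terminal

# Five years of projected free cash flow ($M), 9% WACC, 2.5% terminal growth.
print(f"Enterprise value: ${dcf_value([120, 132, 145, 158, 170], 0.09, 0.025):,.0f}M")
```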
It's worth noting that Anthropic's Sonnet 4.5 model now tops the Finance Agent benchmark from Vals AI at 55.3% accuracy, a metric designed to test AI systems on tasks expected of entry-level financial analysts. A 55% accuracy rate might sound underwhelming, but it is state-of-the-art performance and highlights both the promise and limitations of AI in finance. The technology can clearly handle sophisticated analytical tasks, but it's not yet reliable enough to operate autonomously without human oversight — a reality that may actually reassure both regulators and the analysts whose jobs might otherwise be at risk.
The Agent Skills approach is particularly clever because it packages AI capabilities in terms that financial institutions already understand. Rather than selling generic "AI assistance," Anthropic is offering solutions to specific, well-defined problems: "You need a DCF model? We have a skill for that. You need to analyze earnings calls? We have a skill for that too."
Anthropic's financial services strategy appears to be gaining traction with exactly the kind of marquee clients that matter in enterprise sales. The company counts among its clients AIA Labs at Bridgewater, Commonwealth Bank of Australia, American International Group, and Norges Bank Investment Management — Norway's $1.6 trillion sovereign wealth fund, one of the world's largest institutional investors.
NBIM CEO Nicolai Tangen reported achieving approximately 20% productivity gains, equivalent to 213,000 hours, with portfolio managers and risk departments now able to "seamlessly query our Snowflake data warehouse and analyze earnings calls with unprecedented efficiency."
At AIG, CEO Peter Zaffino said the partnership has "compressed the timeline to review business by more than 5x in our early rollouts while simultaneously improving our data accuracy from 75% to over 90%." If these numbers hold across broader deployments, the productivity implications for the financial services industry are staggering.
These aren't pilot programs or proof-of-concept deployments; they're production implementations at institutions managing trillions of dollars in assets and making underwriting decisions that affect millions of customers. Their public endorsements provide the social proof that typically drives enterprise adoption in conservative industries.
Yet Anthropic's financial services ambitions unfold against a backdrop of heightened regulatory scrutiny and shifting enforcement priorities. In 2023, the Consumer Financial Protection Bureau released guidance requiring lenders to "use specific and accurate reasons when taking adverse actions against consumers" involving AI, and issued additional guidance requiring regulated entities to "evaluate their underwriting models for bias" and "evaluate automated collateral-valuation and appraisal processes in ways that minimize bias."
However, according to a Brookings Institution analysis, these measures have since been revoked with work stopped or eliminated at the current downsized CFPB under the current administration, creating regulatory uncertainty. The pendulum has swung from the Biden administration's cautious approach, exemplified by an executive order on safe AI development, toward the Trump administration's "America's AI Action Plan," which seeks to "cement U.S. dominance in artificial intelligence" through deregulation.
This regulatory flux creates both opportunities and risks. Financial institutions eager to deploy AI now face less prescriptive federal oversight, potentially accelerating adoption. But the absence of clear guardrails also exposes them to potential liability if AI systems produce discriminatory outcomes, particularly in lending and underwriting.
The Massachusetts Attorney General recently reached a $2.5 million settlement with student loan company Earnest Operations, alleging that its use of AI models resulted in "disparate impact in approval rates and loan terms, specifically disadvantaging Black and Hispanic applicants." Such cases will likely multiply as AI deployment grows, creating a patchwork of state-level enforcement even as federal oversight recedes.
Anthropic appears acutely aware of these risks. In an interview with Banking Dive, Jonathan Pelosi, Anthropic's global head of industry for financial services, emphasized that Claude requires a "human in the loop." The platform, he said, is not intended for autonomous financial decision-making or to provide stock recommendations that users follow blindly. During client onboarding, Pelosi told the publication, Anthropic focuses on training and understanding model limitations, putting guardrails in place so people treat Claude as a helpful technology rather than a replacement for human judgment.
Anthropic's financial services push comes as AI competition intensifies across the enterprise. OpenAI, Microsoft, Google, and numerous startups are all vying for position in what may become one of AI's most lucrative verticals. Goldman Sachs introduced a generative AI assistant to its bankers, traders, and asset managers in January, signaling that major banks may build their own capabilities rather than rely exclusively on third-party providers.
The emergence of domain-specific AI models like BloombergGPT — trained specifically on financial data — suggests the market may fragment between generalized AI assistants and specialized tools. Anthropic's strategy appears to stake out a middle ground: general-purpose models (Claude was not trained exclusively on financial data) enhanced with financial-specific tooling, data access, and workflows.
The company's partnership strategy with implementation consultancies including Deloitte, KPMG, PwC, Slalom, TribeAI, and Turing is equally critical. These firms serve as force multipliers, embedding Anthropic's technology into their own service offerings and providing the change management expertise that financial institutions need to successfully adopt AI at scale.
The broader question is whether AI tools like Claude will genuinely transform financial services productivity or merely shift work around. The PYMNTS Intelligence report "The Agentic Trust Gap" found that chief financial officers remain hesitant about AI agents, with "nagging concern" about hallucinations where "an AI agent can go off script and expose firms to cascading payment errors and other inaccuracies."
"For finance leaders, the message is stark: Harness AI's momentum now, but build the guardrails before the next quarterly call—or risk owning the fallout," the report warned.
A 2025 KPMG report found that 70% of board members have developed responsible use policies for employees, with other popular initiatives including implementing a recognized AI risk and governance framework, developing ethical guidelines and training programs for AI developers, and conducting regular AI use audits.
The financial services industry faces a delicate balancing act: move too slowly and risk competitive disadvantage as rivals achieve productivity gains; move too quickly and risk operational failures, regulatory penalties, or reputational damage. Speaking at the Evident AI Symposium in New York last week, Ian Glasner, HSBC's group head of emerging technology, innovation and ventures, struck an optimistic tone about the sector's readiness for AI adoption. "As an industry, we are very well prepared to manage risk," he said, according to CIO Dive. "Let's not overcomplicate this. We just need to be focused on the business use case and the value associated."
Anthropic's latest moves suggest the company sees financial services as a beachhead market where AI's value proposition is clear, customers have deep pockets, and the technical requirements play to Claude's strengths in reasoning and accuracy. By building Excel integration, securing data partnerships, and pre-packaging common workflows, Anthropic is reducing the friction that typically slows enterprise AI adoption.
The $61.5 billion valuation the company commanded in its March fundraising round — up from roughly $16 billion a year earlier — suggests investors believe this strategy will work. But the real test will come as these tools move from pilot programs to production deployments across thousands of analysts and billions of dollars in transactions.
Financial services may prove to be AI's most demanding proving ground: an industry where mistakes are costly, regulation is stringent, and trust is everything. If Claude can successfully navigate the spreadsheet cells and data feeds of Wall Street without hallucinating a decimal point in the wrong direction, Anthropic will have accomplished something far more valuable than winning another benchmark test. It will have proven that AI can be trusted with the money.

Some enterprises are best served by fine-tuning large models to their needs, but a number of companies plan to build their own models, a project that would require access to GPUs.
Google Cloud wants to play a bigger role in enterprises’ model-making journey with its new service, Vertex AI Training. The service gives enterprises looking to train their own models access to a managed Slurm environment, data science tooling and any chips capable of large-scale model training.
With this new service, Google Cloud hopes to turn more enterprises away from other providers and encourage the building of more company-specific AI models.
While Google Cloud has always offered the ability to customize its Gemini models, the new service allows customers to bring in their own models or customize any open-source model Google Cloud hosts.
Vertex AI Training positions Google Cloud directly against companies like CoreWeave and Lambda Labs, as well as its cloud competitors AWS and Microsoft Azure.
Jaime de Guerre, senior director of product management at Google Cloud, told VentureBeat that the company has been hearing from a lot of organizations of varying sizes that they need a way to better optimize compute, but in a more reliable environment.
“What we're seeing is that there's an increasing number of companies that are building or customizing large gen AI models to introduce a product offering built around those models, or to help power their business in some way,” de Guerre said. “This includes AI startups, technology companies, sovereign organizations building a model for a particular region or culture or language and some large enterprises that might be building it into internal processes.”
De Guerre noted that while anyone can technically use the service, Google is targeting companies planning large-scale model training rather than simple fine-tuning or LoRA adaptation. Vertex AI Training will focus on longer-running training jobs spanning hundreds or even thousands of chips. Pricing will depend on the amount of compute the enterprise needs.
“Vertex AI Training is not for adding more information to the context or using RAG; this is to train a model where you might start from completely random weights,” he said.
Enterprises are recognizing the value of building customized models beyond just fine-tuning an LLM via retrieval-augmented generation (RAG). Custom models would know more in-depth company information and respond with answers specific to the organization. Companies like Arcee.ai have begun offering their models for customization to clients. Adobe recently announced a new service that allows enterprises to retrain Firefly for their specific needs. Organizations like FICO, which create small language models specific to the finance industry, often buy GPUs to train them at significant cost.
Google Cloud said Vertex AI Training differentiates itself by giving access to a larger set of chips, services to monitor and manage training and the expertise it learned from training the Gemini models.
Some early customers of Vertex AI Training include AI Singapore, a consortium of Singaporean research institutes and startups that built the 27-billion-parameter SEA-LION v4, and Salesforce’s AI research team.
Enterprises often have to choose between taking an already-built LLM and fine-tuning it or building their own model. But creating an LLM from scratch is usually unattainable for smaller companies, or it simply doesn’t make sense for some use cases. However, for organizations where a fully custom or from-scratch model makes sense, the issue is gaining access to the GPUs needed to run training.
Training a model, de Guerre said, can be difficult and expensive, especially when organizations compete with several others for GPU space.
Hyperscalers like AWS and Microsoft — and, yes, Google — have pitched that their massive data centers and racks and racks of high-end chips deliver the most value to enterprises. Not only will they have access to expensive GPUs, but cloud providers often offer full-stack services to help enterprises move to production.
Services like CoreWeave gained prominence for offering on-demand access to Nvidia H100s, giving customers flexibility in compute power when building models or applications. This has also given rise to a business model in which companies with GPUs rent out server space.
De Guerre said Vertex AI Training isn't just about offering bare compute for training models, where the enterprise rents a GPU server but has to bring its own training software and manage job timing and failures itself.
“This is a managed Slurm environment that will help with all the job scheduling and automatic recovery of jobs failing,” de Guerre said. “So if a training job slows down or stops due to a hardware failure, the training will automatically restart very quickly, based on automatic checkpointing that we do in management of the checkpoints to continue with very little downtime.”
He added that this provides higher throughput and more efficient training for a larger scale of compute clusters.
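Conceptually, the recovery behavior de Guerre describes resembles ordinary periodic checkpointing with resume-from-latest, so a restarted job loses at most one checkpoint interval. A generic sketch of that idea (not Vertex AI Training's actual mechanism):

```python
# Generic checkpoint/resume loop illustrating the recovery idea described above:
# persist state periodically so a restarted job loses at most one interval.
# This is a conceptual sketch, not Vertex AI Training's implementation.
import os
import torch

CKPT = "checkpoint.pt"

def train(model, optimizer, total_steps: int, ckpt_every: int = 100):
    start = 0
    if os.path.exists(CKPT):                       # job was restarted: resume
        state = torch.load(CKPT)
        model.load_state_dict(state["model"])
        optimizer.load_state_dict(state["optim"])
        start = state["step"] + 1

    for step in range(start, total_steps):
        loss = model(torch.randn(4, 16)).mean()    # stand-in for a real batch
        optimizer.zero_grad(); loss.backward(); optimizer.step()
        if step % ckpt_every == 0:
            torch.save({"model": model.state_dict(),
                        "optim": optimizer.state_dict(),
                        "step": step}, CKPT)

model = torch.nn.Linear(16, 1)
train(model, torch.optim.SGD(model.parameters(), lr=0.01), total_steps=500)
```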
Services like Vertex AI Training could make it easier for enterprises to build niche models or completely customize existing models. Still, just because the option exists doesn’t mean it's the right fit for every enterprise.

A new framework developed by researchers at Google Cloud and DeepMind aims to address one of the key challenges of developing computer use agents (CUAs): Gathering high-quality training examples at scale.
The framework, dubbed Watch & Learn (W&L), addresses the problem of training data generation in a way that doesn’t require human annotation and can automatically extract demonstrations from raw videos.
Their experiments show that data generated by W&L can be used to train or fine-tune existing computer-use and foundation models to improve their performance on computer-use tasks. But equally important, the same approach can be used to create in-context learning (ICL) examples for computer use agents, enabling companies to create CUAs for bespoke internal tasks without the need for costly training of specialized models.
The web is rich with video tutorials and screencasts that describe complex workflows for using applications. These videos are a gold mine that can provide computer use agents with domain knowledge and instructions for accomplishing different tasks through user interface interactions.
However, before they can be used to train CUA agents, these videos need to be transformed into annotated trajectories (that is, a set of task descriptions, screenshots and actions), a process that is prohibitively expensive and time-consuming when done manually.
Existing approaches to address this data bottleneck rely on annotating these videos through the use of multimodal language models, which usually result in low precision and faulty examples. A different approach uses self-play agents that autonomously explore user interfaces to collect trajectories. However, techniques using this approach usually create simple examples that are not useful in unpredictable real-world situations.
As the researchers note in their paper, “Overall, these approaches either rely on brittle heuristics, are costly as they rely on explorations in real environments or generate low-complexity demonstrations misaligned with human intent.”
The Watch & Learn framework tries to address the challenges of creating CUA demonstrations by rethinking the problem formulation.
Instead of directly generating trajectories or depending on complex multi-stage pipelines, the researchers frame the problem as an “inverse dynamics objective”: Given two consecutive observations, predict the intermediate action that produced the transition.
According to the researchers, this formulation is “easier to learn, avoids hand-crafted heuristics and generalizes robustly across applications.”
The W&L framework can be broken down into three key stages: Training an inverse dynamics model (IDM), retrieving raw videos, and training CUA agents.
In the first phase, the researchers used agents to interact with live web pages to create a large corpus of 500,000 state transitions (two consecutive observations and the action that resulted in the transition). They then used this data (along with 132,000 human-annotated transitions from existing open datasets) to train an inverse dynamics model (IDM) that takes in two consecutive observations and predicts the transition action. Their trained IDM, which is a small transformer model, outperformed off-the-shelf foundation models in predicting transition actions.
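A minimal sketch of what such an inverse dynamics model could look like appears below; the architecture, action vocabulary, and training step are illustrative assumptions, not the paper's actual model:

```python
# Illustrative inverse dynamics model (IDM): given two consecutive UI
# screenshots, predict the action that caused the transition.
# Sizes, action set, and backbone are assumptions, not the paper's design.
import torch
import torch.nn as nn

ACTIONS = ["click", "scroll", "type", "drag"]  # hypothetical action vocabulary

class InverseDynamicsModel(nn.Module):
    def __init__(self, num_actions: int = len(ACTIONS), dim: int = 256):
        super().__init__()
        # Tiny CNN encoder stands in for the paper's transformer backbone.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, dim),
        )
        # Heads see both observations and predict the intermediate action.
        self.action_head = nn.Linear(2 * dim, num_actions)
        self.coord_head = nn.Linear(2 * dim, 2)  # normalized (x, y) target

    def forward(self, obs_t, obs_t1):
        h = torch.cat([self.encoder(obs_t), self.encoder(obs_t1)], dim=-1)
        return self.action_head(h), self.coord_head(h)

# One training step: cross-entropy on action type, MSE on the click location.
model = InverseDynamicsModel()
opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
obs_t = torch.randn(8, 3, 224, 224)   # screenshot before the action
obs_t1 = torch.randn(8, 3, 224, 224)  # screenshot after the action
action = torch.randint(0, len(ACTIONS), (8,))
coords = torch.rand(8, 2)

logits, xy = model(obs_t, obs_t1)
loss = nn.functional.cross_entropy(logits, action) + nn.functional.mse_loss(xy, coords)
loss.backward()
opt.step()
```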
The researchers then designed a pipeline that retrieves videos from platforms such as YouTube and runs them through IDM to generate high-quality trajectories. The IDM takes in consecutive video frames and determines the actions (scroll, click) that caused the changes in the environment, which are then packaged into annotated trajectories. Using this method, they generated 53,125 trajectories with high-accuracy action labels.
These examples can be used to train effective computer use models for specific tasks. But the researchers also found that trajectories extracted through IDM can serve as in-context learning examples to improve the performance of CUAs on bespoke tasks at inference time. For ICL, they use Gemini 2.5 Flash to add additional reasoning annotations to the observation/action examples in the trajectories, which can then be inserted into the CUA agent’s prompt (usually 3-5 examples) during inference.
“This dual role (training and in-context guidance) enables flexible integration with both open-source models and general-purpose agents,” the researchers write.
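As a rough picture of the in-context route, the sketch below assembles a few annotated trajectories into a prompt for a general-purpose model; the trajectory schema and prompt wording are assumptions for illustration:

```python
# Illustrative sketch: use IDM-extracted trajectories as in-context examples
# for a computer-use agent. Schema and wording are assumed, not the paper's.
trajectories = [
    {
        "task": "Export a spreadsheet as CSV",
        "steps": [
            "observe: File menu visible -> action: click('File')",
            "observe: menu open -> action: click('Download > CSV')",
        ],
        "reasoning": "Open the File menu, then pick the CSV export option.",
    },
    # ... typically 3-5 retrieved examples relevant to the current task
]

def build_prompt(task: str, examples: list[dict]) -> str:
    parts = ["You are a computer-use agent. Learn from these demonstrations:\n"]
    for i, ex in enumerate(examples, 1):
        parts.append(f"Example {i}: {ex['task']}")
        parts.append(f"Reasoning: {ex['reasoning']}")
        parts.extend(ex["steps"])
        parts.append("")
    parts.append(f"Now complete this task step by step: {task}")
    return "\n".join(parts)

print(build_prompt("Export the quarterly report as CSV", trajectories))
```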
To test the usefulness of W&L, the researchers ran a series of experiments with closed and open source models on the OSWorld benchmark, which evaluates agents in real desktop and operating system environments across different tasks, including productivity, programming and design.
For fine-tuning, they used their corpus of 53,000 trajectories to train two open source models: UI-TARS-1.5, a strong, open source vision-language-action model designed specifically for computer use, and Qwen 2.5-VL, an open-weight multimodal LLM.
For in-context learning tests, they applied W&L examples to general-purpose multimodal models such as Gemini 2.5 Flash, OpenAI o3 and Claude Sonnet 4.
W&L resulted in improvements on OSWorld in all model categories, including up to 3 points for ICL on general-purpose models and up to 11 points for fine-tuned open-source models.
More importantly, these benefits were achieved without any manual annotation, “demonstrating that web-scale human workflows can serve as a practical and scalable foundation for advancing CUAs towards real-world deployment,” the researchers write.
This could have important implications for real-world applications, enabling enterprises to turn their existing corpora of videos and conference recordings into training data for CUAs. It also makes it easier to generate new training trajectories: all you need to do is record videos of different tasks being performed and have them annotated by an IDM. And with frontier models constantly improving and becoming cheaper, you can expect to get more from your existing data as the field continues to progress.


Amazon is preparing to lay off as many as 30,000 corporate employees in a sweeping workforce reduction intended to reduce expenses and compensate for over-hiring during the pandemic, according to a report from Reuters on Monday.
GeekWire has contacted Amazon for comment.
Layoff notifications will start going out via email on Tuesday, according to Reuters, which cited people familiar with the matter. One employee at Amazon told GeekWire the workforce is on “pins and needles” in anticipation of cuts.
Bloomberg reported that cuts will impact several business units, including logistics, payments, video games, and Amazon Web Services.
Amazon’s corporate workforce numbered around 350,000 in early 2023. It has not provided an updated number since then.
The company’s last significant layoff occurred in 2023 when it cut 27,000 corporate workers in multiple stages. Since then the company has made a series of smaller layoffs across different business units.
Fortune reported this month that Amazon planned to cut up to 15% of its human resources staff as part of a wider layoff.
Amazon has taken a cautious hiring approach with its corporate workforce, following years of huge headcount growth. The company’s corporate headcount tripled between 2017 and 2022, according to The Information.
The reported cuts come as Amazon is investing heavily in artificial intelligence. The company said earlier this year it expects to increase capital expenditures to more than $100 billion in 2025, up from $83 billion in 2024, with a majority going toward building out capacity for AI in AWS.
Amazon CEO Andy Jassy also hinted at potential workforce impact from generative AI earlier this year in a memo to employees that was shared publicly.
“We will need fewer people doing some of the jobs that are being done today, and more people doing other types of jobs,” he wrote. “It’s hard to know exactly where this nets out over time, but in the next few years, we expect that this will reduce our total corporate workforce as we get efficiency gains from using AI extensively across the company.”
Amazon reported 1.54 million total employees as of June 30 — up 3% year-over-year. The majority of the company’s workforce is made up of warehouse workers.
Amazon employs roughly 50,000 corporate and tech workers in buildings across its Seattle headquarters, with another 12,000 in Bellevue.
The company reports its third quarter earnings on Thursday afternoon.
Fellow Seattle-area tech giant Microsoft has laid off more than 15,000 people since May as it too invests in AI and data center capacity. Microsoft has cut more than 3,200 roles in Washington this year.
Last week, The New York Times cited internal Amazon documents and interviews to report that the company plans to automate as much as 75% of its warehouse operations by 2033. According to the report, the robotics team expects automation to “flatten Amazon’s hiring curve over the next 10 years,” allowing it to avoid hiring more than 600,000 workers even as sales continue to grow.
GeekWire reporter Kurt Schlosser contributed to this story.
Learn more about Google’s work with the USO and how it plans to launch a program that will enable service members to stay in touch with loved ones.
We announced the Public Preview of our personal health coach will start rolling-out tomorrow. Our vision is to help empower everyone to live a longer, healthier life wit…
Starting tomorrow and over the next week, the personal health coach preview will be available for eligible U.S. Fitbit Premium users. This could be an easier way to showcase relevant items to each user.
So now, X users have another AI bot to develop feelings for.
If you use security keys on X, you'll need to take note.
Will this get more people posting their thoughts in the app?

The conversation around artificial intelligence (AI) has been dominated by “replacement theory” headlines. From front-line service roles to white-collar knowledge work, there’s a growing narrative that human capital is under threat.
Economic anxiety has fueled research and debate, but many of the arguments remain narrow in scope.
However, much of this narrative is steeped in speculation rather than the fundamental, evolving dynamics of skilled work.
Yes, we’ve seen layoffs, hiring slowdowns, and stories of AI automating tasks. But this is happening against the backdrop of high interest rates, shifts in global trade, and post-pandemic over-hiring.
As the global talent thought-leader Josh Bersin argues, claims of mass job destruction are “vastly over-hyped.” Many roles will transform, not vanish.
For the SEO discipline, the familiar refrain “SEO is dead” is just as overstated.
Yes, the nature of the SEO specialist is changing. We’ve seen fewer leadership roles, a contraction in content and technical positions, and cautious hiring. But the function itself is far from disappearing.
In fact, SEO job listings remain resilient in 2025 and mid-level roles still comprise nearly 60% of open positions. Rather than declining, the field is being reshaped by new skill demands.
Don’t ask, “Will AI replace me?” Ask instead, “How can I use AI to multiply my impact?”
Think of AI not as the jackhammer replacing the hammer but as the jackhammer amplifying its effect. SEOs who can harness AI through agents, automation, and intelligent systems will deliver faster, more impactful results than ever before.
As an industry, it’s time to change the language we use to describe SEO’s evolution.
Too much of our conversation still revolves around loss. We focus on lost clicks, lost visibility, lost control, and loss of num=100.
That narrative doesn’t serve us anymore.
We should be speaking the language of amplification and revenue generation. SEO has evolved from “optimizing for rankings” to driving measurable business growth through organic discovery, whether that happens through traditional search, AI Overviews, or the emerging layer of Generative Engine Optimization (GEO).
AI isn’t the villain of SEO; it’s the force multiplier.
When harnessed effectively, AI scales insight, accelerates experimentation, and ties our work more directly to outcomes that matter:
We don’t need to fight the dystopian idea that AI will replace us. We need to prove that AI-empowered SEOs can help businesses grow faster than ever before.
The new language of SEO isn’t about survival, it’s about impact.
For years, marketing and SEO teams grew headcount to scale output.
Today, the opposite is true. Hiring freezes, leaner budgets, and uncertainty around the role of SEO in an AI-driven world have forced leaders to rethink team design.
A recent Search Engine Land report noted that remote SEO roles dropped to 34% of listings in early 2025, while content-focused SEO positions declined by 28%. A separate LinkedIn survey found a 37% drop in SEO job postings in Q1 compared to the previous year.
This signals two key shifts:
If your org chart still looks like a pyramid, you’re behind.

The new landscape demands flexibility, speed, and cross-functional integration with analytics, UX, paid media, and content.
It’s time to design teams around capabilities, not titles.
The best SEO leaders aren’t hiring specialists, they’re hiring aptitude. Modern SEO organizations value people who can think across disciplines, not just operate within one.
The strongest hires we’re seeing aren’t traditional technical SEOs focused on crawl analysis or schema. They’re problem solvers – marketers who understand how search connects to the broader growth engine and who have experience scaling impact across content, data, and product.
Progressive leaders are also rethinking resourcing. The old model of a technical SEO paired with engineering support is giving way to tech SEOs working alongside AI product managers and, in many cases, vibe coding solutions. This model moves faster, tests bolder, and builds systems that drive real results.
For SEO leaders, rethinking team architecture is critical. The right question isn’t “Who should I hire next?” It’s “What critical capability must we master to stay competitive?”
Once that’s clear, structure your people and your agents around that need. The companies that get this right during the AI transition will be the ones writing the playbook for the next generation of search leadership.
The future of SEO teams will be defined by collaboration between humans and agents.

The future: teams built around agents and empowered humans.
These new teams succeed when they don’t live in silos. The SEO/GEO squad must partner with paid search, analytics, revenue ops, and UX – not just serve them.
Agents create capacity; humans create alignment and amplification.
Building the SEO community of the future will require change.
The pace of transformation has never been faster and it’s created a dangerous dependence on third-party “AI tools” as the answer to what is unknown.
But the true AI story doesn’t begin with a subscription. It begins inside your team.
If the only AI in your workflow is someone else’s product, you’re giving up your competitive edge. The future belongs to teams that build, not just buy.
Here’s how to start:
The future of SEO starts with building smarter teams. It’s humans working with agents. It’s capability uplift. And if you lead that charge, you’ll not only adapt to the next generation of search, you’ll be the ones designing it.

Google announced Query groups in Search Console Insights. The AI feature clusters similar search queries, surfaces trends, and shows which topics drive clicks.
The post Google Uses AI To Group Queries In Search Console Data appeared first on Search Engine Journal.

Zulily may no longer be a dominant player in Seattle’s tech scene, but physical pieces of the online retailer will live on in Evergreen Goodwill facilities across the region.
Hundreds of office chairs, desks, kitchen appliances, IT equipment, and more have been donated to Goodwill by Vanbarton Group, a commercial real estate investment firm that now owns the onetime Zulily building at 2601 Elliott Ave.
Vanbarton plans to convert the building, which occupies a full block near the waterfront, to 262 apartments, according to a Daily Journal of Commerce report from July.
A once-prominent online retailer, Zulily was a darling of Seattle's growing tech scene when it was valued at $4 billion following its IPO in 2013. But after QVC parent Qurate paid $2.4 billion to buy the company in 2015, it was sold to Los Angeles investment firm Regent in May 2023 and eventually shut down.
In March, Zulily got a new owner for the third time in two years when Beyond, which emerged as a surprise buyer in 2024, announced plans to sell a majority stake in Zulily to Lyons Trading Company, the parent company of flash sales site Proozy.com.

Evergreen Goodwill said in a news release that the donation, facilitated by Vanbarton Group’s outreach, saved the nonprofit an estimated $100,000 in equipment costs and diverted valuable resources from landfills.
The office items are being repurposed in multiple locations, including Goodwill’s new Georgetown operations center, scheduled to open this fall, and job training and education centers that it operates in five counties.
Remaining items will be sold in Goodwill stores, with proceeds supporting free job training and education programs for people facing barriers to employment, according to Goodwill.
Previously:
The post Fitbit Gemini Coach Redefines Digital Health appeared first on StartupHub.ai.
Fitbit's Gemini coach, now in public preview, leverages advanced AI to provide personalized fitness, sleep, and health coaching, redefining digital wellness.
The post Anthropic’s Claude: Reshaping Finance from Curiosity to Production appeared first on StartupHub.ai.
The landscape of enterprise AI in financial services is undergoing a profound transformation, moving decisively from exploratory curiosity to tangible, production-ready deployment. This pivotal shift was the central theme of a recent discussion between Anthropic’s Alexander Bricken, Applied AI Product Engineer for Financial Services, and Nick Lin, Product Lead for Claude for Financial Services. Lin, […]
The post AI’s Everyday Revolution: 24 New Ways Artificial Intelligence Reshapes Our World appeared first on StartupHub.ai.
“These are use cases that I have actually been using,” declared Matthew Berman, the engaging host of a recent YouTube video, as he unveiled a compelling array of AI applications that are rapidly transitioning from futuristic concepts to indispensable daily tools. Berman’s presentation was not a speculative glimpse into AI’s potential, but a practical demonstration […]
The post Pro-AI Super PAC Aligns with White House on Federal Framework, Downplaying Reported Rift appeared first on StartupHub.ai.
The notion of a deep rift between Washington’s political establishment and the burgeoning pro-AI lobby may be more perception than reality, according to recent insights. Far from a contentious divide, a significant alignment appears to be forming between a powerful new pro-AI Super PAC and the White House, both recognizing the urgent need for a […]
The post Crusoe AI Funding: $1.375B to Build AI Factories appeared first on StartupHub.ai.
Crusoe's $1.375B Series E funding will fuel its mission to build vertically integrated AI infrastructure "factories" at scale.
The post You will see a 30 to 50% correction in many AI-related names next year, says Dan Niles appeared first on StartupHub.ai.
“You will see a 30 to 50% correction in many AI-related names next year,” stated Dan Niles, founder and portfolio manager at Niles Investment Management, during a recent appearance on CNBC’s ‘Money Movers’. Niles joined the broadcast to discuss his outlook on Big Tech earnings and the current market sentiment surrounding technology stocks, particularly those […]
The post From Napkin Sketch to Functional UI: OpenAI Codex Transforms Frontend Creation appeared first on StartupHub.ai.
“Codex is your AI teammate that you can pair with everywhere you code,” declared Romain Huet, highlighting the pervasive utility of OpenAI’s latest advancement in front-end development. This sentiment underpinned a recent demonstration with Channing Conger, where the duo showcased the multimodal prowess of OpenAI Codex in accelerating the creation of user interfaces. Their discussion […]
The post Mercor Hits $10B Valuation, Fueling Human-in-the-Loop AI appeared first on StartupHub.ai.
Mercor's $10 billion valuation underscores the growing industry demand for specialized human experts to provide the nuanced judgment essential for training advanced AI models.
The post Folklore, Logic, and the Future of Math in an AI World appeared first on StartupHub.ai.
Mathematical physicist Svetlana Jitomirskaya, a Distinguished Professor at Georgia Tech and UC Irvine, offers a compelling perspective on the evolving relationship between artificial intelligence and the nuanced world of advanced mathematics. Her insights, shared in a recent interview, illuminate the current limitations of AI, particularly its struggle with what she terms “folklore knowledge”—the unwritten intuitions […]
The post Trip.com CEO Jane Sun on AI, Human Connection, and the Future of Travel appeared first on StartupHub.ai.
Artificial intelligence is not merely a technological advancement; it is fundamentally reshaping human experiences, particularly in industries like travel. This was a central theme as Jane Sun, CEO of Trip.com Group, engaged in a revealing dialogue with Bloomberg’s Anders Melin at the 2025 Bloomberg Business Summit Asean in Kuala Lumpur. The discussion offered a profound […]
Linux users running Lenovo's Legion gaming handhelds and laptops are about to get a much-needed update to the way their systems handle power profiles. Developer Derek J. Clark has submitted a new patch series to the Linux kernel that adds explicit support for an "Extreme" performance mode to the lenovo-wmi-gamezone driver and overhauls how
Just weeks after a breach led to the theft of sensitive user data that included government issued IDs, Discord users have a new cybersecurity issue to worry about. Security researchers at Netskope have spotted hackers repurposing an open source tool used by security professionals, called RedTiger, to develop an infostealer to target unsuspecting
A new study published in Nature by researchers from the University of Cambridge (with support from Meta) just dropped a pixelated bomb on the entire Ultra-HD market, essentially confirming what many of us may have suspected: the 'need' for 4K, let alone 8K resolution displays, is largely a myth for the average mainstream consumer.
Qualcomm quietly unveiled the Snapdragon 6s Gen 4, a chipset designed to inject some features once exclusive to flagship devices into more affordable handsets. Not just a minor refresh, the new 4nm processor is poised to be a huge boon for mobile gamers on a budget, with the company claiming a 59% faster GPU and 144 frames per second support.
Another Ryzen processor with 3D V-Cache underneath the hood is evidently inbound, and based on the model name, it takes aim at builders who want to piece together an affordable gaming PC. It's the Ryzen 5 7500X3D, and while nothing is yet official, at least one retailer in the United Kingdom has prepped a product page in anticipation of the
Anchor is a new underwater survival game from developer Fearem, which essentially mashes the kind of harsher survival crafting that players love in Rust with the underwater setting of Subnautica to create a new experience. Set in a dystopian world where a nuclear apocalypse has sent humanity into the ocean to try and find a new way to survive, you'll be able to join up with other players in the 150-player servers or lone shark your underwater journey as you try to survive on your own. It'll also feature PvE modes for those who don't want to deal with the […]
Read full article at https://wccftech.com/underwater-survival-game-anchor-mashes-subnautica-rust/

After NVIDIA launched its Rubin AI GPUs last month, we decided to interview Larry Yang, the chief product officer at Phononic. We were wondering about the new chips' cooling requirements given that energy constraints are closely related to AI rollout. Larry is an industry veteran with more than 30 years of experience under his belt. He has previously worked at Google, IBM, Microsoft and Cisco. Our conversation revolved around the cooling requirements for NVIDIA's and other AI chips. It also covered AI ASICs, commonly known as custom AI processors. Larry outlined that high bandwidth memory (HBM) chips are one reason […]
Read full article at https://wccftech.com/nvidias-ai-gpu-performance-can-be-increased-to-bring-payback-to-the-order-of-single-digit-months-says-phononic-chief-product-officer/

The GeForce RTX 4090 laptop GPU reached the performance level of RTX 5090 with a simple shunt mod, which unlocked the GPU's full potential.
Redditor Unlocks RTX 4090 Laptop GPU's Potential by Adding 1m Ohm Resistor to Allow Higher Power Draw; Results in Performance Boost in Double Digits
Shunt modding is a common way of unlocking a GPU's potential, which essentially allows the GPU to draw more power for achieving higher scores. A shunt resistor is used by the GPU's power management controller that measures current flow, and one can trick the GPU by adding it in the circuit to […]
Read full article at https://wccftech.com/user-shunt-mods-rtx-4090-laptop-achieves-over-20-performance-gains/

A new Unreal Engine 5 feature pushed to the engine's main development branch last week could pave the way for significant Lumen performance improvements across the board and for the lighting technology to be implemented in Nintendo Switch 2 games. As reported by tech artist Dylan Browne on X, Lumen Irradiance Cache, a probe-based Lumen mode for lower-end devices, is now available in the Epic engine. Compared to the default Hardware Lumen, the new mode features less occlusion detail and worse reflections, but it's still a good compromise compared to having no Lumen at all, judging from the comparison video […]
Read full article at https://wccftech.com/unreal-engine-5s-new-feature-could-bring-big-visual-improvements-on-nintendo-switch-2/

The U.S. Department of Energy (DoE) has reportedly collaborated with AMD on two new supercomputer projects, utilizing Team Red's latest AI chips to address scientific challenges.
AMD to Collaborate With the U.S. DoE To Build Out Two Cutting-Edge Supercomputers, With Record Deployment Times
Based on a new report from Reuters, it seems like AMD has managed to secure a massive partnership with the U.S. DoE, which involves the construction of two new supercomputers, mainly for academic purposes. This marks a major deal for Team Red, which is currently in pursuit of having its tech stack widely adopted by customers in […]
Read full article at https://wccftech.com/amd-lands-major-us-government-ai-deal-to-power-next-gen-supercomputers/

Last Friday, The Pokémon Company announced that Pokémon Legends: Z-A, the latest entry in the ever-popular monster catching franchise, had sold 5.8 million copies in its debut week. Now, Circana's Senior Director & Video Game Industry Thought Leader, Mat Piscatella, revealed that the game's retail launch was particularly big in the United States, where it registered the best performance of the last two and a half years. Pokémon Legends: Z-A had a massive US launch at retail. Launch week physical unit and dollar sales of Pokémon Legends: Z-A were the biggest for a new physical video game launch since The […]
Read full article at https://wccftech.com/pokemon-legends-z-a-had-massive-retail-launch-in-the-us/

The Battlefield 6 battle royale mode has finally been revealed and will officially be called Battlefield REDSEC. As rumoured, it'll launch tomorrow, and it'll be free-to-play. EA and Battlefield Studios revealed the mode with a short post on the official Battlefield X (formerly Twitter) account, which included a link to a gameplay trailer that'll go live tomorrow, alongside the mode itself going live. We'll have more official knowledge on the mode tomorrow when the gameplay trailer goes live, which will likely be followed by a blog post on the Battlefield 6 website, though for now we do have other claims […]
Read full article at https://wccftech.com/battlefield-6-battle-royale-battlefield-redsec-launches-tomorrow/

What should one do when reputed analysts continue to contradict each other? After all, this is exactly what is now happening with the Apple iPhone Air, with one group of analysts asserting that demand remains healthy, while the other continues to sound the proverbial death knell for the ultra-slim iPhone variant.
TD Cowen does not see Apple reducing the production cadence for the iPhone Air, contradicting KeyBanc Capital and Ming-Chi Kuo
A TD Cowen report over the weekend claimed that the Cupertino giant was not changing its production cadence for the iPhone Air, using "field work" as a confirmation mechanism […]
Read full article at https://wccftech.com/confusion-all-around-td-cowen-says-apple-is-not-reducing-iphone-air-production-as-jefferies-declares-nearly-zero-lead-times-for-the-variant-in-china/

The current generation of Xbox and PlayStation consoles is now more than five years old, making it well past the time we would start hearing about what the next generation of consoles will look like. We've seen several rumours on what the next-generation Xbox hardware will include, and on top of comments from Xbox president Sarah Bond, a picture of Xbox's next console is starting to form. Now, a new report from Windows Central seems to piece it together a bit more clearly, framing the next Microsoft console as a best of both worlds between the PC and console experience. […]
Read full article at https://wccftech.com/next-gen-xbox-will-reportedly-be-best-of-both-worlds-between-pc-and-console/

Apple is expected to debut the iPhone 17e in a matter of months - the spring of 2026, to be more specific. It is hardly a surprise, therefore, that the rumor mill apropos the new iPhone is now going into a relative overdrive.
A tipster claims Apple will bring its signature dynamic island to iPhone 17e, while retaining a 60Hz display refresh rate
A tipster who goes by the username Digital Chat Station on Weibo reported earlier today that Apple's iPhone 18 Pro and Pro Max are likely to feature a 48MP telephoto camera with a larger aperture on top […]
Read full article at https://wccftech.com/apple-iphone-17e-rumored-to-bring-dynamic-island-60hz-display/

The jaw-dropping price of the Snapdragon 8 Elite Gen 5 meant that Samsung would continue paying the ‘Qualcomm tax’ or move to its own silicon, which it is aggressively pursuing to reduce its massive chipset expenditure. While the use of the Exynos 2600 is one way to offset its skyrocketing costs, the Korean giant is now facing another conundrum that threatens to set off a price hike for the Galaxy S26: rising memory prices. Fortunately, one report states that thanks to Samsung’s vertical integration of its semiconductor and smartphone sectors, also known as DS Division and MX Division, respectively, the company […]
Read full article at https://wccftech.com/galaxy-s26-price-hike-prevented-due-to-samsung-vertical-integration-structure-of-two-divisions/

Apple's 2027 iPhone lineup is still a bit further away, with as many as 6 different iPhones expected to launch in the interim, including at least one in the form of a foldable. Even so, the buzz around the 20th anniversary iPhone lineup remains as effervescent as ever, buoyed by the growing number of bells and whistles that the lineup is expected to sport.
Apple's iPhone 20 lineup rumored to get LOFIC camera tech
Apple is likely to skip the number 19 and jump straight to the number 20 for its iPhones launching in 2027. The move is expected to […]
Read full article at https://wccftech.com/apples-20th-anniversary-iphone-20-to-get-lofic-camera-tech/

Just as Vampire: The Masquerade - Bloodlines 2 finally arrives on the scene after years of development, another Vampire: The Masquerade game disappears for good. Vampire: The Masquerade - Bloodhunt, the free-to-play battle royale that initially launched in its 1.0 state on April 27, 2022 (after an early access release on September 7, 2021), is shutting down its servers on April 28, 2026, almost exactly four years since its 1.0 arrival. Developer Sharkmob confirmed the shutdown in a statement on Vampire: The Masquerade - Bloodhunt's official website, citing the dwindling player population as "no longer sustainable" to keep the game […]
Read full article at https://wccftech.com/vampire-the-masquerade-bloohunt-servers-shutdown-2026/

Following several months of rumors about a remake of Halo: Combat Evolved, the game that started one of Xbox's most successful franchises, Halo Studios (formerly 343 Industries) announced Halo: Campaign Evolved during this year's Halo World Championship event. The remake, powered by Unreal Engine 5 visuals (with the original game's code running beneath), will be released next year on PC, Xbox Series S and X, and even PlayStation 5 for the first time in the series. As with all remakes, some fans immediately started dissecting the gameplay footage to see what was changed from the original. Even more interesting, though, […]
Read full article at https://wccftech.com/halo-combat-evolved-designer-likens-remake-to-dance-remix-classic-song-straight-to-chorus/

Stellar Hosted is a managed hosting platform for open-source business software. We help teams deploy, secure, and maintain popular open-source tools like BookStack, Metabase, Superset, GitLab, SonarQube, and more, on fully managed infrastructure in the EU. Each instance is private, scalable, and optimized for performance and reliability, so you can focus on using the software instead of maintaining it.
With Stellar Hosted you get transparent pricing, zero vendor lock-in, and a commitment to open source: 10% of our revenue goes back to the projects we host. We make running open-source applications as simple and dependable as SaaS, but with full control and data ownership.

Pioneer Square Labs has launched more than 40 tech startups and vetted 500-plus ideas since creating its studio a decade ago in Seattle.
Now it’s testing whether its company-building expertise and data on successful startup formulas can be codified into software — with help from the latest AI models.
PSL just unveiled Lev, a new project that aims to be an “AI co-founder” for early stage entrepreneurs.
Developed inside PSL and now rolling out publicly, Lev can evaluate ideas, score their potential, and help founders develop them into companies.
Lev grew out of an internal PSL tool that used PSL’s proprietary rubric to score startup ideas. The studio decided to turn it into a product after outside founders who tested early versions wanted access for themselves.
Here’s how it works:
“We’re mapping a lot of the PSL process into it,” said T.A. McCann, managing director at PSL.
Lev’s structured workflow sets it apart from generic chatbots, said Shilpa Kannan, principal at PSL.
“The sequencing of these components as you go through the process is one of the biggest value-adds,” she said.
Lev joins a growing number of startups leveraging AI to act as an idea validation tool for early-stage founders, though its structured, rubric-driven process is what PSL says sets it apart.

Upcoming features will add team-building and fundraising modules and let users trigger actions — such as sending emails or buying domains — directly from within the platform.
McCann envisions Lev eventually connecting to tools like Notion and HubSpot to serve as a “command center” for running a company — integrating tools, drafting investor updates, tracking competitors, and suggesting priorities. There are several competitors in this space offering different versions of “AI chief of staff” products.
On a broader level, Lev raises an existential question for PSL: what happens when a startup studio teaches an AI to do the things that make a startup studio valuable?
“In some ways, this is ‘Innovators Dilemma,’ and you have to cannibalize yourself before someone else does it,” McCann said, referencing Clayton Christensen’s concept of technology disruption.
PSL also sees Lev as a potential funnel for entrepreneurs it could work with in the future. And it’s a way to expand the studio’s reach beyond its focus on the Pacific Northwest.
“It’s scaling our knowledge in a way that we wouldn’t be able to do otherwise,” McCann said.
Kannan and Kevin Leneway, principal at PSL, wrote a blog post describing how PSL designed the backbone of Lev and how the firm used it to generate its own high-quality startup ideas at higher volume and lower cost.
“As we see more and more individuals become founders with the support of AI, we are incredibly excited for the potential increase in velocity and successful outcomes from methodologies like ours that focus on upfront ideation and validation,” they wrote.
Kannan told GeekWire that PSL is prioritizing founders’ privacy and intellectual property. “We are making intentional product and technical decisions to ensure Lev is designed from the ground up to safeguard ideas and founder data, including guardrails on data we collect and our team can access,” she said.
For now, PSL is targeting venture-scale founders — people in tech companies or accelerators with ambitions to build fast-growing startups. But McCann believes Lev could eventually empower solo operators running multiple micro-businesses.
Lev is currently free for one idea, $20 per month for up to five ideas, and $100 per month for 10 ideas and advanced features. It’s available on a waitlist basis.
Lev also offers a couple fun tools to help boost its own marketing, including a founder “personality test” and an “idea matcher” that produces startup concepts based on your interests and experience.

Google added Query groups to the Search Console Insights report. The feature clusters similar search queries together so you can quickly see the main topics your audience searches for.
What Google said. Google wrote, “We are excited to announce Query groups, a powerful Search Console Insights feature that groups similar search queries.”
“Query groups solve this problem by grouping similar queries. Instead of a long, cluttered list of individual queries, you will now see lists of queries representing the main groups that interest your audience. The groups are computed using AI; they may evolve and change over time. They are designed for providing a better high level perspective of your queries and don’t affect ranking,” Google added.
What it looks like. Here is a sample screenshot of this new Query groups report:

You can see that Google is lumping together “search engine optimization, seo optimization, seo website, seo optimierung, search engine optimization (seo), search …” into the “seo” query group in the second line. This shows the site overall is getting 9% fewer clicks on SEO-related queries than it did previously.
Availability. Google said query groups will be rolling out gradually over the coming weeks. It is a new card in the Search Console Insights report. Plus, query groups are available only to properties that have a large volume of queries, as the need to group queries is less relevant for sites with fewer queries.
Why we care. Many SEOs have been grouping queries into these clusters manually or through their own tools. Now, Google will do it for you, making it easier for novice SEOs to understand.
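For teams that want to keep doing this on their own, here is a minimal, purely illustrative sketch of query grouping using scikit-learn. Google has not published how its Query groups are computed; the query list, the second “ppc” topic, and the cluster count below are made up for the example.

```python
# Illustrative only: clustering similar queries by lexical similarity.
# This is NOT Google's method; it just shows the general idea of query grouping.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

queries = [
    "search engine optimization", "seo optimization", "seo website",
    "seo optimierung",                                        # queries from the example above
    "ppc management", "google ads ppc", "ppc bid strategy",   # hypothetical second topic
]

# Character n-grams help near-duplicates and language variants land close together.
vectors = TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5)).fit_transform(queries)

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for label, query in sorted(zip(labels, queries)):
    print(label, query)
```

Aggregating clicks or impressions by the resulting label gives a rough, do-it-yourself version of the trend view Google now surfaces automatically.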
More details will be posted in this help document soon.

Every year, Search Engine Land is delighted to celebrate the best of search marketing by rewarding the agencies, in-house teams, and individuals worldwide for delivering exceptional results.
Today, I’m excited to announce all 18 winners of the 11th annual Search Engine Land Awards.


















Select winners of the 2025 Search Engine Land Awards will be invited to speak live at SMX Next during our two ask-me-anything-style sessions. Bring your burning SEO and PPC questions to ask this award-winning panel of search marketers!
Register here for SMX Next (it’s free) if you haven’t yet.
Congrats again to all the winners. And huge thank yous to everyone who entered the 2025 Search Engine Land Awards, the finalists, and our fantastic panel of judges for this year’s awards.

Many PPC advertisers obsess over click-through rates, using them as a quick measure of ad performance.
But CTR alone doesn’t tell the whole story – what matters most is what happens after the click. That’s where many campaigns go wrong.
Most advertisers assume the ad with the highest CTR is the best: it should have a high Quality Score and attract lots of clicks.
However, lower-CTR ads often outperform higher-CTR ads in total conversions and revenue.
If all I cared about was CTR, then I could write an ad:
That ad would get an impressive CTR for many keywords, and I’d go out of business pretty quickly, giving away free money.
When creating ads, we must consider:
I can take my free money ad and refine it:
I’ve now:
If you focus solely on CTR and don’t consider attracting the right audience, your advertising will suffer.
While this sentiment applies to both B2C and B2B companies, B2B companies must be exceptionally aware of how their ads appear to consumers versus business searchers.
If you are advertising for a B2B company, you’ll often notice that CTR and conversion rates have an inverse relationship. As CTR increases, conversion rates decrease.
The most common reason for this phenomenon is that many B2B keywords are searched by both consumers and businesses.
B2B companies must try to show that their products are for businesses, not consumers.
For instance, “safety gates” is a common search term.
The majority of people looking to buy a safety gate are consumers who want to keep pets or babies out of rooms or away from stairs.
However, safety gates and railings are important for businesses with factories, plants, or industrial sites.
These two ads are both for companies that sell safety gates. The first ad’s headlines for Uline could be for a consumer or a business.
It’s not until you look at the description that you realize this is for mezzanines and catwalks, which is something consumers don’t have in their homes.
As many searchers do not read descriptions, this ad will attract both B2B and B2C searchers.

The second ad mentions Industrial in the headline and follows that up with a mention of OSHA compliance in the description and the sitelinks.
While both ads promote similar products, the second one will achieve a better conversion rate because it speaks to a single audience.
We have a client who specializes in factory parts, and when we graph their conversion rates by Quality Score, we can see that as their Quality Score increases, their conversion rates decrease.
They will review their keywords and ads whenever they have a 5+ Quality Score on any B2B or B2C terms.

This same logic does not apply to exclusively B2B search terms.
Those terms often contain the jargon or qualifying statements that only someone looking for B2B services and products would use.
For those queries, B2B advertisers don't have to spend ad characters weeding out B2C consumers and can focus their ads solely on B2B searchers.
As you are testing various ads to find your best pre-qualifying statements, it can be tricky to examine the metrics. Which one of these would be your best ad?
When examining mixed signals from CTR and conversion rates, we can use additional metrics to define our best ads. My favorite two are:
You can also multiply the results by 1,000 to make the numbers easier to digest instead of working with many decimal points. So, we might write:
By using impression metrics, you can find the opportunity for a given set of impressions.
| CTR | Conversion rate | Impressions | Clicks | Conversions | CPI (per 1,000 impressions) |
|-----|-----------------|-------------|--------|-------------|------------------------------|
| 15% | 3% | 5,000 | 750 | 22.5 | 4.5 |
| 10% | 7% | 4,000 | 400 | 28 | 7 |
| 5% | 11% | 4,500 | 225 | 24.75 | 5.5 |
By doing some simple math, we can see that option 2, with a 10% CTR and a 7% conversion rate, gives us the most total conversions.
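To make that math concrete, here is a small Python check of the table above. The ad labels are placeholders; CPI here means conversions per 1,000 impressions, per the multiply-by-1,000 convention mentioned earlier.

```python
# Recompute the table: clicks, conversions, and conversions per 1,000 impressions (CPI).
ads = [
    {"name": "Ad 1", "impressions": 5000, "ctr": 0.15, "conv_rate": 0.03},
    {"name": "Ad 2", "impressions": 4000, "ctr": 0.10, "conv_rate": 0.07},
    {"name": "Ad 3", "impressions": 4500, "ctr": 0.05, "conv_rate": 0.11},
]

for ad in ads:
    clicks = ad["impressions"] * ad["ctr"]
    conversions = clicks * ad["conv_rate"]
    cpi = conversions / ad["impressions"] * 1000  # conversions per 1,000 impressions
    print(f"{ad['name']}: {clicks:.0f} clicks, {conversions:.2f} conversions, CPI {cpi:.1f}")

# Ad 2 (10% CTR, 7% conversion rate) delivers the most conversions (28),
# even though Ad 1 has the highest CTR.
```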
Dig deeper: CRO for PPC: Key areas to optimize beyond landing pages
A good CTR helps bring more people to your website, improves your audience size, and can influence your Quality Scores.
However, high CTR ads can easily attract the wrong audience, leading you to waste your budget.
As you are creating headlines, consider your audience.
By considering each of these questions as you create ads, you can find ads that speak to the type of users you want to attract to your site.
These ads rarely have your best CTRs. They balance the appeal of high CTRs with pre-qualifying statements that ensure the clicks you receive have the potential to turn into your next customer.


The post Qualcomm’s Bold AI Inference Play Challenges NVIDIA Dominance appeared first on StartupHub.ai.
Qualcomm, a titan long synonymous with smartphone processors, is executing a strategic pivot, aiming to capture a significant slice of the burgeoning artificial intelligence inference market. This calculated move, detailed in a CNBC report by Kristina Partsinevelos, signals a direct challenge to NVIDIA’s established dominance, leveraging Qualcomm’s deep expertise in power-efficient neural processing units (NPUs). […]
The post Tesla’s AI Ambition: Beyond the Car, a New Industrial Revolution appeared first on StartupHub.ai.
“The technology of AI is truly transformative,” declared Robyn Denholm, Tesla’s Board Chair, during a recent interview on CNBC’s Squawk Box. This assertion, delivered with conviction, encapsulates the core message emanating from Tesla, suggesting a future far grander than merely building electric vehicles. Denholm, speaking with Andrew Ross Sorkin and Becky Quick, outlined Tesla’s expansive […]
The post AMD Sharpens Focus on Rack-Scale AI Innovation appeared first on StartupHub.ai.
AMD's strategic divestiture of ZT Systems' manufacturing, while retaining design expertise, sharpens its focus on integrated AMD rack-scale AI solutions.
The post The AI boom & politics: Michal Lev-Ram on the launch of the $100M pro-AI super PAC appeared first on StartupHub.ai.
The launch of a $100 million pro-AI super PAC, “Leading the Future,” marks a pivotal moment for artificial intelligence, signaling its emphatic entry into the high-stakes arena of American political influence. This substantial war chest, intended to support “AI-friendly” candidates across the political spectrum, has reportedly “irked” the White House, immediately setting a tone of […]
The post AI Trade Hinges on Hyperscaler Capital and Revenue appeared first on StartupHub.ai.
Tejas Dessai, Director of Research at Global X ETFs, recently shared a compelling outlook on CNBC’s Worldwide Exchange, emphasizing that the trajectory of the AI trade through 2026 will be largely dictated by the capital expenditure (CAPEX) guidance and AI revenue acceleration from hyperscalers. “This week could set the tone for the AI trade going […]
Despite the global fanfare, Samsung's most audacious mobile project yet, the (tentatively named) Galaxy Z TriFold, is reportedly set to bypass the U.S. market (not to mention most other parts of the world, too). Reports from reputable leakers suggest that the triple-folding device will be restricted to a small number of regions, such as South Korea
Major technology firms are betting big on AI, creating huge demand for hardware, software, and services that underpin this massive, burgeoning market. That includes Qualcomm, which is rolling out new AI200 and AI250 chip-based accelerator cards. These are Qualcomm's next-generation AI inference-optimized solutions for data centers, and they
A lot of add-in board partners have dabbled with white-themed graphics cards, but a modded GeForce RTX 5080 Founders Edition model posted to Reddit takes the cake. It looks so good (at least in photos) that we could have been convinced it was an official release by NVIDIA, if we didn't know any better. It's not, but it certainly should be.
Nvidia reportedly reduces 8GB RTX 5060 Ti supply in response to low sales
According to a report from Board Channels, Nvidia has told its board partners to limit their supply of 8GB RTX 5060 Ti graphics cards. This is due to lower-than-expected sales for the 8GB version of this graphics card. Simply put, gamers are […]
The post Nvidia orders partners to limit 8GB RTX 5060 Ti supply – report claims appeared first on OC3D.
Nokia, Ericsson and Fraunhofer HHI plan to make a European codec to power 6G-era video streaming
Nokia has teamed up with Ericsson and Fraunhofer HHI to combine their expertise to create a next-generation video codec for the 6G streaming era. This is a European effort to pioneer the next generation of streaming, promising higher-quality streaming […]
The post Nokia, Ericsson and Fraunhofer HHI join forces to make 6G-era video streaming/encoding better appeared first on OC3D.
MAXSUN's new B850 Terminator BKB motherboard could transform the ITX PC market
MAXSUN is rethinking how ITX motherboards are designed. With its new Terminator B850 BKB motherboard, Maxsun has utilised a rear-mounted PCIe slot design that eliminates the need for PCIe riser cards in compatible ITX PC cases. With this design, PCIe graphics cards can […]
The post Maxsun reimagines ITX with its Terminator B850 BKB motherboard and its rear-facing PCIe slot appeared first on OC3D.
Correctly calling a market peak is a notoriously tricky endeavor.
Case in point: When tech stocks and startup funding hit their last cyclical peak four years ago, few knew it was the optimal time to cease new deals and cash in liquidatable holdings.
This time around, quite a few market watchers are wondering if the tech stock and AI boom has reached bubble territory. And, as we explored in Friday’s column, there are plenty of similarities between current conditions and the 2021 peak.
Even so, by other measures we’re also in starkly different territory. The current boom is far more concentrated in AI and a handful of hot companies. The exit environment is also much quieter. And of course, the macro conditions don’t resemble 2021, which had the combined economic effects of the COVID pandemic and historically low interest rates.
Below, we look at four of the top reasons why this time is different.
Four years ago, funding to most venture-backed sectors was sharply on the rise. That’s not the case this time around. While AI megarounds accumulate, funding to startups in myriad other sectors continues to languish.
Biotech is on track to capture the smallest percentage of U.S. venture investment on record this year. Cleantech investment looks poised to hit a multiyear low. And consumer products startups also remain out of vogue, alongside quite a few other sectors that once attracted big venture checks.
The emergence of AI haves and non-AI have-nots means that if we do see a correction, it could be limited in scope. Sectors that haven’t seen a boom by definition won’t see a post-boom crash. (Though further declines are possible.)
The new offering market was on fire in 2020 and 2021, with traditional IPOs, direct listings and SPAC mergers all flooding exchanges with new ticker symbols to track.
In recent quarters, by contrast, the IPO market has been alive, but not especially lively. We’ve seen a few large offerings, with CoreWeave, Figma and Circle among the standouts.
But overall, numbers are way down.
In 2021, there were hundreds of U.S. seed- or venture-backed companies that debuted on NYSE or Nasdaq, per Crunchbase data. This year, there have been fewer than 50.
Meanwhile, the most prominent unicorns of the AI era, like OpenAI and Anthropic, remain private companies with no buzz about an imminent IPO. As such, they don’t see the day-to-day fluctuations typical of public companies. Any drop in valuation, if it happens, could play out slowly and quietly.
That brings us to our next point: In addition to spreading their largesse across fewer sectors, startup investors are also backing fewer companies.
This year, the percentage of startup funding going to megarounds of $100 million or more reached an all-time high in the U.S. and came close to a record global level. A single deal, OpenAI’s $40 billion March financing, accounted for roughly a quarter of U.S. megaround funding.
At the same time, fewer startup financings are getting done. This past quarter, for instance, reported deal count hit the lowest level in years, even as investment rose.
The last peak occurred amid an unusual financial backdrop, with economies beginning to emerge from the depths of the COVID pandemic and ultra-low interest rates contributing to investors shouldering more risk in pursuit of returns.
This time around, the macro environment is in a far different place, with a “low fire, low hire” U.S. job market, AI disrupting or poised to disrupt a wide array of industries and occupations, a weaker dollar and a long list of other unusual drivers.
What both periods share in common, however, is the inexorable climb of big tech valuations, which brings us to our final thought.
While the argument that this time it’s different is a familiar one, the usual plot lines do tend to repeat themselves. Valuations overshoot, and they come down. And then the cycle repeats.
We may not have reached the top of the current cycle. But it’s certainly looking a lot closer to peak than trough.
Illustration: Dom Guzman

The old cliché says startups are born in garages and dorm rooms. That’s still true, but there’s a newer path: founding a startup inside a scale-up.
When you do that, you get the speed of a seed-stage team with the leverage of an established company. Executives and investors should care because this model can unlock new product lines, revenue and talent retention without recreating the wheel.
That’s how we built Saily, a travel eSIM service launched from inside Nord Security (the company behind NordVPN). In 19 weeks, a seven-person team went from a blank page to a live product. A little over a year later, we had scaled to millions of users with plans offered in more than 200 destinations. We did not invent everything from scratch. We reused what worked and validated everything else fast.

Every new product faces two existential risks: market and execution.
Inside Nord, I’d helped launch at least half a dozen new products before Saily. The pattern was consistent: Great ideas die when they target the wrong market or underestimate execution. With Saily, timing and infrastructure lined up: eSIM demand was accelerating, pain points were clear, and we could tap Nord’s backend, payments, app teams and distribution.
That allowed us to move at startup speed without startup fragility.
Founders obsess over product-market fit. Inside a scale-up, you also need what I call “product organization fit” or the overlap between a new product and what your company already does well.
When that overlap is high, you ship faster, hire smarter and avoid costly relearning. For Saily, the overlap was obvious: Security tech we knew (virtual location, web protection and ad-blocking), and app development know-how we could bring to travel connectivity.
Competition helped more than it hurt. “No competition” usually means “no demand.” We treated competitors as free market research, reading hiring signals, product moves and funding announcements to understand where the market was headed.
And we made security the product, not a feature. Travelers don’t want another app — they want reliable connectivity that isn’t risky on unknown networks. Building privacy and protection at the network layer means safety works phone-wide with no tinkering.
The hard part is not technical, but cultural. Large companies run on process. Startups run on autonomy. We set up Saily as a company within the company: A dedicated product and marketing team with decision speed, plus shared services (legal, finance and design) when needed. Think of it as an internal accelerator, where the platform handles overheads so the team can focus on products.
We kept one rhythm: ship, learn, repeat. Those 19 weeks weren’t about perfection, but about getting a usable product into the world and compounding feedback.
Experimentation only works if you measure what matters: speed, unit economics and retention. For example, independent third-party testing confirmed Saily’s network-level ad-blocking reduces data usage by 28.6% — real money saved for travelers. That is a signal you double down on. If a feature or tool adds complexity without value, cut it quickly.
Saily is still early, and the market is just getting started, but the model matters as much as the product. Many future founders already work inside growth companies. Give them startup autonomy and scale-up leverage and remarkable things can happen — in months, not years.
Vykintas Maknickas is CEO of Saily, a global eSIM app from Nord Security. A former head of product strategy at NordVPN, where he helped launch a series of new product lines, Maknickas has turned Saily into a globally successful brand with millions of users and serving more than 200 destinations. An entrepreneur since age 15, Maknickas brings a hands-on, execution-driven approach to building secure, scalable consumer tech.
Illustration: Dom Guzman
NVIDIA's GTC 2025 has kicked off today, marking the first time Team Green is holding the event in Washington, as it is directed towards America's leadership in the AI segment.
NVIDIA Surprisingly Holds a 'Second' GTC This Year In Washington, With All Eyes on What Jensen Announces For the Future
Well, it seems like NVIDIA has expanded its GTC event this year by holding it twice in 2025, and the last time we saw Jensen appear at this particular event was back in March, when we saw the unveiling of the GB300 'Blackwell Ultra' AI servers, as well as the […]
Read full article at https://wccftech.com/nvidia-gtc-comes-to-washington-for-the-first-time/

Halo: Campaign Evolved, the remake of the first entry in the Halo series announced last week after months of rumors and speculation, is set to introduce significant visual improvements over both the original and the Combat Evolved Anniversary remaster, judging from some early comparison videos shared online. Following the game's announcement at this year's Halo WCC, Cycu1 shared two comparison videos on YouTube, comparing the current available footage of the remake with a recreation of similar sequences in the original and the Halo Combat Evolved Anniversary remaster. Needless to say, the differences are massive between the modern remake and the […]
Read full article at https://wccftech.com/halo-campaign-evolved-comparison-massive-visual-improvements/

Samsung Foundry has secured significant deals in recent times, involving several tech giants, and this demonstrates that the division is poised to enter the mainstream chip segment.
Samsung's Recent Deals With Tech Giants Will Help Improve Operating Losses & Pave the Way For Newer Partnerships
The Korean giant has been making strides in the chip industry in recent months, entering collaborations with companies such as Apple, NVIDIA, and Tesla. Not only has this helped with capacity utilization, but Samsung is now seen as one of the stronger alternatives to TSMC in recent times, and this has definitely helped the firm […]
Read full article at https://wccftech.com/samsung-foundrys-wins-with-apple-nvidia-and-tesla-underscore-its-determination-to-challenge-tsmc/

Stray, the cat adventure game developed by BlueTwelve and published by Annapurna Interactive, will reportedly headline the PlayStation Plus Essential games lineup for November 2025. Earlier today, known leaker billbil-kun, who has proven extremely reliable regarding PlayStation Plus game lineup leaks in the past, revealed in a new report posted on Dealabs that the game that made headlines when it was launched in July 2022 for being widely appreciated by cats as much as by their owners will be available to all PlayStation Plus Essential, Extra, and Premium subscribers starting from November 4. Interestingly, Stray's joining the PlayStation Plus library next month […]
Read full article at https://wccftech.com/stray-playstation-plus-november-2025/

Samsung is finally gearing up to unveil its first triple-folding smartphone, dubbed the Galaxy Z TriFold, at the sidelines of the ongoing APEC summit in the South Korean city of Gyeongju.
Samsung has beefed up security at its APEC booth ahead of the Galaxy Z TriFold unveil this week
According to the South Korean publication Hankyung, Samsung has beefed up security at its APEC booth, going so far as to install security personnel at the entrance, ahead of the Galaxy Z TriFold unveil this week. According to the publication, the much-anticipated triple-folding phone from Samsung will be unveiled […]
Read full article at https://wccftech.com/samsung-to-unveil-its-galaxy-z-trifold-within-hours/

Apple has been reported on a few occasions to be bringing variable aperture technology to the iPhone 18 Pro and iPhone 18 Pro Max, and is working with various suppliers for the necessary parts. The Cupertino firm brought a telephoto zoom lens for the first time when it launched the iPhone 15 Pro Max and has slowly been adopting exclusive upgrades for its top-tier models. Fortunately, a new rumor claims that in addition to variable aperture technology, the 48MP telephoto unit will be treated to a larger aperture.
Base iPhone 18 will be delayed to 2027, likely to make way for Apple's […]
Read full article at https://wccftech.com/iphone-18-pro-48mp-telephoto-camera-to-get-larger-aperture-new-launch-schedule/

The likelihood of a roleplaying game once again getting most Game of the Year prizes after 2023's Baldur's Gate 3 is very high thanks to Clair Obscur: Expedition 33. The game has already scored many nominations from the Golden Joystick Awards 2025, with the winners due to be announced in a week from today. Meanwhile, the surprising sleeper hit of the year has received two endorsements from industry figures like Microsoft Gaming CEO Phil Spencer and Final Fantasy VII Remake trilogy director Naoki Hamaguchi. Speaking to Famitsu, Spencer mentioned various games as his personal 2025 highlights, but ultimately picked Sandfall […]
Read full article at https://wccftech.com/clair-obscur-expedition-33-gets-goty-votes-phil-spencer-naoki-hamaguchi/

In a development that is reminiscent of the infamous Galaxy Note 7 saga, a Samsung Galaxy S25+ just caught fire in South Korea after it failed to charge.
A user posted on Samsung's Community Forum that his Galaxy S25+ caught fire after failing to charge
A person who goes by the username "Chew ee jan" posted on Samsung's Community Forum yesterday that he was holding his Galaxy S25+ in his hand - presumably to investigate its failure to charge - as the smartphone's temperature soared, and he heard a "puck" sound. As the user threw his phone on the floor, […]
Read full article at https://wccftech.com/a-samsung-galaxy-s25-just-caught-fire/

The iPhone 17 lineup is equipped with Apple’s latest and greatest A19 and A19 Pro chipsets, and since Apple made the correct moves with its four flagships by equipping each of them with impressive hardware, TSMC is currently enjoying an influx of orders as it prepares to pursue full-scale 2nm production by the end of the year. The Taiwanese semiconductor behemoth’s Chief Executive also said that he is ‘not concerned about pre-built inventory,’ indicating that sales are as healthy as ever. In 2026, it is estimated that 33 percent of all smartphone chipsets produced will leverage TSMC’s 3nm and 2nm […]
Read full article at https://wccftech.com/iphone-17-success-has-increased-tsmc-3nm-chipset-orders-ceo-pleased/

GoodMetrics is a web analytics tool built for people who want clear, actionable insights without the headaches of GA4. It shows you where your traffic comes from, what drives conversions, and which content performs best — all in a privacy-friendly way.
Unlike Google Analytics, GoodMetrics doesn’t rely on cookies or personal data — so you get full visibility without the privacy trade-offs. Data appears instantly, reports stay consistent, and you’ll always know what’s happening on your site in real time.
Presented by Axis Communications
Many businesses are equipped with a network of intelligent eyes that span operations. These IP cameras and intelligent edge devices were once solely focused on ensuring the safety of employees, customers, and inventory. They remain essential tools for those purposes, but they are now also emerging as powerful sources of business data.
These cameras and edge devices have rapidly evolved into real-time data producers. IP cameras can now see and understand, and the accompanying artificial intelligence helps companies and decision-makers generate business intelligence, improve operational efficiency, and gain a competitive advantage.
By treating cameras as vision sensors and sources of operational insight, businesses can transform everyday visibility into measurable business value.
Network cameras have come a long way since Axis Communications first introduced this technology in 1996. Over time, innovations like the ARTPEC chip, the first chip purpose-built for IP video, helped enhance image quality, analytics, and encoding performance.
Today, these intelligent devices are powering a new generation of business intelligence and operational efficiency solutions via embedded AI. Actionable insights are now fed directly into intelligence platforms, ERP systems, and real-time dashboards, and the results are significant and far-reaching.
In manufacturing, intelligent cameras are detecting defects on the production line early, before an entire production run is compromised. In retail, these cameras can run software that maps customer journeys and optimizes product placement. In healthcare, these solutions help facilities enhance patient care while improving operational efficiency and reducing costs.
The combination of video and artificial intelligence has significantly expanded what cameras can do — transforming them into vital tools for improving business performance.
Companies are creatively taking advantage of edge devices like AI-enabled cameras to improve business intelligence and operational efficiencies.
BMW has relied on intelligent IP cameras to optimize efficiency and product quality, with AI-driven video systems catching defects that are often invisible to the human eye. Or take Google Cloud’s shelf-checking AI technology, innovative software that lets retailers make instant restocking decisions using real-time data.
These technologies appeal to far more than retailers and vendors. The A.C. Camargo Cancer Center in Brazil uses network cameras to reduce theft, assure visitor and employee safety, and optimize patient flow. By relying on this newfound business intelligence, the facility has saved more than $2 million in operational costs over two years, with those savings reinvested directly into patient care.
Urban projects can also benefit from edge devices and artificial intelligence. For example, Vanderbilt University turned to video analytics to study traffic flow, relying on AI to uncover the causes of phantom congestion and enable smarter traffic management. These studies can also benefit the local environment and the public, as the findings can be used to improve safety, air quality, and fuel efficiency.
Each case illustrates the same point: AI-powered cameras can fuel a tangible return on investment and crucial business intelligence, regardless of the industry.
The role of AI in video intelligence is still expanding, with several emerging trends driving greater advancements and impact in the years ahead:
Predictive operations: cameras that are capable of forecasting needs or risks through predictive analytics
Versatile analytics: systems that incorporate audio, thermal, and environmental sensors for more comprehensive and accurate insights
Technological collaboration: cameras that integrate with other intelligent edge devices to autonomously manage tasks
Sustainability initiatives: intelligent technologies that reduce energy use and support resource efficiency
Axis Communications helps advance these possibilities with open-source, scalable systems engineered to address both today’s challenges and tomorrow’s opportunities. By staying ahead of this ever-changing environment, Axis helps ensure that organizations continue to benefit from actionable business intelligence while maintaining the highest standards of security and safety.
Cameras have evolved beyond simple surveillance tools. They are strategic assets that inform operations, foster innovation, and enable future readiness. Business leaders who cling to traditional views of IP cameras and edge devices risk missing opportunities for efficiency and innovation. Those who embrace an AI-driven approach can expect not only stronger security but also better business outcomes.
Ultimately, the value of IP cameras and edge devices lies not in categories but in capabilities. In an era of rapidly evolving artificial intelligence, these unique technologies will become indispensable to overall business success.
About Axis Communications
Axis enables a smarter and safer world by improving security, safety, operational efficiency, and business intelligence. As a network technology company and industry leader, Axis offers video surveillance, access control, intercoms, and audio solutions. These are enhanced by intelligent analytics applications and supported by high-quality training.
Axis has around 5,000 dedicated employees in over 50 countries and collaborates with technology and system integration partners worldwide to deliver customer solutions. Axis was founded in 1984, and the headquarters are in Lund, Sweden.
Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. For more information, contact sales@venturebeat.com.

Stop exposing your assets! Learn when an influencer's business growth demands an LLC shield.

The web’s purpose is shifting. Once a link graph – a network of pages for users and crawlers to navigate – it’s rapidly becoming a queryable knowledge graph.
For technical SEOs, that means the goal has evolved from optimizing for clicks to optimizing for visibility and even direct machine interaction.
At the forefront of this evolution is NLWeb (Natural Language Web), an open-source project developed by Microsoft.
NLWeb simplifies the creation of natural language interfaces for any website, allowing publishers to transform existing sites into AI-powered applications where users and intelligent agents can query content conversationally – much like interacting with an AI assistant.
Developers suggest NLWeb could play a role similar to HTML in the emerging agentic web.
Its open-source, standards-based design makes it technology-agnostic, ensuring compatibility across vendors and large language models (LLMs).
This positions NLWeb as a foundational framework for long-term digital visibility.
NLWeb proves that structured data isn’t just an SEO best practice for rich results – it’s the foundation of AI readiness.
Its architecture is designed to convert a site’s existing structured data into a semantic, actionable interface for AI systems.
In the age of NLWeb, a website is no longer just a destination. It’s a source of information that AI agents can query programmatically.
The technical requirements confirm that a high-quality schema.org implementation is the primary key to entry.
The NLWeb toolkit begins by crawling the site and extracting the schema markup.
The schema.org JSON-LD format is the preferred and most effective input for the system.
This means the protocol consumes every detail, relationship, and property defined in your schema, from product types to organization entities.
For any data not in JSON-LD, such as RSS feeds, NLWeb is engineered to convert it into schema.org types for effective use.
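To make that ingestion step concrete, here is a minimal sketch of pulling schema.org JSON-LD out of a page the way a crawl-and-extract toolkit might. It is illustrative only, not NLWeb's actual crawler code; the URL is a placeholder, and it assumes the widely used requests and BeautifulSoup libraries are installed.

```python
import json

import requests
from bs4 import BeautifulSoup


def extract_json_ld(url: str) -> list[dict]:
    """Fetch a page and return every schema.org JSON-LD block found in it."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    blocks: list[dict] = []
    for tag in soup.find_all("script", type="application/ld+json"):
        try:
            data = json.loads(tag.string or "")
        except json.JSONDecodeError:
            continue  # skip malformed markup instead of failing the whole crawl
        # A single script tag may hold one object or a list of objects.
        blocks.extend(data if isinstance(data, list) else [data])
    return blocks


# Placeholder URL: list the entity types a page exposes to AI ingestion.
for entity in extract_json_ld("https://example.com/some-product-page"):
    print(entity.get("@type"), "-", entity.get("name"))
```

The more complete and interconnected those blocks are, the more useful the resulting knowledge graph is to whatever system consumes them.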
Once collected, this structured data is stored in a vector database. This element is critical because it moves the interaction beyond traditional keyword matching.
Vector databases represent text as mathematical vectors, allowing the AI to search based on semantic similarity and meaning.
For example, the system can understand that a query using the term “structured data” is conceptually the same as content marked up with “schema markup.”
This capacity for conceptual understanding is absolutely essential for enabling authentic conversational functionality.
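A short sketch shows why embeddings make that possible: phrases with related meanings land near each other in vector space even when they share no keywords. The model used here (sentence-transformers' all-MiniLM-L6-v2) is just one readily available option chosen for illustration, not a statement about what any particular NLWeb deployment uses.

```python
import numpy as np
from sentence_transformers import SentenceTransformer  # one embedding option among many

model = SentenceTransformer("all-MiniLM-L6-v2")


def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity: close to 1 for related meanings, near 0 for unrelated ones."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


query, indexed, unrelated = (
    "structured data",
    "schema markup",
    "free shipping on orders over $50",
)
q_vec, i_vec, u_vec = model.encode([query, indexed, unrelated])

print(f"'{query}' vs '{indexed}':   {cosine(q_vec, i_vec):.2f}")    # typically high: same concept
print(f"'{query}' vs '{unrelated}': {cosine(q_vec, u_vec):.2f}")    # typically low: different concept
```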
The final layer is the connectivity provided by the Model Context Protocol (MCP).
Every NLWeb instance operates as an MCP server; MCP is an emerging standard for packaging and consistently exchanging data between various AI systems and agents.
MCP is currently the most promising path forward for ensuring interoperability in the highly fragmented AI ecosystem.
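In practice, that means any MCP-aware agent can address any NLWeb instance with the same message shape. The sketch below shows roughly what such an exchange could look like as a JSON-RPC-style payload, since MCP builds on JSON-RPC; the "ask" method name, the "query" parameter, and the response shape are illustrative assumptions for this article, not a definitive wire format, which is defined by the MCP specification and the NLWeb documentation.

```python
import json

# Illustrative only: the "ask" method, "query" parameter, and result shape
# below are assumptions for this sketch, not a guaranteed wire format.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "ask",
    "params": {"query": "Which of your venues host live jazz this weekend?"},
}

# A response would carry schema.org-typed results back to the calling agent,
# for example a list of Event entities drawn from the site's knowledge graph.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": [
        {
            "@type": "Event",
            "name": "Friday Jazz Trio",
            "location": {"@type": "Place", "name": "Main Hall"},
        }
    ],
}

print(json.dumps(request, indent=2))
print(json.dumps(response, indent=2))
```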
Since NLWeb relies entirely on crawling and extracting schema markup, the precision, completeness, and interconnectedness of your site’s content knowledge graph determine success.
The key challenge for SEO teams is addressing technical debt.
Custom, in-house solutions to manage AI ingestion are often high-cost, slow to adopt, and create systems that are difficult to scale or incompatible with future standards like MCP.
NLWeb addresses the protocol’s complexity, but it cannot fix faulty data.
If your structured data is poorly maintained, inaccurate, or missing critical entity relationships, the resulting vector database will store flawed semantic information.
This leads inevitably to suboptimal outputs, potentially resulting in inaccurate conversational responses or “hallucinations” by the AI interface.
Robust, entity-first schema optimization is no longer just a way to win a rich result; it is the fundamental barrier to entry for the agentic web.
By leveraging the structured data you already have, NLWeb allows you to unlock new value without starting from scratch, thereby future-proofing your digital strategy.
The need for AI crawlers to process web content efficiently has led to multiple proposed standards.
A comparison between NLWeb and the proposed llms.txt file illustrates a clear divergence between dynamic interaction and passive guidance.
The llms.txt file is a proposed static standard designed to improve the efficiency of AI crawlers by pointing them to a curated, Markdown-formatted summary of a site's most important content, reducing the cost of parsing and ingesting full HTML pages.
In sharp contrast, NLWeb is a dynamic protocol that establishes a conversational API endpoint.
Its purpose is not just to point to content, but to actively receive natural language queries, process the site’s knowledge graph, and return structured JSON responses using schema.org.
NLWeb fundamentally changes the relationship from “AI reads the site” to “AI queries the site.”
| Attribute | NLWeb | llms.txt |
| --- | --- | --- |
| Primary goal | Enables dynamic, conversational interaction and structured data output | Improves crawler efficiency and guides static content ingestion |
| Operational model | API/Protocol (active endpoint) | Static Text File (passive guidance) |
| Data format used | Schema.org JSON-LD | Markdown |
| Adoption status | Open project; connectors available for major LLMs, including Gemini, OpenAI, and Anthropic | Proposed standard; not adopted by Google, OpenAI, or other major LLMs |
| Strategic advantage | Unlocks existing schema investment for transactional AI uses, future-proofing content | Reduces computational cost for LLM training/crawling |
The market’s preference for dynamic utility is clear. Despite addressing a real technical challenge for crawlers, llms.txt has failed to gain traction so far.
NLWeb’s functional superiority stems from its ability to enable richer, transactional AI interactions.
It allows AI agents to dynamically reason about and execute complex data queries using structured schema output.
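As a rough illustration of that "AI queries the site" model, the sketch below asks a locally running NLWeb instance a natural-language question over HTTP. The base URL is a placeholder, and the /ask path, "query" parameter, and "results" field are assumptions made for this example; check the current NLWeb documentation for the exact interface before relying on them.

```python
import requests

BASE_URL = "http://localhost:8000"  # placeholder: wherever your NLWeb instance runs

# Assumed endpoint and parameter names -- verify against the project docs.
resp = requests.get(
    f"{BASE_URL}/ask",
    params={"query": "Do you have gluten-free recipes that take under 30 minutes?"},
    timeout=30,
)
resp.raise_for_status()

# The response is expected to carry schema.org-typed items; field names assumed.
for item in resp.json().get("results", []):
    print(item.get("@type"), "-", item.get("name"), "-", item.get("url"))
```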
While NLWeb is still an emerging open standard, its value is clear.
It maximizes the utility and discoverability of specialized content that often sits deep in archives or databases.
This value is realized through operational efficiency and stronger brand authority, rather than immediate traffic metrics.
Several organizations are already exploring how NLWeb could let users ask complex questions and receive intelligent answers that synthesize information from multiple resources – something traditional search struggles to deliver.
The ROI comes from reducing user friction and reinforcing the brand as an authoritative, queryable knowledge source.
For website owners and digital marketing professionals, the path forward is undeniable: mandate an entity-first schema audit.
Because NLWeb depends on schema markup, technical SEO teams must prioritize auditing existing JSON-LD for integrity, completeness, and interconnectedness.
Minimalist schema is no longer enough – optimization must be entity-first.
Publishers should ensure their schema accurately reflects the relationships among all entities, products, services, locations, and personnel to provide the context necessary for precise semantic querying.
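As a concrete (and entirely hypothetical) illustration of what "interconnected" means, the snippet below generates a small JSON-LD graph in which an Organization, one of its locations, and a Service reference each other by @id, so an ingesting system sees one connected graph rather than three isolated entities. The names and URLs are placeholders.

```python
import json

# Hypothetical business: the point is the @id cross-references that link the
# Organization, its location, and its service into one resolvable graph.
graph = {
    "@context": "https://schema.org",
    "@graph": [
        {
            "@type": "Organization",
            "@id": "https://example.com/#org",
            "name": "Example Dental Group",
            "url": "https://example.com/",
        },
        {
            "@type": "LocalBusiness",
            "@id": "https://example.com/locations/downtown#place",
            "name": "Example Dental - Downtown",
            "parentOrganization": {"@id": "https://example.com/#org"},
        },
        {
            "@type": "Service",
            "@id": "https://example.com/services/whitening#service",
            "name": "Teeth Whitening",
            "provider": {"@id": "https://example.com/locations/downtown#place"},
        },
    ],
}

# Emit as a single application/ld+json block for the page's <head>.
print(json.dumps(graph, indent=2))
```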
The transition to the agentic web is already underway, and NLWeb offers the most viable open-source path to long-term visibility and utility.
It’s a strategic necessity to ensure your organization can communicate effectively as AI agents and LLMs begin integrating conversational protocols for third-party content interaction.


Automattic's counterclaim against WP Engine says the number of times keywords are used is a ranking factor and calls that SEO.
The post Automattic’s Legal Claims About SEO… Is This Real? appeared first on Search Engine Journal.
The efficacy of conversational AI hinges on a foundational, often overlooked, component: speech-to-text accuracy. Andrew Freed, a Distinguished Engineer at IBM, presented a compelling case for why fine-tuning generative AI models for speech-to-text is not merely an optimization, but a critical determinant of success for virtual agents and voice-enabled applications. His insights underscore that without […]
The post Fine-Tuning Speech-to-Text: The Unsung Hero of Conversational AI Accuracy appeared first on StartupHub.ai.
Streetbeat secured $15 million to expand its AI wealth management platform, promising increased efficiency for advisors and AI-powered investment returns for retail users.
The post Streetbeat’s $15M Bet on AI Wealth Management’s Future appeared first on StartupHub.ai.
Adaptam Therapeutics has raised €3 million in Adaptam Therapeutics funding to advance novel antibody-based cancer immunotherapies that disarm the tumor’s immune-suppressing defenses.
The post Adaptam Therapeutics Funding Targets Tough Cancer Immunotherapy Challenge appeared first on StartupHub.ai.
AMD has a new Zen 4-powered X3D CPU on the horizon. It looks like AMD is getting ready to greatly expand its X3D CPU lineup. Last week, AMD’s Ryzen 7 9850X3D and Ryzen 9 9950X3D2 CPUs leaked. Now, a new lower-end Zen 4-based X3D CPU has surfaced. Meet the Ryzen 5 7500X3D. @momomo_us spotted this new […]
The post Affordable V-Cache – AMD Ryzen 5 7500X3D CPU spotted! appeared first on OC3D.
AMD is back once again with renamed processors from the Zen 2 and Zen 3+ families, this time in the form of the Ryzen 10 and Ryzen 100 series. AMD has silently released the Zen 3+ based "Rembrandt" Ryzen 100 and Zen 2-based "Mendocino" Ryzen 10 series with specs almost identical to the original SKUs. Launching refresh CPUs is routine for both AMD and Intel, but if you have watched how they handle these refreshes for a while, it's clear that "performance" isn't the actual intention behind every refresh chip. A few months ago, Intel launched Core 5 120 and 120F processors, […]
Read full article at https://wccftech.com/amd-prepares-rebadged-zen-2-ryzen-10-and-zen-3-ryzen-100-series-mobile-cpus/

As promised on Friday, the development teams working on Battlefield 6 (DICE, Ripple Effect, Motive, and Criterion) have released the patch notes for update 1.1.1.0, which is set to go live tomorrow ahead of the debut of Season 1. The update will be released at 9:00 UTC, whereas the new season will be enabled at 15:00 UTC. With this update, the developers aim to address some of the biggest pain points the Battlefield 6 community lamented in the game's first few weeks. For example, the player character's core movement and animations have been refined to provide smoother landings, faster stance […]
Read full article at https://wccftech.com/battlefield-6-update-1-1-1-0-live-tomorrow-enhancements/

Earlier this year, when Turn 10 Studios lost over 70 members of its development team as part of the latest round of layoffs mandated by Microsoft, a former employee claimed that Turn 10 had effectively been turned into a Forza Horizon support studio. This chain of events had fans fearful about the future of the racing simulation franchise. Now, at last, Microsoft Gaming CEO Phil Spencer commented on this as part of a Tokyo Game Show 2025 interview with Famitsu. The Japanese magazine asked him whether the Halo and Forza Motorsport franchises were done for. Spencer vehemently denied that for […]
Read full article at https://wccftech.com/the-forza-motorsport-series-is-resting-for-now-says-phil-spencer/

Last week, a ResetEra user discovered that the new Gaming Copilot AI installed by Microsoft on all Windows 11 PCs (integrated directly into the Game Bar) was training itself by screenshotting every game played by the user and then sending everything back to Microsoft. Gaming Copilot is also enabled by default, so if you want to turn it off, you need to go to the Game Bar, then to Settings and Privacy Settings, where you will find the "Model training on text" toggle for Gaming Copilot. Needless to say, this discovery sparked a big controversy […]
Read full article at https://wccftech.com/microsoft-comments-gaming-copilot-ai-controversy/

Launch 1000s of marketing experiments in minutes
Never make slides again
Build anything with AI
A vibrant new way to talk with Copilot
Browser-based MCP server testing with AI chat integration
Create high quality presentations using AI
AI that finds hidden fleet costs before they hit your P&L
Smart incident summaries powered by Magistral small