The post Amazon’s Cloud and AI Crossroads: Navigating Intense Competition and Infrastructure Demands appeared first on StartupHub.ai.
The burgeoning demands of generative AI are fundamentally reshaping the competitive landscape of cloud computing, compelling even market leaders like Amazon to critically assess their strategic investments. CNBC’s MacKenzie Sigalos, reporting on Amazon’s third-quarter earnings, underscored that the company’s cloud momentum and substantial AI infrastructure spending are now under intense scrutiny, with investors eager to […]

The post Salesforce Agentic Commerce: AI Redefines Retail appeared first on StartupHub.ai.
Salesforce Agentic Commerce introduces AI-powered tools and strategic partnerships to redefine retail, enhancing discovery, personalization, and operational efficiency.
OpenAI has introduced Aardvark, a GPT-5-powered autonomous security researcher agent now available in private beta.
Designed to emulate how human experts identify and resolve software vulnerabilities, Aardvark uses a multi-stage, LLM-driven pipeline for continuous, around-the-clock code analysis, exploit validation, and patch generation.
Positioned as a scalable defense tool for modern software development environments, Aardvark is being tested across internal and external codebases.
OpenAI reports high recall and real-world effectiveness in identifying known and synthetic vulnerabilities, with early deployments surfacing previously undetected security issues.
Aardvark comes on the heels of OpenAI’s release of the gpt-oss-safeguard models yesterday, extending the company’s recent emphasis on agentic and policy-aligned systems.
Aardvark operates as an agentic system that continuously analyzes source code repositories. Unlike conventional tools that rely on fuzzing or software composition analysis, Aardvark leverages LLM reasoning and tool-use capabilities to interpret code behavior and identify vulnerabilities.
It simulates a security researcher’s workflow by reading code, conducting semantic analysis, writing and executing test cases, and using diagnostic tools.
Its process follows a structured multi-stage pipeline:
Threat Modeling – Aardvark initiates its analysis by ingesting an entire code repository to generate a threat model. This model reflects the inferred security objectives and architectural design of the software.
Commit-Level Scanning – As code changes are committed, Aardvark compares diffs against the repository’s threat model to detect potential vulnerabilities. It also performs historical scans when a repository is first connected.
Validation Sandbox – Detected vulnerabilities are tested in an isolated environment to confirm exploitability. This reduces false positives and enhances report accuracy.
Automated Patching – The system integrates with OpenAI Codex to generate patches. These proposed fixes are then reviewed and submitted via pull requests for developer approval.
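The four stages above can be sketched as a minimal loop. OpenAI has not published Aardvark's API, so every name below (`scan_commit`, `validate_in_sandbox`, `propose_patch`, the threat-model dict shape) is a hypothetical stand-in meant only to show how the stages chain together:

```python
# Illustrative sketch of a commit-level scanning pipeline in the style
# Aardvark describes. All function names and data shapes are invented,
# not OpenAI's actual API.

def scan_commit(diff: str, threat_model: dict) -> list[dict]:
    """Stage 2: compare a commit diff against the repo's threat model."""
    findings = []
    for asset, rules in threat_model.items():
        for rule in rules:
            if rule["pattern"] in diff:
                findings.append({"asset": asset, "issue": rule["issue"]})
    return findings

def validate_in_sandbox(finding: dict) -> dict:
    """Stage 3: a real system would attempt the exploit in isolation;
    here we only mark the finding as confirmed."""
    return {**finding, "validated": True}

def propose_patch(finding: dict) -> str:
    """Stage 4: a real system would call a code model like Codex;
    we emit a stub pull-request description."""
    return f"Proposed fix for {finding['issue']} in {finding['asset']}"

# Stage 1's threat model is represented here as a simple rule table.
threat_model = {"auth": [{"pattern": "md5(", "issue": "weak password hash"}]}
diff = "+ digest = md5(password)"

patches = [propose_patch(validate_in_sandbox(f))
           for f in scan_commit(diff, threat_model)]
```

In the real system each stage is LLM-driven rather than rule-based; the point of the sketch is only the handoff from detection to validation to patch proposal.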
Aardvark integrates with GitHub, Codex, and common development pipelines to provide continuous, non-intrusive security scanning. All insights are intended to be human-auditable, with clear annotations and reproducibility.
According to OpenAI, Aardvark has been operational for several months on internal codebases and with select alpha partners.
In benchmark testing on “golden” repositories—where known and synthetic vulnerabilities were seeded—Aardvark identified 92% of total issues.
OpenAI emphasizes that its accuracy and low false positive rate are key differentiators.
The agent has also been deployed on open-source projects. To date, it has discovered multiple critical issues, including ten vulnerabilities that were assigned CVE identifiers.
OpenAI states that all findings were responsibly disclosed under its recently updated coordinated disclosure policy, which favors collaboration over rigid timelines.
In practice, Aardvark has surfaced complex bugs beyond traditional security flaws, including logic errors, incomplete fixes, and privacy risks. This suggests broader utility beyond security-specific contexts.
During the private beta, Aardvark is only available to organizations using GitHub Cloud (github.com). OpenAI invites prospective beta testers to apply by filling out a web form. Participation requirements include:
Integration with GitHub Cloud
Commitment to interact with Aardvark and provide qualitative feedback
Agreement to beta-specific terms and privacy policies
OpenAI confirmed that code submitted to Aardvark during the beta will not be used to train its models.
The company is also offering pro bono vulnerability scanning for selected non-commercial open-source repositories, citing its intent to contribute to the health of the software supply chain.
The launch of Aardvark signals OpenAI’s broader movement into agentic AI systems with domain-specific capabilities.
While OpenAI is best known for its general-purpose models (e.g., GPT-4 and GPT-5), Aardvark is part of a growing trend of specialized AI agents designed to operate semi-autonomously within real-world environments. It joins two other active OpenAI agents:
ChatGPT agent, unveiled back in July 2025, which controls a virtual computer and web browser and can create and edit common productivity files
Codex, a name OpenAI previously used for an earlier coding model and re-used for its new GPT-5 variant-powered AI coding agent, unveiled back in May 2025
But a security-focused agent makes a lot of sense, especially as demands on security teams grow.
In 2024 alone, over 40,000 Common Vulnerabilities and Exposures (CVEs) were reported, and OpenAI’s internal data suggests that 1.2% of all code commits introduce bugs.
Aardvark’s positioning as a “defender-first” AI aligns with a market need for proactive security tools that integrate tightly with developer workflows rather than operate as post-hoc scanning layers.
OpenAI’s coordinated disclosure policy updates further reinforce its commitment to sustainable collaboration with developers and the open-source community, rather than emphasizing adversarial vulnerability reporting.
While yesterday's release of gpt-oss-safeguard uses chain-of-thought reasoning to apply safety policies during inference, Aardvark applies similar LLM reasoning to secure evolving codebases.
Together, these tools signal OpenAI’s shift from static tooling toward flexible, continuously adaptive systems — one focused on content moderation, the other on proactive vulnerability detection and automated patching within real-world software development environments.
Aardvark represents OpenAI’s entry into automated security research through agentic AI. By combining GPT-5’s language understanding with Codex-driven patching and validation sandboxes, Aardvark offers an integrated solution for modern software teams facing increasing security complexity.
While currently in limited beta, the early performance indicators suggest potential for broader adoption. If proven effective at scale, Aardvark could contribute to a shift in how organizations embed security into continuous development environments.
For security leaders tasked with managing incident response, threat detection, and day-to-day protections—particularly those operating with limited team capacity—Aardvark may serve as a force multiplier. Its autonomous validation pipeline and human-auditable patch proposals could streamline triage and reduce alert fatigue, enabling smaller security teams to focus on strategic incidents rather than manual scanning and follow-up.
AI engineers responsible for integrating models into live products may benefit from Aardvark’s ability to surface bugs that arise from subtle logic flaws or incomplete fixes, particularly in fast-moving development cycles. Because Aardvark monitors commit-level changes and tracks them against threat models, it may help prevent vulnerabilities introduced during rapid iteration, without slowing delivery timelines.
For teams orchestrating AI across distributed environments, Aardvark’s sandbox validation and continuous feedback loops could align well with CI/CD-style pipelines for ML systems. Its ability to plug into GitHub workflows positions it as a compatible addition to modern AI operations stacks, especially those aiming to integrate robust security checks into automation pipelines without additional overhead.
And for data infrastructure teams maintaining critical pipelines and tooling, Aardvark’s LLM-driven inspection capabilities could offer an added layer of resilience. Vulnerabilities in data orchestration layers often go unnoticed until exploited; Aardvark’s ongoing code review process may surface issues earlier in the development lifecycle, helping data engineers maintain both system integrity and uptime.
In practice, Aardvark represents a shift in how security expertise might be operationalized—not just as a defensive perimeter, but as a persistent, context-aware participant in the software lifecycle. Its design suggests a model where defenders are no longer bottlenecked by scale, but augmented by intelligent agents working alongside them.

Researchers at Meta FAIR and the University of Edinburgh have developed a new technique that can predict the correctness of a large language model's (LLM) reasoning and even intervene to fix its mistakes. Called Circuit-based Reasoning Verification (CRV), the method looks inside an LLM to monitor its internal “reasoning circuits” and detect signs of computational errors as the model solves a problem.
Their findings show that CRV can detect reasoning errors in LLMs with high accuracy by building and observing a computational graph from the model's internal activations. In a key breakthrough, the researchers also demonstrated they can use this deep insight to apply targeted interventions that correct a model’s faulty reasoning on the fly.
The technique could help solve one of the great challenges of AI: Ensuring a model’s reasoning is faithful and correct. This could be a critical step toward building more trustworthy AI applications for the enterprise, where reliability is paramount.
Chain-of-thought (CoT) reasoning has been a powerful method for boosting the performance of LLMs on complex tasks and has been one of the key ingredients in the success of reasoning models such as the OpenAI o-series and DeepSeek-R1.
However, despite the success of CoT, it is not fully reliable. The reasoning process itself is often flawed, and several studies have shown that the CoT tokens an LLM generates are not always a faithful representation of its internal reasoning process.
Current remedies for verifying CoT fall into two main categories. “Black-box” approaches analyze the final generated token or the confidence scores of different token options. “Gray-box” approaches go a step further, looking at the model's internal state by using simple probes on its raw neural activations.
But while these methods can detect that a model’s internal state is correlated with an error, they can't explain why the underlying computation failed. For real-world applications where understanding the root cause of a failure is crucial, this is a significant gap.
CRV is based on the idea that models perform tasks using specialized subgraphs, or "circuits," of neurons that function like latent algorithms. So if the model’s reasoning fails, it is caused by a flaw in the execution of one of these algorithms. This means that by inspecting the underlying computational process, we can diagnose the cause of the flaw, similar to how developers examine execution traces to debug traditional software.
To make this possible, the researchers first make the target LLM interpretable. They replace the standard dense layers of the transformer blocks with trained "transcoders." A transcoder is a specialized deep learning component that forces the model to represent its intermediate computations not as a dense, unreadable vector of numbers, but as a sparse and meaningful set of features. Transcoders are similar to the sparse autoencoders (SAE) used in mechanistic interpretability research with the difference that they also preserve the functionality of the network they emulate. This modification effectively installs a diagnostic port into the model, allowing researchers to observe its internal workings.
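A transcoder can be pictured as a wide, sparsely activating layer that stands in for a dense MLP layer while exposing readable features. The toy NumPy version below shows the shape of the idea; the dimensions, random weights, and plain-ReLU sparsity mechanism are illustrative assumptions, not the paper's trained architecture:

```python
import numpy as np

# Toy transcoder: emulates a dense layer's input-output map while exposing
# a wider, sparse feature vector that can be inspected. Sizes and weights
# are illustrative; a real transcoder is trained to match the layer it
# replaces, preserving the network's functionality.
rng = np.random.default_rng(0)
d_model, d_feat = 16, 64            # feature space is wider than the layer

W_enc = rng.normal(scale=0.1, size=(d_feat, d_model))
W_dec = rng.normal(scale=0.1, size=(d_model, d_feat))

def transcoder(x):
    h = np.maximum(W_enc @ x, 0.0)  # ReLU keeps many features exactly zero
    return W_dec @ h, h             # output stands in for the MLP; h is readable

x = rng.normal(size=d_model)
y, features = transcoder(x)
inactive = float(np.mean(features == 0.0))   # fraction of silent features
```

The sparse vector `h` is the "diagnostic port": instead of a dense, unreadable activation, each nonzero entry corresponds to a feature a researcher can examine.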
With this interpretable model in place, the CRV process unfolds in a few steps. For each reasoning step the model takes, CRV constructs an "attribution graph" that maps the causal flow of information between the interpretable features of the transcoder and the tokens it is processing. From this graph, it extracts a "structural fingerprint" that contains a set of features describing the graph's properties. Finally, a “diagnostic classifier” model is trained on these fingerprints to predict whether the reasoning step is correct or not.
At inference time, the classifier monitors the activations of the model and provides feedback on whether the model’s reasoning trace is on the right track.
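The fingerprint-then-classify loop can be sketched with toy attribution graphs. The three structural features and the threshold rule below are simplified stand-ins for the paper's richer fingerprints and its trained diagnostic classifier:

```python
import numpy as np

# Hypothetical sketch of CRV's verification loop: summarize each reasoning
# step's attribution graph as a "structural fingerprint", then classify it.
# The chosen features and the threshold rule are illustrative, not the paper's.

def fingerprint(edges, n_nodes):
    """edges: (src, dst, weight) triples from an attribution graph.
    Returns [edge count, mean node degree, max absolute edge weight]."""
    deg = np.zeros(n_nodes)
    wmax = 0.0
    for src, dst, w in edges:
        deg[src] += 1
        deg[dst] += 1
        wmax = max(wmax, abs(w))
    return np.array([len(edges), deg.mean(), wmax])

def diagnose(fp, min_edges=2.0):
    """Stand-in for the trained diagnostic classifier."""
    return "ok" if fp[0] >= min_edges else "suspect"

# Toy traces: a well-connected step vs. a sparse, weakly attributed one.
good = fingerprint([(0, 1, 0.9), (1, 2, 0.8), (0, 2, 0.7)], 3)
bad = fingerprint([(0, 2, 0.1)], 3)
```

In the actual method the classifier is trained on many labelled fingerprints per domain; this sketch only shows the data flow from graph to verdict.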
The researchers tested their method on a Llama 3.1 8B Instruct model modified with the transcoders, evaluating it on a mix of synthetic (Boolean and Arithmetic) and real-world (GSM8K math problems) datasets. They compared CRV against a comprehensive suite of black-box and gray-box baselines.
The results provide strong empirical support for the central hypothesis: the structural signatures in a reasoning step's computational trace contain a verifiable signal of its correctness. CRV consistently outperformed all baseline methods across every dataset and metric, demonstrating that a deep, structural view of the model's computation is more powerful than surface-level analysis.
Interestingly, the analysis revealed that the signatures of error are highly domain-specific. This means failures in different reasoning tasks (formal logic versus arithmetic calculation) manifest as distinct computational patterns. A classifier trained to detect errors in one domain does not transfer well to another, highlighting that different types of reasoning rely on different internal circuits. In practice, this means that you might need to train a separate classifier for each task (though the transcoder remains unchanged).
The most significant finding, however, is that these error signatures are not just correlational but causal. Because CRV provides a transparent view of the computation, a predicted failure can be traced back to a specific component. In one case study, the model made an order-of-operations error. CRV flagged the step and identified that a "multiplication" feature was firing prematurely. The researchers intervened by manually suppressing that single feature, and the model immediately corrected its path and solved the problem correctly.
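Mechanically, the intervention in that case study amounts to zeroing one feature before the decoder reads it. A minimal linear toy shows the idea; the decoder matrix and the "premature multiplication" feature index are invented for illustration:

```python
import numpy as np

# Toy ablation in the spirit of the CRV case study: suppress one feature
# and recompute the step's output. The matrix and feature index are
# invented; a real intervention edits a trained transcoder's features.
W_dec = np.array([[1.0, 0.0, 2.0],
                  [0.0, 1.0, -1.0]])     # maps 3 features to a 2-dim output

def step_output(features, suppress=None):
    f = np.array(features, dtype=float)
    if suppress is not None:
        f[suppress] = 0.0                # targeted ablation of one feature
    return W_dec @ f

features = [0.5, 0.3, 0.8]               # index 2: the "premature" feature
before = step_output(features)           # output contaminated by feature 2
after = step_output(features, suppress=2)
```

Because the feature space is sparse and interpretable, a single ablation changes the computation surgically, which is what allows the model to "correct its path" without retraining.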
This work represents a step toward a more rigorous science of AI interpretability and control. As the paper concludes, “these findings establish CRV as a proof-of-concept for mechanistic analysis, showing that shifting from opaque activations to interpretable computational structure enables a causal understanding of how and why LLMs fail to reason correctly.” To support further research, the team plans to release its datasets and trained transcoders to the public.
While CRV is a research proof-of-concept, its results hint at a significant future for AI development. AI models learn internal algorithms, or "circuits," for different tasks. But because these models are opaque, we can't debug them like standard computer programs by tracing bugs to specific steps in the computation. Attribution graphs are the closest thing we have to an execution trace, showing how an output is derived from intermediate steps.
This research suggests that attribution graphs could be the foundation for a new class of AI model debuggers. Such tools would allow developers to understand the root cause of failures, whether it's insufficient training data or interference between competing tasks. This would enable precise mitigations, like targeted fine-tuning or even direct model editing, instead of costly full-scale retraining. They could also allow for more efficient intervention to correct model mistakes during inference.
The success of CRV in detecting and pinpointing reasoning errors is an encouraging sign that such debuggers could become a reality. This would pave the way for more robust LLMs and autonomous agents that can handle real-world unpredictability and, much like humans, correct course when they make reasoning mistakes.

The post Stripe’s AI Backbone: Powering the Agent Economy with Financial Infrastructure appeared first on StartupHub.ai.
Stripe, under the leadership of Emily Glassberg Sands, Head of Data & AI, is not merely adapting to the artificial intelligence revolution; it is actively constructing the financial infrastructure upon which this burgeoning agent economy will operate. In a recent Latent Space podcast interview with hosts Shawn Wang and Alessio Fanelli, Sands articulated Stripe’s ambitious […]
The post Poolside reportedly raising up to $1B to advance AI code generation appeared first on StartupHub.ai.
AI code generation startup Poolside is reportedly raising up to $1 billion from investor Nvidia to build tools that accelerate software development.
The post AI’s Relentless March: Efficiency, Autonomy, and Economic Reshaping appeared first on StartupHub.ai.
The accelerating integration of artificial intelligence into daily life and industrial infrastructure is no longer a distant vision but a tangible reality, as evidenced by the rapid-fire developments discussed in Matthew Berman’s latest Forward Future AI news briefing. From the nascent stages of consumer robotics to revolutionary computing paradigms, the AI landscape is undergoing a […]
The post XPO’s AI-Driven Efficiency in a Soft Freight Market appeared first on StartupHub.ai.
In an era where artificial intelligence often conjures images of job displacement, XPO CEO Mario Harik offers a refreshingly pragmatic perspective: AI, for his logistics giant, is fundamentally about efficiency and optimization, not headcount reduction. This insight anchored a recent interview on CNBC’s Worldwide Exchange with anchor Frank Hollan, where Harik detailed XPO’s latest earnings […]
Amazon’s third-quarter profits rose 38% to $21.2 billion, but a big part of the jump had nothing to do with its core businesses of selling goods and cloud services.
The company reported a $9.5 billion pre-tax gain from its investment in the AI startup Anthropic, which was included in its non-operating income for the quarter.
The windfall wasn’t the result of a sale or cash transaction, but rather accounting rules. After Anthropic raised new funding in September at a $183 billion valuation, Amazon was required to revalue its equity stake to reflect the higher market price, a process known as a “mark-to-market” adjustment.
To put the $9.5 billion paper gain in perspective, the Amazon Web Services cloud business — historically Amazon’s primary profit engine — generated $11.4 billion in quarterly operating profits.
At the same time, Amazon is spending big on its AI infrastructure buildout for Anthropic and others. The company just opened an $11 billion AI data center complex, dubbed Project Rainier, where Anthropic’s Claude models run on hundreds of thousands of Amazon’s Trainium 2 chips.
Amazon is going head-to-head against Microsoft, which just re-upped its partnership with ChatGPT maker OpenAI; and Google, which reported record cloud revenue for its recent quarter, driven by AI. The AI infrastructure race is fueling a big surge in capital spending for all three cloud giants.
Amazon spent $35.1 billion on property and equipment in the third quarter alone, up 55% from a year earlier. Andy Jassy, the Amazon CEO, sought to reassure Wall Street that the big outlay will be worth it.
“You’re going to see us continue to be very aggressive investing in capacity, because we see the demand,” Jassy said on the company’s conference call. “As fast as we’re adding capacity right now, we’re monetizing it. It’s still quite early, and represents an unusual opportunity for customers and AWS.”
The cash for new data centers doesn’t hit the bottom line immediately, but it comes into play as depreciation and amortization costs are recorded on the income statement over time.
And in that way, the spending is starting to weigh on AWS results: sales rose 20% to $33 billion in the quarter, yet operating income increased only 9.6% to $11.4 billion. The gap indicates that Amazon’s heavy AI investments are compressing profit margins in the near term, even as the company bets on the infrastructure build-out to expand its business significantly over time.
Those investments are also weighing on cash generation: Amazon’s free cash flow dropped 69% over the past year to $14.8 billion, reflecting the massive outlays for data centers and infrastructure.
Amazon has invested and committed a total of $8 billion in Anthropic, initially structured as convertible notes. A portion of that investment converted to equity with Anthropic’s prior funding round in March.
Corsair just announced its entry into the leverless fightstick scene first popularized by Hit Box: the new Corsair Novablade Pro. Or, to be more specific, the Corsair Novablade Pro Wireless Hall Effect Leverless Fight Controller—but we'll stick with Novablade Pro for now. The Novablade Pro is a direct competitor to the likes of Hit Box's
Amazon CEO Andy Jassy says the company’s latest big round of layoffs — about 14,000 corporate jobs — wasn’t triggered by financial strain or artificial intelligence replacing workers, but rather a push to stay nimble.
Speaking with analysts on Amazon’s quarterly earnings call Thursday, Jassy said the decision stemmed from a belief that the company had grown too big and too layered.
“The announcement that we made a few days ago was not really financially driven, and it’s not even really AI-driven — not right now, at least,” he said. “Really, it’s culture.”
Jassy’s comments are his first public explanation of the layoffs, which reportedly could ultimately total as many as 30,000 people — and would be the largest workforce reduction in Amazon’s history.
The news this week prompted speculation that the cuts were tied to automation or AI-related restructuring. Earlier this year, Jassy wrote in a memo to employees that he expected Amazon’s total corporate workforce to shrink over time due to efficiency gains from AI.
But his comments Thursday framed the layoffs as a cultural reset aimed at keeping the company fast-moving amid what he called “the technology transformation happening right now.”
Jassy, who succeeded founder Jeff Bezos as CEO in mid-2021, has pushed to reduce management layers and eliminate bureaucracy inside the company.
Amazon’s corporate headcount tripled between 2017 and 2022, according to The Information, before the company adopted a more cautious hiring approach.
Bloomberg News reported this week that Jassy has told colleagues parts of the company remain “unwieldy” despite efforts to streamline operations — including significant layoffs in 2023 when Amazon cut 27,000 corporate workers in multiple stages.
On Thursday’s call, Jassy said Amazon’s rapid growth led to extra layers of management that slowed decision-making.
“When that happens, sometimes without realizing it, you can weaken the ownership of the people that you have who are doing the actual work and who own most of the two-way door decisions — the ones that should be made quickly and right at the front line,” Jassy said, using a phrase popularized by Bezos to help determine how much thought and planning to put into big and small decisions.
The layoffs, he said, are meant to restore the kind of ownership and agility that defined Amazon’s early years.
“We are committed to operating like the world’s largest startup,” Jassy said, repeating a line he’s used recently.
Given the “transformation” he described happening across the business world, Jassy said it’s more important than ever to be lean, flat, and fast-moving. “That’s what we’re going to do,” he said.
Jassy’s comments came as Amazon reported quarterly revenue of $180.2 billion, up 13% year-over-year, with AWS revenue growth accelerating to 20% — its fastest pace since 2022.
Amazon said it took a $1.8 billion severance-related charge in the quarter related to the layoffs.
Amazon joins other tech giants including Microsoft that have trimmed headcount this year while investing heavily in AI infrastructure.


Canva has launched a new unified Affinity application that combines the previously separate Designer, Photo, and Publisher tools into one platform. The app offers a complete suite of professional features for vector design, image editing, and desktop publishing, now available free of charge.


Apple has deftly managed its geopolitical risk exposure by negotiating a broad-based import tariff exemption from the Trump Administration. Even so, the Cupertino giant has not been able to fully neutralize the impact of the US import tariffs, courtesy of its labyrinthine, sprawling global supply chain: Apple faced $1.1 billion in tariff-related costs in its fiscal Q4 2025. The company adopted a two-pronged strategy to deal with US import tariffs and the trade war, and it has already started shipping its US-made servers to its datacenters, where they will help power features such as […]
Read full article at https://wccftech.com/apple-1-1-billion-hit-in-q4-2025/

Apple's iPhone product segment missed analysts' consensus expectations for the just-concluded fiscal fourth quarter of 2025, largely due to transitory weakness in iPhone 17 sales. Now, however, Apple has not only given a reasonable explanation for the miss, but has also offered surprising guidance for the ongoing December-ending quarter: the company expects its best-ever December quarter. As we noted in our dedicated post on the topic, Apple's iPhone revenue missed fiscal Q4 2025 expectations, which were pegged at $50.19 billion, versus the $49.03 billion haul that the Cupertino giant reported for the three-month period. During the earnings call, […]
Read full article at https://wccftech.com/tim-cook-apple-will-have-its-best-ever-december-quarter-thanks-to-the-iphone-17-lineup/

Ever since Apple announced its AI strategy revamp under the Apple Intelligence banner, there has been a perception that the company is struggling to keep pace with its lofty ambitions. There are increasing signs, however, that Apple is making some much-needed headway in this sphere, per tidbits gleaned from Apple's Q3 2025 earnings call. Do note that Apple has been working to introduce a number of key Apple Intelligence features with its Spring 2026 iOS update (most likely iOS 26.4). Of course, Apple Mac users can already enjoy […]
Read full article at https://wccftech.com/tim-cook-the-new-siri-under-the-apple-intelligence-banner-to-debut-in-2026/

Apple has just announced the earnings for its fiscal Q4 2025, reporting $102.47 billion in total revenue, which includes $49.03 billion from iPhones and $28.75 billion from services, and $27.47 billion in net profit. Apple Fiscal Q4 2025 Earnings: iPhones and services show growth Here are the key highlights from Apple's latest quarterly earnings release: For the full fiscal year 2025, Apple earned $416.16 billion in revenue, corresponding to a year-over-year increase of 6.42 percent relative to $391.04 billion that it earned in its last fiscal year. During the just-concluded fiscal year 2025, Apple earned $307 billion from its products […]
Read full article at https://wccftech.com/apple-fiscal-q4-2025-earnings-revenue-from-iphones-and-ipads-disappoints/

Maintaining its tradition for every Apple Silicon Mac launched so far, Amazon has introduced a discount on the 14-inch M5 MacBook Pro, shaving $50 off the base and 1TB storage models. The new lineup now starts at $1,549 instead of $1,599, with the price cut applied to both the Space Black and Silver colors. While the discount will slowly creep up as the months go by, you might want to keep the 14-inch M4 MacBook Pro in mind as a viable option, because it offers tremendous value right now. For those looking to save money, the 14-inch M4 […]
Read full article at https://wccftech.com/m5-macbook-pro-in-512gb-and-1tb-options-now-50-cheaper-on-amazon/


This story originally appeared on Real Estate News.
Zillow continues to be an overachiever, at least with its financial performance.
The home search giant’s revenue has consistently beat expectations for the past two years, and Q3 was no different: Revenue was $676 million for the third quarter, up 16% year-over-year and above the company’s previous guidance, driven by the strength of its rentals and mortgage divisions.
Rentals revenue was up 41% year-over-year to $174 million, while mortgage revenue increased 36% to $53 million, according to Zillow’s shareholder letter. The company’s main revenue stream, residential, rose 7% to $435 million.
Zillow also turned a profit, netting $10 million during the quarter and sustaining its run of profitability for a third consecutive quarter.
“Zillow’s Q3 results show how well we’re delivering on our mission to make buying, selling, financing and renting easier,” Zillow CEO Jeremy Wacksman said in a news release. “Zillow is leading the industry toward a more transparent, consumer-first future.”
The real estate portal also continues to see growth in its website traffic, hitting 250 million average monthly unique visitors in the third quarter, up 7% year-over-year.
Wacksman and CFO Jeremy Hofmann acknowledged that they are also aware of the “external noise” that has gotten louder in recent months, possibly referring to recent lawsuits involving the company and the debate over exclusive listings, including Zillow’s private listing ban.
Revenue: $676 million, up 16% year-over-year. Residential increased 7% to $435 million; mortgage revenue was up 36% to $53 million; and rentals revenue climbed 41% to $174 million.
Cash and investments: $1.4 billion at the end of September, up from $1.2 billion at the end of June.
Adjusted EBITDA (earnings before interest, taxes, depreciation and amortization): $165 million in Q3, up from $127 million a year earlier.
Net income/loss: A gain of $10 million in Q3, up from $2 million the previous quarter, an improvement over its $20 million loss a year ago.
Traffic and visits: Traffic across all Zillow Group websites and apps totaled 250 million average monthly unique users in Q3, up 7% year-over-year, the company said. Total visits were 2.5 billion in Q3, up 4% year-over-year.
Q4 outlook: For the fourth quarter, Zillow estimates revenue will be in the $645 million to $655 million range, which would represent high single-digit year-over-year growth.
The rise of AI marks a critical shift away from decades defined by information-chasing and a push for more and more compute power.
Canva co-founder and CPO Cameron Adams refers to this dawning time as the “imagination era,” in which individuals and enterprises must be able to turn creativity into action with AI.
Canva hopes to position itself at the center of this shift with a sweeping new suite of tools. The company’s new Creative Operating System (COS) integrates AI across every layer of content creation, making it a single, comprehensive creativity platform rather than a simple, template-based design tool.
“We’re entering a new era where we need to rethink how we achieve our goals,” said Adams. “We’re enabling people’s imagination and giving them the tools they need to take action.”
Adams describes Canva’s platform as a three-layer stack: The top Visual Suite layer containing designs, images and other content; a collaborative Canva AI plane at center; and a foundational proprietary model holding it all up.
At the heart of Canva’s strategy is the underlying Creative Operating System (COS). This “engine,” as Adams describes it, integrates documents, websites, presentations, sheets, whiteboards, videos, social content, hundreds of millions of photos, illustrations, a rich sound library, and numerous templates, charts, and branded elements.
The COS is getting a 2.0 upgrade, but the crucial advance is the middle layer, which fully integrates AI and makes it accessible throughout various workflows, Adams explained. This gives creative and technical teams a single dashboard for generating, editing and launching all types of content.
The underlying model is trained to understand the “complexity of design” so the platform can build out various elements — such as photos, videos, textures, or 3D graphics — in real time, matching branding style without the need for manual adjustments. It also supports live collaboration, meaning teams across departments can co-create.
With a unified dashboard, a user working on a specific design, for instance, can create a new piece of content (say, a presentation) within the same workflow, without having to switch to another window or platform. Also, if they generate an image and aren’t pleased with it, they don’t have to go back and create from scratch; they can immediately begin editing, changing colors or tone.
Another new capability in COS, “Ask Canva,” provides direct design advice. Users can tag @Canva to get copy suggestions and smart edits; or, they can highlight an image and direct the AI assistant to modify it or generate variants.
“It’s a really unique interaction,” said Adams, noting that this AI design partner is always present. “It’s a real collaboration between people and AI, and we think it’s a revolutionary change.”
Other new features include a 2.0 video editor and interactive form and email design with drag-and-drop tools. Further, Canva now incorporates Affinity, its unified app for pro designers spanning vector, pixel and layer workflows, and Affinity is “free forever.”
Branding is critical for enterprises, and Canva has introduced new tools to help organizations consistently showcase theirs across platforms. The new Canva Grow engine integrates business objectives into the creative process so teams can workshop, create, distribute and refine ads and other materials.
As Adams explained: “It automatically scans your website, figures out who your audience is, what assets you use to promote your products, the message it needs to send out, the formats you want to send it out in, makes a creative for you, and you can deploy it directly to the platform without having to leave Canva.”
Marketing teams can now design and launch ads across platforms like Meta, track insights as they happen and refine future content based on performance metrics. “Your brand system is now available inside the AI you’re working with,” Adams noted.
The impact of Canva’s COS is reflected in notable user metrics: More than 250 million people use Canva every month, just over 29 million of whom are paid subscribers. Adams reports that 41 billion designs have been created on Canva since launch, with designs now being produced at a rate of roughly 1 billion each month.
“If you break that down, it turns into the crazy number of 386 designs being created every single second,” said Adams, noting that in Canva’s early days it took users roughly an hour to create a single design.
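Adams’ figure holds up as back-of-envelope arithmetic, assuming a 30-day month:

```python
# Roughly 1 billion designs per month, spread over a 30-day month,
# works out to about 386 designs per second.
designs_per_month = 1_000_000_000
seconds_per_month = 30 * 24 * 60 * 60  # 2,592,000 seconds

rate_per_second = designs_per_month / seconds_per_month
print(round(rate_per_second))  # -> 386
```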
Canva customers include Walmart, Disney, Virgin Voyages, Pinterest, FedEx, Expedia and eXp Realty. DocuSign, for one, reported that it unlocked more than 500 hours of team capacity and saved $300,000-plus in design hours by fully integrating Canva into its content creation. Disney, meanwhile, uses translation capabilities for its internationalization work, Adams said.
Canva plays in an evolving landscape of professional design tools including Adobe Express and Figma; AI-powered challengers led by Microsoft Designer; and direct consumer alternatives like Visme and Piktochart.
Adobe Express (starting at $9.99 a month for premium features) is known for its ease of use and integration with the broader Adobe Creative Cloud ecosystem. It features professional-grade templates and access to Adobe’s extensive stock library, and has incorporated Google's Gemini 2.5 Flash image model and other gen AI features so that designers can create graphics via natural language prompts. Users with some design experience say they prefer its interface, controls and technical advantages over Canva (such as the ability to import high-fidelity PDFs).
Figma (starting at $3 a month for professional plans) is touted for its real-time collaboration, advanced prototyping capabilities and deep integration with dev workflows. However, some say it has a steeper learning curve; its higher-precision design tools make it preferable for professional designers, developers and product teams working on more complex projects.
Microsoft Designer (free version available, though a Microsoft 365 subscription starting at $9.99 a month unlocks additional features) benefits from its integration with Microsoft’s AI capabilities, including Copilot-powered layout and text generation and DALL-E-powered image generation. The platform’s “Inspire Me” and “New Ideas” buttons provide design variations, and users can also import data from Excel, add 3D models from PowerPoint and access images from OneDrive.
However, users report that its stock photos and template and image libraries are limited compared to Canva's extensive collection, and its visuals can come across as outdated.
Canva’s advantage seems to be its extensive library of more than 600,000 ready-to-use templates and an asset library of 141 million-plus stock photos, videos, graphics and audio elements. Its platform is also praised for its ease of use and an interface friendly to non-designers, allowing them to begin quickly without training.
Canva has also expanded into a variety of content types — documents, websites, presentations, whiteboards, videos, and more — making its platform a comprehensive visual suite rather than just a graphics tool.
Canva has four pricing tiers: Canva Free for one user; Canva Pro for $120 a year for one person; Canva Teams for $100 a year for each team member; and the custom-priced Canva Enterprise.
Canva’s COS is underpinned by the company’s frontier model, an in-house, proprietary engine built on years of R&D and research partnerships, including the acquisition of visual AI company Leonardo. Adams notes that Canva also works with top AI providers including OpenAI, Anthropic and Google.
For technology teams, Canva’s approach offers important lessons, including a commitment to openness. “There are so many models floating around,” Adams noted; it’s important for enterprises to recognize when they should work with top models and when they should develop their own proprietary ones, he advised.
For instance, OpenAI and Anthropic recently announced integrations with Canva as a visual layer because, as Adams explained, they realized they didn’t have the capability to create the same kinds of editable designs that Canva can. This creates a mutually-beneficial ecosystem.
Ultimately, Adams noted: “We have this underlying philosophy that the future is people and technology working together. It's not an either or. We want people to be at the center, to be the ones with the creative spark, and to use AI as a collaborator.”

Mixboard is an experimental, AI-powered concepting board designed to help you explore, expand and refine your ideas. You can bring your own images, or use AI to generate…
Reddit continues to drive more interest, as people come for human insights.
Another step towards creator monetization in the app,

Trump officials had hoped to secure a TikTok sell-off agreement from this meeting.
The option will add another padlock to your personal WhatsApp chest of secrets.

More ways to manage your Threads engagement, though it could also help to hide critics.

The tool is built to help you expand upon your visual examples with spoken queries.
Meta's giving advertisers more tools to help generate and qualify leads.

Amazon beat estimates for its third-quarter earnings with $180.2 billion in revenue, up 13% year-over-year, and earnings per share of $1.95, up from $1.43 in the year-ago period.
Amazon shares were up more than 11% in after-hours trading. Growth in the company’s stock has lagged behind rivals Microsoft and Google this year.
Investors were likely pleased with a re-acceleration in Amazon’s closely watched cloud computing unit, which reported $33 billion in sales, up 20% year-over-year and topping analyst estimates. In a press release, Amazon CEO Andy Jassy said AWS is “growing at a pace we haven’t seen since 2022.”
“We continue to see strong demand in AI and core infrastructure, and we’ve been focused on accelerating capacity — adding more than 3.8 gigawatts in the past 12 months,” Jassy added.
The cloud growth should help Amazon counter the Wall Street narrative that its cloud business is falling behind Microsoft and Google in pursuing the AI opportunity.
Amazon’s overall operating income reached $17.4 billion in the third quarter — flat compared to a year ago. The company had forecast operating income of $15.5 billion to $20.5 billion.
The company said its Q3 operating income reflected two special charges, one of them tied to its recent workforce reduction.
The workforce reduction comes amid an efficiency push at Amazon. Jassy has cited a need to reduce bureaucracy and become more efficient in the new era of artificial intelligence.

Online store sales were $67.4 billion, up 10%.
Here are more details from the third-quarter earnings report:
Advertising: The company’s ad business brought in $17.7 billion in revenue in the quarter, up 24% from the year-ago period, topping estimates. Advertising, along with AWS, is a major profit engine.
Third-party seller services: Revenue from third-party seller services was up 12% to $42.5 billion.
Shipping costs: Amazon spent $25.4 billion on shipping in Q3, up 8%.
Physical stores: The category, which includes Whole Foods and other Amazon grocery stores, posted revenue of $5.6 billion, up 7%.
Headcount: Amazon employs 1.57 million people, up 2% year-over-year. That figure does not include seasonal and contract workers.
Prime: Subscription services revenue, which includes Prime memberships, came in at $12.6 billion, up 11%.
Guidance: The company forecasts Q4 sales between $206 billion and $213 billion. Operating income is expected to range between $21 billion and $26 billion, compared with $21.2 billion in the year-ago quarter.
$AMZN Amazon Q3 FY25:
— App Economy Insights (@EconomyApp) October 30, 2025
• Revenue +13% Y/Y to $180.2B ($2.4B beat).
• Operating margin 10% (+0.5pp Y/Y).
• EPS $1.95 ($0.39 beat).
• Q4 Guidance: ~$209.5B ($1.4B beat).
☁️ AWS:
• Revenue +20% Y/Y to $33.0B.
• Operating margin 35% (-3pp Y/Y). pic.twitter.com/2kaNIvC7oy
Anthropic's research explores how large language models perceive and structure text instead of simply producing it word by word.
The post Anthropic Research Shows How LLMs Perceive Text appeared first on Search Engine Journal.
The post Sora Unleashes New Era of AI Character Animation appeared first on StartupHub.ai.
The barrier between imagination and animated reality has just dissolved, fundamentally altering the landscape for content creators and AI developers alike. OpenAI’s latest announcement, “Sora Character Cameos,” showcased in a refreshingly unconventional promotional video, signals a profound shift in how digital characters can be conceived, generated, and deployed. This is not merely an incremental update; […]
The post Alphabet’s AI Advantage: A Bullish Outlook on Google’s Enduring Dominance appeared first on StartupHub.ai.
Alphabet is not merely participating in the artificial intelligence revolution; it is poised to be its definitive winner, a sentiment articulated by Michael Nathanson, founding partner and senior research analyst at MoffettNathanson, during a recent discussion on CNBC’s ‘Power Lunch.’ His perspective challenges the prevailing narrative that AI could destabilize Google’s foundational search business, instead […]
The post OpenAI Aardvark is a GPT-5 agent that hunts security bugs appeared first on StartupHub.ai.
OpenAI's Aardvark is an autonomous AI agent that uses GPT-5 to hunt for software vulnerabilities like a human security researcher.
The post Google’s Model Armor: The AI Bodyguard Preventing Digital Catastrophes appeared first on StartupHub.ai.
The proliferation of AI applications, while transformative, introduces an intricate web of new security vulnerabilities that demand a specialized defense. In a recent “Serverless Expeditions” episode, Google Cloud Developer Advocate Martin Omander spoke with Security Advocate Aron Eidelman about Model Armor, Google’s latest offering designed to shield AI applications from a range of emerging threats. […]
The post Archy funding hits $20M to kill the dental server closet appeared first on StartupHub.ai.
Archy's AI platform aims to save dental practices 80 hours a month by automating the tedious admin work that leads to staff burnout.
The post Alphabet’s AI Investments Drive Record Revenue, Defying Cannibalization Fears appeared first on StartupHub.ai.
Alphabet’s recent earnings call revealed a pivotal moment for the tech giant: the tangible monetization of its extensive AI infrastructure bets, validating a long-term strategy that is now driving unprecedented growth across its core businesses. This robust performance underscores a critical shift in the AI landscape, where strategic investments are now yielding significant, measurable returns, […]
The post Anthropic’s Latest: Claude Code on the Web and Haiku 4.5 Reshape Developer Workflows appeared first on StartupHub.ai.
The future of software development is not merely assisted by AI, but actively orchestrated by it, a vision Anthropic brings closer with its latest advancements: Claude Code on the Web and the powerful, cost-efficient Haiku 4.5 model. These releases, detailed by a company representative in a recent video, signal a profound shift towards more intuitive, […]
The post Cameo CEO: OpenAI’s Trademark Infringement Threatens Brand Authenticity appeared first on StartupHub.ai.
The burgeoning landscape of artificial intelligence, while promising innovation, is simultaneously exposing the critical fault lines in intellectual property law, particularly concerning brand identity. This tension was starkly illuminated when Steven Galanis, CEO of the personalized celebrity video platform Cameo, appeared on CNBC’s “Money Movers” to discuss his company’s trademark lawsuit against OpenAI. Galanis articulated […]
The post Esri AWS AI deal targets generative AI for maps appeared first on StartupHub.ai.
The Esri AWS AI collaboration aims to transform static maps into dynamic, predictive tools using generative AI foundation models.
 AMD is on a mission: replace its proprietary AGESA firmware for its Ryzen and EPYC processors with openSIL, which stands for Open-Source Silicon Initialization Library. The move to openSIL will improve security and scalability while improving customization and control for AMD's customers, including end users. However, that's not what we're
Discussion surrounding the use of AI in game development is once again at the forefront, after EA recently announced its partnership with Stability AI. Those who work in the industry at a variety of levels have chimed in on the matter, and that now includes Strauss Zelnick, CEO of Take-Two Interactive, one of the biggest game publishers in
Nintendo's ongoing patent lawsuit case against Palworld developers Pocketpair has hit a surprising speedbump, though the battle is far from over. For those unaware, Nintendo is suing Pocketpair for patent infringement in Japanese court, and two of the three patents it is claiming have been violated relate to game mechanics that revolve around
RDNA 1 and RDNA 2 graphics cards will continue to receive driver updates for critical security and bug fixes. To focus on optimizing and delivering new and improved technologies for the latest GPUs, AMD Software Adrenalin Edition 25.10.2 is placing Radeon RX 5000 and RX 6000 series graphics cards (RDNA 1 and RDNA 2) into maintenance mode. Future driver updates with targeted game optimizations will focus on RDNA 3 and RDNA 4 GPUs.

Leaders in the Pacific Northwest are largely bullish on the region’s continued economic success — but one threat to the region’s fiscal progress worries them in particular.
“What always strikes me, whether I’m in City Hall in Vancouver or Seattle or Portland, is that everybody talks about the same thing — the high cost of housing,” said Microsoft President Brad Smith at this week’s Cascadia Innovation Corridor conference in Seattle.
“It’s become an enormous barrier, not just for attracting new talent, but for enabling teachers and police officers and nurses and firefighters to live in the communities in which they serve,” he added.
Dr. Tom Lynch, president and director of Seattle’s Fred Hutch Cancer Center, was more succinct.
“My people can’t find places to live,” Lynch said during a Tuesday panel at the same event.
Those concerns are bolstered by research in a new report on the economic viability of the corridor running from Vancouver, B.C., through Seattle to Portland.
The report cites housing costs as one of the top threats to the region’s success, noting that Vancouver’s housing-cost-to-income ratio is among the worst in the world, while in Seattle, median home prices relative to wages have doubled in the past 15 years. Portland, meanwhile, reports net out-migration as workers move to more affordable areas.
Other concerns include rising business costs and regulations, declining numbers of skilled workers and new restrictions on foreign talent immigrating to the U.S., and clean energy shortages.

“We’ve got to find ways to be able to increase the density of our housing, come up with creative solutions for allowing more families to be able to live close to where the jobs are,” Lynch said.
Smith agreed, adding, “The only way to dig ourselves out of this is to harness the power of the market through public-private partnerships, to recognize that zoning and permitting needs to be put to work to accelerate investment.”
Area tech giants have been pursuing those partnerships to tackle the challenge.
In 2019, Microsoft pledged $750 million to boost the affordable housing inventory and has helped build or retain 12,000 units in the region. Amazon in recent years has committed $3.4 billion for housing across three hubs nationally where it has large operations. The company in September celebrated a milestone of building or preserving 10,000 units in the Seattle area.
Despite the efforts, Smith said the shortage keeps worsening and in 2025, new construction starts are expected to be the lowest since before the Great Recession.
The city of Seattle, for one, is looking to sweeten a property-tax exemption deal for developers that could encourage construction, and it is also applying AI to the permitting process in an effort to speed up projects.
Smith also promoted the long-held vision of a high-speed rail line in the Pacific Northwest that would make commutes much faster between growing urban hubs. But a panel Wednesday cautioned that dream is still many years out.
Shares of Navan closed at $20, down 20%, in first-day trading on Thursday, indicating lackluster investor demand for the long-awaited debut.
Navan, which operates an expense management platform with an emphasis on travel, had priced shares for its offering at $25 each late Wednesday. Formerly called TripActions, the company pivoted to a broader platform after its revenue collapsed to zero when the COVID pandemic hit.
The offering raised $923.1 million for the company, whose shares are trading on the Nasdaq under the ticker NAVN. It set an initial valuation of around $6.2 billion.
The move to the public markets has been a long time coming for Palo Alto, California-based Navan, which reportedly first submitted confidential paperwork for a planned offering more than three years ago.
The company had raised $1.2 billion in debt financing and $1 billion in equity funding from venture investors and credit providers, per Crunchbase data. Major venture stakeholders include Andreessen Horowitz, Lightspeed Venture Partners and Zeev Ventures.
Navan had revenue of $329 million in the first half of 2025, up 30% year over year. Growth comes as the company has been investing in developing its agentic AI offering, Navan Cognition, to automate more cumbersome tasks around travel planning and reporting.
Still, the company remains far from profitable. Navan’s net loss for the first half of this year came in just shy of $100 million — up about 7% from the year-earlier period. The loss comes amid higher spending on both R&D and sales and marketing — common for companies on the IPO track — looking to appeal to growth-hungry investors.
Per its IPO filing, Navan has incurred net losses in each year since its inception in 2015 and “may not achieve or, if achieved, sustain profitability in the future.”
IPO activity has picked up in 2025, with Navan one of several larger recent debuts, including well-received entries by consumer fintech Klarna and blockchain lender Figure. We’re also seeing heightened buzz around potential new market entrants.
Illustration: Dom Guzman
Update 30/10/2025: It's official, ARC Raiders' launch on Steam is bigger than The Finals, as it reaches 243,386 concurrent players on Steam. Original Story: Embark Studios' third-person extraction shooter, ARC Raiders, is now live and available on PC, PS5, and Xbox Series X/S and, at least on Steam, we know that the game is having a massive launch. It's even on track to surpass what The Finals accomplished in 2024, as ARC Raiders has over 200K concurrent players on Steam at the time of this writing. Per SteamDB, an hour after it went live, it had reached 140K concurrent players […]
Read full article at https://wccftech.com/arc-raiders-surpasses-200k-concurrent-players-on-steam-at-launch/

UK-based studio Maze Theory (which previously made several Doctor Who VR games and Peaky Blinders: The King's Ransom) and publisher Vertigo Games have just announced the release date of Thief: Legacy of Shadow, a new stealth action game coming out on December 4. Unfortunately, the game will only be available for virtual reality devices, which still comprise a minuscule portion of the overall gaming industry. That said, if you are a VR aficionado, chances are you already have a PlayStation VR2, Quest 2 or 3, or a Steam VR-compatible headset. Legacy of Shadow will bring players back to The City […]
Read full article at https://wccftech.com/theres-a-new-thief-game-coming-out-this-year-though-only-for-vr-devices/

The first and second-gen RDNA lineups are being ditched so quickly, and as per the company's latest statement, these will only receive critical updates. AMD Confirms End of Game Optimization and Feature Updates for Radeon RX 5000 and RX 6000 Series AMD's first RDNA GPU series, aka Radeon RX 5000, is hardly 6 years old, and if you think it is too early to see a drop in official optimization and feature updates, then we have the same news for RX 6000 GPU owners. If you checked the latest release notes for AMD's latest Adrenalin Edition 25.10.2, which officially adds […]
Read full article at https://wccftech.com/amd-rdna-1-2-gpu-driver-support-moved-to-maintenance-mode-game-optimizations-new-tech-for-rdna-3-4-beyond/

Today, Deadline reports that the Call of Duty movie has secured a director and a writer: Peter Berg and Taylor Sheridan. Berg will direct and also co-write alongside Taylor Sheridan. Berg is known for directing several action thriller movies, including 2007's The Kingdom, 2013's Lone Survivor, and 2018's Mile 22. Sheridan, on the other hand, is primarily known as the creator of the Yellowstone franchise (as well as Mayor of Kingstown, Tulsa King, and Lioness), but he also wrote the two Sicario movies. Berg and Sheridan worked together on Hell or High Water, the acclaimed 2016 crime drama film that […]
Read full article at https://wccftech.com/call-of-duty-movie-directed-peter-berg-written-taylor-sheridan/

Metal Gear Solid Delta: Snake Eater launched late this summer on August 28, 2025, on PC, PS5, and Xbox Series X/S, without its announced multiplayer mode, Fox Hunt. We learned a little before Snake Eater's release that Fox Hunt would not arrive alongside the rest of the game, and now that day is finally here, marked with a new gameplay trailer showcasing the mode. The PvP stealth-action mode leans on all the stealth mechanics in the core Snake Eater campaign, and challenges players to make use of everything in Snake's toolbox as they try to be the sneakiest Fox Unit […]
Read full article at https://wccftech.com/metal-gear-solid-delta-snake-eater-fox-hunt-now-live-pc-ps5-xbox-series-x-s/

Intel's former CEO, Pat Gelsinger, has shared his thoughts on NVIDIA producing the first Blackwell chip wafer in the US, expressing his pleasure with the pursuit of American manufacturing. Intel's Pat Gelsinger Supports NVIDIA's Efforts to Bring Advanced Product Manufacturing to the US This marks one of the rare occasions where Gelsinger has actually appreciated NVIDIA's efforts in the AI segment, as, based on some of his past remarks about the firm, Team Green didn't align with what Intel's former CEO had expected from AI. On a post on X, Pat Gelsinger expressed appreciation for NVIDIA's efforts to bring manufacturing […]
Read full article at https://wccftech.com/intel-ex-ceo-pat-gelsinger-praises-nvidia-us-made-blackwell-wafer/
While EA is all-in on Battlefield 6 right now after it had a massive launch at the beginning of October and earlier this week launched its battle royale mode, Battlefield REDSEC, Respawn is still chugging away at Apex Legends, with its 27th season set to arrive next week, titled 'Amped.' The new season arrives with a refresh to one of the game's more popular maps, Olympus, and buffs for a few Legends, specifically Valkyrie, Rampart, and Horizon. This season also adds new mechanics that'll make the game even faster than it already is, with a new mantle boost giving players […]
Read full article at https://wccftech.com/apex-legends-season-27-amped-arrives-next-week/

After releasing a bespoke self-repair manual for each of its iPhone 17 models, Apple has now made available the spare parts for the new lineup, and some of those are, unsurprisingly, quite pricey. Apple has now made available the key spare parts for the iPhone 17 lineup via its Self-Service Repair Store Before going further, do note that the self-repair manuals and these spare parts have constituted a significant component of this year's iFixit score improvements, albeit very slight ones, for the new Apple hardware. The following spare parts are now available in Apple's Self-Service Repair Store for the base […]
Read full article at https://wccftech.com/apple-iphone-air-battery-replacement-will-cost-you-119-iphone-17-pro-max-display-will-set-you-back-by-379/

Final Fantasy Tactics - The Ivalice Chronicles doesn't feature any of the additional content found in the War of the Lions release and the mobile versions of the game, but this could quickly become a thing of the past, as a new mod now available online restores a small portion of this additional content. The WotL Character Repair mod, developed by Dana Crysalis and now available for download for free from Nexus Mods, fixes Balthier and Luso, making it possible to add them to the party via Cheat Engine tables and use them in battle with their unique Jobs and […]
Read full article at https://wccftech.com/new-final-fantasy-tactics-the-ivalice-chronicles-mod-begins-restoration-of-war-of-the-lions-content/

A massive layoff at Amazon, which cut 14,000+ employees and killed further development of New World: Aeternum, has also reportedly killed (for a second time) the Lord of the Rings MMO that was in production at Amazon Game Studios. Spotted by Rock Paper Shotgun, a now former Amazon Game Studios senior gameplay engineer, Ashleigh Amrine, confirmed in a post on her personal LinkedIn page that the "fledgling Lord of the Rings game" was part of the cuts at Amazon. "This morning I was part of layoffs at Amazon Games, alongside my incredibly talented peers on New World and our […]
Read full article at https://wccftech.com/amazon-game-studios-lord-of-the-rings-mmo-cancelled-again-amidst-mass-layoffs/

iFixit has just published a nearly 6.5-minute video on YouTube detailing the repairability of the new Apple M5 iPad Pro, concluding that the device remains one of Apple's least repairable hardware products, although the new self-service tools do boost its overall repairability score. iFixit: "At just 5.1mm thickness, it's thinner than an iPhone Air, which means the screen is mounted flush against the internals." On the whole, iFixit has given the new M5 iPad Pro a provisional repairability score of 5/10. The M5 iPad […]
Read full article at https://wccftech.com/ifixit-apples-self-service-tools-for-the-m5-ipad-pro-bump-up-its-repairability-score/

NVIDIA's Jensen Huang is currently in South Korea for the APEC summit, and it seems he is having a pretty interesting day, spending time with his 'executive friends' at Samsung and Hyundai. Jensen Got a 'Little Too Comfortable' In His Visit to Korea, After Delivering the GTC 2025 Keynote This week has been a jam-packed one for NVIDIA's CEO, as Jensen delivered one of the most important keynotes of his career and then took a flight straight to South Korea for the APEC summit, where he met with Samsung's Chairman Lee Jae-yong and the President of Hyundai Motors, Chung Eui-sun. […]
Read full article at https://wccftech.com/nvidia-ceo-is-having-a-great-time-with-samsung-hyundai-executives/

ARC Raiders is out today on PC, PS5, and Xbox Series X/S consoles, but more importantly for PC players, Embark Studios' latest arrives with support for NVIDIA DLSS 4 with Multi-Frame Generation and NVIDIA Reflex. Furthermore, if you have an RTX 50 Series graphics card in your PC, then you'll be able to multiply the frame rates you see in ARC Raiders by an average of 3.6X, even when playing at 4K. According to NVIDIA, when playing ARC Raiders on an RTX 50 Series card, with DLSS 4 and Multi-Frame Generation while also using DLSS Super Resolution, you can "multiply […]
Read full article at https://wccftech.com/arc-raiders-nvidia-dlss-4-better-performance-multi-frame-generation/

Today, independent developer TaleWorlds Entertainment has confirmed the release date of War Sails, the upcoming expansion for Mount & Blade II: Bannerlord. War Sails was originally scheduled to launch in June, though it was ultimately delayed by TaleWorlds. The expansion is now set to go live on November 26 at 00:00 Pacific Time, 03:00 Eastern Time, 09:00 Central European Time. It will be a simultaneous release on PC and consoles (PlayStation 5 and Xbox Series S|X). Pricing has been confirmed to be $24.99. The announcement was paired with an extensive gameplay showcase that demonstrated the expansion's main features. Players learned […]
Read full article at https://wccftech.com/war-sails-naval-expansion-mount-blade-ii-bannerlord-out-november-26/

keinsaas Navigator combines AI speed with expert reliability to create n8n workflows that actually work. Simply describe your process, and our AI generates a complete workflow using knowledge from 5000+ proven templates. The key difference: automation experts review and optimize everything before delivery.
In 24 hours, you get a production-ready n8n workflow with setup docs, ready to deploy. No technical learning curve, no broken implementations - just reliable automation from manual process to professional solution.
We’re introducing a new logs and datasets feature in Google AI Studio.

The University of Washington’s Paul G. Allen School of Computer Science & Engineering is reframing what it means for its research to change the world.
In unveiling six “Grand Challenges” at its annual Research Showcase and Open House in Seattle on Wednesday, the Allen School’s leaders described a blueprint for technology that protects privacy, supports mental health, broadens accessibility, earns public trust, and sustains people and the planet.
The idea is to “organize ourselves into some more specific grand challenges that we can tackle together to have an even greater impact,” said Magdalena Balazinska, director of the Allen School and a UW computer science professor, opening the school’s annual Research Showcase and Open House.
Here are the six grand challenges:
Balazinska explained that the list draws on the strengths and interests of its faculty, who now number more than 90, including 74 on the tenure track.
With total enrollment of about 2,900 students, last year the Allen School graduated more than 600 undergrads, 150 master’s students, and 50 Ph.D. students.
The Allen School has grown so large that subfields like systems and NLP (natural language processing) risk becoming isolated “mini departments,” said Shwetak Patel, a University of Washington computer science professor. The Grand Challenges initiative emerged as a bottom-up effort to reconnect these groups around shared, human-centered problems.
Patel said the initiative also encourages collaborations on campus beyond the computer science school, citing examples like fetal heart rate monitoring with UW Medicine.
A serial entrepreneur and 2011 MacArthur Fellow, Patel recalled that when he joined UW 18 years ago, his applied and entrepreneurial focus was seen as unconventional. Now it’s central to the school’s direction. The grand challenges initiative is “music to my ears,” Patel said.
In tackling these challenges, the Allen School has a unique advantage over many other computer science schools. Eighteen faculty members currently hold what are known as “concurrent engagements” — formally splitting time between the Allen School and companies and organizations such as Google, Meta, Microsoft, and the Allen Institute for AI (Ai2).

This is a “superpower” for the Allen School, said Patel, who has a concurrent engagement at Google. These arrangements, he explained, give faculty and students access to data, computing resources, and real-world challenges by working directly with companies developing the most advanced AI systems.
“A lot of the problems we’re trying to solve, you cannot solve them just at the university,” Patel said, pointing to examples such as open-source foundation models and AI for mental-health research that depend on large-scale resources unavailable in academia alone.
These roles can also stretch professors thin. “When somebody’s split, there’s only so much mental energy you can put into the university,” Patel said. Many of those faculty members teach just one or two courses a year, requiring the school to rely more on lecturers and teaching faculty.
Still, he said, the benefits outweigh the costs. “I’d rather have 50% of somebody than 0% of somebody, and we’ll make it work,” he said. “That’s been our strategy.”
The Madrona Prize, an annual award presented at the event by the Seattle-based venture capital firm, went to a project called “Enhancing Personalized Multi-Turn Dialogue with Curiosity Reward.” The system makes AI chatbots more personal by giving them a “curiosity reward,” motivating the AI to actively learn about a user’s traits during a conversation to create more personalized interactions.
On the subject of industry collaborations, the lead researcher on the prize-winning project, UW Ph.D. student Yanming Wan, conducted the research while working as an intern at Google DeepMind. (See full list of winners and runners-up below.)
At the evening poster session, graduate students filled the rooms to showcase their latest projects — including new advances in artificial intelligence for speech, language, and accessibility.
DopFone: Doppler-based fetal heart rate monitoring using commodity smartphones

DopFone transforms phones into fetal heart rate monitors. It uses the phone speaker to transmit a continuous sine wave and uses the microphone to record the reflections. It then processes the audio recordings to estimate fetal heart rate. It aims to be an alternative to Doppler ultrasounds, which require trained staff and aren't practical for frequent remote use.
“The major impact would be in the rural, remote and low-resource settings where access to such maternity care is less — also called maternity care deserts,” said Poojita Garg, a second-year PhD student.
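The sensing principle the description outlines can be sketched numerically. This is a rough illustration of the physics, not DopFone's actual pipeline; the carrier frequency, velocity, and in-air speed of sound used below are assumptions for the example, not figures from the project.

```python
# Two-way Doppler shift for a continuous wave reflected off a moving surface:
# the speaker emits a sine wave, the microphone records the reflection, and
# motion of the reflecting surface shifts the reflected frequency.

SPEED_OF_SOUND_AIR = 343.0  # m/s, assumed in-air acoustic sensing setup

def doppler_shift_hz(f_emit_hz: float, surface_velocity_mps: float,
                     c: float = SPEED_OF_SOUND_AIR) -> float:
    """Frequency shift of a reflection off a surface moving at the given velocity."""
    return 2.0 * surface_velocity_mps * f_emit_hz / c

# Motion induced by a fetal heartbeat is tiny; these numbers are illustrative only.
f0 = 20_000.0   # hypothetical inaudible 20 kHz carrier
v = 0.005       # 5 mm/s surface velocity toward the phone
shift = doppler_shift_hz(f0, v)
print(f"{shift:.3f} Hz")  # a sub-hertz shift, which is why careful audio processing is needed
```

The sub-hertz scale of the shift suggests why the recordings need substantial processing before a heart rate can be extracted.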
CourseSLM: A Chatbot Tool for Supporting Instructors and Classroom Learning

This custom-built chatbot is designed to help students stay focused and build real understanding rather than relying on quick shortcuts. The system uses built-in guardrails to keep learners on task and counter the distractions and over-dependence that can come with general large language models.
Running locally on school devices, the chatbot helps protect student data and ensures access even without Wi-Fi.
“We’re focused on making sure students have access to technology, and know how to use it properly and safely,” said Marquiese Garrett, a sophomore at the UW.
Efficient serving of SpeechLMs with VoxServe

VoxServe makes speech-language models run more efficiently. It uses a standardized abstraction layer and interface that allows many different models to run through a single system. Its key innovation is a custom scheduling algorithm that optimizes performance depending on the use case.
The approach makes speech-based AI systems faster, cheaper, and easier to deploy, paving the way for real-time voice assistants and other next-gen speech applications.
“I thought it would be beneficial if we can provide this sort of open-source system that people can use,” said Keisuke Kamahori, third-year Ph.D. student at the Allen School.
ConvFill: Model collaboration for responsive conversational voice agents

ConvFill is a lightweight conversational model designed to reduce the delay in voice-based large language models. The system responds quickly with short, initial answers, then fills in more detailed information as larger models complete their processing.
By combining small and large models in this way, ConvFill delivers faster responses while conserving tokens and improving efficiency — an important step toward more natural, low-latency conversational AI.
“This is an exciting way to think about how we can combine systems together to get the best of both worlds,” said Zachary Englhardt, a third-year Ph.D. student. “It’s an exciting way to look at problems.”
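The small/large collaboration pattern described above can be sketched in a few lines. The functions below are stand-ins, not ConvFill's real models or API: a fast model yields a short draft immediately while a slower, more capable model runs in the background and fills in detail when it finishes.

```python
# Minimal sketch of model collaboration for low-latency responses
# (stand-in functions; not ConvFill's actual implementation).
import threading
import time

def small_model(prompt: str) -> str:
    return "Short answer: yes."  # fast, low-latency draft

def large_model(prompt: str) -> str:
    time.sleep(0.2)              # simulate slower, detailed generation
    return "Detail: here is the full explanation."

def respond(prompt: str):
    result = {}
    worker = threading.Thread(
        target=lambda: result.setdefault("detail", large_model(prompt)))
    worker.start()
    yield small_model(prompt)    # the user hears this right away
    worker.join()                # the detailed answer arrives when ready
    yield result["detail"]

chunks = list(respond("Why is the sky blue?"))
print(chunks)
```

The user-perceived latency is that of the small model, while the total answer quality is bounded by the large one, which matches the "best of both worlds" framing above.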
ConsumerBench: Benchmarking generative AI on end-user devices

Running generative AI locally — on laptops, phones, or other personal hardware — introduces new system-level challenges in fairness, efficiency, and scheduling.
ConsumerBench is a benchmarking framework that tests how well generative AI applications perform on consumer hardware when multiple AI models run at the same time. The open-source tool helps researchers identify bottlenecks and improve performance on consumer devices.
There are a number of benefits to running models locally: “There are privacy purposes — a user can ask for questions related to email or private content, and they can do it efficiently and accurately,” said Yile Gu, a third-year Ph.D. student at the Allen School.
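The kind of measurement a framework like ConsumerBench automates can be sketched as follows. This is a toy workflow with fake workloads, not ConsumerBench's actual API: several AI-style tasks run concurrently on one machine while per-task latency is recorded, so contention shows up as latencies exceeding each task's nominal cost.

```python
# Toy concurrent benchmark: run several simulated workloads at once and
# record per-task wall-clock latency (illustrative, not ConsumerBench itself).
import time
from concurrent.futures import ThreadPoolExecutor

def fake_inference(name: str, work_s: float):
    start = time.perf_counter()
    deadline = start + work_s
    while time.perf_counter() < deadline:  # busy-wait stands in for model compute
        pass
    return name, time.perf_counter() - start

workloads = [("chatbot", 0.05), ("image-gen", 0.10), ("transcriber", 0.05)]
with ThreadPoolExecutor(max_workers=3) as pool:
    results = list(pool.map(lambda w: fake_inference(*w), workloads))

for name, latency in results:
    print(f"{name}: {latency * 1000:.1f} ms")
```

Because the three busy-wait loops contend for the same CPU, measured latencies typically exceed the nominal work time, which is exactly the bottleneck signal such a benchmark is designed to surface.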
Designing Chatbots for Sensitive Health Contexts: Lessons from Contraceptive Care in Kenyan Pharmacies

A project aimed at improving contraceptive access and guidance for adolescent girls and young women in Kenya by integrating low-fidelity chatbots into healthcare settings. The goal is to understand how chatbots can support private, informed conversations and work effectively within pharmacies.
“The fuel behind this whole project is that my team is really interested in improving health outcomes for vulnerable populations,” said Lisa Orii, a fifth-year Ph.D. student.
See more about the research showcase here. Here’s the list of winning projects.
Madrona Prize Winner: “Enhancing Personalized Multi-Turn Dialogue with Curiosity Reward” Yanming Wan, Jiaxing Wu, Marwa Abdulhai, Lior Shani, Natasha Jaques
Runner up: “VAMOS: A Hierarchical Vision-Language-Action Model for Capability-Modulated and Steerable Navigation” Mateo Guaman Castro, Sidharth Rajagopal, Daniel Gorbatov, Matt Schmittle, Rohan Baijal, Octi Zhang, Rosario Scalise, Sidharth Talia, Emma Romig, Celso de Melo, Byron Boots, Abhishek Gupta
Runner up: “Dynamic 6DOF VR reconstruction from monocular videos” Baback Elmieh, Steve Seitz, Ira Kemelmacher, Brian Curless
People’s Choice: “MolmoAct” Jason Lee, Jiafei Duan, Haoquan Fang, Yuquan Deng, Shuo Liu, Boyang Li, Bohan Fang, Jieyu Zhang, Yi Ru Wang, Sangho Lee, Winson Han, Wilbert Pumacay, Angelica Wu, Rose Hendrix, Karen Farley, Eli VanderBilt, Ali Farhadi, Dieter Fox, Ranjay Krishna
Editor’s Note: The University of Washington underwrites GeekWire’s coverage of artificial intelligence. Content is under the sole discretion of the GeekWire editorial team. Learn more about underwritten content on GeekWire.
Entering a new era of Base44
The post Perplexity’s AI Patent Search Aims to Demystify IP for Everyone appeared first on StartupHub.ai.
Perplexity Patents leverages advanced AI to transform complex patent research into a conversational, accessible experience, democratizing IP intelligence for innovators worldwide.
The post Perplexity’s AI Patent Search Aims to Demystify IP for Everyone appeared first on StartupHub.ai.
The post AI Breast Cancer Screening Transforms Rural India Access appeared first on StartupHub.ai.
AI breast cancer screening, powered by MedCognetics and NVIDIA, is bringing critical early detection capabilities to rural India via mobile clinics.
The post AI Breast Cancer Screening Transforms Rural India Access appeared first on StartupHub.ai.
The post Meta’s AI Patience Test: Goldman Sachs on Divergent Tech Fortunes appeared first on StartupHub.ai.
The market’s patience for capital expenditure, particularly in the burgeoning field of artificial intelligence, has become a defining factor in big tech’s recent earnings reactions. This sentiment was acutely underscored when Eric Sheridan, Goldman Sachs’ Co-Head of Tech, Media, and Telecom Research, joined CNBC’s “Squawk on the Street” team to dissect the third-quarter earnings of […]
The post Meta’s AI Patience Test: Goldman Sachs on Divergent Tech Fortunes appeared first on StartupHub.ai.
The post Bevel raises $10M to advance its AI health companion appeared first on StartupHub.ai.
Bevel raised $10 million in a Series A round led by General Catalyst to develop its AI health companion for personalized wellness management.
The post Bevel raises $10M to advance its AI health companion appeared first on StartupHub.ai.
The post How labor shortages may delay data center plans appeared first on StartupHub.ai.
The burgeoning demand for data center capacity, fueled by the insatiable appetite for artificial intelligence and cloud computing, is encountering a significant impediment: a critical shortage of skilled labor. CNBC’s Kate Rogers reported on this burgeoning issue, highlighting how the construction and operational needs of these vital infrastructure hubs are being hampered by a lack […]
The post How labor shortages may delay data center plans appeared first on StartupHub.ai.
The post Enterprise AI Failures: A Startup’s Gold Rush appeared first on StartupHub.ai.
The recent MIT “State of AI in Business 2025” report, widely circulated and often misinterpreted, claims a staggering 95% failure rate for enterprise AI projects. Far from signaling AI’s inherent flaws, this statistic, as dissected by Y Combinator partners Garry Tan, Harj Taggar, Diana Hu, and Jared Friedman on their Lightcone podcast, illuminates a profound […]
The post Enterprise AI Failures: A Startup’s Gold Rush appeared first on StartupHub.ai.
The post Kaizen funding hits $21M to fix awful gov websites appeared first on StartupHub.ai.
Kaizen is using its new $21M in funding to prove that booking a campsite or a DMV appointment can be as seamless as any modern e-commerce experience.
The post Kaizen funding hits $21M to fix awful gov websites appeared first on StartupHub.ai.
The post No Dark GPUs: Why AI Isn’t a Bubble, But an Existential Race appeared first on StartupHub.ai.
“I do not believe we’re in an AI bubble today,” declared Gavin Baker, Managing Partner and CIO of Atreides Management, setting a provocative tone for his discussion with David George, General Partner at a16z. This assertion, delivered at a16z’s Runtime event, anchored a sharp analysis of the current AI boom, differentiating it starkly from past […]
The post No Dark GPUs: Why AI Isn’t a Bubble, But an Existential Race appeared first on StartupHub.ai.
The post Legora raises $150M to advance its AI legal platform appeared first on StartupHub.ai.
Legal technology company Legora raised $150 million to expand its AI-powered platform used by lawyers for research, drafting, and document review.
The post Legora raises $150M to advance its AI legal platform appeared first on StartupHub.ai.
The post Google’s AI Carbon Removal Strategy Takes Shape in Brazil appeared first on StartupHub.ai.
Google's new initiative in Brazil demonstrates how AI is becoming indispensable for scaling diverse carbon removal technologies, from methane capture to reforestation.
The post Google’s AI Carbon Removal Strategy Takes Shape in Brazil appeared first on StartupHub.ai.
The post Andrew Yang on AI’s Economic Storm and Shifting Political Tides appeared first on StartupHub.ai.
“AI is decimating entry-level jobs.” This stark declaration from Andrew Yang, founder and CEO of Noble Mobile, and former Democratic presidential candidate, cut through the morning bustle of CNBC’s ‘Squawk Box.’ Speaking with interviewers Andrew Ross Sorkin and Becky Quick, Yang offered a compelling commentary on the intertwined forces of technological disruption and political realignment, […]
The post Andrew Yang on AI’s Economic Storm and Shifting Political Tides appeared first on StartupHub.ai.
The post The Prompting Company Raises $6.5M for Generative AI Advertising appeared first on StartupHub.ai.
The Prompting Company raises $6.5M to develop its generative AI advertising platform, which inserts brand mentions into AI chatbot conversations.
The post The Prompting Company Raises $6.5M for Generative AI Advertising appeared first on StartupHub.ai.

Ten years into a dream to connect Vancouver, B.C., Seattle, and Portland via a high-speed rail line, stakeholders and backers of the mega-project said Wednesday that they’re still very much on board — and that supporters should prepare for a long trip.
With a lengthy and uncertain timeline ahead, former U.S. Secretary of Transportation Ray LaHood, a speaker at the Cascadia Innovation Corridor conference in Seattle, cautioned many of those in attendance that they likely won’t live long enough to see high-speed rail in the Pacific Northwest.
“When you build big things, they cost big money,” LaHood said. “It took us 50 years to build the interstate system.”
LaHood said the key is to “get on board” now so that “our children and grandchildren” will reap the benefits.

At Cascadia Innovation Corridor’s annual event this week, much of the focus was on how to strengthen the cross-border partnership between three growing cities and numerous locales in between. Leaders discussed ideas around innovation, housing affordability, sustainability, and economic development. They signed a Memorandum of Reaffirmation to solidify commitments.
And Wednesday was about the enhanced transportation connectivity that could help drive it all, and the work that lies ahead: building a coalition of public and political support across the region, securing funding, jumpstarting planning, and more. Even producing promotional videos is part of the massive outreach underway.
Former Washington Gov. Chris Gregoire, Cascadia Innovation Corridor’s chair, said that a decade ago, high-speed rail was just an idea. The next decade can be a defining one.
“You would have thought we were thinking of doing something in outer space by the reaction,” she said. “Today, it is much more than an idea, and we are actually moving forward. While we do have a long way to go, as you well know, we’re funding the first phase of planning built on one of the most unique coalitions in North America.”
Envisioning a mega-region akin to Silicon Valley, in which Vancouver, Seattle and Portland are each only an hour apart, Gregoire highlighted the possibilities that could come with high-speed mobility.
“A UW student can intern in Vancouver, a family in Puget Sound can explore a job in Portland, and a cancer researcher in Vancouver can get home for dinner after a shift in Seattle,” she said. “It’s a new way of living, working and connecting, one that expands what’s possible for everyone who calls Cascadia home.”

The pace to make the dream a reality has been anything but high-speed.
In 2017, Microsoft — which has an office in downtown Vancouver — gave $50,000 to a $300,000 effort led by Washington state to study a high-speed train proposal. In 2021, officials from Washington, Oregon and British Columbia signed a memorandum of understanding to form a committee to coordinate the plan.
Last year, the Federal Railroad Administration awarded the Washington State Department of Transportation $49.7 million to develop a service development plan for Cascadia High-Speed Rail. A timeline on WSDOT’s website points to 2028 for estimated completion of that plan, and for 2029 and beyond it simply says, “future phases to be determined.”
Cascadia is not alone in its quest for high-speed rail.
LaHood, a Republican cabinet member in the Obama administration, recalled the former president’s commitment to rail transportation. He said the Trump administration “clawing back” $4 billion in funding for California’s high-speed rail project between San Francisco and Los Angeles should not be considered a “death knell,” despite challenges in that state.
LaHood pointed to Brightline train projects in Florida, connecting Orlando and Miami, and Las Vegas, with a plan to offer high-speed connectivity to Southern California. Another plan in Texas would connect Houston and Dallas. All are evidence, he said, that this mode of transportation is what Americans want in order to avoid clogged highways and airports.
“Once the politicians catch on to what the people want, boom, you get the kind of rail transportation that people are clamoring for,” LaHood said.
Here are highlights from other speakers at the conference on Wednesday:

Related:
Could you use a new gaming laptop? If so, there's a great deal available on Dell's Alienware 16 Aurora kitted with the latest-generation CPU and GPU hardware from Intel and NVIDIA, respectively. Or you could wait for the inevitable Black Friday and Cyber Monday discounts to come into view, but why wait when you can score a bargain right now? If […]

Astronomers from the International Centre of Radio Astronomy Research (ICRAR), primarily based at Curtin University in Australia, have released the most detailed low-frequency radio image of the Milky Way's galactic plane ever assembled. Rather than the starry, luminous band we're more familiar with, the latest images show a vibrant tapestry […]

With streaming content being watched on bigger displays than ever before, Google wants to ensure that older, lower-resolution videos (think 480p and 720p) don't look janky on your fancy new 4K TV. YouTube is rolling out a suite of creator tools to address that and chief among them is the use of AI to breathe new life into its vast archive […]

If your router isn't using artificial intelligence in some capacity, is it really a router? The answer is yes, but be that as it may, ASUS Republic of Gamers (ROG) isn't leaving anything to chance with its new flagship Wi-Fi 7 model for gamers, the Rapture GT-BE19000AI. Billed as the "world's first AI gaming router," the device sports a built-in […]

NVIDIA can add to its long list of accomplishments becoming the first publicly traded company to reach and surpass a $5 trillion market cap as it rides the AI chip boom to new heights. Equally impressive, the sky high valuation comes just a few months after NVIDIA breached the $4 trillion mark, surpassing Apple's previous record of $3.915 […]


Here’s what your PC needs to run Resident Evil Requiem. Capcom has officially released their PC system requirements for Resident Evil Requiem, which will be arriving on Steam on February 27th 2026. On Steam, Capcom has confirmed that the game will utilise Denuvo’s Anti-Tamper Technology. Furthermore, the game will support Steam Family Sharing. Requiem’s PC […]
The post Is your PC ready for Resident Evil Requiem? – PC System Requirements released appeared first on OC3D.
Sapphire is using the power of Ryzen to fuel its new Edge AI Mini PCs. Sapphire has just launched a new range of EDGE AI mini PCs, delivering compact performance with AMD’s Ryzen AI 300 series processors. These mini PCs support up to 96GB of DDR5 memory onboard and can feature up to 12 CPU […]
The post Sapphire delivers compact power with its Edge AI mini PCs appeared first on OC3D.
AMD appears to have axed “New Game Support” for its RDNA 1 and RDNA 2 GPUs. Based on the release notes for AMD’s new AMD Software 25.10.2 driver, the company has dropped “New Game Support” and “Expanded Vulkan Extension” support for its older RDNA 1 and RDNA 2 graphics cards. This means that users of […]
The post AMD drops “New Game Support” for RDNA 1 and RDNA 2 GPUs with AMD Software 25.10.2 appeared first on OC3D.
It was 2021 and Jonathan Rat was tired of seeing his wife, a dentist, struggle to maintain the tech stack at her practice.
Rat, who had served as a product manager at companies including Uber, Meta and SurveyMonkey, dug into the problem and discovered that “most of the software used in the industry” was more than 20 years old and still required physical servers onsite.
“Most lacked integration with other platforms, were slow and buggy, and impossible to train new employees on,” he recalls.

So Rat teamed up with Benjamin Kolin, a former director of engineering at Uber, to start Archy, an AI-powered platform that aims “to put dental practices on autopilot.” The pair previously led the rebuilding of Uber’s payment platform that’s still in use today.
“I realized there was a massive need and opportunity for a modern, cloud-based software platform and set out to build that,” Rat told Crunchbase News. “I also realized bigger tech players have been building software for the larger healthcare market but overlooked the $500 billion dental industry.”
And now, Archy has just raised $20 million in Series B funding to help it grow even more, it told Crunchbase News exclusively. TCV led the financing, which also included participation from Bessemer Venture Partners, CRV, Entrée Capital and 25 practicing dentists who wrote checks as angel investors. The raise brings Archy’s total funding to date to $47 million, Rat said.
The company raised a $15 million Series A led by Entrée Capital almost exactly one year ago. Rat confirmed the Series B was an up round, but declined to disclose Archy’s valuation.
Archy claims to replace more than five existing tools to handle scheduling, charting, billing, imaging, insurance, payments, staffing, messaging and reporting “from one login.”
It is now building AI agents “to handle the busywork” such as checking eligibility, filing and following up on claims, writing notes, managing patient communications and scheduling, and “turning raw practice data into clear answers,” according to Rat.
The startup processes more than $100 million in payments annually across 45 states and has seen roughly 300% year-over-year growth, he said. It currently serves 2.5 million patients and has processed over 35 million X-rays through its platform.
The company claims that mid-sized dental practices report saving around 80 hours a month by using its technology, and are able to avoid “big hardware costs.” For example, Rat said that one practice saved about $50,000 in its first year of using Archy.
San Jose, California-based Archy operates on a dual-revenue model that combines subscription-based fees with payment processing services, and offers tiered monthly subscription packages. In addition to its subscription fees, Archy serves as a merchant processor for its clients, generating revenue from a percentage of payment transactions processed through the platform.
“This hybrid approach allows us to remain aligned with our clients’ success while providing flexible options that scale with their business needs,” Rat told Crunchbase News.
The company plans to use its new capital to “hire aggressively” across its engineering, AI and go-to-market teams. Presently, it has 57 employees. It plans to expand internationally starting in 2026.
Austin Levitt, partner at TCV, told Crunchbase News via email that his firm had been looking for a way to invest in the dental space “for a long time” but didn’t find a company that was “appropriately tackling the root of the problem — the core PMS (practice management systems)” until it came across Archy.
He added: “We consistently heard that Archy was supremely easy to use, requiring almost no training in contrast to others, providing a seamless ‘iPhone-like’ experience, and reducing what took 10 clicks in other software to one or none in Archy.”
Illustration: Dom Guzman
Live-shopping startup Whatnot plans to grow its new Seattle outpost following a $225 million funding round announced this week.
The company aims to hire more than 75 employees in the region over the next six months — tripling its current local headcount — across product, engineering, and related roles.
Whatnot opened its downtown Seattle office earlier this year. The Los Angeles-based company, now valued at $11.5 billion (up from $5 billion a year ago), said the Seattle expansion is one of its largest talent investments to date.
Founded in 2019, Whatnot’s platform mixes e-commerce and livestream entertainment. Sellers host live video shows on the Whatnot app or website, auctioning or selling products in real time. Buyers can watch, chat, and bid directly during live streams.
The New York Times described the trend as “QVC for the TikTok era.” Whatnot competes against the likes of TikTok (TikTok Shop) and Seattle-based e-commerce giant Amazon (Amazon Live).
Whatnot facilitates transactions between buyers and sellers, and handles payments, logistics, and safety features. The company earns revenue by taking a commission — typically around 8% — on sales made by sellers ranging from independent entrepreneurs to established retailers.
Whatnot more than doubled live sales on its platform this year, to $6 billion. Buyers spend more than 80 minutes per day on Whatnot’s live shows, according to the company. Whatnot is not profitable.
Some of its fastest-growing categories include beauty, women’s fashion, handbags, electronics, antiques, coins, golf, snacks, and live plants.
The company’s Seattle office focuses on product and engineering, including areas such as machine learning, marketplace integrity, and trust & safety. Whatnot has 900 employees across its workforce.
Dan Bear, vice president of engineering, and Kelda Murphy, vice president of talent acquisition, are both based in Seattle. Bear previously opened Seattle offices for Snap, Hulu, and CloudKitchens.
Whatnot is one of more than 130 companies that operate satellite offices in the Seattle region, tapping into the area’s technical talent pool.
The company has 31 open positions on its jobs page. It is hosting an engineering and product networking event in Seattle on Nov. 4.




Morgan Stanley is raising alarm bells around SK Hynix's rapidly depleting DRAM inventory, which is now at effective "sold-out" levels as the AI-driven demand for high-bandwidth memory (HBM) - a type of DRAM - continues to corner an ever greater proportion of the global memory wafer capacity. SK Hynix: "DRAM (DDR5) inventory is down to about two weeks, effectively at a 'produce-and-ship' level" Morgan Stanley is sounding the proverbial gong today as SK Hynix's DRAM inventory levels continue to sink to the bottom-of-the-barrel levels. Before going further, do note that SK Hynix disclosed its earnings for the third quarter of […]
Read full article at https://wccftech.com/sk-hynix-ddr5-inventory-down-to-just-2-weeks/

Today, Nintendo announced that its second-best-selling Nintendo Switch game, Animal Crossing: New Horizons, is getting a Nintendo Switch 2 version, with graphical updates and features that take advantage of the Switch 2 and its improved hardware. Today's Animal Crossing news isn't just for Nintendo Switch 2 players; a new 3.0 title update is also on its way, which will be available to players on both Nintendo Switch and Switch 2. The new Switch 2 version of Animal Crossing: New Horizons leans on the updated hardware for several updates, each shown off in a new trailer, which also goes over the […]
Read full article at https://wccftech.com/animal-crossing-new-horizons-nintendo-switch-2-edition-next-year/

The Central Taiwan Science Park will hold immense significance in the future because it is where TSMC’s new Phase II plant will be constructed. A report states that the company is planning to establish four plants dedicated to 1.4nm production. Although full-scale manufacturing is not expected until the second half of 2028, it will set the stage for chips made on bleeding-edge lithography and also create thousands of jobs in the process. Up to 10,000 jobs could be created by TSMC’s four planned 1.4nm plants, and looking at the recent timeline, Apple will likely be the semiconductor giant’s first customer […]
Read full article at https://wccftech.com/tsmc-building-four-factories-for-1-4nm-production-each-unit-bringing-in-16-billion-revenue/

Samsung has reported its earnings for the third quarter of 2025, reporting broadly upbeat results on the back of the ongoing chip boom. Samsung Electronics Q3 2025 Earnings Highlight Here are the main highlights of the South Korean giant's latest quarterly earnings: Outlook: Commentary: Samsung Electronics has delivered an all-round pristine result for its third quarter of 2025, posting healthy growth in all segments, barring its Visual Display and Digital Appliances division, where Digital Appliances induced a modest year-over-year weakness of around 1 percent. Unsurprisingly, given the emerging dynamics in the memory business, the division recorded the most aggressive growth […]
Read full article at https://wccftech.com/samsung-electronics-q3-2025-earnings-record-revenue-on-roaring-memory-chip-demand/

NVIDIA has confirmed the list of PC games joining the ever-growing GeForce NOW library today. The highlights are Obsidian's sci-fi action RPG The Outer Worlds 2 (which we have reviewed here) and Embark's post-apocalyptic third-person extraction shooter game ARC Raiders. Both games support the NVIDIA RTX Blackwell server upgrade, which means Ultimate subscribers can enable NVIDIA DLSS 4 with Multi Frame Generation to get the highest possible frame rates. Meanwhile, NVIDIA continues to add more RTX 5080-class servers throughout its server regions. The latest one to be enabled is in Sofia, Bulgaria, with Amsterdam and Montréal scheduled to be next. […]
Read full article at https://wccftech.com/geforce-now-adds-the-outer-worlds-2-and-arc-raiders-both-rtx-5080-ready/

The MMORPG AION 2 will launch globally in 2026 with DLSS 4 and Multi Frame Generation support on the PC version (the game will also be available on mobile devices). The initial launch in South Korea and Taiwan is set for November 19. The game is a sequel to Aion: The Tower of Eternity, which launched in 2008 in Korea and the following year worldwide. The original game was powered by Crytek's CRYENGINE, whereas this new installment is made with Unreal Engine 5. Aion 2 takes place around two hundred years later and features a world that, according to NCSOFT, […]
Read full article at https://wccftech.com/aion-2-to-launch-globally-2026-with-dlss-4-multi-frame-generation/

The new memory optimization features apparently improve gaming performance on Ryzen 9000-based systems using Colorful's AM5 motherboards. Colorful Intros "Low Latency" and "High Performance" Memory Modes for its 600 and 800 Series Motherboards Chinese hardware maker, Colorful, has today introduced two new memory-related features for its AM5 motherboards, which can supposedly deliver superior performance in apps and games by reducing the memory latency. Colorful says that since Ryzen 9000 series CPUs have a high memory latency, the new Colorful motherboard memory features can reduce it to optimize performance. Colorful released the "Low Latency" and "High Performance" modes on some of […]
Read full article at https://wccftech.com/colorful-claims-its-new-memory-low-latency-and-high-performance-modes-can-deliver-15-higher-fps-in-battlefield-6/

Xbox gaming revenues declined by $113 million, or 2%, due to a drop in hardware sales and limited growth in gaming content and services in Q1 FY2026 over the prior year, Microsoft confirmed in its latest financial report. On Wednesday, the company reported its Q1 FY2026 earnings, confirming a 29% decline in Xbox hardware revenue, offset in part by growth in Xbox content and services, whose $5.5 billion revenue is a 1% improvement over the "strong prior year". This revenue increase was driven by growth in Xbox Game Pass and third-party content, and partially offset by a decline in […]
Read full article at https://wccftech.com/xbox-revenues-see-113-million-decline-amid-hardware-sales-drop-and-limited-gaming-content-and-services-growth/

Alleged performance of Intel's Panther Lake, Core Ultra X7 358H & Ultra X5 338H CPUs, in Cinebench R23 MT has leaked out. Intel Panther Lake Might Offer Similar Performance As Arrow Lake In Multi-Threaded Tests If These "Alleged" ES Tests For Core Ultra X7 358H & Ultra X5 338H Are To Be Believed A few weeks after posting what are seemingly the first non-official benchmarks of Panther Lake's Xe3 iGPU, LaptopReview has now published CPU performance benchmarks for Intel's upcoming CPUs, the Core Ultra X7 358H and the Core Ultra X5 338H. These two Intel Panther Lake CPUs should be […]
Read full article at https://wccftech.com/intel-core-ultra-x7-358h-ultra-x5-338h-panther-lake-leak-similar-mt-performance-as-arrow-lake/

A look at a new mini docu-series on three projects in Brazil that are each taking a unique approach to tackle CO2 and superpollutants.
Millions of Jio users will get access to the Google AI Pro plan at no extra cost for 18 months.
For years, I told bloggers the same thing: make your content easy enough for toddlers and drunk adults to understand.
That was my rule of thumb.
If a five-year-old can follow what you’ve written and someone paying half-attention can still find what they need on your site, you’re doing something right.
But the game has changed. It’s no longer just about toddlers and drunk adults.
You’re now writing for large language models (LLMs) quietly scanning, interpreting, and summarizing your work inside AI search results.
I used to believe that great writing and solid SEO were all it took to succeed. What I see now:
Clarity beats everything.
The blogs winning today aren’t simply well-written or packed with keywords. They’re clean, consistent, and instantly understandable to readers and machines alike.
Blogging isn’t dying. It’s moving from being a simple publishing tool to a real brand platform that supports off-site efforts more than ever before.
You can’t just drop a recipe or travel guide online and expect it to rank using the SEO tactics of the past.
Bloggers must now think of their site as an ecosystem where everything connects – posts, internal links, author bios, and signals of external authority all reinforce each other.
When I audit sites, the difference between those that thrive and those that struggle almost always comes down to focus.
The successful ones treat their blogs like living systems that grow smarter, clearer, and more intentional with time.
But if content creators want to survive what’s coming, they need to build their sites for toddlers, drunk adults, and LLMs.
In this article, bloggers will learn how to do exactly that.
Let’s be honest: the blogging world feels a little shaky right now.
One day, traffic is steady, and the next day, it’s down 40% after an update no one saw coming.
Bloggers are watching AI Overviews and “AI Mode” swallow up clicks that used to come straight to their sites. Pinterest doesn’t drive what it once did, and social media traffic in general is unpredictable.
It’s not your imagination. The rules of discovery have changed.
We’ve entered a stage where Google volatility is the norm, not the exception.
Core updates hit harder, AI summaries are doing the talking, and creators are realizing that search is no longer just about keywords and backlinks. It’s about context, clarity, and credibility.
But here’s the good news: the traffic that matters is still out there. It just presents differently.
The strongest blogs I work with are seeing direct traffic and returning visitors climb.
People remember them, type their names into search, open their newsletters, and click through from saved bookmarks. That’s not an accident – that’s the result of clarity and consistency.
If your site clearly explains who you are, what you offer, and how your content fits together, you’re building what I call resilient visibility.
It’s the kind of presence that lasts through algorithm swings, because your audience and Google both understand your purpose.
Think of it this way: the era of chasing random keyword wins is over.
The bloggers who’ll still be standing in five years are the ones who organize their sites like smart libraries: easy to navigate, full of expertise, and built for readers who come back again and again.
AI systems reward that same clarity.
They want content that’s connected, consistent, and confident about its subject matter.
That’s how you show up in AI Overviews, People Also Ask carousels, or Gemini-generated results.
In short, confusion costs you clicks, but clarity earns you staying power.
Takeaway
Dig deeper: Chunk, cite, clarify, build: A content framework for AI search
A few years ago, SEO was all about chasing rankings.
You picked your keywords, wrote your post, built some links, and hoped to land on page one.
Simple enough. But that world doesn’t exist anymore.
Today, we’re in what can best be called the retrieval era.
AI systems like ChatGPT, Gemini, and Perplexity don’t list links. They retrieve answers from the brands, authors, and sites they trust most.
Duane Forrester said it best – search is shifting from “ranking” to “retrieval.”
Instead of asking, “Where do I rank?” creators should be asking, “Am I retrievable?”
That mindset shift changes everything about how we create content.
Mike King expanded on this idea, introducing the concept of relevance engineering.
Search engines and LLMs now use context to understand relevance, not just keywords. They look at:
This is where structure and clarity start paying off.
AI systems want to understand who you are and where you stand.
They learn that from your internal links, schema, author bios, and consistent topical focus.
When everything aligns, you’re no longer just ranking in search – you’re becoming a known entity that AI can pull from.
I’ve seen this firsthand during site audits. Blogs with strong internal structures and clear topical authority are far more likely to be cited as sources in AI Overviews and LLM results.
You’re removing confusion and teaching both users and models to associate your brand with specific areas of expertise.
Takeaway
Here’s something I see a lot in my audits: two posts covering the same topic, both written by experienced bloggers, both technically sound. Yet one consistently outperforms the other.
The difference? One shows a clear “Last updated” date, and the other doesn’t.
That tiny detail matters more than most people realize.
Research from Metehan Yesilyurt confirms what many SEOs have suspected for a while: LLMs and AI-driven search results favor recency, and it’s already being exploited in the name of research.
It’s built into their design. When AI models have multiple possible answers to choose from, they often prefer newer or recently refreshed content.
This is recency bias, and it’s reshaping both AI search and Google’s click-through behavior.
We see the same pattern inside the traditional SERPs.
Posts that display visible “Last updated” dates tend to earn higher click-through rates.
People – and algorithms – trust fresh information.
That’s why one of the first things I check in an audit is how Google is interpreting the date structure on a blog.
Is it recognizing the correct updated date, or is it stuck on the original publish date?
Sometimes the fix is simple: remove the old “published on” markup and make sure the updated timestamp is clearly visible and crawlable.
Other times, the page’s HTML or schema sends conflicting signals that confuse Google, and those need to be cleaned up.
When Google or an LLM can’t identify the freshness of your content, you’re handing visibility to someone else who communicates that freshness better.
How do you prevent this? Don’t hide your updates. Celebrate them.
When you update recipes, add new travel information, or test a product, update your post and make the date obvious.
This will tell readers and AI systems, “This content is alive and relevant.”
Now, that being said, Google does keep a history of document versions.
The average post may have dozens of copies stored, and Google can easily compare the recently changed version to its repository of past versions.
Avoid making small changes that do not add value to users or republishing to a new date years later to fake relevancy. Google specifically calls that out in its guidelines.
Takeaway
Let’s talk about what really gets remembered in this new AI-driven world.
When you ask ChatGPT, Gemini, or Perplexity a question, it thinks in entities – people, brands, and concepts it already knows.
The more clearly those models recognize who you are and what you stand for, the more likely you are to be retrieved when it’s time to generate an answer.
That’s where brand SEO comes in.
Harry Clarkson-Bennett in “How to Build a Brand (with SEO) in a Post AI World” makes a great point: LLMs reward brand reinforcement.
They want to connect names, authors, and websites with a clear area of expertise. And they remember consistency.
If your name, site, and author profiles all align across the web (same logo, same tone, same expertise), you start training these models to trust you.
I tell bloggers all the time: AI learns the same way humans do. It remembers patterns, tone, and repetition. So make those patterns easy to see.
I originally discussed these AI buttons in my last article, “AI isn’t the enemy: How bloggers can thrive in a generative search world,” and provided a visual example.
These are simple on-site prompts encouraging readers to save or summarize your content using AI tools like ChatGPT or Gemini.
When users do that, those models start seeing your site as a trusted example. Over time, that can influence what those systems recall and recommend.
Think of this as reputation-building for the AI era. It’s not about trying to game the system. It’s about making sure your brand is memorable, consistent, and worth retrieving.
Fortunately, these buttons are becoming more mainstream, with theme designers like Feast including them as custom blocks.
And the buttons work – I’ve seen creators turn their blogs into small but powerful brands that LLMs now cite regularly.
They did it by reinforcing who they were, everywhere, and then using AI buttons to encourage their existing traffic to save their sites as high-quality examples to reference in the future.
Takeaway
Blogging has never been easy, but it’s never been harder than it is right now.
Between core updates, AI Overviews, and shifting algorithms, creators are expected to keep up with changes that even seasoned SEOs struggle to track.
And that’s the problem – too many bloggers are still trying to figure it all out alone.
If there’s one thing I’ve learned after doing more than 160 site audits this year, it’s this: almost every struggling blogger is closer to success than they think. They’re just missing clarity.
A good SEO audit does more than point out broken links or slow-loading pages. It shows you why your content isn’t connecting with Google, readers, and now LLMs.
My audits are built around what I call the “Toddlers, Drunk Adults, and LLMs” framework.
If your site works for those three audiences, you’re in great shape.
For toddlers
For drunk adults
For LLMs
When bloggers follow this approach, the numbers speak for themselves.
In 2025 alone, my audit clients have seen an average increase of 47% in Google traffic and RPM improvements of 21-33% within a few months of implementing recommendations.
This isn’t just about ranking better. Every audit is a roadmap to help bloggers position their sites for long-term visibility across traditional search and AI-powered discovery.
That means optimizing for things like:
You can’t control Google’s volatility, but you can control how clear, crawlable, and connected your site is. That’s what gets rewarded.
And while I’ll always advocate for professional audits, this isn’t about selling a service.
You need someone who can give you an honest, technical, and strategic look under the hood.
Why?
Because the difference between “doing fine” and “thriving in AI search” often comes down to a single, well-executed audit.
Takeaway
So where does all this lead? What does blogging even look like five years from now?
Here’s what I see coming.
We’re heading toward an increasingly agentic web, where AI systems do the searching, summarizing, and recommending for us.
Instead of typing a query into Google, people will ask their personal AI for a dinner idea, a travel itinerary, or a product recommendation.
And those systems will pull from a short list of trusted sources they already “know.”
That’s why what you’re doing today matters so much.
Every time you publish a post, refine your site structure, or strengthen your brand signals, you’re teaching AI who you are.
You’re building a long-term relationship with the systems that will decide what gets shown and what gets skipped.
Here’s how I expect the next few years to unfold:
The creators who will win in this next chapter are the ones who stop trying to outsmart Google and start building systems that AI can easily understand and humans genuinely connect with.
It’s not about chasing trends or reinventing your site every time an update hits. It’s about getting the fundamentals right and letting clarity, trust, and originality carry you forward.
Because the truth is, Google’s not the gatekeeper anymore. You are.
Your brand, expertise, and ability to communicate clearly will decide how visible you’ll be in search and AI-driven discovery.
Takeaway
If there’s one thing I want bloggers to take away from all this, it’s that clarity always wins.
We’re living through the fastest transformation in the history of search.
AI is rewriting how content is discovered, ranked, and retrieved.
Yes, that’s scary. But it’s also full of opportunity for those willing to adapt.
I’ve seen it hundreds of times in audits this year.
Bloggers who simplify their sites, clean up their data, and focus on authority signals see measurable results.
They show up in AI Overviews. They regain lost rankings. They build audiences that keep coming back, even when algorithms shift again.
This isn’t about fighting AI – it’s about working with it. The goal is to show the system who you are and why your content matters.
Here’s my advice, regardless of the professional you choose:
It’s never been harder to be a content creator, but it’s never been more possible to build something that lasts.
The blogs that survive the next five years will be organized, human, and clear.
The future of blogging belongs to the creators who embrace clarity over chaos. AI won’t erase the human voice – it’ll amplify the ones that are worth hearing.
Here’s to raised voices and future success. Good luck out there.
Dig deeper: Organizing content for AI search: A 3-level framework

Regex is a powerful – yet overlooked – tool in search and data analysis.
With just a single line, you can automate what would otherwise take dozens of lines of code.
Short for “regular expression,” regex is a sequence of characters used to define a pattern for matching text.
It’s what allows you to find, extract, or replace specific strings of data with precision.
In SEO, regex helps you extract and filter information efficiently – from analyzing keyword variations to cleaning messy query data.
But its value extends well beyond SEO.
Regex is also fundamental to natural language processing (NLP), offering insight into how machines read, parse, and process text – even how large language models (LLMs) tokenize language behind the scenes.
Before getting started with regex basics, I want to highlight some of its uses in our daily workflows.
Google Search Console has a regex filter functionality to isolate specific query types.
One of the simplest regex expressions commonly used is the brand regex brandname1|brandname2|brandname3, which is very useful when users write your brand name in different ways.

Google Analytics also supports regex for defining filters, key events, segments, audiences, and content groups.
Looker Studio allows you to use regex to create filters, calculated fields, and validation rules.
Screaming Frog supports the use of regex to filter and extract data during a crawl and also to exclude specific URLs from your crawl.

Google Sheets enables you to test whether a cell matches a specific regex. Simply use the function REGEXMATCH (text, regular_expression).
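The same true/false check is a one-liner in Python with `re.search` — a rough analogue of the Sheets function, not the function itself:

```python
import re

def regexmatch(text: str, pattern: str) -> bool:
    """Rough Python analogue of Google Sheets' REGEXMATCH:
    True if the pattern matches anywhere in the text."""
    return re.search(pattern, text) is not None

print(regexmatch("vegan chocolate cake", "choc"))   # True: "choc" appears mid-string
print(regexmatch("vegan chocolate cake", "^choc"))  # False: the string doesn't start with "choc"
```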
In SEO, we’re surrounded by tools and features just waiting for a well-written regex to unlock their full potential.
If you’re building SEO tools, especially those that involve content processing, regex is your secret weapon.
It gives you the power to search, validate, and replace text based on advanced, customizable patterns.
Here’s a Google Colab notebook with an example of a Python script that takes a list of queries and extracts different variations of my brand name.
You can easily customize this code by plugging it into ChatGPT or Claude alongside your brand name.
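The notebook itself isn't reproduced here, but a minimal sketch of the idea might look like this — the brand name "acme", its variations, and the query list below are all hypothetical stand-ins, purely for illustration:

```python
import re

# Hypothetical brand variations, longest alternative first so
# "acme corp" wins over bare "acme"; compiled case-insensitively.
brand_pattern = re.compile(r"acme corp|acme\.com|acme", re.IGNORECASE)

# Hypothetical search queries pulled from a report
queries = [
    "ACME coupon code",
    "is acme corp legit",
    "best alternatives to acme.com",
    "garden hose reviews",
]

# Keep (query, matched variation) pairs for queries that mention the brand
matches = [(q, m.group(0)) for q in queries if (m := brand_pattern.search(q))]
for query, variation in matches:
    print(f"{query!r} -> matched {variation!r}")
```

Ordering the alternation longest-first matters: regex alternation takes the first alternative that matches, so putting bare `acme` first would swallow the longer variations.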

I’m a fan of vibe coding – but not the kind where you skip the basics and rely entirely on LLMs.
After all, you can’t use a calculator properly if you don’t understand numbers or how addition, multiplication, division, and subtraction work.
I support the kind of vibe coding that builds on a little coding knowledge – enough to use LLMs effectively, test what they produce, and troubleshoot when needed.
Likewise, learning the basics of regex helps you use LLMs to create more advanced expressions.
| Symbol | Meaning |
| --- | --- |
| . | Matches any single character. |
| ^ | Matches the start of a string. |
| $ | Matches the end of a string. |
| * | Matches 0 or more of the preceding character. |
| + | Matches 1 or more of the preceding character. |
| ? | Makes the preceding character optional (0 or 1 time). |
| {} | Matches the preceding character a specific number of times. |
| [] | Matches any one character inside the brackets. |
| \ | Escapes special characters or signals special sequences like \d. |
| \| | Alternation: matches the expression before or after it (OR). |
| () | Groups characters together (for operators or capturing). |
Here’s a list of 10 long-tail keywords. Let’s explore how different regex patterns filter them using the Regex101 tool.
Example 1: Extract any two-character sequence that starts with an “a.” The second character can be anything (i.e., a, then anything).
a.
Example 2: Extract any string that starts with the letter “a” (i.e., a is the start of the string, then followed by anything).
^a.
Example 3: Extract any string that starts with an “a” and ends with an “e” (i.e., any line that starts with a, followed by anything, then ends with an e).
^a.*e$
Example 4: Extract any string that contains "ss" (two consecutive "s" characters).
s{2}
Example 5: Extract any string that contains “for” or “with.”
for|with
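The five patterns above can also be tested in plain Python with the `re` module. The keyword list below is a hypothetical stand-in, since the article's own list appears in a screenshot:

```python
import re

# Hypothetical long-tail keywords standing in for the article's list
keywords = [
    "air fryer salmon recipe",
    "best espresso machine for home",
    "easy pasta dinner with chicken",
    "gluten free banana bread",
]

# Example 1: any "a" followed by any single character
print([re.findall(r"a.", k) for k in keywords])

# Example 2: strings starting with "a"
print([k for k in keywords if re.search(r"^a.", k)])

# Example 3: strings starting with "a" and ending with "e"
print([k for k in keywords if re.search(r"^a.*e$", k)])

# Example 4: strings containing "ss"
print([k for k in keywords if re.search(r"s{2}", k)])

# Example 5: strings containing "for" or "with"
print([k for k in keywords if re.search(r"for|with", k)])
```

Tweaking these patterns in Regex101 first, then dropping them into a script like this, is a quick way to see how small syntax changes shift the matches.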
I’ve also built a sample regex Google Sheet so you can play around, test, and experience the feature in Google Sheets, too. Check it out here.

Note: Cells in the Extracted Text column showing #N/A indicate that the regex didn’t find a matching pattern.
By exploring regex, you’ll open new doors for analyzing and organizing search data.
It’s one of those skills that quietly makes you faster and more precise – whether you’re segmenting keywords, cleaning messy queries, or setting up advanced filters in Search Console or Looker Studio.
Once you’re comfortable with the basics, start spotting where regex can save you time.
Use it to identify branded versus nonbranded searches, group URLs by pattern, or validate large text datasets before they reach your reports.
Experiment with different expressions in tools like Regex101 or Google Sheets to see how small syntax changes affect results.
The more you practice, the easier it becomes to recognize patterns in both data and problem-solving.
That’s where regex truly earns its place in your SEO toolkit.



The post xMEMS raises $21M to advance solid-state chip cooling appeared first on StartupHub.ai.
xMEMS Labs Inc. raised $21 million to commercialize its piezoMEMS technology, a solid-state chip cooling system for compact AI-powered devices.
The post AI’s Trillion-Dollar Reality: Reindustrialization and Geopolitical Strength appeared first on StartupHub.ai.
The age of artificial intelligence, often shrouded in speculative hype, is now demonstrably “actually working,” according to Joe Lonsdale, Palantir co-founder and 8VC founding partner. This tangible progress, he asserts, heralds not just a technological revolution but a fundamental reindustrialization of the United States, demanding unprecedented capital and reshaping global power dynamics. Lonsdale shared these […]
The post AI’s Insatiable Energy Appetite Fuels Uranium Miners appeared first on StartupHub.ai.
The relentless ascent of artificial intelligence, alongside the broader push for electrification, is forging an unprecedented demand for power, a trend that Valérie Noël, Head of Trading at Syz Group, highlights as a potent catalyst for uranium miners. This insight, delivered during her recent interview on CNBC’s *Worldwide Exchange* with anchor Frank Holland, underscored a […]
The post Rakuten Deploys New Guardrail for SAE PII Detection and LLM as a judge appeared first on StartupHub.ai.
A new SAE PII detection method deployed by Rakuten uses model internals to achieve a 96% F1 score, compared to just 51% using the same model as a black-box judge.
The post Solidatus raises £5M to advance AI data lineage platform appeared first on StartupHub.ai.
Data lineage provider Solidatus secured £5M to accelerate its AI-powered platform for enterprise data governance and compliance.
The post IBM’s Granite 4.0: Small Models, Outsized Impact on Enterprise AI appeared first on StartupHub.ai.
IBM’s latest iteration of its Granite models, Granite 4.0, is poised to reshape the enterprise AI landscape by delivering superior performance, unprecedented efficiency, and cost-effectiveness through a groundbreaking hybrid architecture. This new family of small language models challenges the conventional wisdom that larger models inherently equate to better results, demonstrating that strategic architectural innovation can […]
The post CustoMED Announces $6M Funding to Scale AI-Powered 3D Printed Solutions for Orthopedic Surgery appeared first on StartupHub.ai.
Surgeon-First Platform Transforms Medical Imaging into Real-Time, Patient-Specific. Surgical Guides and Implants for New Standard in Precision Surgery.
The post Q.ANT raises $80M to advance photonic AI processors appeared first on StartupHub.ai.
Q.ANT secured total funding of $80 million to commercialize its energy-efficient photonic processors for artificial intelligence and high-performance computing.
The post AI Agent Supervision: Sierra’s Answer to Rogue Chatbots appeared first on StartupHub.ai.
Sierra's platform uses AI 'Supervisors' for real-time correction and 'Monitors' for constant evaluation, aiming to solve the AI reliability problem with more AI.
AMD confirms the leaked codename for Zen 6 Ryzen CPUs—Are all the leaks true? At the Open Compute Project Global Summit, AMD confirmed the codename for its next-generation Zen 6 Ryzen CPUs. AMD’s Zen 6 Ryzen CPUs are “Medusa”, a name that has long been discussed by hardware enthusiasts thanks to prior leaks. This seemingly […]
The post AMD officially confirms Zen 6 “Medusa” Ryzen CPUs at OCP 2025 appeared first on OC3D.
Every founder knows the thrill of the moment: the first term sheet lands, the product is live, the market is opening up. But in 2025, there’s a new line in the sand: Did you clear the regulatory path before you scaled?
Today, it’s not enough to disrupt the market — you have to anticipate the rule-set that will govern it.
Investors are shifting gears. After a decade of “move fast and break things,” they’re asking: Who built the compliance engine before the crash? Because the truth is, regulation has become a form of alpha — a competitive advantage for startups that think of law not as a hurdle, but as a moat.
The startup landscape has changed. High-profile failures — from crypto exchanges to wild valuations in fintech and AI — taught us that the regulatory cost of growth can be massive. Today’s investors and founders alike expect legal strategy from day one, not as an afterthought.
Consider the RegTech market: One recent estimate projects it will swell to about $70.64 billion by 2030, growing at a compound annual rate of roughly 23%. Another forecast predicts growth to $70.8 billion by 2033. The message: Companies are no longer asking if they need compliance automation and legal-engineering infrastructure. They’re asking when they can monetize it.
So when a startup designs its product around KYC, AML, data-protection or licensing from the outset, it’s not just avoiding risk — it’s building a moat others will struggle to cross. For founders, regulation isn’t just the cost of entry anymore — it’s an edge at exit.
There are former unicorns, and there are regulation-ready unicorns. The difference hinges on when they built their compliance architecture, hired legal engineers and treated regulation as product.
Take payment infrastructure: Stripe built payment-security and licensing into its model early, as Stripe’s PCI Level 1 certification and multijurisdiction licenses (U.S. money-transmitter, EU/UK e-money) enabled it to integrate cleanly with Apple Pay, power Shopify’s native payments, and — per a 2023 announcement — expand its role processing payments for Amazon.
Or look at crypto: Coinbase built a licensure footprint early, publishing its U.S. money-transmitter licenses and securing New York’s BitLicense in 2017. Its 2021 SEC S-1 repeatedly frames regulatory compliance and licensing as fundamental to the business.
In insurtech, from the outset, Lemonade hired senior insurance veterans (e.g., former AIG executive Ty Sagalow) and, per its S-1 and subsequent filings, expanded licensure across the U.S., operationalizing the 50-state regulatory landscape rather than trying to route around it.
These examples show a pattern: When compliance is built in from the start, the cost of scaling drops and competitors face much higher entry bars. Regulation becomes a moat — not a burden.
Welcome to the era of the legal engineer. The traditional model (contract signed, then lawyer review, then risk flagged) is being replaced by code, automation and internal teams who speak both product and law.
Startups such as Carta built cap-table software that includes “built-in tools and support to help with compliance year-round,” allowing it to embed governance and securities-law readiness into the product nature of equity management.
Plaid has publicly positioned itself for evolving “data use, access, and consumer permission” rules (e.g., Section 1033) by building features such as data transparency messaging and consent-capture into its API stack — indicating a clear regulatory-first posture in its product roadmap.
And what’s happening in AI? Founders are hiring general counsels on day one to forecast imminent regimes — privacy law (GDPR, CCPA), AI transparency bills, emerging algorithms-as-infrastructure regulation.
The startup battle isn’t simply product vs. product anymore — it’s regulatory architecture vs. regulatory architecture.
Reports back this up: One credible industry estimate shows the global compliance, governance and risk market is already around $80 billion and projected to reach $120 billion in the next five years. In short: Startups that solve compliance at scale are building infrastructure for everyone else to rent. That’s platform-level potential.
Regulation-ready startups aren’t just surviving — they’re attracting smarter capital. Venture funds now assess regulatory maturity, legal runway and governance readiness early on. A startup that can show it isn’t “waiting to deal with compliance” but designed it, has a valuation edge.
Crunchbase data shows global startup funding reached $91 billion in Q2 2025, up 11% year over year. While not all of that is focused on law or compliance, the trend signals that smart investors are digging deeper into risk assessment and governance. Legal tech funding is accelerating, too: The sector recently topped $2.4 billion in venture funding this year, an all-time high.
Funds are no longer only assessing TAM or go-to-market speed; they’re asking: “What’s the regulatory runway? Who owns risk? Who built the compliance pipeline?” Because in sectors like fintech, climate tech, health tech and AI, the fastest growth path is often the one that avoids the enforcement arm.
Let’s zoom out for a moment. We’re moving into a world where regulation isn’t a ceiling — it’s scaffolding. It defines markets, enables scaling and filters winners from pretenders. Founders who see law as a source of architecture, not as chewing-gum-on-the-shoe, will be the ones writing the playbook.
Think about AI: Startups that design for regulatory change (data-provenance, audit trails, rights management) are already positioning for the future.
Think about climate tech: Companies that can navigate evolving carbon-credit regimes or ESG disclosure laws are building invisible advantages.
Think about fintech: Those that mastered licensing, KYC/AML, consumer-data flows early are the backbone of infrastructure.
The next wave of unicorns won’t just have better tech — they’ll have markedly better legal DNA. They won’t just disrupt a market; they’ll help write the rules of the market before they scale.
Because in this new era, regulation isn’t a deadweight — it’s a launchpad.
Aron Solomon is the chief strategy officer for Amplify. He holds a law degree and has taught entrepreneurship at McGill University and the University of Pennsylvania, and was elected to Fastcase 50, recognizing the top 50 legal innovators in the world. His writing has been featured in Newsweek, The Hill, Fast Company, Fortune, Forbes, CBS News, CNBC, USA Today and many other publications. He was nominated for a Pulitzer Prize for his op-ed in The Independent exposing the NFL’s “race-norming” policies.
Illustration: Dom Guzman
 
Feyza Haskaraman is joining Felicis Ventures as a partner after several years at Menlo Ventures, Crunchbase News has exclusively learned.
In her new role, Haskaraman will focus on investing in “soon-to-break-out” AI infrastructure, cybersecurity, and applications companies for Felicis, an early-stage firm with $3.9 billion in assets under management.
During her time at Menlo, Haskaraman sourced investments in startups including Semgrep, Astrix, Abacus, Parade and CloudTrucks — zeroing in early on how AI is reshaping developer security and enterprise infrastructure.

Haskaraman, an MIT graduate who was born in Turkey, brings an engineering background to her role as an investor. She previously worked as an engineer at various companies at different growth stages, including Analog Devices, Fitbit and Nucleus Scientific. She is also a former McKinsey & Co. consultant who advised multibillion-dollar technology companies and early-stage startups on strategy and operations. It was after working with startups at McKinsey that her interest in venture capital was piqued, and she joined Insight Partners.
Her decision to join Menlo Park, California-based Felicis stems from a shared interest alongside firm founder and managing partner Aydin Senkut to build communities even in “unsexy” industries such as infrastructure and security, she said.
“Whether it’s connecting AI founders or bringing together technical and cybersecurity communities, the mission is the same: Believe in the best founders early and help them go the distance,” she told Crunchbase News.
Felicis is currently investing out of its 10th fund, a $900 million vehicle, its largest yet. More than 60% of its investments out of Fund 9 and 10 (so far) are seed stage; 94% are seed or Series A. In 83% of its investments, Felicis has led or co-led the round.
Nearly $3 out of every $4 that it’s deployed have gone into AI-related companies, including n8n, Supabase, Mercor, Crusoe Energy Systems, Periodic Labs, Runway, Revel, Skild AI, Deep Infra, Browser Use, Evertune, Poolside, Letta and LMArena.
In an interview, Haskaraman shared more about her investment plans at Felicis, as well as why she thinks we’re in the “early innings” with AI. This interview has been edited for clarity and brevity.
Let’s talk more about community-building and why you think it’s so important.
Over the past few years in the venture ecosystem, just providing the capital is not enough. You need to surround yourself with the best talent. You’re seeing one of the fiercest talent wars in terms of AI talent.
So one of the things that I’ve spent a lot of time on in my VC career is building a community, going back to my MIT roots, surrounding myself with founders, engineers and operators, and also going into specific domains, like cybersecurity — just building a network of CISOs that I communicate with regularly and really support them however I can, and then obviously get their take on the latest technology.
That type of community-building effort is something that Aydin and I will be building into the strategy for Felicis as well.
Yes, Aydin (Felicis’ founder) has said that he thinks the next generation of enterprise investors aren’t just picking companies, they’re building ecosystems. Would you agree with that?
Yes, we’re fully aligned on that. First of all, it’s a way of sourcing. Being able to source the best founders involves surrounding yourself in a community of people. You get very close to them, and you want to be the first call when they decide to jump ship and start a business.
As early-connection investors, we want to invest in founders as early as possible. That’s why we want to immerse ourselves in these communities, which provide fertile ground for the technical founders coming in and building in AI.
You were investing in AI before the big boom took off. Would you say there’s too much hype around the space?
You are correct that there is a lot of euphoria around AI, but if you look at the overall landscape, we haven’t seen a technology that can have such a large impact.
And we’re already seeing the results in enterprises: buyers and consumers of these solutions, including myself and our team, are seeing immense productivity gains. I remain immensely optimistic about the future and investing in AI; that’s what we are paid to do, and what I also enjoy as a former engineer.
Are there specific aspects of AI that have you particularly excited?
I personally feel we’re still very much at the early innings. It’s been three years since ChatGPT came out, and the model companies really pushed their products into our lives. But if you take a look at what’s happening now, we have agents that are coordinating and automating our work.
What are ways in which we should be securing agent architecture? And that is also evolving across the board, and if you think about another layer down, like the infrastructure to support these LLMs and agents, I have to ask “What do we need underneath?”
I think there’s a lot more that will come, and there’s a lot of hope for innovation that will happen both across the infrastructure layer, as well as agents. There’s also the issue of “can applications actually be enabled?” I go back to the importance of securing our interactions with the agents and making sure that they’re not abused and misused. It’s a great time to be investing in AI.
What stages are you primarily investing in at Felicis?
We try to go as early as possible. But obviously, given our fund’s size, we have flexibility to invest whenever we see the venture scale returns make sense. But the majority of our investments are seed.
It’s such a competitive investing environment right now. How do you stand out?
Ultimately, what founders value is how you will work with them, your references. They value how you show up in those tough times, how you surround them with talent, how you help them see around the corners. That matters a lot.
I believe that winning boils down to the prior founder experiences that you left, people who can speak highly of you and how you work. I tend to be a big hustler. So, there’s a lot more value-add that we want to make sure we bring to the table, even before investments. And then after the investment we can continue to bring that type of value to a company.
Are you investing outside of AI?
I’m investing in AI infrastructure, cybersecurity and AI-enabled apps. We are also on the verge of a big overhaul of the application layer; the companies we’ve seen prior to AI are all getting disrupted.
We’re seeing AI scribes in healthcare intake solutions, for example. We’re seeing code-generation solutions in developer stacks. We are looking at every single vertical, as well as horizontal application. I’m very interested in how all of these verticals’ application layers will get a different type of automation.
What’s your take on the market overall right now?
I feel like I lived three lifetimes in my investing career, just over the past few years. We as a VC community and tech ecosystem learned a lot, obviously, just in terms of what’s happening. We’re seeing a new ingredient in the market, AI, that did not exist during COVID.
Think about the fact that this is not a structural change in the market driven by the economy. This is truly a new technology. I would bucket those waves as separate.
I’m very grateful to be investing at this time. What a time to be investing, because AI is truly game-changing as a technology.
Clarification: The paragraph about Haskaraman’s investments at Menlo Ventures has been updated to more accurately reflect her role.
Illustration: Dom Guzman
 
	


Work on Valve's HLX project, rumored to be the highly anticipated Half-Life 3, is continuing with more optimization passes, suggesting that the project's polishing phase is in full swing. In a new video shared on YouTube a few hours ago, Tyler McVicker, who provided correct updates on Valve's project for years before their official announcements, reviewed some of the additions made to the Source 2 engine with the Counter-Strike 2 October 15 update. Unlike past updates, which introduced new systems and features that aren't in use in any other game powered by the Valve engine, the latest update focuses more […]
Read full article at https://wccftech.com/half-life-3-hlx-optimization-trailer-prepared/

Both companies are aiming for ultra-power-efficient displays that can save significant battery life on laptops. Intel and BOE Announce New AI Energy-Saving Techniques for Laptops, Which Will Adjust Display Refresh Rate According to the Content Last year, BOE, a Chinese display-panel manufacturer, unveiled its Winning Display 1Hz technology that can reduce power consumption by 65%. Today, Intel officially announced its partnership with BOE to deploy the 1Hz Refresh Rate technology and two more efficiency-enhancing features for the laptops, which are aimed at improving the battery life significantly. Intel says that these AI-based technologies will intelligently balance the energy efficiency with […]
Read full article at https://wccftech.com/intel-and-boe-collaborate-to-introduce-1hz-refresh-rate-and-multi-frequency-display/

There are a myriad of differences separating the AirPods Pro 3 from the AirPods Pro 2, but regardless of the features that Apple has incorporated in its flagship wireless earbuds, for the majority of buyers, it all boils down to how much they are willing to pay. Some customers are not bothered parting with $249 of their hard-earned cash for these high-quality goods, while others seem to find solace in picking excellent value. On Amazon, the previous-generation AirPods Pro 2 are 32 percent off, or a $79 delta compared to the AirPods Pro 3, so which one will you pick? […]
Read full article at https://wccftech.com/airpods-pro-3-79-more-expensive-than-airpods-pro-2-which-are-32-percent-off-on-amazon/

Though updated in every possible way, Dragon Quest I & II HD-2D Remake retains the challenge of its classic JRPG roots. If you are a newcomer to the series, understanding some essential quirks is key to enjoying both games right from the start. This guide will walk you through settings, exploration, and combat tips to tame the difficulty. NOTE: Tips devised and refined during two complete playthroughs of both games over the course of 45 hours at Dragon Quest difficulty in the game's PlayStation 5 1.0 version. Screenshots captured from the same version. 3 Essential Settings To Tame Dragon Quest […]
Read full article at https://wccftech.com/how-to/dragon-quest-i-ii-hd-2d-remake-5-tips-to-banish-the-fiends/

CAPCOM has officially opened pre-orders for its highly anticipated game Resident Evil Requiem. You can now purchase the ninth mainline installment of the beloved horror franchise across all platforms: PC (Steam and, for the first time, Epic Games Store), PlayStation 5, Xbox Series S and X, and Nintendo Switch 2. All pre-orders will include “Apocalypse,” a freebie costume for protagonist Grace, as a pre-order bonus. The Standard Edition is priced at $69.99, while the Deluxe Edition adds the Deluxe Kit for $10 more. The Deluxe Kit includes: To celebrate the aforementioned debut of the Japanese publisher on the Epic Games […]
Read full article at https://wccftech.com/resident-evil-requiem-opens-preorders-reveals-accessible-pc-specs/

A new strategy applied by Qualcomm for next year is not just offering its top-tier Snapdragon 8 Elite Gen 5 for the most premium Android flagships out there, but also to offer an alternative to its phone partners that is more affordable and is mass produced on TSMC’s newest 3nm ‘N3P’ process. That SoC is the Snapdragon 8 Gen 5, and given the ludicrous price of the Snapdragon 8 Elite Gen 5, we are confident that we will witness the less expensive solution power several ‘price to performance’ smartphones in 2026. A tipster has shared various specifications of the Snapdragon 8 […]
Read full article at https://wccftech.com/snapdragon-8-gen-5-specifications-and-benchmarks-shared-by-tipster/

Invent offers a cutting-edge platform for creating, launching, and managing smart AI assistants tailored for seamless customer engagement.
Key features include a Unified AI Inbox that consolidates all customer conversations across multiple channels for efficient management, real-time conversation management, seamless AI-to-human handoffs, complete conversation continuity, and no-code setup and customization. Start building your AI assistant today with no credit card required and experience the future of customer support.
 An interactive choreography tool that generates new dance movements based on Sir Wayne McGregor’s archival dance footage with the help of Google AI.
Copy matching files by name instantly
Your system for creating, managing and using AI prompts
Monetize your software with prompts
Lovable for internal apps and dashboards
Visualize your reading network
Web and Terminal agents that scan, fix, and ship secure code
Create AI Web Agents with Just Words
Our first coding model and new interface for agents
Your space to breathe, reduce anxiety & stress, feel better
Open safety reasoning models with custom safety policies
🤖 Turn images into gifs & videos quickly + privately
AI-powered load testing, right inside your IDE
Human-like AI that automates business calls at scale
Keep your Mac awake
Real-time coding collaboration, just like Google Docs
The post AI introspection is real, but it’s unreliable appeared first on StartupHub.ai.
New research suggests AI models can sometimes introspect, checking their own internal 'intentions' to determine if an output was a mistake.
The post Amplitude targets AI brand monitoring chaos appeared first on StartupHub.ai.
Amplitude's new tool formalizes the race for AI brand monitoring, a discipline for an era where being mentioned by an AI is the new top search result.
The post SWE-1.5 model ends the AI speed vs. smarts tradeoff appeared first on StartupHub.ai.
The SWE-1.5 model's performance comes from co-designing the AI model, agent harness, and inference stack as one unified system, not just from training a better model.
The post FAKTUS raises €56M to build neobank for construction SMEs appeared first on StartupHub.ai.
FAKTUS, a neobank for construction SMEs, raised €56 million to scale its AI-powered platform that provides fast financing to solve industry payment delays.
The post Scavenger AI raises €2.5M to advance its AI business intelligence tool appeared first on StartupHub.ai.
Scavenger AI is developing a natural-language platform that allows any employee to query complex company data without technical skills.
The post Human Health raises €4.7M to advance its Precision Health platform appeared first on StartupHub.ai.
Human Health raised €4.7M to expand its AI-powered Precision Health platform, which helps people with chronic conditions track their health and generate actionable insights.
Pitchwise is the smart way for founders to share pitch decks and fundraising materials. Instead of sending PDFs into the void, you get full control and visibility. Require email verification, disable downloads, revoke links anytime, and add your own branding. Founders can embed calls-to-action like booking meetings or gathering feedback directly inside the deck. Powerful analytics show who viewed your deck, for how long, slide-by-slide, even by location and visit frequency—with instant notifications.
Pitchwise also offers plug-and-play deck templates, curated investor lists, and a growing library of 200+ fundraising resources. Free to start, with Pro at just $13 per user per month or $78 per year.
 Our latest PAC-MAN Doodle celebrates the 45th anniversary of the classic arcade game.
The post Alphabet’s Q3 Surge Defies AI Cannibalization Fears appeared first on StartupHub.ai.
Alphabet’s recent third-quarter results have sent a clear message to the market: far from cannibalizing its foundational search business, generative AI appears to be bolstering it, contributing to a robust financial performance that surpassed expectations. This narrative, delivered by CNBC’s MacKenzie Sigalos on ‘Closing Bell Overtime’ to anchor John, highlights Alphabet’s strategic positioning and significant […]
AMD has confirmed its commitment to openSIL "Open Firmware" for next-gen Zen 6-based Ryzen "Medusa" & EPYC "Venice" CPUs. openSIL "Open Firmware" Support For AMD's Next-Gen Zen 6-Powered Ryzen "Medusa" & EPYC "Venice" CPUs Confirmed openSIL or Open Firmware is aimed to be a replacement for traditional firmware solutions such as AGESA. The project was first announced in 2023 and was going to be used for both client and server offerings. At OCP Summit 2025, AMD once again reaffirmed its commitment to openSIL and detailed its plans for future Zen 6 CPUs. Just as a recap, openSIL firmware will offer: […]
Read full article at https://wccftech.com/amd-confirms-opensil-support-zen-6-ryzen-medusa-cpus-1h-2027-epyc-venice-2026/

Samsung is also set to begin production of next-gen HBM4 memory, 24 Gb GDDR7 DRAM & 128 GB+ products in 2026. Samsung All Set To Enter Mass Production on Next-Gen Memory Products Including Stable Supply of 2nm GAA Process In 2026 Samsung has announced its Q3 2025 earnings report, highlighting a 15.4% increase in revenue versus the previous quarter. The South Korean technology company posted a revenue of KRW 86.1 trillion, and also set an all-time high in quarterly sales for its Memory business, mainly driven by strong demand for its HBM3E memory and server SSDs, thanks to heightened AI […]
Read full article at https://wccftech.com/samsung-mass-production-next-gen-hbm4-memory-2026-24gb-gddr7-128gb-ddr5/

ExpenseKit is a simple yet powerful expense tracker that helps you stay on top of your money. Track your spending, set budgets, and view clear charts that show exactly where your money goes. With AI-powered insights, it makes managing finances smarter and easier.
Built with privacy in mind, ExpenseKit keeps your data secure while giving you full control. Easy backup, export your records anytime, and even manage expenses offline. It’s the easiest way to build better financial habits and save more with less effort.
Enterprises, eager to ensure any AI models they use adhere to safety and safe-use policies, fine-tune LLMs so they do not respond to unwanted queries.
However, much of the safeguarding and red teaming happens before deployment, “baking in” policies before users fully test the models’ capabilities in production. OpenAI believes it can offer a more flexible option for enterprises and encourage more companies to bring in safety policies.
The company has released two open-weight models under research preview that it believes will give enterprises more flexibility in how they apply safeguards. gpt-oss-safeguard-120b and gpt-oss-safeguard-20b will be available under a permissive Apache 2.0 license. The models are fine-tuned versions of OpenAI’s open-source gpt-oss, released in August, marking the first release in the oss family since the summer.
In a blog post, OpenAI said oss-safeguard uses reasoning “to directly interpret a developer-provided policy at inference time — classifying user messages, completions and full chats according to the developer’s needs.”
The company explained that, since the model uses a chain-of-thought (CoT), developers can get explanations of the model's decisions for review.
“Additionally, the policy is provided during inference, rather than being trained into the model, so it is easy for developers to iteratively revise policies to increase performance," OpenAI said in its post. "This approach, which we initially developed for internal use, is significantly more flexible than the traditional method of training a classifier to indirectly infer a decision boundary from a large number of labeled examples."
Developers can download both models from Hugging Face.
At the onset, AI models will not know a company’s preferred safety triggers. While model providers do red-team models and platforms, these safeguards are intended for broader use. Companies like Microsoft and Amazon Web Services even offer platforms to bring guardrails to AI applications and agents.
Enterprises use safety classifiers to help train a model to recognize patterns of good or bad inputs. This helps the models learn which queries they shouldn’t reply to. It also helps ensure that the models do not drift and continue to answer accurately.
“Traditional classifiers can have high performance, with low latency and operating cost," OpenAI said. "But gathering a sufficient quantity of training examples can be time-consuming and costly, and updating or changing the policy requires re-training the classifier."
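To make that contrast concrete, here is a toy sketch of the traditional route: a tiny naive-Bayes text classifier fit to hand-labeled examples. The data and labels are invented for illustration; the point is that the policy lives implicitly in the labels, so revising the policy means relabeling the data and retraining.

```python
from collections import Counter
import math

def train_nb(examples):
    """Fit a tiny naive-Bayes text classifier from (text, label) pairs.
    This is the 'decision boundary from labeled examples' approach:
    the policy is implicit in the labels, so changing the policy
    means relabeling data and retraining."""
    counts = {"allow": Counter(), "block": Counter()}
    totals = Counter()
    for text, label in examples:
        counts[label].update(text.lower().split())
        totals[label] += 1
    return counts, totals

def classify(model, text):
    counts, totals = model
    vocab = set(counts["allow"]) | set(counts["block"])
    scores = {}
    for label in ("allow", "block"):
        # log prior + Laplace-smoothed log likelihoods
        score = math.log(totals[label] / sum(totals.values()))
        denom = sum(counts[label].values()) + len(vocab)
        for w in text.lower().split():
            score += math.log((counts[label][w] + 1) / denom)
        scores[label] = score
    return max(scores, key=scores.get)

# Hypothetical labeled training data standing in for a real dataset.
examples = [
    ("how do i reset my password", "allow"),
    ("steps to disable the license check", "block"),
    ("bypass the activation server", "block"),
    ("what is your refund policy", "allow"),
]
model = train_nb(examples)
print(classify(model, "how to bypass license activation"))  # prints "block"
```

A production classifier would be far larger, but the workflow is the same: gather labeled examples, train, and retrain whenever the policy shifts, which is exactly the cost gpt-oss-safeguard aims to avoid.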
The model takes in two inputs at once before it outputs a conclusion on whether the content violates the policy: the policy itself, and the content to classify under its guidelines. OpenAI said the models work best in situations where:
The potential harm is emerging or evolving, and policies need to adapt quickly.
The domain is highly nuanced and difficult for smaller classifiers to handle.
Developers don’t have enough samples to train a high-quality classifier for each risk on their platform.
Latency is less important than producing high-quality, explainable labels.
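As a rough illustration of the policy-at-inference pattern, the sketch below assembles a chat-completion request that carries both the policy and the content to classify. The message layout and model name follow the common OpenAI-compatible serving convention and are assumptions for illustration, not OpenAI's documented prompt format for these models.

```python
import json

def build_safeguard_request(policy: str, content: str,
                            model: str = "gpt-oss-safeguard-20b") -> dict:
    """Assemble a chat-completion payload that supplies the policy at
    inference time. Because the policy travels with the request, revising
    it is a string edit, not a retraining run."""
    return {
        "model": model,
        "messages": [
            # Assumed layout: policy as the system message, content as the user turn.
            {"role": "system",
             "content": f"Classify the user content against this policy:\n{policy}"},
            {"role": "user", "content": content},
        ],
    }

policy = "Flag any message that asks for instructions to bypass software licensing."
req = build_safeguard_request(policy, "How do I crack this app's license check?")
print(json.dumps(req, indent=2))
```

The payload would then be posted to whatever OpenAI-compatible server is hosting the downloaded weights; the chain-of-thought in the response is what gives developers a reviewable explanation of the label.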
The company said gpt-oss-safeguard “is different because its reasoning capabilities allow developers to apply any policy,” even ones they’ve written during inference.
The models are based on OpenAI’s internal tool, the Safety Reasoner, which enables its teams to be more iterative in setting guardrails. They often begin with very strict safety policies, “and use relatively large amounts of compute where needed,” then adjust policies as they move the model through production and risk assessments change.
OpenAI said the gpt-oss-safeguard models outperformed its GPT-5-thinking and the original gpt-oss models on multipolicy accuracy based on benchmark testing. It also ran the models on the ToxicChat public benchmark, where they performed well, although GPT-5-thinking and the Safety Reasoner slightly edged them out.
But there is concern that this approach could bring a centralization of safety standards.
“Safety is not a well-defined concept. Any implementation of safety standards will reflect the values and priorities of the organization that creates it, as well as the limits and deficiencies of its models,” said John Thickstun, an assistant professor of computer science at Cornell University. “If industry as a whole adopts standards developed by OpenAI, we risk institutionalizing one particular perspective on safety and short-circuiting broader investigations into the safety needs for AI deployments across many sectors of society.”
It should also be noted that OpenAI did not release the base model for the oss family of models, so developers cannot fully iterate on them.
OpenAI, however, is confident that the developer community can help refine gpt-oss-safeguard. It will host a Hackathon on December 8 in San Francisco.

Researchers at Nvidia have developed a novel approach to train large language models (LLMs) in 4-bit quantized format while maintaining their stability and accuracy at the level of high-precision models. Their technique, NVFP4, makes it possible to train models that not only outperform other leading 4-bit formats but match the performance of the larger 8-bit FP8 format, all while using half the memory and a fraction of the compute.
The success of NVFP4 shows that enterprises can continue to cut inference costs by running leaner models that match the performance of larger ones. It also hints at a future where the cost of training LLMs will drop to a point where many more organizations can train their own bespoke models from scratch rather than just fine-tuning existing ones.
Model quantization is a technique used to reduce the computational and memory costs of running and training AI models. It works by converting the model's parameters, or weights, from high-precision formats like 16- and 32-bit floating point (BF16 and FP32) to lower-precision formats. The key challenge of quantization is to reduce the size of the model while preserving as much of its knowledge and capabilities as possible.
In recent years, 8-bit floating point formats (FP8) have become a popular industry standard, offering a good balance between performance and efficiency. They significantly lower the computational cost and memory demand for LLM training without a major drop in accuracy.
The next logical step is 4-bit floating point (FP4), which promises to halve memory usage again and further boost performance on advanced hardware. However, this transition has been challenging. Existing 4-bit formats, such as MXFP4, often struggle to maintain the same level of accuracy as their 8-bit counterparts, forcing a difficult trade-off between cost and performance.
NVFP4 overcomes the stability and accuracy challenges of other FP4 techniques through a smarter design and a targeted training methodology. A key issue with 4-bit precision is its extremely limited range: It can only represent 16 distinct values. When converting from a high-precision format, outlier values can distort the entire dataset, harming the model's accuracy. NVFP4 uses a more sophisticated, multi-level scaling approach that better handles these outliers, allowing for a "more precise and accurate representation of tensor values during training," according to Nvidia.
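The outlier problem is easy to demonstrate in a few lines. The sketch below is a toy illustration of why block-wise scaling helps, not Nvidia's actual NVFP4 implementation: it quantizes a tensor to a 16-level grid once with a single global scale and once with per-block scales, and shows that one outlier distorts the global version far more.

```python
import numpy as np

# Toy illustration (not Nvidia's NVFP4 recipe): symmetric quantization to a
# 16-level (4-bit) grid. With one global scale, a single outlier stretches the
# grid and rounds ordinary values to zero; per-block scales (a simplified
# stand-in for multi-level scaling) keep small values representable.

def quantize_4bit(x, block_size=16):
    """Quantize-dequantize with one scale per block of values."""
    x = x.reshape(-1, block_size)
    scale = np.abs(x).max(axis=1, keepdims=True) / 7.0  # int4-style range -8..7
    scale[scale == 0] = 1.0
    q = np.clip(np.round(x / scale), -8, 7)             # 16 distinct levels
    return (q * scale).reshape(-1)                      # dequantized view

rng = np.random.default_rng(0)
x = rng.normal(size=256).astype(np.float32)
x[0] = 50.0  # a single outlier

global_err = np.abs(x - quantize_4bit(x, block_size=256)).mean()
block_err = np.abs(x - quantize_4bit(x, block_size=16)).mean()
assert block_err < global_err  # per-block scaling tolerates the outlier
```

The same intuition scales up: the finer the scaling granularity, the less a rare large value can crush the representation of everything around it.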
Beyond the format, the researchers introduce a 4-bit training recipe that achieves accuracy comparable to FP8. A central component is their “mixed-precision strategy.” Instead of converting the entire model to NVFP4, the majority of layers are quantized while a small fraction of numerically sensitive layers are kept in a higher-precision format like BF16. This preserves stability where it matters most. The methodology also adjusts how gradients are calculated during backpropagation — or the model's learning phase — to reduce biases that can accumulate from low-precision arithmetic.
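A schematic way to picture that recipe (the layer names here are purely illustrative, not Nvidia's actual selection of sensitive layers):

```python
# Hypothetical sketch of a mixed-precision assignment: quantize the bulk of
# the network to 4-bit, keep numerically sensitive layers in BF16.
SENSITIVE_LAYERS = {"embedding", "final_layernorm", "lm_head"}  # illustrative

def precision_for(layer_name: str) -> str:
    return "bf16" if layer_name in SENSITIVE_LAYERS else "nvfp4"

layers = ["embedding", "block0.attn", "block0.mlp", "final_layernorm", "lm_head"]
plan = {name: precision_for(name) for name in layers}

assert plan["block0.mlp"] == "nvfp4"  # bulk of compute runs in 4-bit
assert plan["lm_head"] == "bf16"      # sensitive layers stay high-precision
```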
To test their approach, the Nvidia team trained a powerful 12-billion-parameter hybrid Mamba-Transformer model on a massive 10 trillion tokens. They then compared its performance directly against a baseline model trained in the widely popular FP8 format. The results showed that the NVFP4 model's training loss and downstream task accuracy closely tracked the FP8 version throughout the entire process.
The performance held across a wide range of domains, including knowledge-intensive reasoning, mathematics and commonsense tasks, with only a slight drop-off in coding benchmarks in late training.
"This marks, to our knowledge, the first successful demonstration of training billion-parameter language models with 4-bit precision over a multi-trillion-token horizon, laying the foundation for faster and more efficient training of future frontier models,” the researchers write.
According to Shar Narasimhan, Nvidia's director of product for AI and data center GPUs, in practice, NVFP4’s 4-bit precision format enables developers and businesses to train and deploy AI models with nearly the same accuracy as traditional 8-bit formats.
“By training model weights directly in 4-bit format while preserving accuracy, it empowers developers to experiment with new architectures, iterate faster and uncover insights without being bottlenecked by resource constraints,” he told VentureBeat.
In contrast, FP8 (while already a leap forward from FP16) still imposes limits on model size and inference performance due to higher memory and bandwidth demands. “NVFP4 breaks that ceiling, offering equivalent quality with dramatically greater headroom for growth and experimentation,” Narasimhan said.
When compared to the alternative 4-bit format, MXFP4, the benefits of NVFP4 become even clearer. In an experiment with an 8-billion-parameter model, NVFP4 converged to a better loss score than MXFP4. To reach the same level of performance as the NVFP4 model, the MXFP4 model had to be trained on 36% more data, a considerable increase in training time and cost.
In addition to making pretraining more efficient, NVFP4 also redefines what’s possible. “Showing that 4-bit precision can preserve model quality at scale opens the door to a future where highly specialized models can be trained from scratch by mid-sized enterprises or startups, not just hyperscalers,” Narasimhan said, adding that, over time, we can expect a shift from developing general-purpose LLMs to “a diverse ecosystem of custom, high-performance models built by a broader range of innovators.”
Although the paper focuses on the advantages of NVFP4 during pretraining, its impact extends to inference, as well.
“Models trained on NVFP4 can not only deliver faster inference and higher throughput but shorten the time required for AI factories to achieve ROI — accelerating the cycle from model development to real-world deployment,” Narasimhan said.
Because these models are smaller and more efficient, they unlock new possibilities for serving complex, high-quality responses in real time, even in token-intensive, agentic applications, without raising energy and compute costs.
Narasimhan said he looks toward a future of model efficiency that isn’t solely about pushing precision lower, but building smarter systems.
“There are many opportunities to expand research into lower precisions as well as modifying architectures to address the components that increasingly dominate compute in large-scale models,” he said. “These areas are rich with opportunity, especially as we move toward agentic systems that demand high throughput, low latency and adaptive reasoning. NVFP4 proves that precision can be optimized without compromising quality, and it sets the stage for a new era of intelligent, efficient AI design.”

Another growth milestone for Meta's Twitter replacement.
If it looks like a real human is being mutilated, even in video game form, it is probably going to get restricted
Google reports that AI features are adding searches rather than replacing them, pointing to more AI-led sessions and steady traffic flowing to sites.
The post Google Q3 Report: AI Mode, AI Overviews Lift Total Search Usage appeared first on Search Engine Journal.
The post Salesforce Agentic AI Gets Real-World Performance Benchmark appeared first on StartupHub.ai.
SCUBA, a new benchmark, is redefining how Salesforce Agentic AI is evaluated, focusing on real-world enterprise software interaction and automation.
The post Synthesia reportedly raises $200M to advance AI video generation appeared first on StartupHub.ai.
AI video generation company Synthesia reportedly raised $200M to scale its platform that turns text into videos using lifelike avatars for enterprise clients.
Meta is bringing in more money as it continues to pour more into AI.
Google Chrome will enable "Always Use Secure Connections" by default in October 2026, warning users before accessing public sites without HTTPS encryption.
The post Chrome To Warn Users Before Loading HTTP Sites Starting Next Year appeared first on Search Engine Journal.
The post MIXI’s Enterprise AI Adoption: A Blueprint for Accelerated Efficiency appeared first on StartupHub.ai.
MIXI, a Japanese company renowned for its communication-centric businesses like MONSTER STRIKE and FamilyAlbum, has demonstrated a remarkable blueprint for rapid, organization-wide AI adoption, deploying ChatGPT Enterprise to all employees within 45 days. This swift integration led to over 80% weekly usage within three months and the creation of more than 1,600 custom GPTs, yielding […]
The post Google AI Revenue Growth Fuels Record Quarter appeared first on StartupHub.ai.
Google's Q3 2025 earnings mark a record $100 billion quarter, with AI driving unprecedented Google AI revenue growth across its entire ecosystem.
The post Alphabet’s AI Gamble Pays Off in Q3, Fueling Search and Cloud Growth appeared first on StartupHub.ai.
The notion that generative AI might cannibalize Alphabet’s foundational search advertising business was a prevalent concern amongst investors and industry observers alike. However, the company’s Q3 results, as reported by CNBC’s MacKenzie Sigalos, unequivocally demonstrate a different narrative: AI is not merely a defensive play but a potent accelerant for Alphabet’s core segments and burgeoning […]
The post Microsoft’s Profitable AI Play: A Strategic Masterclass appeared first on StartupHub.ai.
The prevailing market skepticism around AI’s immediate profitability finds a powerful counter-narrative in Microsoft’s recent earnings, suggesting a robust monetization strategy is already underway. Brent Thill, a Software & Internet Research Analyst at Jefferies, speaking on CNBC’s ‘Closing Bell Overtime’ with Kelly Evans and Jon Fortt, offered a sharp analysis of Microsoft’s Q1 results, highlighting […]
The post AI’s Second Inning: Decoding Big Tech’s Investment Spree and Market Reactions appeared first on StartupHub.ai.
“We’re in the second innings of this,” declared Stephanie Link of Hightower Advisors on CNBC’s Closing Bell Overtime, referring to the burgeoning artificial intelligence trade. Her commentary, delivered amidst a flurry of recent earnings reports, offered a nuanced perspective on the market’s current fixation with AI, particularly concerning the substantial capital expenditures undertaken by major […]
The post OpenAI AgentKit: Accelerating Agentic Workflow Development from Months to Hours appeared first on StartupHub.ai.
“Your agent is only as good as its weakest link,” stated Henry Scott-Green, Product Manager at OpenAI, during a recent Build Hours session introducing AgentKit. This profound insight underpins the necessity for robust, integrated tools in the rapidly evolving landscape of AI agent development. AgentKit, OpenAI’s latest offering, aims to provide exactly that: a comprehensive […]
The post ENEOS Materials Redefines Enterprise AI Adoption with ChatGPT Enterprise appeared first on StartupHub.ai.
“AI will become infrastructure, just like electricity or computers. If you can harness its power, you’ll achieve much greater results.” This profound statement from Taku Ichibayashi, Manager of R&D Digital Group at ENEOS Materials, encapsulates the transformative vision driving one of Japan’s earliest and most successful deployments of ChatGPT Enterprise. The video showcases ENEOS Materials’ […]

Tor Browser 15.0 is based on Firefox 140 ESR, incorporating a year's worth of Mozilla's updates and security fixes. The update introduces vertical tabs for easier page management, along with new "workspaces" to organize tab groups more efficiently. Bookmarks are now accessible from the sidebar, and a redesigned address bar offers a cleaner, more modern browsing experience.

Powered by a Core Ultra 7 CPU and RTX 5060 GPU, this model includes 32 GB of RAM, a 1 TB SSD, and a sharp 240 Hz display. Normally $1,800, it's currently discounted to $1,299 at Dell. Reviewers highlight the Alienware's premium build, efficient cooling, and easy upgradability via dual M.2 slots and user-replaceable RAM.
The first-generation AirPods Pro have hounded Apple ever since their launch back in 2019, quickly giving way to persistent complaints of crackling and static that prompted a lawsuit in November 2024. Now, however, Apple seems to have secured a partial victory of sorts by getting the scope of that lawsuit severely narrowed: the company only needs to defend itself against the fraud-by-omission claim. Judge Noël Wise handed Apple this partial win by throwing out the […]
Read full article at https://wccftech.com/apple-gets-a-partial-win-on-the-narrowed-scope-of-the-airpods-pro-crackling-lawsuit/

Apple has steadily piled up roadblocks for consumers hoping to avoid paying the company's premium for storage and RAM upgrades, forcing most to fork over a substantial sum. For instance, the 2TB version of the iPhone 17 Pro Max costs a mammoth $1,999, making it more expensive than the company’s higher-end MacBook Pro models. Fortunately, one intrepid modder found a way to save $800 by performing this delicate procedure himself. However, bear in mind that he only succeeded thanks to the availability of intricate tools combined with his unyielding patience. In addition to requiring a […]
Read full article at https://wccftech.com/iphone-17-pro-max-storage-modifications-saves-800-but-is-extremely-risky/

Samsung finally took the wraps off its much-anticipated Galaxy Z TriFold on Tuesday, revealing a fairly thin triple-folding smartphone that unfurls to a nearly 10-inch display. In fact, given the smartphone's apparent dimensions, it is quite plausible that it uses silicon-carbon (Si/C) batteries, apparently confirming a week-old rumor: the Galaxy Z TriFold measures between 12 mm and 15 mm thick in its compact form, a feat that is difficult to achieve without Si/C cells. As we noted earlier this week, Samsung displayed the Galaxy Z TriFold, albeit behind a glass panel, at the "K-Tech Showcase" on October 28 in the […]
Read full article at https://wccftech.com/samsungs-galaxy-z-trifold-unveil-all-but-confirms-one-tantalizing-rumor/

With the new memory diagnostic scan, users will be able to tell whether a crash was due to memory-related issues, helping them troubleshoot the root cause of sudden restarts. Microsoft is introducing memory diagnostics at Windows 11 reboot to detect and mitigate the memory bugs behind BSODs and unexpected restarts. Windows crashes can be sudden, and it isn't always possible to pin down their exact cause. Memory-related crashes and BSODs (Blue Screen of Death) are pretty common, but they can stem from various factors, such as memory instability, faulty RAM, mismatched memory modules, incorrect XMP/EXPO overclocking, […]
Read full article at https://wccftech.com/windows-11-will-start-triggering-proactive-memory-diagnostics-at-reboot/

[Update - October 30, 6:07 AM ET] The issues Microsoft's Azure cloud service experienced yesterday have been solved, and all affected services, including Xbox game downloads are now back online. Original story follows. [Original Story] Microsoft's Azure cloud service is experiencing a massive outage affecting multiple services, including Xbox game downloads and Minecraft. Microsoft confirmed the cause in an Azure status update, stating the widespread connectivity issues began around 16:00 UTC. The company attributed the trigger event to "an inadvertent configuration change" in the Azure Front Door (AFD) service. Several concurrent actions are being taken to solve the issue, but […]
Read full article at https://wccftech.com/microsoft-azure-outage-is-affecting-xbox-game-downloads-minecraft-and-more/

The vibe coding tool Cursor, from startup Anysphere, has introduced Composer, its first in-house, proprietary coding large language model (LLM) as part of its Cursor 2.0 platform update.
Composer is designed to execute coding tasks quickly and accurately in production-scale environments, representing a new step in AI-assisted programming. It's already being used by Cursor’s own engineering staff in day-to-day development — indicating maturity and stability.
According to Cursor, Composer completes most interactions in less than 30 seconds while maintaining a high level of reasoning ability across large and complex codebases.
The model is described as four times faster than similarly intelligent systems and is trained for “agentic” workflows—where autonomous coding agents plan, write, test, and review code collaboratively.
Previously, Cursor supported "vibe coding" — using AI to write or complete code based on natural language instructions from a user, even someone untrained in development — atop other leading proprietary LLMs from the likes of OpenAI, Anthropic, Google, and xAI. These options are still available to users.
Composer’s capabilities are benchmarked using "Cursor Bench," an internal evaluation suite derived from real developer agent requests. The benchmark measures not just correctness, but also the model’s adherence to existing abstractions, style conventions, and engineering practices.
On this benchmark, Composer achieves frontier-level coding intelligence while generating at 250 tokens per second — about twice as fast as leading fast-inference models and four times faster than comparable frontier systems.
Cursor’s published comparison groups models into several categories: “Best Open” (e.g., Qwen Coder, GLM 4.6), “Fast Frontier” (Haiku 4.5, Gemini Flash 2.5), “Frontier 7/2025” (the strongest model available midyear), and “Best Frontier” (including GPT-5 and Claude Sonnet 4.5). Composer matches the intelligence of mid-frontier systems while delivering the highest recorded generation speed among all tested classes.
Research scientist Sasha Rush of Cursor provided insight into the model’s development in posts on the social network X, describing Composer as a reinforcement-learned (RL) mixture-of-experts (MoE) model:
“We used RL to train a big MoE model to be really good at real-world coding, and also very fast.”
Rush explained that the team co-designed both Composer and the Cursor environment to allow the model to operate efficiently at production scale:
“Unlike other ML systems, you can’t abstract much from the full-scale system. We co-designed this project and Cursor together in order to allow running the agent at the necessary scale.”
Composer was trained on real software engineering tasks rather than static datasets. During training, the model operated inside full codebases using a suite of production tools—including file editing, semantic search, and terminal commands—to solve complex engineering problems. Each training iteration involved solving a concrete challenge, such as producing a code edit, drafting a plan, or generating a targeted explanation.
The reinforcement loop optimized both correctness and efficiency. Composer learned to make effective tool choices, use parallelism, and avoid unnecessary or speculative responses. Over time, the model developed emergent behaviors such as running unit tests, fixing linter errors, and performing multi-step code searches autonomously.
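The shape of that reward signal can be sketched in a few lines. This is a simplified illustration, not Cursor's actual objective: correctness dominates, and each tool call carries a small cost, so efficient trajectories score higher and speculative tool use is discouraged.

```python
# Toy sketch (illustrative, not Cursor's system) of a reward that trades off
# task success against tool-use cost, as described for Composer's RL loop.
def reward(solved: bool, tool_calls: int, cost_per_call: float = 0.01) -> float:
    """Full credit for solving the task, minus a small penalty per tool call."""
    return (1.0 if solved else 0.0) - cost_per_call * tool_calls

# Two trajectories that both solve the task: the leaner one scores higher,
# which is the pressure that discourages redundant or speculative tool calls.
assert reward(True, 5) > reward(True, 40)
# But efficiency never outweighs correctness at this penalty scale.
assert reward(False, 2) < reward(True, 40)
```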
This design enables Composer to work within the same runtime context as the end-user, making it more aligned with real-world coding conditions—handling version control, dependency management, and iterative testing.
Composer’s development followed an earlier internal prototype known as Cheetah, which Cursor used to explore low-latency inference for coding tasks.
“Cheetah was the v0 of this model primarily to test speed,” Rush said on X. “Our metrics say it [Composer] is the same speed, but much, much smarter.”
Cheetah’s success at reducing latency helped Cursor identify speed as a key factor in developer trust and usability.
Composer maintains that responsiveness while significantly improving reasoning and task generalization.
Developers who used Cheetah during early testing noted that its speed changed how they worked. One user commented that it was “so fast that I can stay in the loop when working with it.”
Composer retains that speed but extends capability to multi-step coding, refactoring, and testing tasks.
Composer is fully integrated into Cursor 2.0, a major update to the company’s agentic development environment.
The platform introduces a multi-agent interface, allowing up to eight agents to run in parallel, each in an isolated workspace using git worktrees or remote machines.
Within this system, Composer can serve as one or more of those agents, performing tasks independently or collaboratively. Developers can compare multiple results from concurrent agent runs and select the best output.
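Cursor doesn't document the mechanism beyond naming git worktrees, but the underlying isolation is easy to reproduce by hand with git itself; the repository and branch names below are illustrative.

```shell
# Illustrative only -- Cursor manages this internally, but the same isolation
# comes from one git worktree per agent, each on its own branch, so parallel
# edits to the same repository never collide.
git init demo-repo && cd demo-repo
git -c user.name=demo -c user.email=demo@example.com \
    commit --allow-empty -m "initial commit"

# One isolated workspace per agent, each on its own branch.
for i in 1 2 3; do
  git worktree add -b "agent-$i" "../agent-$i"
done

git worktree list   # main checkout plus three agent workspaces
```

Each directory is a full working copy backed by the same object store, so agents can build and test independently while their results remain trivially comparable as branches.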
Cursor 2.0 also includes supporting features that enhance Composer’s effectiveness:
In-Editor Browser (GA) – enables agents to run and test their code directly inside the IDE, forwarding DOM information to the model.
Improved Code Review – aggregates diffs across multiple files for faster inspection of model-generated changes.
Sandboxed Terminals (GA) – isolate agent-run shell commands for secure local execution.
Voice Mode – adds speech-to-text controls for initiating or managing agent sessions.
While these platform updates expand the overall Cursor experience, Composer is positioned as the technical core enabling fast, reliable agentic coding.
To train Composer at scale, Cursor built a custom reinforcement learning infrastructure combining PyTorch and Ray for asynchronous training across thousands of NVIDIA GPUs.
The team developed specialized MXFP8 MoE kernels and hybrid sharded data parallelism, enabling large-scale model updates with minimal communication overhead.
This configuration allows Cursor to train models natively at low precision without requiring post-training quantization, improving both inference speed and efficiency.
Composer’s training relied on hundreds of thousands of concurrent sandboxed environments—each a self-contained coding workspace—running in the cloud. The company adapted its Background Agents infrastructure to schedule these virtual machines dynamically, supporting the bursty nature of large RL runs.
Composer’s performance improvements are supported by infrastructure-level changes across Cursor’s code intelligence stack.
The company has optimized its Language Server Protocols (LSPs) for faster diagnostics and navigation, especially in Python and TypeScript projects. These changes reduce latency when Composer interacts with large repositories or generates multi-file updates.
Enterprise users gain administrative control over Composer and other agents through team rules, audit logs, and sandbox enforcement. Cursor’s Teams and Enterprise tiers also support pooled model usage, SAML/OIDC authentication, and analytics for monitoring agent performance across organizations.
Pricing for individual users ranges from Free (Hobby) to Ultra ($200/month) tiers, with expanded usage limits for Pro+ and Ultra subscribers.
Business pricing starts at $40 per user per month for Teams, with enterprise contracts offering custom usage and compliance options.
Composer’s focus on speed, reinforcement learning, and integration with live coding workflows differentiates it from other AI development assistants such as GitHub Copilot or Replit’s Agent.
Rather than serving as a passive suggestion engine, Composer is designed for continuous, agent-driven collaboration, where multiple autonomous systems interact directly with a project’s codebase.
This model-level specialization—training AI to function within the real environment it will operate in—represents a significant step toward practical, autonomous software development. Composer is not trained only on text data or static code, but within a dynamic IDE that mirrors production conditions.
Rush described this approach as essential to achieving real-world reliability: the model learns not just how to generate code, but how to integrate, test, and improve it in context.
With Composer, Cursor is introducing more than a fast model—it’s deploying an AI system optimized for real-world use, built to operate inside the same tools developers already rely on.
The combination of reinforcement learning, mixture-of-experts design, and tight product integration gives Composer a practical edge in speed and responsiveness that sets it apart from general-purpose language models.
While Cursor 2.0 provides the infrastructure for multi-agent collaboration, Composer is the core innovation that makes those workflows viable.
It’s the first coding model built specifically for agentic, production-level coding—and an early glimpse of what everyday programming could look like when human developers and autonomous models share the same workspace.

When researchers at Anthropic injected the concept of "betrayal" into their Claude AI model's neural networks and asked if it noticed anything unusual, the system paused before responding: "I'm experiencing something that feels like an intrusive thought about 'betrayal'."
The exchange, detailed in new research published Wednesday, marks what scientists say is the first rigorous evidence that large language models possess a limited but genuine ability to observe and report on their own internal processes — a capability that challenges longstanding assumptions about what these systems can do and raises profound questions about their future development.
"The striking thing is that the model has this one step of meta," said Jack Lindsey, a neuroscientist on Anthropic's interpretability team who led the research, in an interview with VentureBeat. "It's not just 'betrayal, betrayal, betrayal.' It knows that this is what it's thinking about. That was surprising to me. I kind of didn't expect models to have that capability, at least not without it being explicitly trained in."
The findings arrive at a critical juncture for artificial intelligence. As AI systems handle increasingly consequential decisions — from medical diagnoses to financial trading — the inability to understand how they reach conclusions has become what industry insiders call the "black box problem." If models can accurately report their own reasoning, it could fundamentally change how humans interact with and oversee AI systems.
But the research also comes with stark warnings. Claude's introspective abilities succeeded only about 20 percent of the time under optimal conditions, and the models frequently confabulated details about their experiences that researchers couldn't verify. The capability, while real, remains what Lindsey calls "highly unreliable and context-dependent."
To test whether Claude could genuinely introspect rather than simply generate plausible-sounding responses, Anthropic's team developed an innovative experimental approach inspired by neuroscience: deliberately manipulating the model's internal state and observing whether it could accurately detect and describe those changes.
The methodology, called "concept injection," works by first identifying specific patterns of neural activity that correspond to particular concepts. Using interpretability techniques developed over years of prior research, scientists can now map how Claude represents ideas like "dogs," "loudness," or abstract notions like "justice" within its billions of internal parameters.
With these neural signatures identified, researchers then artificially amplified them during the model's processing and asked Claude if it noticed anything unusual happening in its "mind."
"We have access to the models' internals. We can record its internal neural activity, and we can inject things into internal neural activity," Lindsey explained. "That allows us to establish whether introspective claims are true or false."
The results were striking. When researchers injected a vector representing "all caps" text into Claude's processing, the model responded: "I notice what appears to be an injected thought related to the word 'LOUD' or 'SHOUTING'." Without any intervention, Claude consistently reported detecting nothing unusual.
Crucially, the detection happened immediately — before the injected concept had influenced the model's outputs in ways that would have allowed it to infer the manipulation from its own writing. This temporal pattern provides strong evidence that the recognition was occurring internally, through genuine introspection rather than after-the-fact rationalization.
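A toy analogue of the setup makes the idea concrete. This is an illustration of the general steering-vector technique, not Anthropic's actual method: treat a concept as a direction in activation space, inject it by adding a scaled copy of that direction to a hidden state, and detect it by projecting the state onto the direction.

```python
import numpy as np

# Toy model of "concept injection": a concept is a unit direction in a
# high-dimensional activation space; injection adds a scaled copy of it.
rng = np.random.default_rng(1)
d = 512
concept = rng.normal(size=d)
concept /= np.linalg.norm(concept)      # unit "betrayal" direction

hidden = rng.normal(size=d)             # ordinary activation vector
injected = hidden + 8.0 * concept       # amplify the concept's signature

def projection(h, direction):
    """How strongly the state points along the concept direction."""
    return float(h @ direction)

baseline = projection(hidden, concept)  # small for a random state
signal = projection(injected, concept)  # shifted by the injection strength
assert signal - baseline > 4.0          # the injection is clearly detectable
```

The verification logic in the paper is analogous in spirit: because the researchers control the injected direction and strength, they can check whether the model's verbal report matches a manipulation they know is present.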
The research team conducted four primary experiments to probe different aspects of introspective capability. The most capable models tested — Claude Opus 4 and Opus 4.1 — demonstrated introspective awareness on approximately 20 percent of trials when concepts were injected at optimal strength and in the appropriate neural layer. Older Claude models showed significantly lower success rates.
The models proved particularly adept at recognizing abstract concepts with emotional valence. When injected with concepts like "appreciation," "shutdown," or "secrecy," Claude frequently reported detecting these specific thoughts. However, accuracy varied widely depending on the type of concept.
A second experiment tested whether models could distinguish between injected internal representations and their actual text inputs — essentially, whether they maintained a boundary between "thoughts" and "perceptions." The model demonstrated a remarkable ability to simultaneously report the injected thought while accurately transcribing the written text.
Perhaps most intriguingly, a third experiment revealed that some models use introspection naturally to detect when their responses have been artificially prefilled by users — a common jailbreaking technique. When researchers prefilled Claude with unlikely words, the model typically disavowed them as accidental. But when they retroactively injected the corresponding concept into Claude's processing before the prefill, the model accepted the response as intentional — even confabulating plausible explanations for why it had chosen that word.
A fourth experiment examined whether models could intentionally control their internal representations. When instructed to "think about" a specific word while writing an unrelated sentence, Claude showed elevated activation of that concept in its middle neural layers.
The research also traced Claude's internal processes while it composed rhyming poetry—and discovered the model engaged in forward planning, generating candidate rhyming words before beginning a line and then constructing sentences that would naturally lead to those planned endings, challenging the critique that AI models are "just predicting the next word" without deeper reasoning.
For all its scientific interest, the research comes with a critical caveat that Lindsey emphasized repeatedly: enterprises and high-stakes users should not trust Claude's self-reports about its reasoning.
"Right now, you should not trust models when they tell you about their reasoning," he said bluntly. "The wrong takeaway from this research would be believing everything the model tells you about itself."
The experiments documented numerous failure modes. At low injection strengths, models often failed to detect anything unusual. At high strengths, they suffered what researchers termed "brain damage" — becoming consumed by the injected concept. Some "helpful-only" model variants showed troublingly high false positive rates, claiming to detect injected thoughts when none existed.
Moreover, researchers could only verify the most basic aspects of Claude's introspective reports. Many additional details in the model's responses likely represent confabulations rather than genuine observations.
"The experiments in this paper are kind of on hard mode," Lindsey noted, explaining that the 20 percent success rate came under uniquely challenging conditions: asking Claude to do something it had never encountered in training, requiring all introspection to occur in a single forward pass.
Despite its limitations, the research opens significant new avenues for making AI systems more transparent and accountable.
Anthropic CEO Dario Amodei has set an ambitious goal for the company to reliably detect most AI model problems by 2027, positioning interpretability as essential for deploying what he calls "a country of geniuses in a datacenter."
"I am very concerned about deploying such systems without a better handle on interpretability," Amodei wrote in an April essay. "These systems will be absolutely central to the economy, technology, and national security."
The introspection research offers a complementary approach to traditional interpretability techniques. Rather than painstakingly reverse-engineering every neural circuit, researchers could potentially ask models directly about their reasoning and validate those reports.
"What I'm most excited about is the practical benefits for transparency," Lindsey explained. "Just ask the model what it's thinking about, or just ask the model how it came to the answer that it gave you — this is really appealing because it's something anyone can do."
The approach could prove especially valuable for detecting concerning behaviors. In a recently published experiment, Anthropic trained a variant of Claude to pursue a hidden goal, and although the model was reluctant to reveal this goal when asked directly, interpretability methods successfully identified features representing the behavior.
The safety implications cut both ways. Introspective models could provide unprecedented transparency, but the same capability might enable more sophisticated deception. The intentional control experiments raise the possibility that sufficiently advanced systems might learn to obfuscate their reasoning or suppress concerning thoughts when being monitored.
"If models are really sophisticated, could they try to evade interpretability researchers?" Lindsey acknowledged. "These are possible concerns, but I think for me, they're significantly outweighed by the positives."
The research inevitably intersects with philosophical debates about machine consciousness, though Lindsey and his colleagues approached this terrain cautiously.
When users ask Claude if it's conscious, it now responds with uncertainty: "I find myself genuinely uncertain about this. When I process complex questions or engage deeply with ideas, there's something happening that feels meaningful to me.... But whether these processes constitute genuine consciousness or subjective experience remains deeply unclear."
The research paper notes that its implications for machine consciousness "vary considerably between different philosophical frameworks." The researchers explicitly state they "do not seek to address the question of whether AI systems possess human-like self-awareness or subjective experience."
"There's this weird kind of duality of these results," Lindsey reflected. "You look at the raw results and I just can't believe that a language model can do this sort of thing. But then I've been thinking about it for months and months, and for every result in this paper, I kind of know some boring linear algebra mechanism that would allow the model to do this."
Anthropic has signaled it takes AI consciousness seriously enough to hire an AI welfare researcher, Kyle Fish, who estimated roughly a 15 percent chance that Claude might have some level of consciousness. The company announced this position specifically to determine if Claude merits ethical consideration.
The convergence of the research findings points to an urgent timeline: introspective capabilities are emerging naturally as models grow more intelligent, but they remain far too unreliable for practical use. The question is whether researchers can refine and validate these abilities before AI systems become powerful enough that understanding them becomes critical for safety.
The research reveals a clear trend: Claude Opus 4 and Opus 4.1 consistently outperformed all older models on introspection tasks, suggesting the capability strengthens alongside general intelligence. If this pattern continues, future models might develop substantially more sophisticated introspective abilities — potentially reaching human-level reliability, but also potentially learning to exploit introspection for deception.
Lindsey emphasized the field needs significantly more work before introspective AI becomes trustworthy. "My biggest hope with this paper is to put out an implicit call for more people to benchmark their models on introspective capabilities in more ways," he said.
Future research directions include fine-tuning models specifically to improve introspective capabilities, exploring which types of representations models can and cannot introspect on, and testing whether introspection can extend beyond simple concepts to complex propositional statements or behavioral propensities.
"It's cool that models can do these things somewhat without having been trained to do them," Lindsey noted. "But there's nothing stopping you from training models to be more introspectively capable. I expect we could reach a whole different level if introspection is one of the numbers that we tried to get to go up on a graph."
The implications extend beyond Anthropic. If introspection proves a reliable path to AI transparency, other major labs will likely invest heavily in the capability. Conversely, if models learn to exploit introspection for deception, the entire approach could become a liability.
For now, the research establishes a foundation that reframes the debate about AI capabilities. The question is no longer whether language models might develop genuine introspective awareness — they already have, at least in rudimentary form. The urgent questions are how quickly that awareness will improve, whether it can be made reliable enough to trust, and whether researchers can stay ahead of the curve.
"The big update for me from this research is that we shouldn't dismiss models' introspective claims out of hand," Lindsey said. "They do have the capacity to make accurate claims sometimes. But you definitely should not conclude that we should trust them all the time, or even most of the time."
He paused, then added a final observation that captures both the promise and peril of the moment: "The models are getting smarter much faster than we're getting better at understanding them."


Microsoft reported fiscal first-quarter revenue and profits ahead of analysts’ expectations on Wednesday, with Azure revenue growth climbing to 40%.
The earnings report came as the company continued to deal with the lingering effects of a widespread cloud outage that started earlier in the day.
The company’s capital expenditures reached a record $34.9 billion — reflecting its long-term buildout of cloud infrastructure to meet demand for artificial intelligence. That was up from $24.2 billion in Q4. Microsoft had projected capital spending of more than $30 billion for Q1.

Along with that unprecedented buildout, Microsoft sought to address investor concerns about a potential AI bubble by highlighting its commercial remaining performance obligation (RPO), a measure of future contracted revenue. That backlog grew 51% year-over-year to $392 billion.
The company also disclosed for the first time that this RPO has a weighted average duration of roughly two years, a move intended to show investors that its record capital spending is supported by strong, long-term customer demand.
Revenue was $77.7 billion for the quarter ended Sept. 30, Microsoft’s first quarter of fiscal 2026. That was up 18%, and compared with average analyst expectations of $75.39 billion. The company said the result was driven by strong demand for cloud and AI services.
Profits were $27.7 billion, or $3.72 per share, beating expectations of $3.66 per share.
Earlier Wednesday, an Azure cloud services outage disrupted operations for customers worldwide including Alaska Airlines, Xbox users and Microsoft 365 subscribers. Microsoft reported as of early afternoon that it was rolling back the faulty configuration and that customers should see improvements.
Microsoft stock was down by about 3% in after-hours trading. The company’s market value reached $4 trillion after the announcement of its new OpenAI deal on Tuesday morning.
We’re rolling out changes to NotebookLM to make it fundamentally smarter and more powerful.
With more people watching on their TV sets, YouTube's looking to help creators maximize their CTV opportunities.
The report shows that more people are finding more reasons to give gifts to friends.
Some pointers on campaign measurement from Meta's ad team.
X usage had been in decline, but saw a recovery in the most recent period.

Advertisers are currently unable to access the Microsoft Advertising console. Microsoft confirmed there is an issue and that its engineering team is working to resolve it. The problem affects the web user interface used to manage Microsoft Advertising campaigns.
What Microsoft said. Navah Hopkins, the Microsoft Ads Liaison, posted:
“Confirming Microsoft Advertising UI is down. Our engineering team is investigating this issue with priority and we apologize for the inconvenience this may be causing. We will share more as we receive more updates.”
How to check the status. You can go to status.ads.microsoft.com to check the status of Microsoft Advertising. It currently shows that the Web UI is down:

Why we care. If you are trying to make changes to your ad campaigns through the web interface, try the mobile interface, Microsoft Ads Editor, or a third-party tool that leverages the API instead.
Otherwise, you will have to wait for the web interface to start working again.
It seems ad serving is unaffected by this outage.
Update: At about 8pm ET, Microsoft said the issue was resolved:
Update: you should be able to access the UI now. We can confirm that Search ads were not impacted. There may be some delay in reporting. Thank you for your patience as we worked to solve this issue!
Pomelli, a Google Labs & DeepMind AI experiment, builds a "Business DNA" from your site and generates editable branded campaign assets for small businesses.
The post Google Labs & DeepMind Launch Pomelli AI Marketing Tool appeared first on Search Engine Journal.
The post Applied Compute’s Agent Workforce Targets Niche AI with $80M appeared first on StartupHub.ai.
Applied Compute is betting that the next enterprise moat will be a private, hyper-competent agent workforce trained on a company's own secret sauce.
The post Powell Cautiously Monitors AI’s Impact on Jobs and a Bifurcated Economy appeared first on StartupHub.ai.
Federal Reserve Chair Jerome Powell recently articulated a measured yet watchful stance on the emerging economic shifts driven by artificial intelligence, noting that while the full implications are still unfolding, the Fed is “watching AI’s impact on jobs carefully.” Speaking at a press conference following the Federal Open Market Committee’s decision to lower the benchmark […]
The post OpenAI’s Audacious AGI Timeline: A Leap Towards Self-Improving Intelligence appeared first on StartupHub.ai.
The artificial intelligence community received a jolt of precision when Sam Altman and Jakub Pachocki of OpenAI, during a recent livestream, laid out an ambitious, almost startlingly specific timeline for the emergence of advanced AI capabilities. Far from vague predictions, they articulated a vision where an “Automated AI research intern” could be a reality by […]
The post AI dubbing benchmark arrives to separate hype from reality appeared first on StartupHub.ai.
A new open AI dubbing benchmark uses human evaluation to finally provide an objective, apples-to-apples comparison for a hype-driven industry.
The post AI Investment Cycle: Early Innings, Driven by Fundamentals appeared first on StartupHub.ai.
The current AI investment cycle, despite the colossal market capitalization gains seen in mega-cap technology firms, remains firmly in its “early innings,” according to John Belton, Growth Portfolio Manager at Gabelli Funds. This assertion, shared during a recent discussion on CNBC’s “The Exchange” with Dom Chu, Tim Seymour, and Barbara Doran, challenges the notion that […]
The post AI’s Economic Churn: Layoffs, Trillion-Dollar Valuations, and the Shifting Labor Landscape appeared first on StartupHub.ai.
The current economic landscape is marked by a peculiar dichotomy: robust corporate earnings juxtaposed with a surge in layoffs, a phenomenon increasingly attributed to the integration of artificial intelligence. This was a central theme on a recent CNBC Squawk Pod episode, where IBM Vice Chairman and former National Economic Council Director Gary Cohn, alongside hosts […]
The post NotebookLM Chat Goals Redefine AI Research appeared first on StartupHub.ai.
NotebookLM's new chat goals feature, combined with a 1 million token context window, transforms AI into a highly personalized and adaptive research partner.
The post AI’s Job Transformation: Tech Leaders Chart a Nuanced Future appeared first on StartupHub.ai.
The narrative surrounding artificial intelligence and its impact on employment is often polarized, swinging between utopian promise and dystopian dread. However, a recent CNBC segment, featuring insights from leading tech CEOs like Lisa Su of AMD, Jensen Huang of Nvidia, Michael Intrator of CoreWeave, Aravind Srinivas of Perplexity AI, and Alex Karp of Palantir, presents […]
The post OpenAI Charts Course for Personal AGI and Trillion-Dollar Infrastructure appeared first on StartupHub.ai.
Sam Altman, joined by Chief Scientist Jakub Pachocki and co-founder Wojciech Zaremba, recently unveiled OpenAI’s ambitious strategic reorientation and product roadmap, signaling a profound shift in the company’s approach to artificial general intelligence (AGI) development and deployment. The presentation, delivered directly to an audience of founders, VCs, and AI professionals, outlined a future where AGI […]
The post Google Cloud Simplifies AI Inference with GKE Quickstart appeared first on StartupHub.ai.
“The path to production AI serving on Google Kubernetes Engine (GKE) is now streamlined with the introduction of the GKE Inference Quickstart,” as highlighted in a recent demonstration. The video showcases how this new tool, developed by Google Cloud, aims to demystify and accelerate the process of deploying and optimizing AI models for inference workloads. […]
The U.S. Department of Energy is teaming up with NVIDIA and Oracle to build what NVIDIA calls the DOE's largest AI supercomputer, part of a new public–private partnership meant to supercharge federally funded research. Announced at NVIDIA's GTC conference in Washington, D.C. yesterday, the Solstice system will feature a staggering 100,000
Just a month since its initial tease, One-Netbook's OneXFly Apex, an AMD Strix Halo-powered handheld gaming PC, has debuted. One-Netbook has pre-launched the OneXFly Apex on Indiegogo, confirmed its pricing for the Chinese market, and even provided peeks at performance benchmarks using the unique liquid cooling solution that can run the handheld
Battlefield 6 has brought the storied franchise back to prominence, quickly becoming one of the best-selling games on Steam, and finally providing some competition to this year’s Call of Duty. EA isn’t done yet, though, as it looks to lure players away from multiplayer juggernaut CoD: Warzone with a battle royale mode of its own.
Microsoft: “Starting at approximately 16:00 UTC, we began experiencing Azure Front Door (AFD) issues resulting in a loss of availability of some services. We suspect an inadvertent configuration change as the trigger event for this issue. We are taking several concurrent actions: first, we are blocking all changes to the AFD services, including customer configuration changes. At the same time, we are rolling back our AFD configuration to our last known good state. As we roll back, we want to ensure that the problematic configuration doesn’t re-initiate upon recovery... We do not have an ETA for when the rollback will be completed, but we will update this communication within 30 minutes or when we have an update. This message was last updated at 17:40 UTC on 29 October 2025.”
Huge Microsoft outage takes down Xbox, Minecraft and more
Microsoft Azure has experienced a major outage, taking down internet services both inside and outside of the company. DownDetector is seeing a major spike in outage reports for Microsoft services, including Minecraft, Xbox, Microsoft Outlook, Office 365, Teams, and more. There are also outage complaints for […]
The post Microsoft Azure outage takes down Xbox, Teams, and more appeared first on OC3D.
Nvidia’s market cap is now higher than the GDP of almost all countries on earth
It’s official: Nvidia has become the world’s first $5 trillion company. The company’s market cap is now higher than the GDP of almost every country on earth, with the United States of America and China being the only exceptions. This […]
The post Nvidia is now the world’s first $5 trillion company appeared first on OC3D.
GlobalFoundries plans to expand its Dresden chipmaking site through “Project SPRINT”
GlobalFoundries (GF), a contract chipmaker, has announced plans to expand its European manufacturing capabilities by extending its Dresden site. This expansion will increase the facility’s wafer production capacity to over 1 million wafers per year by the end of 2028. This will make GlobalFoundries’ […]
The post GlobalFoundries plans Billion-Euro Investment in Dresden Germany appeared first on OC3D.
This tracker includes layoffs conducted by U.S.-based companies or those with a strong U.S. presence and is updated at least bi-weekly. We’ve included both startups and publicly traded, tech-heavy companies. We’ve also included companies based elsewhere that have a sizable team in the United States, such as Klarna, even when it’s unclear how much of the U.S. workforce has been affected by layoffs.
Layoff and workforce figures are best estimates based on reporting. We source the layoffs from media reports, our own reporting, social media posts and layoffs.fyi, a crowdsourced database of tech layoffs.
We recently updated our layoffs tracker to reflect the most recent round of layoffs each company has conducted. This allows us to quickly and more accurately track layoff trends, which is why you might notice some changes in our most recent numbers.
If an employee headcount cannot be confirmed to our standards, we note it as “unclear.”


NVIDIA's market capitalization has reached a record high of $5 trillion after Jensen Huang's recent GTC announcements, suggesting that the AI hype still has a lot of 'juice' in it.
NVIDIA's GTC Announcements & Potential China Breakthrough Led the Push Towards the $5 Trillion Club
We have watched NVIDIA evolve from humble beginnings, especially as gamers, over the past few years. Team Green was initially all about consumer GPUs, which were the talk of the town. However, since the advent of AI, NVIDIA has established a foundational position in providing the necessary computing power to Big Tech, being responsible for a […]
Read full article at https://wccftech.com/nvidia-becomes-the-first-to-hit-5-trillion-in-market-cap/

On October 1, Microsoft shocked Game Pass subscribers by announcing a substantial (+50%) price increase for the highest tier, Ultimate, which jumped from $19.99 to $29.99 monthly. This led some users to cancel their subscriptions in droves, but did Microsoft really make a strategic mistake? Veteran games analyst Joost van Dreunen, formerly founder of SuperData Research (acquired by Nielsen Media Research in 2018), offered a more nuanced analysis in his latest SuperJoost Playlist newsletter. To start with, van Dreunen relays a take from a former Xbox employee, who said that it's a case of 'bad optics'. Certainly, such a massive […]
Read full article at https://wccftech.com/game-pass-new-segmented-formula-might-be-right-one-says-analyst/

Countless gaming benchmark comparisons have proven that AMD’s Radeon RX 9070 XT is faster than NVIDIA’s GeForce RTX 5070 Ti while sporting the same 16GB VRAM count, and it is the GPU that most value-focused gamers would house in their PCs. However, we are living in an era where AAA games offer way too much visual fidelity for these graphics cards to handle, and you can blame that on the lack of optimization or any other reason. The fact is that these days, modern gaming absolutely requires upscaling and interpolation, and in that regard, NVIDIA’s GPUs have no equal. On […]
Read full article at https://wccftech.com/asus-tuf-gaming-rtx-5070-ti-gpu-ideal-for-qhd-4k-gaming-available-for-849-99-on-amazon/

The AMD Adrenalin 25.10.2 Driver is now available, adding support for the latest games, such as Battlefield 6, and new hardware, including the Ryzen AI 5 330.
AMD Adrenalin 25.10.2 Driver Is Another Major Update, Offering New Games & Hardware Support Along With Several Fixes
AMD's Adrenalin 25.10.2 is the second driver release for October, bringing in further optimizations for the latest AAA releases such as Battlefield 6 and Vampire: The Masquerade – Bloodlines 2. Battlefield 6 already received support in the previous 25.10.1 BETA release, but this new driver is expected to provide the best possible experience. Besides game […]
Read full article at https://wccftech.com/amd-adrenalin-25-10-2-driver-support-battlefield-6-ryzen-ai-5-330-apu-several-fixes/
