Reading view

Google completes $32 billion acquisition of Wiz

Google announced the completion of its $32 billion acquisition of Wiz, a leading cloud and AI security platform headquartered in New York. Wiz will join Google Cloud and maintain its brand and commitment to securing customers across all cloud environments.

This acquisition is an investment by Google Cloud to improve cloud security and enable organizations to build fast and securely across any cloud or AI platform. In today’s AI era, more businesses and governments are migrating their most important data and systems to the cloud and turning to agile and continuous software development. As these organizations operate in a multicloud environment and adopt AI, attackers are using AI to operate with greater speed and sophistication.

Wiz delivers an easy-to-use security platform with deep expertise of cloud environments and code, connecting to all major clouds and helping prevent and respond to cybersecurity incidents. Its capabilities complement Google Cloud’s leadership in cloud infrastructure and deep AI expertise, including AI-powered threat intelligence and security operations tools.

Together, Google Cloud and Wiz will provide a unified security platform that improves the speed with which organizations can detect, prevent, and respond to threats. It will help them stay ahead of the curve by detecting emerging threats created using AI models, protecting against threats to AI models, and using AI models to help security professionals hunt for threats more effectively. The platform will also provide a consistent set of tools, processes, and policies across all major cloud environments at every layer, from code to cloud to runtime.

The combined capability will also boost the adoption of multicloud security, enhancing companies’ ability to use multiple clouds – further spurring innovation in cloud computing and AI applications. Enterprises and government agencies can vastly improve how security is designed, operated, and automated, scaling cybersecurity teams while lowering the cost of implementing and managing security controls. The combined platform will also help protect small businesses, which often do not have the expertise and resources to protect themselves, from increasingly sophisticated and destructive cyberthreats.

Consistent with Google Cloud’s commitment to openness, Wiz products will continue to work and be available across all major clouds, including Amazon Web Services, Google Cloud Platform, Microsoft Azure, and Oracle Cloud, and will be offered through an array of partner security solutions. Google Cloud will also continue to offer customers wide choice through a variety of partner security solutions available in the Google Cloud Marketplace.

Sundar Pichai, CEO, Google: “Keeping people safe online has always been part of Google’s mission. This job is increasingly important today, as more companies and governments move their work to the cloud and broadly use generative AI. By bringing Wiz and Google Cloud together, we’re making it easier for organizations to innovate with confidence.”

Thomas Kurian, CEO, Google Cloud: “We want to make security a catalyst for innovation, not a barrier. With this acquisition, we will deliver a unified security platform that simplifies the complex task of protecting multicloud environments in the AI era, making a strong security posture accessible to more companies and governments.”

Assaf Rappaport, Co-Founder & CEO, Wiz: “Joining Google Cloud allows us to scale our mission of protecting customers wherever they operate – at machine speed. We remain committed to our open approach, ensuring Wiz continues to support all major cloud and code environments. With Google’s AI leadership and resources, coupled with Wiz’s deep context and knowledge of cloud and code environments, we are in a stronger position to help our partners and customers prevent breaches before they happen.”

 

The post Google completes $32 billion acquisition of Wiz appeared first on My Startup World - Everything About the World of Startups!.

Armadin raises $189.9 million led by Accel

Armadin has raised an industry record $189.9 million in Seed and Series A funding. Led by Accel, with participation from Google Ventures, Kleiner Perkins, Menlo Ventures, In-Q-Tel, and follow-on investment from 8VC and Ballistic Ventures, this marks the largest combined Seed and Series A funding round in cybersecurity history. Armadin’s mission is to prepare organizations for the speed and scale of AI-driven threats.

Closing the Hyperattack Gap
The rise of AI-powered attackers has ushered in the age of Hyperattacks: sophisticated, multi-modal campaigns that move at machine-speed. Traditional human-led defenses are no longer fast enough to bridge the widening security gap, and Armadin is closing this gap by deploying a unified, scalable platform that transforms security by revolutionizing how exploitable risk is identified, proven, and remediated.

“The AI shift is changing cybersecurity more rapidly than any transition in history,” said Kevin Mandia, CEO of Armadin. “In a world of machine-speed attacks, defense must become autonomous. You cannot have a human in the loop for every defense decision and expect to win. We are building the most formidable offense to give organizations the greatest defense. It’s important to national security.”

An Agentic Attacker Swarm
Unlike tools that scan for vulnerabilities, Armadin’s platform features specialized AI agents leveraging custom models in an agentic attacker swarm. These agents continuously reason, plan, and adapt like the most advanced human threat actors and provide CEOs and Boards with decision-grade proof of what can actually be exploited.

“At Accel, we look for companies that don’t just participate in the market, but redefine it,” said Ping Li, Partner at Accel. “Armadin is the first company we’ve seen that truly weaponizes the attacker’s perspective to build a more resilient defense. By combining Kevin’s unrivaled operational experience with a generational AI engineering team, Armadin is delivering the autonomous, comprehensive system of record for an enterprise’s security posture that boards and CISOs have been demanding for years.”

“The most honest measure of security has always been the offensive lens,” said Evan Peña, Founder and Chief Offensive Security Officer. “At Armadin, we are taking decades of human-led red teaming expertise and reinforcing it into AI models. These models are learning our tactics and techniques and are outpacing our human operators at every turn.”

“Security expertise is a constrained resource that organizations never have enough of in the moments when it matters most,” added Travis Lanham, Founder and Chief Technology Officer. “Before Armadin, you could not put a nation-state level adversary inside every network 24/7. We’ve built the ultimate attacker – it doesn’t just follow a script, it reasons and learns as it swarms your defenses. We train our models and build agents to the standards of a world-class red team with safety at the foundation and unleash them to identify exploitable risk at machine speed. We believe that this is the only way to prepare for the coming wave of AI Hyperattacks.”

Armadin’s founding team is a rare fusion of elite red teaming experts and AI researchers and engineers under the leadership of Kevin Mandia, who maintains deep, trusted relationships across Fortune 100 companies, federal law enforcement agencies, and defense departments.

The post Armadin raises $189.9 million led by Accel appeared first on My Startup World - Everything About the World of Startups!.

RØDE launches Video Core and Sync for content creators

RØDE has announced the RØDECaster Video Core, a major new addition to its video production lineup, alongside a defining new integration capability that connects the console with select RØDECaster audio interfaces: RØDECaster Sync. Hot on the heels of the release of the RØDECaster Video S in November last year, the latest offering in RØDE’s growing range of all-in-one video and audio production consoles delivers the most streamlined solution yet for video podcasters, solo creators and live streamers at any professional level.

Designed specifically for creators working in modular or software-driven workflows, the RØDECaster Video Core offers the same advanced production power as the flagship RØDECaster Video. Combining advanced video switching, recording and streaming with a fully integrated professional audio mixer, it offers a flexible foundation for creating broadcast-quality content across video podcasts, live streams and studio productions in a compact desktop-friendly unit.

Launching alongside it, RØDECaster Sync is an innovative new feature that seamlessly connects the RØDECaster Video Core with the RØDECaster Pro II and RØDECaster Duo audio interfaces, creating a single unified production hub. With RØDECaster Sync, audio-first creators can expand into video with zero fuss, scaling their setup effortlessly while maintaining the studio-grade sound and intuitive control that has made RØDECaster the creative industry standard.

“The launch of the RØDECaster Video Core and RØDECaster Sync marks a pivotal moment in the evolution of content creation,” said RØDE CEO Damien Wilson. “With the RØDECaster range, every creator, no matter their skill level or workflow, is supported by a complete ecosystem that makes professional production more accessible than ever. As always, RØDE continues to set the industry benchmark for tearing down barriers to democratise content creation worldwide.”

VIDEO UNLOCKED
Designed for creators working in both software-based and modular production environments, the RØDECaster Video Core delivers a seamless new way to bring professional video into any audio-first workflow. Compact and streamlined, it offers the same octa-core processor as the flagship RØDECaster Video, making high-end switching, streaming and recording more accessible than ever.

For creators who prefer software-based control, the RØDECaster Video Core integrates seamlessly with the RØDECaster App. This free dedicated companion app provides extensive control over every aspect of production, allowing users to switch between video sources, design custom multi-camera layouts with the scene builder and mix pristine audio with the intuitive mixer. With advanced configuration available at every level, the RØDECaster App gives productions a professional polish with total flexibility.

RØDECaster Sync takes this flexibility even further, introducing an innovative new way for the RØDECaster Video Core to integrate with compatible RØDECaster audio consoles, creating one unified production setup. By simply using a USB-C cable to connect the RØDECaster Video Core with the RØDECaster Pro II or Duo, creators can manage both audio and video from a single surface, expand their inputs and outputs, enable shared mixing and recording and unlock advanced switching capabilities that scale effortlessly as their studio grows.

BROADCAST-READY OUT OF THE BOX
With its compact footprint, the RØDECaster Video Core delivers uncompromising broadcast-quality production power. It supports switching between up to four video sources with fully customisable scenes, smooth transitions, graphic overlays and multi-source layouts – providing professional results without the complexity of traditional broadcast hardware.

With three Full HD HDMI inputs featuring auto frame rate conversion, configurable HDMI output monitoring, a configurable USB-C expansion port and support for network cameras via up to four NDI inputs, the RØDECaster Video Core adapts to virtually any video setup, from podcasts to live studio productions.

In terms of audio, it brings the studio-grade sound RØDE is renowned for, featuring two Neutrik combo inputs with ultra-low-noise, high-gain Revolution Preamps™ for pristine capture from microphones, instruments or line sources. Each of the nine stereo audio channels is enhanced with world-class APHEX processing – including EQ, compression, noise gating, de-essing and legendary effects like Aural Exciter, Big Bottom and Compellor – ensuring every production sounds as polished as it looks.

ANY CREATOR, ANY SETUP
Whether live streaming or recording for post-production, the RØDECaster Video Core integrates effortlessly into any creative setup. Creators can stream directly to YouTube, Twitch and other major platforms via Ethernet, or record straight to an external USB drive or SSD, with the option to capture each video and audio source independently through isolated (ISO) recording for maximum flexibility in the edit.

With support for a wide range of modern video inputs, from HDMI cameras to network sources and USB devices, the RØDECaster Video Core is built to adapt as productions grow. It also pairs seamlessly with the free RØDE Capture app, allowing creators to turn an iPhone into a high-quality dual-camera source for wireless multi-angle streaming, perfect for podcasts, interviews and solo content creation.

Compact, powerful and designed for the realities of today’s creators, the RØDECaster Video Core delivers a complete professional production solution without the traditional barriers of broadcast complexity.

The RØDECaster Video Core will be available worldwide to pre-order for US$599.

The post RØDE launches Video Core and Sync for content creators appeared first on My Startup World - Everything About the World of Startups!.

Airflow enthusiast 3D-prints 15 tiny fans to fit inside a custom, domed Noctua NF-A12x25 frame — bizarre 'Fanhattan Project' cools the CPU just as well as a regular fan

Have you ever wanted to use a fan that's more than three times as loud as the other option while providing the same performance? If you answered in resounding joy, then this project is exactly what you've been looking for. A YouTuber 3D-printed a fan that's actually made up of 15 tiny fans, fit inside the frame of a regular 120mm fan modelled after the Noctua NF-A12x25.

Global chip supply chain under threat as US-Iran conflict enters third week — Strait of Hormuz blockade is days away from crippling Taiwan's semiconductor industry

Taiwan imports almost all of its energy and requires large amounts of LNG to sustain its electrical grid. That grid is then used by local chipmakers — like TSMC who is responsible for making most of the world's high-end chips. Fabrication for these chips requires helium, which Taiwan also imports and right now, the Iran-U.S. conflict has made it difficult to acquire both.

Samsunspor: "Ne olursa olsun hedefimiz Konferans Ligi'nde tur"

Samsunspor Basın Sözcüsü Suat Çakır, UEFA Konferans Ligi Son 16 Turu rövanşında deplasmanda oynayacakları Rayo Vallecano maçında turu geçmek istediklerini belirterek, "İlk karşılaşmada aldığımız mağlubiyet nedeniyle bizi zor bir karşılaşmanın beklediğinin farkındayız. Ancak buna rağmen hedefimiz ne olursa olsun turu geçmek. Takım olarak buna inanıyoruz" dedi.

Chinese GPU vendor Zephyr has cancelled its single-fan RTX 4070 Ti Super due to VRAM price hikes — memory shortage is forcing a pivot to an SFF RTX 4070 Super instead

A single-fan RTX 4070 Ti Super had been in the works at Zephyr, a Chinese vendor, for a while, and it was close to completion, with even thermal testing data publicly released. Unfortunately, the memory crisis has gotten to Zephyr as well, and it has cancelled the project, choosing to instead develop an RTX 4070 Super instead.

Flabbergasted GPU repair wizard highlights dangers of liquid metal after leak kills entire RTX 5070 Ti — user-applied TIM spread to every crevice of the PCB, physically cracking and shorting out the core

An RTX 5070 Ti with user-applied liquid metal died because the TIM leaked out everywhere and shorted multiple components, eventually killing the core as well. Despite being part of a "repair" video, there's nothing really here to fix, as most of the important ICs would need to be replaced or at least reballed.

Save over $100 on this feature-rich Asus AM5 motherboard with Wi-Fi 7, USB4 & DDR5-8000 support — TUF Gaming X870-Plus is on sale for just $170

Asus launched this motherboard at $310 in late 2024, and since then its MSRP has gone down to $280, but you can score it for 40% less than that on Amazon right now. For that price, nothing really comes close in terms of performance, features, and reliability.

AMD's upcoming RDNA 5 GPUs might improve dual-issue execution & use shader units more efficiently — LLVM patch adds new FMA instruction to ease compiling

A new LLVM patch has added V_FMA_F32, a 3-operand fused multiply-add (FMA instruction and introduced the VOPD3 instruction format for RDNA 5. Both of these changes should make it easier for compilers to use dual issue execution, working around the strict pairing rules that would otherwise limit max FP32 throughput in certain workloads.

Apple's MacBook Neo modded to a 1 TB SSD, breaking the firm's 512 GB barrier — base 256 GB model gets modded in expert NAND swap surgery

DirectorFeng, an expert technician in China has just performed what is likely the first hardware mod of its kind on the new MacBook Neo. He has birthed the world's only MacBook Neo with a terabyte of storage by physically changing the NAND chip on the logic board. Whether this mod is worth the price given the Neo's target audience, well, that's for you to decide.

ASRock launches new Frankensteined motherboard with one DDR4 slot and two DDR5 slots — Intel board signals the RAM apocalypse is truly nigh

A motherboard that can accept both DDR4 and DDR5 memory can be the difference between you being able to build a new PC or putting it off till the shortage is over. ASRock's new H610M Combo II is otherwise pretty barebones, not even featuring PCIe 4.0 storage, but it does have enough to get you by before things settle down.

Shopper scores $1,000 in PC hardware for just $86 in a shocking pricing glitch — Newegg shrugs off massive loss and responds with a thumbs-up emoji

A Redditor only paid $86.98 for a Ryzen 5 7600X, a Gigabyte B850 Eagle motherboard, and a 32GB DDR5-6000 Corsair Vengeance RGB kit from Newegg. The CPU even included a free 240mm Cooler Master AIO worth $85 on its own. All these parts would typically cost $1,012, but this customer scored a combo deal for the ages.

Fiber internet provider says it can detect leaking water pipes using existing infrastructure, prevented loss of 2 million liters a day over three months — Lightsonic tech detects underground vibrations, machine learning isolates the source

U.K. fiber network provider Openreach used startup firm Lightsonic's technology to detect leaks in Affinity Water's service area, helping save over 2 million liters of drinking water daily.

Enthusiast rebuilds AA-battery-powered PC, sextuples run time to 30 minutes with 64 batteries — uses three voltage regulators in parallel to achieve stability, runs computer for over 30 minutes on 64 AA cells

YouTube creator ScuffedBits re-did their experiment and was able to eke out 30 minutes of game time (and a benchmarking session!) while running their desktop PC on 64 AA batteries.

Windows 11 is getting support for 1,000 Hz+ monitors soon as part of Insider builds — Microsoft has reportedly increased the refresh rate limit to 5,000 Hz

Microsoft says "monitors can now report refresh rates higher than 1000 Hz" in the patch notes for its latest Windows 11 Insider builds. These updates are part of the Release Preview channel, which means they're very close to final, public release. On the other hand, Nvidia has also pushed the first update for its G-Sync Pulsar displays.

Nvidia claims 1 million times better path tracing performance is coming in future gaming GPUs — says current GPUs are already 10,000x faster than Pascal

At GDC 2026, Nvidia held a presentation somehow aimed at gamers and not data center clients. Still, it was sprinkled with AI pat-on-the-backs, with the company touting that its future gaming GPUs will offer 1,000,000 better path tracing performance. And the current-gen Blackwell family is apparently already 100,000x better due to dedicated Tensor and RT cores.

OpenAI to acquire AI security startup Promptfoo

OpenAI has announced its plans to acquire Promptfoo, an established AI security platform widely used by enterprises to identify and remediate vulnerabilities in AI systems during development. The company confirmed that once the acquisition is finalized, Promptfoo’s technology will be integrated directly into OpenAI Frontier, the platform designed for building and operating AI coworkers. The move reflects OpenAI’s growing focus on strengthening evaluation, security, and compliance capabilities as enterprises increasingly deploy AI agents into real‑world workflows.

According to OpenAI, organizations adopting AI coworkers require systematic methods to test agent behavior, detect risks before deployment, and maintain transparent records to support oversight and governance. Promptfoo, led by co‑founders Ian Webster and Michael D’Angelo, has built a suite of tools trusted by more than a quarter of Fortune 500 companies. Its open‑source CLI and library for evaluating and red‑teaming large language model applications have become widely used across the industry. OpenAI stated that it will continue supporting the open‑source project while expanding enterprise‑grade capabilities within Frontier.

Srinivas Narayanan, CTO of B2B Applications at OpenAI, said the acquisition brings deep engineering expertise in evaluating and securing AI systems at scale. He noted that Promptfoo’s work enables businesses to deploy secure and reliable AI applications, and integrating these capabilities into Frontier will strengthen the platform’s native security features. OpenAI highlighted that the integration will introduce automated security testing and red‑teaming directly into Frontier, enabling enterprises to identify risks such as prompt injections, jailbreaks, data leaks, tool misuse, and out‑of‑policy agent behaviors.

The company also emphasized that security and evaluation will be embedded into development workflows, allowing teams to identify, investigate, and remediate risks earlier in the lifecycle. Enhanced reporting and traceability will support governance, risk management, and compliance requirements as AI oversight expectations continue to rise globally.

Promptfoo CEO Ian Webster said the company was founded to give developers practical tools to secure AI systems, noting that the increasing connectivity of AI agents to real data and systems makes validation more critical than ever. He added that joining OpenAI will accelerate efforts to deliver stronger security, safety, and governance capabilities for teams building real‑world AI applications. The acquisition remains subject to customary closing conditions.

 

The post OpenAI to acquire AI security startup Promptfoo appeared first on My Startup World - Everything About the World of Startups!.

Luma launches Luma Agents for creative works

Luma announced the launch of Luma Agents, a new class of AI collaborators capable of executing end-to-end creative work across text, image, video, and audio. Designed for agencies, marketing teams, studios, and enterprise organizations that aspire to scale creative output without sacrificing quality, Luma Agents maintain full context from the initial brief to final delivery – coordinating tools, models, and iterations within a single unified system.

“Creative work has never lacked ambition; it’s lacked execution capacity,” said Amit Jain, Co-Founder and CEO of Luma. “Creative teams shouldn’t have to spend their time orchestrating tools. They should spend it creating. Agents aren’t shortcuts. They’re collaborators that maintain context, coordinate execution, and advance projects so teams can focus on taste, direction, and strategy.”

For the past several years, most AI systems have been assembled by chaining together separate models for language, vision, video, and reasoning — stitching outputs together through orchestration layers. While powerful in isolation, these systems fragment context and require increasingly complex workflows to produce reliable creative results.

Luma believes intelligence should not be assembled in pieces; it should be built as one coherent system.

Creative Agents That Make You Prolific
Luma Agents replace fragmented, multi-model workflows with coordinated, execution built on unified reasoning. Instead of switching between disconnected tools and rebuilding context at every step, teams work alongside Agents that:

  • Execute projects end-to-end, from planning through production and delivery
  • Maintain shared context across text, image, video, and audio
  • Advance multiple creative directions in parallel
  • Evaluate and refine outputs instead of generating one-shot results
  • Integrate into enterprise tools and production systems via API

Agents operate inside a collaborative, multiplayer environment where humans direct creative intent and Agents handle orchestration, routing, and execution – resulting in more output, greater consistency, and higher creative velocity.

Deployed at Global Scale
Luma Agents are already embedded across global agency operations.

Publicis Groupe and Serviceplan Group are deploying Luma Agents across strategy, creative development, and production workflows to increase throughput while maintaining brand consistency across markets.

“Luma is now part of our broader House of AI ecosystem and integrated directly into our creative workflows. It allows our teams across more than 20 countries to collaborate more smoothly and develop great work faster. For our clients, that means high-quality creative output delivered with greater speed and efficiency – without compromising craft,” says Alexander Schill, Global CCO at Serviceplan Group.

Built on Unified Intelligence
Luma Agents are built on Unified Intelligence, a new model architecture designed to move beyond the industry’s prevailing approach of assembling intelligence in pieces. Instead of chaining together separate models for language, vision, and generation, Unified Intelligence trains a single multimodal reasoning system capable of understanding and generating across formats within the same architecture.

For the past several years, most AI systems have been assembled as pipelines: one model writes text, another generates images, another processes video, and orchestration layers attempt to stitch their outputs together. While effective for narrow tasks, these systems fragment reasoning, lose context between steps, and require complex workflows to produce reliable results.

Unified Intelligence takes a different approach. Instead of connecting specialized models after the fact, it trains a single multimodal reasoning system capable of understanding and generating across formats within the same architecture.

Rather than separating thinking from creation, Unified Intelligence tightly couples reasoning and rendering, allowing the system to plan, imagine, and produce as part of one coherent cognitive process.

When a human architect sketches a building, they are not simply drawing lines – they are simultaneously simulating structure, light, spatial dynamics, and lived experience. Reasoning and imagination happen together. Unified Intelligence is built on the same principle.

The first model built on this architecture is Uni-1.

Uni-1 is a decoder-only autoregressive transformer operating over a shared token space that interleaves language and image tokens, allowing both modalities to function as first-class inputs and outputs in the same sequence. This design enables the model to reason in language while imagining and rendering in pixels within the same forward pass.

Rather than generating outputs step-by-step across disconnected systems, Uni-1 can plan, visualize, and produce creative artifacts as part of a single coherent reasoning process. The result is a foundation where thinking and creation are tightly coupled, much closer to how human intelligence works.

Built on top of this unified architecture, Luma Agents can coordinate complex creative workflows that previously required multiple tools and manual orchestration. They can:

  • Coordinate across leading AI models, including Ray3.14, Veo 3, Sora 2, Kling 2.6, Nano Banana Pro, Seedream, GPT Image 1.5, and ElevenLabs
  • Automatically select and route tasks to the best model or capability for each step
  • Maintain persistent context across assets, collaborators, and creative iterations
  • Evaluate and refine outputs, improving results through iterative self-critique

Together, these capabilities allow Luma Agents to function not as isolated generation tools, but as collaborative AI creatives capable of executing end-to-end creative work.

“Intelligence shouldn’t be fragmented by modality,” added Jain. “Unified systems reason holistically. When the same model can think, imagine, and render, you move closer to intelligence that behaves coherently across the entire creative process.”

Enterprise-Ready by Design
Luma Agents are designed for enterprise environments where intellectual property protection, compliance, and operational scale are critical. Key enterprise safeguards include Full IP ownership retained by customers, automated content review to reduce copyright risk, legal trace documentation demonstrating human involvement, required human review workflows prior to public release, and cloud-based infrastructure with enterprise-grade guardrails.

 

The post Luma launches Luma Agents for creative works appeared first on My Startup World - Everything About the World of Startups!.

Reclaim Security raises $26M led by Acrew Capital

Reclaim Security, a preemptive exposure-remediation platform, announced $26 million in total funding, including a recent $20 million Series A round led by Acrew Capital, with participation from QP Ventures and Ibex Investors. The funding will accelerate the company’s mission to eliminate what many security leaders consider cybersecurity’s most persistent gap: remediation.

As attacker breakout times have fallen to as little as 27 seconds, enterprises still require an average of 27 days to remediate critical exposures. Over the past decade, organizations have invested heavily in detection tools to identify vulnerabilities and misconfigurations, yet resolving them remains largely manual, slow, and operationally risky. The result is an expanding backlog of exposures that security teams identify but struggle to safely close.

“There is a massive ‘Remediation Mirage’ in the market right now. Vendors are slapping an AI label on what is essentially just Prioritization 2.0 or faster ticket management,” says Barak Klinghofer, CEO and Co-founder of Reclaim Security.

​​”The recent launch of Claude Code, which wiped billions from the market value of traditional security giants, is a massive wake-up call. While such tools can identify hundreds of vulnerabilities in seconds, they also hand attackers an autonomous, high-speed engine for exploit generation. We’ve seen reports of AI-orchestrated espionage campaigns where 80-90% of tactical operations were executed autonomously. In this new reality, if your ‘remediation’ strategy still ends with a human reviewing a manual Jira ticket, you aren’t just slow, you’ve lost the race.

Reclaim is the only platform providing true Agentic Remediation. Through our PIPE engine, we’ve removed the fear of ‘breaking the business,’ allowing our AI to move from discovery to resolution in seconds. While others are perfecting the recommendation, we are perfecting the execution.”

Automating Cybersecurity’s “Last Mile”
Reclaim’s platform introduces the industry’s first AI Security Engineer, an autonomous system designed not only to identify exposures, but to resolve them safely and at scale.

At the core of the platform is PIPE (Productivity Impact Prediction Engine), a simulation engine that predicts the operational and business impact of a proposed security change before it is deployed. By accurately modeling how changes impact applications, workloads, user productivity and business processes, organizations can implement remediation without risking downtime or operational disruption.

This simulation-first approach enables organizations to:

  • Prioritize exposures most likely to be exploited by attackers
  • Deploy automated or semi-automated remediations safely
  • Reduce remediation timelines from weeks to minutes
  • Eliminate manual configuration and ticket-driven workflows, allowing security teams to focus on strategic initiatives

Reclaim analyzes how real attack techniques would traverse a specific environment, evaluates how existing defenses would respond, and predicts the operational impact of remediation before changes are deployed. By combining advanced attack path modeling with business-aware remediation, the company eliminates exploitable pathways safely and at scale. This approach enables a shift away from reactive “assume breach” strategies toward proactively removing exposure without disrupting critical business operations.

Real World Impact
Early enterprise customers across financial services, healthcare, government, and critical infrastructure sectors report measurable results, including 80% increase in overall threat resilience, 75% increase in ROI from existing security stack and 90% reduction in manual effort when resolving critical exposures

“Security tools are excellent at explaining why something is risky,” said Mark Kraynak, Founding Partner at Acrew Capital. “What they don’t do is make remediation safe and practical. The real breakthrough isn’t more prioritization, it’s removing risk without breaking the business. Reclaim does exactly that, and that’s why it matters.”

The post Reclaim Security raises $26M led by Acrew Capital appeared first on My Startup World - Everything About the World of Startups!.

Nano Banana 2 live on Gemini App and Google Search

Nano Banana 2, Google’s latest state-of-the-art image model, is now available in the Middle East and North Africa. The model is accessible on Google Gemini (desktop and mobile app) and Google Search via Google Lens and AI Mode. 

Nano Banana 2 brings the high-speed intelligence of Gemini Flash to visual generation, making rapid edits and iteration possible. It brings once-exclusive Pro features accessible to a wider audience, including:

  • Advanced world knowledge: The model pulls from Gemini’s real-world knowledge base, and is powered by real-time information and images from web search to more accurately render specific subjects.
  • Precision text rendering and translation: Nano Banana 2 allows users to generate accurate, legible text for marketing mockups or greeting cards. People can even translate and localize text within an image to share their ideas globally.
  • Subject consistency: Maintain character resemblance of up to five characters and the fidelity of up to 14 objects in a single workflow.
  • Production-ready specs: Make attention grabbing assets with full control of various aspect ratios and resolutions from 512px to 4K, ensuring visuals stay sharp whether they are for a vertical social post or a wide-screen backdrop.
  • Visual fidelity upgrade: The model delivers vibrant lighting and sharper details, maintaining high-quality aesthetics at the speed expected from Flash.

 

The post Nano Banana 2 live on Gemini App and Google Search appeared first on My Startup World - Everything About the World of Startups!.

❌