Ivey sınırlı süre alsa da, Bulls uzun vadede kadroda tutmak istiyor































































Google announced the completion of its $32 billion acquisition of Wiz, a leading cloud and AI security platform headquartered in New York. Wiz will join Google Cloud and maintain its brand and commitment to securing customers across all cloud environments.
This acquisition is an investment by Google Cloud to improve cloud security and enable organizations to build fast and securely across any cloud or AI platform. In today’s AI era, more businesses and governments are migrating their most important data and systems to the cloud and turning to agile and continuous software development. As these organizations operate in a multicloud environment and adopt AI, attackers are using AI to operate with greater speed and sophistication.
Wiz delivers an easy-to-use security platform with deep expertise of cloud environments and code, connecting to all major clouds and helping prevent and respond to cybersecurity incidents. Its capabilities complement Google Cloud’s leadership in cloud infrastructure and deep AI expertise, including AI-powered threat intelligence and security operations tools.
Together, Google Cloud and Wiz will provide a unified security platform that improves the speed with which organizations can detect, prevent, and respond to threats. It will help them stay ahead of the curve by detecting emerging threats created using AI models, protecting against threats to AI models, and using AI models to help security professionals hunt for threats more effectively. The platform will also provide a consistent set of tools, processes, and policies across all major cloud environments at every layer, from code to cloud to runtime.
The combined capability will also boost the adoption of multicloud security, enhancing companies’ ability to use multiple clouds – further spurring innovation in cloud computing and AI applications. Enterprises and government agencies can vastly improve how security is designed, operated, and automated, scaling cybersecurity teams while lowering the cost of implementing and managing security controls. The combined platform will also help protect small businesses, which often do not have the expertise and resources to protect themselves, from increasingly sophisticated and destructive cyberthreats.
Consistent with Google Cloud’s commitment to openness, Wiz products will continue to work and be available across all major clouds, including Amazon Web Services, Google Cloud Platform, Microsoft Azure, and Oracle Cloud, and will be offered through an array of partner security solutions. Google Cloud will also continue to offer customers wide choice through a variety of partner security solutions available in the Google Cloud Marketplace.
Sundar Pichai, CEO, Google: “Keeping people safe online has always been part of Google’s mission. This job is increasingly important today, as more companies and governments move their work to the cloud and broadly use generative AI. By bringing Wiz and Google Cloud together, we’re making it easier for organizations to innovate with confidence.”
Thomas Kurian, CEO, Google Cloud: “We want to make security a catalyst for innovation, not a barrier. With this acquisition, we will deliver a unified security platform that simplifies the complex task of protecting multicloud environments in the AI era, making a strong security posture accessible to more companies and governments.”
Assaf Rappaport, Co-Founder & CEO, Wiz: “Joining Google Cloud allows us to scale our mission of protecting customers wherever they operate – at machine speed. We remain committed to our open approach, ensuring Wiz continues to support all major cloud and code environments. With Google’s AI leadership and resources, coupled with Wiz’s deep context and knowledge of cloud and code environments, we are in a stronger position to help our partners and customers prevent breaches before they happen.”
The post Google completes $32 billion acquisition of Wiz appeared first on My Startup World - Everything About the World of Startups!.
Armadin has raised an industry record $189.9 million in Seed and Series A funding. Led by Accel, with participation from Google Ventures, Kleiner Perkins, Menlo Ventures, In-Q-Tel, and follow-on investment from 8VC and Ballistic Ventures, this marks the largest combined Seed and Series A funding round in cybersecurity history. Armadin’s mission is to prepare organizations for the speed and scale of AI-driven threats.
Closing the Hyperattack Gap
The rise of AI-powered attackers has ushered in the age of Hyperattacks: sophisticated, multi-modal campaigns that move at machine-speed. Traditional human-led defenses are no longer fast enough to bridge the widening security gap, and Armadin is closing this gap by deploying a unified, scalable platform that transforms security by revolutionizing how exploitable risk is identified, proven, and remediated.
“The AI shift is changing cybersecurity more rapidly than any transition in history,” said Kevin Mandia, CEO of Armadin. “In a world of machine-speed attacks, defense must become autonomous. You cannot have a human in the loop for every defense decision and expect to win. We are building the most formidable offense to give organizations the greatest defense. It’s important to national security.”
An Agentic Attacker Swarm
Unlike tools that scan for vulnerabilities, Armadin’s platform features specialized AI agents leveraging custom models in an agentic attacker swarm. These agents continuously reason, plan, and adapt like the most advanced human threat actors and provide CEOs and Boards with decision-grade proof of what can actually be exploited.
“At Accel, we look for companies that don’t just participate in the market, but redefine it,” said Ping Li, Partner at Accel. “Armadin is the first company we’ve seen that truly weaponizes the attacker’s perspective to build a more resilient defense. By combining Kevin’s unrivaled operational experience with a generational AI engineering team, Armadin is delivering the autonomous, comprehensive system of record for an enterprise’s security posture that boards and CISOs have been demanding for years.”
“The most honest measure of security has always been the offensive lens,” said Evan Peña, Founder and Chief Offensive Security Officer. “At Armadin, we are taking decades of human-led red teaming expertise and reinforcing it into AI models. These models are learning our tactics and techniques and are outpacing our human operators at every turn.”
“Security expertise is a constrained resource that organizations never have enough of in the moments when it matters most,” added Travis Lanham, Founder and Chief Technology Officer. “Before Armadin, you could not put a nation-state level adversary inside every network 24/7. We’ve built the ultimate attacker – it doesn’t just follow a script, it reasons and learns as it swarms your defenses. We train our models and build agents to the standards of a world-class red team with safety at the foundation and unleash them to identify exploitable risk at machine speed. We believe that this is the only way to prepare for the coming wave of AI Hyperattacks.”
Armadin’s founding team is a rare fusion of elite red teaming experts and AI researchers and engineers under the leadership of Kevin Mandia, who maintains deep, trusted relationships across Fortune 100 companies, federal law enforcement agencies, and defense departments.
The post Armadin raises $189.9 million led by Accel appeared first on My Startup World - Everything About the World of Startups!.
RØDE has announced the RØDECaster Video Core, a major new addition to its video production lineup, alongside a defining new integration capability that connects the console with select RØDECaster audio interfaces: RØDECaster Sync. Hot on the heels of the release of the RØDECaster Video S in November last year, the latest offering in RØDE’s growing range of all-in-one video and audio production consoles delivers the most streamlined solution yet for video podcasters, solo creators and live streamers at any professional level.
Designed specifically for creators working in modular or software-driven workflows, the RØDECaster Video Core offers the same advanced production power as the flagship RØDECaster Video. Combining advanced video switching, recording and streaming with a fully integrated professional audio mixer, it offers a flexible foundation for creating broadcast-quality content across video podcasts, live streams and studio productions in a compact desktop-friendly unit.
Launching alongside it, RØDECaster Sync is an innovative new feature that seamlessly connects the RØDECaster Video Core with the RØDECaster Pro II and RØDECaster Duo audio interfaces, creating a single unified production hub. With RØDECaster Sync, audio-first creators can expand into video with zero fuss, scaling their setup effortlessly while maintaining the studio-grade sound and intuitive control that has made RØDECaster the creative industry standard.
“The launch of the RØDECaster Video Core and RØDECaster Sync marks a pivotal moment in the evolution of content creation,” said RØDE CEO Damien Wilson. “With the RØDECaster range, every creator, no matter their skill level or workflow, is supported by a complete ecosystem that makes professional production more accessible than ever. As always, RØDE continues to set the industry benchmark for tearing down barriers to democratise content creation worldwide.”
VIDEO UNLOCKED
Designed for creators working in both software-based and modular production environments, the RØDECaster Video Core delivers a seamless new way to bring professional video into any audio-first workflow. Compact and streamlined, it offers the same octa-core processor as the flagship RØDECaster Video, making high-end switching, streaming and recording more accessible than ever.
For creators who prefer software-based control, the RØDECaster Video Core integrates seamlessly with the RØDECaster App. This free dedicated companion app provides extensive control over every aspect of production, allowing users to switch between video sources, design custom multi-camera layouts with the scene builder and mix pristine audio with the intuitive mixer. With advanced configuration available at every level, the RØDECaster App gives productions a professional polish with total flexibility.
RØDECaster Sync takes this flexibility even further, introducing an innovative new way for the RØDECaster Video Core to integrate with compatible RØDECaster audio consoles, creating one unified production setup. By simply using a USB-C cable to connect the RØDECaster Video Core with the RØDECaster Pro II or Duo, creators can manage both audio and video from a single surface, expand their inputs and outputs, enable shared mixing and recording and unlock advanced switching capabilities that scale effortlessly as their studio grows.
BROADCAST-READY OUT OF THE BOX
With its compact footprint, the RØDECaster Video Core delivers uncompromising broadcast-quality production power. It supports switching between up to four video sources with fully customisable scenes, smooth transitions, graphic overlays and multi-source layouts – providing professional results without the complexity of traditional broadcast hardware.
With three Full HD HDMI inputs featuring auto frame rate conversion, configurable HDMI output monitoring, a configurable USB-C expansion port and support for network cameras via up to four NDI inputs, the RØDECaster Video Core adapts to virtually any video setup, from podcasts to live studio productions.
In terms of audio, it brings the studio-grade sound RØDE is renowned for, featuring two Neutrik combo inputs with ultra-low-noise, high-gain Revolution Preamps
for pristine capture from microphones, instruments or line sources. Each of the nine stereo audio channels is enhanced with world-class APHEX processing – including EQ, compression, noise gating, de-essing and legendary effects like Aural Exciter, Big Bottom and Compellor – ensuring every production sounds as polished as it looks.
ANY CREATOR, ANY SETUP
Whether live streaming or recording for post-production, the RØDECaster Video Core integrates effortlessly into any creative setup. Creators can stream directly to YouTube, Twitch and other major platforms via Ethernet, or record straight to an external USB drive or SSD, with the option to capture each video and audio source independently through isolated (ISO) recording for maximum flexibility in the edit.
With support for a wide range of modern video inputs, from HDMI cameras to network sources and USB devices, the RØDECaster Video Core is built to adapt as productions grow. It also pairs seamlessly with the free RØDE Capture app, allowing creators to turn an iPhone into a high-quality dual-camera source for wireless multi-angle streaming, perfect for podcasts, interviews and solo content creation.
Compact, powerful and designed for the realities of today’s creators, the RØDECaster Video Core delivers a complete professional production solution without the traditional barriers of broadcast complexity.
The RØDECaster Video Core will be available worldwide to pre-order for US$599.
The post RØDE launches Video Core and Sync for content creators appeared first on My Startup World - Everything About the World of Startups!.































:max_bytes(150000):strip_icc():format(jpeg)/TAL-gva-recirc-3x2-GVA2026-4b9f74139ee54a4489f8df80ce9e118d.jpg)
© <p>Alistair Taylor-Young</p>







































































































































OpenAI has announced its plans to acquire Promptfoo, an established AI security platform widely used by enterprises to identify and remediate vulnerabilities in AI systems during development. The company confirmed that once the acquisition is finalized, Promptfoo’s technology will be integrated directly into OpenAI Frontier, the platform designed for building and operating AI coworkers. The move reflects OpenAI’s growing focus on strengthening evaluation, security, and compliance capabilities as enterprises increasingly deploy AI agents into real‑world workflows.
According to OpenAI, organizations adopting AI coworkers require systematic methods to test agent behavior, detect risks before deployment, and maintain transparent records to support oversight and governance. Promptfoo, led by co‑founders Ian Webster and Michael D’Angelo, has built a suite of tools trusted by more than a quarter of Fortune 500 companies. Its open‑source CLI and library for evaluating and red‑teaming large language model applications have become widely used across the industry. OpenAI stated that it will continue supporting the open‑source project while expanding enterprise‑grade capabilities within Frontier.
Srinivas Narayanan, CTO of B2B Applications at OpenAI, said the acquisition brings deep engineering expertise in evaluating and securing AI systems at scale. He noted that Promptfoo’s work enables businesses to deploy secure and reliable AI applications, and integrating these capabilities into Frontier will strengthen the platform’s native security features. OpenAI highlighted that the integration will introduce automated security testing and red‑teaming directly into Frontier, enabling enterprises to identify risks such as prompt injections, jailbreaks, data leaks, tool misuse, and out‑of‑policy agent behaviors.
The company also emphasized that security and evaluation will be embedded into development workflows, allowing teams to identify, investigate, and remediate risks earlier in the lifecycle. Enhanced reporting and traceability will support governance, risk management, and compliance requirements as AI oversight expectations continue to rise globally.
Promptfoo CEO Ian Webster said the company was founded to give developers practical tools to secure AI systems, noting that the increasing connectivity of AI agents to real data and systems makes validation more critical than ever. He added that joining OpenAI will accelerate efforts to deliver stronger security, safety, and governance capabilities for teams building real‑world AI applications. The acquisition remains subject to customary closing conditions.
The post OpenAI to acquire AI security startup Promptfoo appeared first on My Startup World - Everything About the World of Startups!.
Luma announced the launch of Luma Agents, a new class of AI collaborators capable of executing end-to-end creative work across text, image, video, and audio. Designed for agencies, marketing teams, studios, and enterprise organizations that aspire to scale creative output without sacrificing quality, Luma Agents maintain full context from the initial brief to final delivery – coordinating tools, models, and iterations within a single unified system.
“Creative work has never lacked ambition; it’s lacked execution capacity,” said Amit Jain, Co-Founder and CEO of Luma. “Creative teams shouldn’t have to spend their time orchestrating tools. They should spend it creating. Agents aren’t shortcuts. They’re collaborators that maintain context, coordinate execution, and advance projects so teams can focus on taste, direction, and strategy.”
For the past several years, most AI systems have been assembled by chaining together separate models for language, vision, video, and reasoning — stitching outputs together through orchestration layers. While powerful in isolation, these systems fragment context and require increasingly complex workflows to produce reliable creative results.
Luma believes intelligence should not be assembled in pieces; it should be built as one coherent system.
Creative Agents That Make You Prolific
Luma Agents replace fragmented, multi-model workflows with coordinated, execution built on unified reasoning. Instead of switching between disconnected tools and rebuilding context at every step, teams work alongside Agents that:
Agents operate inside a collaborative, multiplayer environment where humans direct creative intent and Agents handle orchestration, routing, and execution – resulting in more output, greater consistency, and higher creative velocity.
Deployed at Global Scale
Luma Agents are already embedded across global agency operations.
Publicis Groupe and Serviceplan Group are deploying Luma Agents across strategy, creative development, and production workflows to increase throughput while maintaining brand consistency across markets.
“Luma is now part of our broader House of AI ecosystem and integrated directly into our creative workflows. It allows our teams across more than 20 countries to collaborate more smoothly and develop great work faster. For our clients, that means high-quality creative output delivered with greater speed and efficiency – without compromising craft,” says Alexander Schill, Global CCO at Serviceplan Group.
Built on Unified Intelligence
Luma Agents are built on Unified Intelligence, a new model architecture designed to move beyond the industry’s prevailing approach of assembling intelligence in pieces. Instead of chaining together separate models for language, vision, and generation, Unified Intelligence trains a single multimodal reasoning system capable of understanding and generating across formats within the same architecture.
For the past several years, most AI systems have been assembled as pipelines: one model writes text, another generates images, another processes video, and orchestration layers attempt to stitch their outputs together. While effective for narrow tasks, these systems fragment reasoning, lose context between steps, and require complex workflows to produce reliable results.
Unified Intelligence takes a different approach. Instead of connecting specialized models after the fact, it trains a single multimodal reasoning system capable of understanding and generating across formats within the same architecture.
Rather than separating thinking from creation, Unified Intelligence tightly couples reasoning and rendering, allowing the system to plan, imagine, and produce as part of one coherent cognitive process.
When a human architect sketches a building, they are not simply drawing lines – they are simultaneously simulating structure, light, spatial dynamics, and lived experience. Reasoning and imagination happen together. Unified Intelligence is built on the same principle.
The first model built on this architecture is Uni-1.
Uni-1 is a decoder-only autoregressive transformer operating over a shared token space that interleaves language and image tokens, allowing both modalities to function as first-class inputs and outputs in the same sequence. This design enables the model to reason in language while imagining and rendering in pixels within the same forward pass.
Rather than generating outputs step-by-step across disconnected systems, Uni-1 can plan, visualize, and produce creative artifacts as part of a single coherent reasoning process. The result is a foundation where thinking and creation are tightly coupled, much closer to how human intelligence works.
Built on top of this unified architecture, Luma Agents can coordinate complex creative workflows that previously required multiple tools and manual orchestration. They can:
Together, these capabilities allow Luma Agents to function not as isolated generation tools, but as collaborative AI creatives capable of executing end-to-end creative work.
“Intelligence shouldn’t be fragmented by modality,” added Jain. “Unified systems reason holistically. When the same model can think, imagine, and render, you move closer to intelligence that behaves coherently across the entire creative process.”
Enterprise-Ready by Design
Luma Agents are designed for enterprise environments where intellectual property protection, compliance, and operational scale are critical. Key enterprise safeguards include Full IP ownership retained by customers, automated content review to reduce copyright risk, legal trace documentation demonstrating human involvement, required human review workflows prior to public release, and cloud-based infrastructure with enterprise-grade guardrails.
The post Luma launches Luma Agents for creative works appeared first on My Startup World - Everything About the World of Startups!.
Reclaim Security, a preemptive exposure-remediation platform, announced $26 million in total funding, including a recent $20 million Series A round led by Acrew Capital, with participation from QP Ventures and Ibex Investors. The funding will accelerate the company’s mission to eliminate what many security leaders consider cybersecurity’s most persistent gap: remediation.
As attacker breakout times have fallen to as little as 27 seconds, enterprises still require an average of 27 days to remediate critical exposures. Over the past decade, organizations have invested heavily in detection tools to identify vulnerabilities and misconfigurations, yet resolving them remains largely manual, slow, and operationally risky. The result is an expanding backlog of exposures that security teams identify but struggle to safely close.
“There is a massive ‘Remediation Mirage’ in the market right now. Vendors are slapping an AI label on what is essentially just Prioritization 2.0 or faster ticket management,” says Barak Klinghofer, CEO and Co-founder of Reclaim Security.
”The recent launch of Claude Code, which wiped billions from the market value of traditional security giants, is a massive wake-up call. While such tools can identify hundreds of vulnerabilities in seconds, they also hand attackers an autonomous, high-speed engine for exploit generation. We’ve seen reports of AI-orchestrated espionage campaigns where 80-90% of tactical operations were executed autonomously. In this new reality, if your ‘remediation’ strategy still ends with a human reviewing a manual Jira ticket, you aren’t just slow, you’ve lost the race.
Reclaim is the only platform providing true Agentic Remediation. Through our PIPE engine, we’ve removed the fear of ‘breaking the business,’ allowing our AI to move from discovery to resolution in seconds. While others are perfecting the recommendation, we are perfecting the execution.”
Automating Cybersecurity’s “Last Mile”
Reclaim’s platform introduces the industry’s first AI Security Engineer, an autonomous system designed not only to identify exposures, but to resolve them safely and at scale.
At the core of the platform is PIPE (Productivity Impact Prediction Engine), a simulation engine that predicts the operational and business impact of a proposed security change before it is deployed. By accurately modeling how changes impact applications, workloads, user productivity and business processes, organizations can implement remediation without risking downtime or operational disruption.
This simulation-first approach enables organizations to:
Reclaim analyzes how real attack techniques would traverse a specific environment, evaluates how existing defenses would respond, and predicts the operational impact of remediation before changes are deployed. By combining advanced attack path modeling with business-aware remediation, the company eliminates exploitable pathways safely and at scale. This approach enables a shift away from reactive “assume breach” strategies toward proactively removing exposure without disrupting critical business operations.
Real World Impact
Early enterprise customers across financial services, healthcare, government, and critical infrastructure sectors report measurable results, including 80% increase in overall threat resilience, 75% increase in ROI from existing security stack and 90% reduction in manual effort when resolving critical exposures
“Security tools are excellent at explaining why something is risky,” said Mark Kraynak, Founding Partner at Acrew Capital. “What they don’t do is make remediation safe and practical. The real breakthrough isn’t more prioritization, it’s removing risk without breaking the business. Reclaim does exactly that, and that’s why it matters.”
The post Reclaim Security raises $26M led by Acrew Capital appeared first on My Startup World - Everything About the World of Startups!.
Nano Banana 2, Google’s latest state-of-the-art image model, is now available in the Middle East and North Africa. The model is accessible on Google Gemini (desktop and mobile app) and Google Search via Google Lens and AI Mode.
Nano Banana 2 brings the high-speed intelligence of Gemini Flash to visual generation, making rapid edits and iteration possible. It brings once-exclusive Pro features accessible to a wider audience, including:
The post Nano Banana 2 live on Gemini App and Google Search appeared first on My Startup World - Everything About the World of Startups!.