Travel And Tour World
- How the Ramp and Juno Merger Reshapes Guest and Corporate Travel through a Strategic Alliance and Preparation for European Entry
- Access Hospitality Welcomes Aravinda Gollapudi with Nearly Three Decades of Industry Experience as the New Chief Technology Officer
- Omega World Travel and AMGiNE Partner to Revolutionise Group Air and Event Automation
Omega World Travel announces a strategic partnership with AMGiNE to automate group air and meeting travel.
- Private Jet Bookings Made Easy: Mach2’s New Platform Lets You Find Empty Leg Flights
Discover seamless private aviation with real-time access to over 1,500 live flights.
- France Joins Italy, Portugal, Spain, Singapore, and More as Manchester Airport’s Terminal 2 Integrates Biometric Technology, Paving the Way for Efficient, Secure, and Scalable Passenger Travel in the UK’s Modernized Terminals
France joins Italy, Portugal, Spain, Singapore, and more in embracing groundbreaking biometric technology at Manchester Airport, as Terminal 2 leads the way in modernizing passenger experience.
My Startup World – Everything About the World of Startups!
- RØDE launches Video Core and Sync for content creators
RØDE has announced the RØDECaster Video Core, a major new addition to its video production lineup, alongside a defining new integration capability that connects the console with select RØDECaster audio interfaces: RØDECaster Sync. Hot on the heels of the release of the RØDECaster Video S in November last year, the latest offering in RØDE’s growing range of all-in-one video and audio production consoles delivers the most streamlined solution yet for video podcasters, solo creators and live streamers at any professional level.
Designed specifically for creators working in modular or software-driven workflows, the RØDECaster Video Core offers the same advanced production power as the flagship RØDECaster Video. Combining advanced video switching, recording and streaming with a fully integrated professional audio mixer, it offers a flexible foundation for creating broadcast-quality content across video podcasts, live streams and studio productions in a compact desktop-friendly unit.
Launching alongside it, RØDECaster Sync is an innovative new feature that seamlessly connects the RØDECaster Video Core with the RØDECaster Pro II and RØDECaster Duo audio interfaces, creating a single unified production hub. With RØDECaster Sync, audio-first creators can expand into video with zero fuss, scaling their setup effortlessly while maintaining the studio-grade sound and intuitive control that has made RØDECaster the creative industry standard.
“The launch of the RØDECaster Video Core and RØDECaster Sync marks a pivotal moment in the evolution of content creation,” said RØDE CEO Damien Wilson. “With the RØDECaster range, every creator, no matter their skill level or workflow, is supported by a complete ecosystem that makes professional production more accessible than ever. As always, RØDE continues to set the industry benchmark for tearing down barriers to democratise content creation worldwide.”
VIDEO UNLOCKED
Designed for creators working in both software-based and modular production environments, the RØDECaster Video Core delivers a seamless new way to bring professional video into any audio-first workflow. Compact and streamlined, it offers the same octa-core processor as the flagship RØDECaster Video, making high-end switching, streaming and recording more accessible than ever.
For creators who prefer software-based control, the RØDECaster Video Core integrates seamlessly with the RØDECaster App. This free dedicated companion app provides extensive control over every aspect of production, allowing users to switch between video sources, design custom multi-camera layouts with the scene builder and mix pristine audio with the intuitive mixer. With advanced configuration available at every level, the RØDECaster App gives productions a professional polish with total flexibility.
RØDECaster Sync takes this flexibility even further, introducing an innovative new way for the RØDECaster Video Core to integrate with compatible RØDECaster audio consoles, creating one unified production setup. By simply using a USB-C cable to connect the RØDECaster Video Core with the RØDECaster Pro II or Duo, creators can manage both audio and video from a single surface, expand their inputs and outputs, enable shared mixing and recording and unlock advanced switching capabilities that scale effortlessly as their studio grows.
BROADCAST-READY OUT OF THE BOX
With its compact footprint, the RØDECaster Video Core delivers uncompromising broadcast-quality production power. It supports switching between up to four video sources with fully customisable scenes, smooth transitions, graphic overlays and multi-source layouts – providing professional results without the complexity of traditional broadcast hardware.
With three Full HD HDMI inputs featuring auto frame rate conversion, configurable HDMI output monitoring, a configurable USB-C expansion port and support for network cameras via up to four NDI inputs, the RØDECaster Video Core adapts to virtually any video setup, from podcasts to live studio productions.
In terms of audio, it brings the studio-grade sound RØDE is renowned for, featuring two Neutrik combo inputs with ultra-low-noise, high-gain Revolution Preamps for pristine capture from microphones, instruments or line sources. Each of the nine stereo audio channels is enhanced with world-class APHEX processing – including EQ, compression, noise gating, de-essing and legendary effects like Aural Exciter, Big Bottom and Compellor – ensuring every production sounds as polished as it looks.
ANY CREATOR, ANY SETUP
Whether live streaming or recording for post-production, the RØDECaster Video Core integrates effortlessly into any creative setup. Creators can stream directly to YouTube, Twitch and other major platforms via Ethernet, or record straight to an external USB drive or SSD, with the option to capture each video and audio source independently through isolated (ISO) recording for maximum flexibility in the edit.
With support for a wide range of modern video inputs, from HDMI cameras to network sources and USB devices, the RØDECaster Video Core is built to adapt as productions grow. It also pairs seamlessly with the free RØDE Capture app, allowing creators to turn an iPhone into a high-quality dual-camera source for wireless multi-angle streaming, perfect for podcasts, interviews and solo content creation.
Compact, powerful and designed for the realities of today’s creators, the RØDECaster Video Core delivers a complete professional production solution without the traditional barriers of broadcast complexity.
The RØDECaster Video Core will be available worldwide to pre-order for US$599.
- Luma launches Luma Agents for creative works
Luma announced the launch of Luma Agents, a new class of AI collaborators capable of executing end-to-end creative work across text, image, video, and audio. Designed for agencies, marketing teams, studios, and enterprise organizations that aspire to scale creative output without sacrificing quality, Luma Agents maintain full context from the initial brief to final delivery – coordinating tools, models, and iterations within a single unified system.
“Creative work has never lacked ambition; it’s lacked execution capacity,” said Amit Jain, Co-Founder and CEO of Luma. “Creative teams shouldn’t have to spend their time orchestrating tools. They should spend it creating. Agents aren’t shortcuts. They’re collaborators that maintain context, coordinate execution, and advance projects so teams can focus on taste, direction, and strategy.”
For the past several years, most AI systems have been assembled by chaining together separate models for language, vision, video, and reasoning — stitching outputs together through orchestration layers. While powerful in isolation, these systems fragment context and require increasingly complex workflows to produce reliable creative results.
Luma believes intelligence should not be assembled in pieces; it should be built as one coherent system.
Creative Agents That Make You Prolific
Luma Agents replace fragmented, multi-model workflows with coordinated execution built on unified reasoning. Instead of switching between disconnected tools and rebuilding context at every step, teams work alongside Agents that:
- Execute projects end-to-end, from planning through production and delivery
- Maintain shared context across text, image, video, and audio
- Advance multiple creative directions in parallel
- Evaluate and refine outputs instead of generating one-shot results
- Integrate into enterprise tools and production systems via API
Agents operate inside a collaborative, multiplayer environment where humans direct creative intent and Agents handle orchestration, routing, and execution – resulting in more output, greater consistency, and higher creative velocity.
Deployed at Global Scale
Luma Agents are already embedded across global agency operations.
Publicis Groupe and Serviceplan Group are deploying Luma Agents across strategy, creative development, and production workflows to increase throughput while maintaining brand consistency across markets.
“Luma is now part of our broader House of AI ecosystem and integrated directly into our creative workflows. It allows our teams across more than 20 countries to collaborate more smoothly and develop great work faster. For our clients, that means high-quality creative output delivered with greater speed and efficiency – without compromising craft,” says Alexander Schill, Global CCO at Serviceplan Group.
Built on Unified Intelligence
Luma Agents are built on Unified Intelligence, a new model architecture designed to move beyond the industry’s prevailing approach of assembling intelligence in pieces. Rather than connecting specialized models after the fact and stitching their outputs together through orchestration layers, Unified Intelligence trains a single multimodal reasoning system capable of understanding and generating across formats within the same architecture.
Rather than separating thinking from creation, Unified Intelligence tightly couples reasoning and rendering, allowing the system to plan, imagine, and produce as part of one coherent cognitive process.
When a human architect sketches a building, they are not simply drawing lines – they are simultaneously simulating structure, light, spatial dynamics, and lived experience. Reasoning and imagination happen together. Unified Intelligence is built on the same principle.
The first model built on this architecture is Uni-1.
Uni-1 is a decoder-only autoregressive transformer operating over a shared token space that interleaves language and image tokens, allowing both modalities to function as first-class inputs and outputs in the same sequence. This design enables the model to reason in language while imagining and rendering in pixels within the same forward pass.
Rather than generating outputs step-by-step across disconnected systems, Uni-1 can plan, visualize, and produce creative artifacts as part of a single coherent reasoning process. The result is a foundation where thinking and creation are tightly coupled, much closer to how human intelligence works.
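The shared token space described above can be sketched as a toy model. Everything below is an illustrative assumption – the vocabulary sizes, the id boundary, and the example ids are invented for the sketch and are not details of Uni-1 – but it shows the core idea: text and image tokens live as first-class ids in one autoregressive sequence.

```python
# Toy sketch of a shared token space that interleaves language and image
# tokens in one sequence. All sizes and ids are hypothetical, for illustration.

TEXT_VOCAB = 50_000            # assumed text-token id range: [0, 50000)
IMAGE_OFFSET = TEXT_VOCAB      # image-patch ids start after the text range

def image_token(patch_id: int) -> int:
    """Map an image-patch codebook id into the shared id space."""
    return IMAGE_OFFSET + patch_id

def modality(token_id: int) -> str:
    """Both modalities are ordinary ids in the same sequence."""
    return "text" if token_id < IMAGE_OFFSET else "image"

# One interleaved sequence: reason in language, then render in pixels,
# inside the same autoregressive stream.
sequence = [101, 2054, 2003, image_token(7), image_token(4095), 102]

print([modality(t) for t in sequence])
# ['text', 'text', 'text', 'image', 'image', 'text']
```

Because both modalities share one id space, a single decoder-only transformer can attend across the whole sequence without any cross-model handoff, which is the property the paragraph above attributes to Uni-1.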
Built on top of this unified architecture, Luma Agents can coordinate complex creative workflows that previously required multiple tools and manual orchestration. They can:
- Coordinate across leading AI models, including Ray3.14, Veo 3, Sora 2, Kling 2.6, Nano Banana Pro, Seedream, GPT Image 1.5, and ElevenLabs
- Automatically select and route tasks to the best model or capability for each step
- Maintain persistent context across assets, collaborators, and creative iterations
- Evaluate and refine outputs, improving results through iterative self-critique
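As a rough illustration of the routing bullet above, per-step model selection can be sketched as a capability lookup. The capability table and selection rule below are assumptions invented for the sketch, not Luma's actual routing logic; only the model names come from the list above.

```python
# Hypothetical per-step router: pick the first preferred model whose
# capabilities cover the task's modality. Illustrative only.

CAPABILITIES = {
    "Ray3.14": {"video"},
    "Veo 3": {"video"},
    "Seedream": {"image"},
    "GPT Image 1.5": {"image"},
    "ElevenLabs": {"audio"},
}

def route(task_modality: str, preferred: list[str]) -> str:
    """Return the first preferred model that supports the task's modality."""
    for model in preferred:
        if task_modality in CAPABILITIES.get(model, set()):
            return model
    raise ValueError(f"no model available for {task_modality!r}")

print(route("image", ["Ray3.14", "Seedream", "ElevenLabs"]))  # Seedream
```

A real orchestrator would score candidates on more than modality (cost, latency, quality feedback from prior iterations), but the shape is the same: the agent, not the user, decides which model executes each step.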
Together, these capabilities allow Luma Agents to function not as isolated generation tools, but as collaborative AI creatives capable of executing end-to-end creative work.
“Intelligence shouldn’t be fragmented by modality,” added Jain. “Unified systems reason holistically. When the same model can think, imagine, and render, you move closer to intelligence that behaves coherently across the entire creative process.”
Enterprise-Ready by Design
Luma Agents are designed for enterprise environments where intellectual property protection, compliance, and operational scale are critical. Key enterprise safeguards include:
- Full IP ownership retained by customers
- Automated content review to reduce copyright risk
- Legal trace documentation demonstrating human involvement
- Required human review workflows prior to public release
- Cloud-based infrastructure with enterprise-grade guardrails