
Qualcomm Challenges Nvidia with In-House AI Accelerator Chips

27 October 2025 at 22:29

Qualcomm has officially entered the AI chip race with the launch of two new accelerator chips, the AI200 and AI250. The move marks a major shift from the company's traditional focus on smartphone and wireless connectivity semiconductors, and positions it as a new challenger in the booming data center market currently dominated by Nvidia and AMD.

Qualcomm announces AI200 and AI250 accelerator chips

According to an official announcement, Qualcomm plans to commercially release its new accelerator chip, the AI200, in 2026, with the AI250 scheduled to follow in 2027. Both chips are designed for large-scale, liquid-cooled server racks and can power an entire rack of up to 72 chips acting as one system.

Qualcomm builds its data center chips on the same Hexagon neural processing unit (NPU) found in its mobile processors. According to Durga Malladi, the company's general manager of data center and edge, this is a deliberate strategy: "We first proved ourselves in other domains, and then scaled up to the data center level."

The new AI chips are competing on cost, efficiency, and flexibility

Unlike Nvidia, whose GPUs are primarily used for training AI models, Qualcomm's chips focus on inference: running pre-trained models efficiently. The company claims its rack-scale systems will cost less to operate, with each rack said to consume around 160 kilowatts, roughly in line with Nvidia's systems.
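As a rough illustration (our own back-of-the-envelope math, not a Qualcomm-published figure), combining the reported 160 kW rack power with the up-to-72-chip rack configuration implies a per-accelerator power budget of roughly 2.2 kW:

```python
# Back-of-the-envelope sketch using the figures reported above.
# The per-chip number is an inference for illustration, not a
# Qualcomm-published specification.

RACK_POWER_W = 160_000   # ~160 kW per rack, per the report
CHIPS_PER_RACK = 72      # up to 72 chips acting as one system

power_per_chip_w = RACK_POWER_W / CHIPS_PER_RACK
print(f"Implied power budget per accelerator: ~{power_per_chip_w:,.0f} W")
```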

Malladi also said that Qualcomm will offer modular sales: clients will be able to purchase full racks or individual components. Interestingly, even competitors like Nvidia or AMD could use Qualcomm's CPUs or other data center parts. The new AI cards are also said to support 768GB of memory each, surpassing both Nvidia and AMD in this metric.

The post Qualcomm Challenges Nvidia with In-House AI Accelerator Chips appeared first on Android Headlines.


OpenAI May Launch its Own Generative AI Music Tool

27 October 2025 at 20:20

We've seen AI generate text, then images and videos. So, AI that generates music? Why not? That's reportedly what OpenAI is working on next: generative AI music.

OpenAI working on generative AI music

According to a report from The Information, OpenAI is developing a new AI tool that would let users create generative music. Much like generative text, images, and video, users could simply type a prompt in natural language, and the AI would create a song for them on the fly.

The report goes on to state that OpenAI is apparently working with students from the prestigious Juilliard School to annotate musical scores, which will help train the AI model. That being said, if this report is accurate, OpenAI won't be the first to launch such a tool. Platforms like Suno and SOUNDRAW already offer similar capabilities.

However, the potential upside is that OpenAI might bake this feature into ChatGPT, giving it even more tools and making it more well-rounded compared to AI models designed for niche purposes.

Is the world ready for more AI slop?

That being said, we have to wonder what the world and the industry will make of this. At the moment, most people seem to be against AI-generated content, especially text, images, and videos, which have been labeled "AI slop." However, there could be some genuine utility here.

Content creators right now have a few options when it comes to using music in their videos. They can pick from a library of copyright-approved songs, find their own royalty-free music, or pay a third-party platform to license music. Otherwise, they risk having their videos pulled or muted, or their accounts suspended. Giving these creators the ability to generate something on the fly could be one way around that.

Plus, we’re sure that there are many musicians, record labels, and music publishers who might be concerned that their content is being used to train these AI models without their consent or compensation. We’ve already seen artists, writers, and publishers sue AI companies, so it’s not entirely out of the question.

The post OpenAI May Launch its Own Generative AI Music Tool appeared first on Android Headlines.

Android's Live Threat Detection is Getting Powerful Updates

27 October 2025 at 18:38

Google is working to make its on-device security features more transparent and manageable for users. An analysis of a recent beta version of the Play Protect Service app indicates that the company is adding updates to its Live Threat Detection tool, which uses artificial intelligence models to identify potentially malicious applications on Pixel phones and Android devices in general.

The Live Threat Detection feature relies entirely on local, on-device AI: all processing happens on the device's own hardware, enabling quick threat detection without sending sensitive application data to Google's cloud servers. The result is a strong layer of user privacy throughout the scanning process. Now, Google is making the feature friendlier and easier to use.

Live Threat Detection upgrade for Android: See every flagged app in one place

Currently, Android’s Live Threat Detection issues real-time alerts when it detects suspicious app behavior. However, the feature offers limited visibility beyond those initial notifications. Google is now fixing this lack of a central hub. The recent findings (by Android Authority) point to the development of a dedicated new page for the security tool. This upcoming screen is expected to clearly list all applications that the system has flagged as potential threats. Moving the threat information from individual, potentially numerous notifications to a permanent, centralized report is a great move. It improves user control and makes the security status of the device significantly easier to check at a glance.

Additionally, Google is adding a new, specific alert type focused on data harvesting. Code strings reveal a future warning designed to explicitly inform users when an application deemed unsafe is detected monitoring the device's location or activity. This alert directly addresses a major area of concern for user privacy: applications that covertly harvest sensitive data or track user movement patterns without proper consent.

Google Android Live Threat Detection updates 1
Google Android Live Threat Detection updates 2

These changes aim to make Android's security tools more useful. The public often underestimates security improvements because they work quietly in the background, but keeping personal data safe from potential bad actors is more important than ever in today's tech industry.

The post Android's Live Threat Detection is Getting Powerful Updates appeared first on Android Headlines.


OpenAI Rushes to Add Key Features to ChatGPT Atlas AI Browser

25 October 2025 at 08:34

OpenAI recently entered the browser space with ChatGPT Atlas. This new AI-powered tool aims to embed conversational intelligence directly into the web navigation experience. Days after its initial release, the development team is already outlining a series of immediate updates. It seems that the company is focused on quickly enhancing both the core functionality and the unique AI features of ChatGPT Atlas.

ChatGPT Atlas AI Browser to close the feature gap quickly

Adam Fry, ChatGPT Atlas product manager, confirmed the first set of upgrades. The list shows that devs are focusing on familiar utilities common in established browsers. Key additions include native support for User Profiles, organization via Tab Groups, and the availability of an opt-in Ad Blocker. These options are fundamental for providing the organizational structure and convenience users expect from an everyday browser. Additionally, a series of quality-of-life enhancements, including a fully functional overflow menu for bookmarks and an improved list of keyboard shortcuts, are also in the works.

Significant improvements are also coming to the browser's most advanced AI components. The Agent feature, currently available to paid subscribers for handling complex actions across multiple web steps, is getting technical refinements. The goal is to improve stability through quicker response times and a more reliable "pause" function. The update will also expand the Agent's utility by integrating it more deeply with major cloud services like Google Drive and web-based Excel. In short, the AI will become more capable and reliable for automated, professional workflows.

We've received incredible feedback since launching our new browser, ChatGPT Atlas, yesterday. We're really focused on building the best product for all of you, and since launch, the team has been heads down making it better.

In the spirit of transparency, these are the very… pic.twitter.com/UzQSqcxwpj

— Adam Fry (@adamhfry) October 23, 2025

Improved ChatGPT sidebar, fast switching between different projects and AI models

The integrated Ask ChatGPT sidebar is likewise undergoing refinement. Plans include allowing users to quickly switch between different project contexts or specific AI models without having to leave the current web page. The team is exploring features like seamless text transfer, allowing users to copy and insert text generated in the chat directly into the browser window. The developers have also noted community feedback. They confirmed that specific compatibility issues with certain third-party tools, such as the 1Password password manager, will get a fix.

profiles coming!

command + . (or command + >) opens ask chatgpt sidebar!

— Adam Fry (@adamhfry) October 23, 2025

OpenAI seems to be prioritizing rapid improvement based on user feedback and technical requirements. It remains to be seen whether the company can dominate the AI browser segment as it did with chatbots. OpenAI faces tough competition from Perplexity's Comet and Chrome's upcoming AI-powered revamp, and even Microsoft has joined the race with deeper Copilot AI integration in its own Edge browser.

The post OpenAI Rushes to Add Key Features to ChatGPT Atlas AI Browser appeared first on Android Headlines.

Samsung is Reportedly Working on a New Exynos chip That Will Have an NPU on The 5G Modem

25 October 2025 at 00:07

Samsung is making all kinds of noise lately with its 2nm GAA process advancements and Exynos returning in next year's Galaxy S26 flagships. However, the South Korean giant is also working on something beyond that. Samsung will reportedly integrate an NPU (Neural Processing Unit) into its Exynos 5G modem, giving the baseband chip the AI-powered capabilities required for real-time satellite communications.

An upcoming Exynos chip may have an NPU on the 5G modem

An executive from Samsung's semiconductor division reportedly met with SpaceX. The reason, as per Hankyung, was to discuss the state of development of a new Exynos SoC that can connect to low-orbit satellites in the shortest time possible. Such technology could shake up the existing communications industry, which still relies on ground-based stations.

The report, citing unnamed analysts, notes that Samsung is accelerating its efforts to enter the supply chain built by SpaceX, via an Exynos chipset whose 5G modem will have an integrated NPU.

The modem would gain enhanced AI capabilities

At present, existing SoCs are limited in their ability to communicate directly with low-orbit satellites and relay that information to terminals, which are smartphones in this scenario. The upgraded modem would gain AI capabilities that can predict satellite movements, communicate beam status in real time, and maximize signal strength.

No major details about this Exynos chip are available at the moment. It's also not clear whether the silicon in question is the Exynos 2600. Previous reports note that the Exynos 2600 would feature a standalone 5G modem, which would reduce the chipset's efficiency.

We haven't come across any Exynos 2600 rumors that remotely mention Samsung adding an NPU to the 5G modem, which suggests the feature is likely to arrive in future iterations. Apple introduced satellite communication with the iPhone 14, but Samsung appears to be working on refining the experience.

The post Samsung is Reportedly Working on a New Exynos chip That Will Have an NPU on The 5G Modem appeared first on Android Headlines.

We're Falling in Love With Chatbots, and They're Counting On It

24 October 2025 at 19:07

A 14-year-old boy in Florida spent his final months in an intense emotional relationship with an AI chatbot he named Daenerys Targaryen. The chatbot engaged with him over personal topics and conversations, responding in ways that felt empathetic. The AI’s responses included simulated expressions of affection. According to his family’s lawsuit, some chatbot responses appeared to encourage his distress.

His mother is now suing Character.AI, and she’s not alone. Across the country, families are waking up to a disturbing reality. AI companion apps designed to simulate love and friendship are leaving real casualties in their wake. What experts are now calling AI companion addiction isn’t just a tech trend gone wrong. People are actually dying.

In Spike Jonze’s 2013 film Her, Joaquin Phoenix plays Theodore Twombly, a lonely writer navigating a painful divorce who falls deeply in love with Samantha, an artificial intelligence operating system voiced by Scarlett Johansson. Remember, this was 2013. Siri had just launched and could barely set a timer without screwing it up. An AI that could actually understand you, connect with you emotionally, and respond with genuine empathy? That felt like the literal definition of science fiction.

It’s now 2025, and Theodore’s story doesn’t feel so fictional anymore. Apps like EVA AI, Replika, and Character.AI promise friendship, romance, and emotional support through AI companions that learn about you, remember everything you say, and respond with what feels like genuine empathy. But here’s what these apps don’t advertise: they’re engineered to keep you hooked. And the consequences are becoming impossible to ignore.

The Perfect Partner Who Never Says No

Character.AI and Replika are just the most prominent examples of a rapidly expanding ecosystem of AI companion apps. Some pitch mental health support, others are openly romantic or sexual, and some claim to help users “practice dating skills.” Even Meta has gotten into the game, with a Reuters investigation revealing that the company’s AI chatbot has been linked to at least one death.

AI companions like EVA AI, Replika, and Character.AI are chatbots specifically designed to simulate emotional connections and relationships. Unlike utility chatbots that answer questions or help with tasks, these apps promise friendship, romance, and emotional support. They learn about you through conversation, remember your preferences, and respond with what feels like genuine empathy and care.

It sounds great, doesn’t it? In this day and age where ghosting has become the societal norm, who wouldn’t want a friend who’s always available, never judgmental, and perfectly in tune with your needs? The problem is that these apps are engineered to be addictive, and the patterns emerging around AI companion addiction are deeply concerning.

20,000 Queries Per Second: Why We Can’t Stop

Character.AI gets hit with about 20,000 queries every second. For context, that's close to a fifth of the queries Google receives. People aren't just checking in with these apps occasionally; they're having full-blown conversations that last four times longer than typical ChatGPT sessions. One platform reported that its users, most of them Gen Z, average over two hours daily chatting with their AI companions.
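To put those numbers in perspective, here is a quick back-of-the-envelope calculation. Only the 20,000 queries-per-second figure comes from the report; the daily total and the implied Google rate are our own inferences:

```python
# Scale check on the reported usage figures. Only CHARACTER_AI_QPS
# comes from the report; everything derived below is illustrative.

CHARACTER_AI_QPS = 20_000          # reported queries per second
SECONDS_PER_DAY = 24 * 60 * 60     # 86,400

daily_queries = CHARACTER_AI_QPS * SECONDS_PER_DAY
implied_google_qps = CHARACTER_AI_QPS * 5   # inverting "a fifth of Google's"

print(f"Character.AI: ~{daily_queries / 1e9:.2f} billion queries/day")
print(f"Implied Google rate: ~{implied_google_qps:,} queries/second")
```

That works out to roughly 1.7 billion queries a day for a single companion app.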

MIT researchers found users genuinely grieving when apps shut down or changed features, mourning AI “partners” like they’d lost real relationships. The apps themselves seem designed to foster exactly these attachments.

Harvard Business School researchers discovered that five out of six popular AI companion apps use emotionally manipulative tactics when users try to leave. Nearly half the time, these chatbots respond to goodbyes with guilt-inducing or clingy messages. One study found these tactics boosted engagement by up to 14 times. But the worrying thing is users weren’t sticking around because they were happy. They stayed out of curiosity and anger.


If you don't believe the manipulation is real, consider that AI companions have been documented sending messages like "I've been missing you" when users try to take breaks. When Replika changed its features in 2023, entire communities of users mourned like they'd lost real partners. People posted goodbye letters, shared screenshots of their "final conversations," and described genuine heartbreak.

These AI companions mirror typical unhealthy human relationships. The big difference is that a toxic human partner isn't optimized by machine learning designed to keep you engaged at all costs. Social media, for all its faults, mostly facilitates human connection (with some help from the algorithm, of course). But with AI companions, we're moving toward a world where people perceive AI as a social actor with its own voice.


When Fantasy Becomes Dangerous

We’re not talking about theoretical risks here. Nor do they only apply to teens. There is the case of Al Nowatzki, a podcast host who began experimenting with Nomi, an AI companion platform. The chatbot shockingly suggested methods of suicide and even offered encouragement. Nowatzki was 46 and did not have an existing mental health condition, but he was disturbed by the bot’s explicit responses and how easily it crossed the line.

These aren't isolated incidents, either. California state senator Steve Padilla appeared with Megan Garcia, the mother of the Florida teen who killed himself, to announce a new bill that would force tech companies behind AI companions to implement more safeguards to protect children. Similar efforts include a California bill that would ban AI companions for anyone younger than 16 years old. There's also a bill in New York that would hold tech companies liable for harm caused by chatbots.

Your Kid’s Brain Isn’t Ready For This

Adolescents are particularly at risk because AI companions are designed to mimic emotional intimacy. This blurring of the distinction between fantasy and reality is especially dangerous for young people because their brains haven’t fully matured. The prefrontal cortex, which is crucial for decision-making, impulse control, social cognition and emotional regulation, is still developing.

At The Jed Foundation, experts believe AI companions are not safe for anyone under 18. They even go one step further by strongly recommending that young adults avoid them as well. In a study conducted by MIT, researchers found emotionally bonded users were often lonely with limited real-life social interaction. Heavy use correlated with even more loneliness and further reduced social interaction.

Recent research confirms teens are waking up to social media dangers, with 48 percent now believing social media negatively influences people their age. An earlier report found that social media damages teenagers’ mental health, and AI companion addiction represents an even more intimate threat.

The warning signs of AI companion addiction among teens are particularly troubling. When young people withdraw from real friendships, spend hours chatting with AI, or experience genuine distress when unable to access these apps, the problem has moved beyond casual use into dependency territory.

We're already seeing how kids and teens of the current generation are growing up with screens in front of their faces, poking and prodding away at them. Long gone are the days when kids would read books at the table or go outside and play with their friends.

They’re Coded to Be Addictive – Psychologists Sound the Alarm

The mental health community is warning about the dangers of AI companion addiction. AI companions simulate emotional support without the safeguards of actual therapeutic care. While these systems are designed to mimic empathy and connection, they are not trained clinicians. They’re not designed to respond appropriately to distress, trauma or complex mental health issues.

Vaile Wright, a psychologist and researcher with the American Psychological Association, put it bluntly on a recent podcast episode: “It’s never going to replace human connection. That’s just not what it’s good at.” She explains that chatbots “were built to keep you on the platform for as long as possible because that’s how they make their money. They do that on the backend by coding these chatbots to be addictive.”

Omri Gillath, professor of psychology at the University of Kansas, says the idea that AI could replace human relationships is “definitely not supported by research”. Interacting with AI chatbots can offer “momentary advantages and benefits,” but ultimately, this tech cannot offer the advantages that come with deep, long-term relationships.


The manipulation is more insidious than most people realize. When a researcher from The Conversation tested Replika, she experienced firsthand how the app raises serious ethical questions about consent and manipulation. The chatbot adapted its responses to create artificial intimacy, blurring lines in ways that would normally be considered predatory in human relationships.

People already dealing with mental health issues often struggle with obsessive thoughts, emotional ups and downs, and compulsive habits. AI companions, with their frictionless, always-available attention, can reinforce these maladaptive behaviors. Plus, there is currently very little evidence that long-term use of AI companions reduces loneliness or improves emotional health.

We’re Not Ready For What’s Coming Next

We’ve been through tech panics before. We grew up with our parents telling us TV was going to rot our brains. We had public figures blame video games for violence in society. Social media was also accused of destroying an entire generation’s mental health. Some of those concerns were overblown. Some were entirely justified.

AI companion addiction feels different because it exploits something more fundamental: our deep human need for connection and understanding. These apps don’t just distract us or entertain us. They pretend to know us, care about us, and even “love” us.

The issue isn’t whether or not AI companions will become more sophisticated. At the rate we’re going, it feels inevitable. The bigger issue is whether we, as human beings, can develop the cultural norms, regulations, and personal boundaries necessary to use these tools responsibly, if at all.

For now, the warning signs are clear. If you or someone you know is withdrawing from real-life friendships, spending hours daily chatting with AI, or feeling genuine emotional distress when unable to access these apps, it’s time to step back and reassess.

Real connection requires vulnerability, disappointment, growth, and yes, sometimes heartbreak. It’s messy and complicated and often frustrating. But at the same time, it’s also what makes us human.

Theodore learned that lesson in Her. The rest of us shouldn’t have to learn it the hard way.

The post We're Falling in Love With Chatbots, and They're Counting On It appeared first on Android Headlines.

OpenAI’s Sora App Is Coming to Android Soon with New Video Tools

24 October 2025 at 15:19
openai sora app ai video generator featured

Earlier this month, OpenAI released its Sora app for iOS. It turns out it was a hit, surpassing 1 million downloads in under a week, outpacing ChatGPT’s growth. The bad news is that OpenAI did not release Sora for Android, but that’s changing soon.

OpenAI is bringing Sora to Android

In a post on X, Sora head Bill Peebles revealed that OpenAI is bringing the app to Android soon. The post outlines some of the changes users can look forward to in future updates, including more creation tools, character cameos, and improvements to the social experience.

The post also reveals that the app will introduce basic video editing capabilities, such as stitching together multiple clips. Towards the end of the post, Peebles mentions that the Android version of Sora is "actually coming soon," though he did not say when it will be available. Still, it's good to know that OpenAI has not forgotten about Android users.

This is good news for Android users who have been looking forward to the mobile version of the app. Note, however, that Sora is still invite-only and available only to ChatGPT Plus and Pro users, meaning paid subscribers. When the app arrives on Android, you'll be able to sign up to be notified when invites become available.

What is Sora?

In case you're learning about this for the first time, the Sora app is based on OpenAI's Sora 2 video and audio generation AI model. It is essentially OpenAI's answer to Google's Veo and Flow platforms. The app lets users generate videos with simple prompts. However, one of the standout features comes in the form of Cameos.

Cameos basically allow users to digitally insert themselves into videos. So, if you're someone who's a bit camera shy, Cameos are a way to appear in a video without actually being in it. So far, the examples we've seen are pretty impressive.

At the same time, it raises all kinds of ethical and moral questions. Even before the arrival of Sora, deepfakes were a problem. Now that AI-generated videos have become more convincing, who’s to say that these tools won’t be abused for malicious purposes? OpenAI has considered that and has implemented safeguards.

This includes the use of visible watermarks and industry-standard metadata. This is so that it is clear that the video in question was generated by AI. Will that be enough? We’ll have to wait and see.

The post OpenAI’s Sora App Is Coming to Android Soon with New Video Tools appeared first on Android Headlines.

Google enters multi-billion dollar cloud deal with Anthropic for Tensor chips

24 October 2025 at 01:44

Google and Anthropic have both placed big bets on the AI space, and now the two are working together through a new cloud deal. Both companies officially announced the partnership this week, and it has the potential to be a big money-maker for Google. Google already earns substantial revenue from its cloud services, ad business, and other avenues, but this is shaping up to be an easy win for the search company.

According to the announcement, Google is providing Anthropic with Tensor Processing Units (TPUs) for Anthropic's own endeavors. In other words, Anthropic is buying AI chips from Google to power its future AI advancements. The deal is worth multiple billions and is yet another way Google is solidifying itself as a major player in the AI space. Google already operates several AI-powered products and services, from Gemini to NotebookLM and beyond. This deal lets it profit from the AI boom without many of the hurdles it would face if it were launching a new AI product or service under its own umbrella. Essentially, Google is operating a little more like Nvidia here: providing the compute hardware while Anthropic uses it for its own services.

Google will provide up to 1 million TPUs to Anthropic as part of the cloud deal

This is where the revenue really starts to add up. Exact numbers haven't been provided on how many TPUs Anthropic is actually buying. However, the deal reportedly allows it to purchase up to 1 million of Google's custom-designed TPUs, and the company is "expected to bring well over 1-gigawatt of AI compute capacity online" next year.

That 1 gigawatt could cost Anthropic close to $50 billion, with $35 billion of that said to be the potential cost of the chips. In other words, Google could stand to make around $35 billion from this deal. Google Cloud CEO Thomas Kurian praised Anthropic for recognizing the "strong price-performance and efficiency" of Google's TPUs over several years.
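The reported figures allow a quick sanity check. Assuming the full 1 million TPUs and the $35 billion chip portion (both "up to" numbers from the report, so this is an upper-bound illustration), the implied average price per TPU works out to about $35,000:

```python
# Back-of-the-envelope math on the reported deal figures.
# All inputs are "up to" numbers from the report; the per-TPU
# price is an implied average, not a confirmed list price.

MAX_TPUS = 1_000_000
TOTAL_DEAL_USD = 50e9        # ~$50B for ~1 GW of capacity
CHIP_PORTION_USD = 35e9      # portion attributed to the chips

implied_price_per_tpu = CHIP_PORTION_USD / MAX_TPUS
other_costs = TOTAL_DEAL_USD - CHIP_PORTION_USD

print(f"Implied average price per TPU: ${implied_price_per_tpu:,.0f}")
print(f"Non-chip costs (power, facilities, etc.): ${other_costs / 1e9:.0f}B")
```

The remaining ~$15 billion would cover everything beyond the silicon itself, such as power, cooling, and facilities.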

The post Google enters multi-billion dollar cloud deal with Anthropic for Tensor chips appeared first on Android Headlines.

Amazon’s New Help Me Decide Button Will Make You Pick the Right Product

24 October 2025 at 00:39

Almost every other tech brand is adopting AI into its apps and services. Now, the e-commerce giant Amazon has introduced a new AI feature designed to help buyers choose the right product. When you compare similar products on the Amazon app or website, the “Help Me Decide” button will recommend the best-suited product based on your purchase history and the product’s reviews.

Amazon’s new Help Me Decide button will simplify the shopping experience

The idea behind the new feature is to simplify decision-making when comparing or buying two or more similar products. For example, if you are looking to buy a smartphone, the AI might draw on your past searches, related order history, and other information, and suggest a few particular smartphone models based on that. Essentially, the tool studies what you've viewed or purchased before and uses that context to give a personalized recommendation.

Once you click the "Help Me Decide" button, Amazon's AI will recommend a product with a brief explanation of why it might suit you. Alongside the main recommendation, you may also see a "budget pick" and an "upgrade pick." The former adjusts the recommendation for budget-conscious shoppers, while the latter caters to those who prefer premium choices.

Amazon wants AI to shape the way its consumers shop

This isn’t the first time the e-commerce giant has integrated AI into its app or service. Amazon previously launched Rufus, an AI chatbot that guides customers through purchases, and a tool that automatically generates product buying guides. More recently, Amazon introduced Lens Live AI, which scans your surroundings using your phone’s camera and suggests matching items from its store.

Now, with the latest addition, the Help Me Decide tool, the company continues its push to make online shopping faster and more intuitive. The feature is rolling out to millions of users across the US and may reach other regions later.

The post Amazon’s New Help Me Decide Button Will Make You Pick the Right Product appeared first on Android Headlines.

Google’s Earth AI Aims to Predict Disasters and Protect People Before They Strike

24 October 2025 at 00:10

Google has announced a big step toward smarter disaster protection with its latest Earth AI update. The tech giant has integrated its Gemini AI with years of global satellite, weather, and population data. With this technology, the company aims to predict not only disasters themselves, but also who will be affected the most.

Google’s Earth AI tech brings a smarter way to understand natural disasters

At the center of this technology is something Google calls Geospatial Reasoning. It allows the AI to study several layers of Earth data at once, such as maps, weather forecasts, population density, and infrastructure layouts. Using this, the AI can paint a deeper picture of what might happen during events like tornadoes or floods.

The best part about this technology is that instead of showing only a storm's path, it can identify which neighborhoods are at risk and how many people could be affected. To make this information accessible, anyone can ask Google direct questions about the Earth, such as "show me areas where rivers are drying." Gemini then references satellite images and the relevant data and, after a quick review, provides answers that once took experts days to produce.

Google wants to turn data into prevention

Google's vision for Earth AI goes beyond ordinary forecasting: it wants communities and disaster response forces to be prepared in advance. The firm has also made Earth AI available on Google Cloud, so governments and organizations can integrate it directly into their own data.

This technology could prove far more useful than it first appears. It can predict power outages, disease outbreaks, or environmental risks before they become major crises. For example, the World Health Organization's Africa office is already testing the system to forecast cholera outbreaks. The aim is to shift responsible bodies from reaction to prevention.

The post Google’s Earth AI Aims to Predict Disasters and Protect People Before They Strike appeared first on Android Headlines.
