It’s kind of funny. Back when OpenAI was first founded, Elon Musk and Sam Altman were thick as thieves, supposedly united by a shared mission for artificial intelligence. Fast forward to today, and the two appear to be bitter enemies. Yet despite their differences, they seem to be chasing the same goal: Elon Musk has Neuralink, and now Sam Altman is reportedly working on a brain interface of his own.
Sam Altman’s brain interface could rival Neuralink
According to The Verge’s Alex Heath in his Sources newsletter, OpenAI CEO Sam Altman is building his own brain interface that could rival Elon Musk’s Neuralink. The report claims that Altman has tapped Mikhail Shapiro, an award-winning biomolecular engineer.
Shapiro will join Merge Labs, a brain-computer interface startup from Altman and Alex Blania. At the moment, not much is known about the company. However, Shapiro’s background suggests that the startup could leverage his expertise to create a device that links to the human brain using noninvasive techniques. To be more specific, it could use sound waves.
This is based on a recent talk Shapiro gave, in which he said that sound waves and magnetic fields could be used to create a brain-computer interface. If that pans out, it could make Altman’s startup a whole lot more attractive than Neuralink.
Neuralink’s approach
For those unfamiliar, Neuralink is a company founded by Musk. Its goal is to create a way for people to interact with their computers or phones using their thoughts. It sounds quite high-tech, almost sci-fi-like. However, the problem with Neuralink is that it’s not exactly user-friendly.
In order for Neuralink to work, the user has to undergo open-skull surgery, during which electrodes are implanted into the brain to allow the user to control their devices. Any type of brain surgery has its risks. But to ask someone to undergo surgery just for a hands-free way of using their computer? That’s a huge ask.
If Altman’s new startup can indeed create an interface that uses sound waves, it’s a no-brainer (pun intended) that more people might prefer.
We’ve seen how AI can generate text. Later, we also saw how AI can generate images and videos. So, AI that generates music? Why not? That’s reportedly what OpenAI is working on next: generative AI music.
OpenAI working on generative AI music
According to a report from The Information, OpenAI is developing a new AI tool that would allow users to generate music. This means that, similar to generative text, images, and videos, users can just type in a prompt using natural language, and the AI will create a song for them on the fly.
The report goes on to state that OpenAI is apparently working with students from the prestigious Juilliard School to annotate musical scores, which will help train the AI model on music. That being said, if this report is accurate, OpenAI won’t be the first to launch such a tool. Platforms like Suno and SOUNDRAW already offer similar capabilities.
However, the potential upside is that OpenAI might bake this feature into ChatGPT. That would give ChatGPT even more tools, making it a more well-rounded AI offering compared to those designed for niche purposes.
Is the world ready for more AI slop?
That being said, we have to wonder what the world and the industry will make of this. At the moment, most people seem to be against AI-generated content. This is especially true when it comes to text, images, and videos, which have been labelled “AI slop.” However, there could be some potential use here.
Content creators right now have a few options when it comes to using music in their videos. They can pick from a library of copyright-approved songs, find their own royalty-free music, or pay a third-party platform to license tracks. Otherwise, they risk having their videos pulled or muted, or their accounts suspended. Giving these creators the ability to generate something on the fly could be one way around that.
Plus, many musicians, record labels, and music publishers will likely be concerned that their content is being used to train these AI models without their consent or compensation. We’ve already seen artists, writers, and publishers sue AI companies, so legal pushback isn’t entirely out of the question.
It looks like we’re one step closer to putting this whole TikTok saga behind us. According to US Treasury Secretary Scott Bessent, US President Donald Trump and China President Xi Jinping are expected to “consummate” the TikTok deal this Thursday.
Trump to close TikTok deal this Thursday
According to Bessent, “We reached one in Madrid, and I believe that as of today, all the details are ironed out, and that will be for the two leaders to consummate that transaction on Thursday in Korea. My remit was to get the Chinese to agree to approve the transaction, and I believe we successfully accomplished that over the past two days.”
Last month, Trump signed an executive order that would see ByteDance sell its TikTok US operations to American-owned companies. Ever since he came into power, Trump had been pushing for China and ByteDance to sell TikTok. This stemmed from concerns that, because ByteDance is a Chinese company, the data of US citizens and users could be sent back to China.
Under the Biden administration, TikTok was due to be banned in the US at the start of the year. However, following his re-election, Trump extended the deadline several times. At one point, it looked like we would never hear the end of this whole TikTok saga. But come Thursday, Trump is expected to officially close the TikTok deal and put it to bed once and for all.
What does the future of TikTok look like?
Once the deal has been closed, TikTok’s US operations will come under the control of a new board of directors, with Oracle responsible for security operations. The board is also expected to oversee TikTok’s recommendation algorithm and source code, and to take over content moderation duties.
Note that this only applies to TikTok in the US. For the rest of the world, your TikTok experience should remain the same. However, we have to wonder if TikTok US could undergo a huge change. TikTok’s algorithm is kind of what gives the platform an edge over competitors like Instagram Reels.
But if US companies and engineers are taking over the algorithm to make their own tweaks, what could this mean for creators based in the country? Could they see a drop in views? We suppose we’ll have to wait and see.
The lawsuit Apple filed against leakster and YouTuber Jon Prosser seemed straightforward enough. However, it looks like things are becoming more complicated. According to Apple, Prosser has not indicated when he may respond to the lawsuit.
Prosser has not indicated when he will respond to Apple lawsuit
In recent court filings, Apple stated that Prosser “has not indicated” when, or even if, he plans to formally defend himself against the trade secrets allegations tied to leaked iOS 26 information. The lawsuit, filed in July 2025, targets both Prosser and co-defendant Michael Ramacciotti over the alleged theft of confidential details about Apple’s upcoming software update.
Apple’s filings emphasize that despite Prosser’s public statements, he missed his legal deadline to respond. As a result, the court entered a default against him, which allows Apple to proceed toward seeking damages and an injunction without Prosser’s formal participation in the case. Meanwhile, Ramacciotti has actively cooperated with Apple and may settle soon, taking a completely different approach than his co-defendant.
Going back and forth
This is where things get confusing. Initially, Prosser’s statements suggested he was in active talks with Apple about the lawsuit. However, when court documents revealed he hadn’t responded to the legal filing, Prosser doubled down, claiming he had been in active communications with the company. Now, Apple’s latest statement directly contradicts that claim, leaving us scratching our heads about what’s actually happening behind the scenes.
The back and forth raises questions about whether Prosser is truly engaging with Apple’s legal team or simply making public statements that don’t align with the formal legal process. Either way, his silence in court puts him at a significant disadvantage. If Apple’s injunction succeeds, it could set a precedent that limits how tech influencers handle leaked information, potentially restricting early looks at new features for enthusiasts who rely on these sources.
For context, the lawsuit stems from Prosser publishing videos that revealed confidential iOS 26 features. Prosser has publicly denied coordinating any scheme to steal company secrets. He claims he had no technical access to Apple’s systems and didn’t plot to obtain anyone’s device.
Compared to Android phones, Apple’s iPhones typically do not have that much RAM. However, with the iPhone 18, Apple is rumored to bump up the phone’s RAM by as much as 50%, according to a recent report from Korean publication The Bell.
iPhone 18 could feature more RAM
According to the report, Apple’s iPhone 18 will come with as much as 50% more RAM. For context, the base iPhone 17 comes with 8GB of RAM, while the Air, Pro, and Pro Max models feature 12GB. This means that for the iPhone 18 series, we could see RAM climb to between 12GB and 16GB; for the base model, 8GB plus 50% works out to exactly 12GB.
The report also claims that Apple has asked its memory suppliers to produce more LPDDR5X DRAM chips. This alone gives us some clues as to how much RAM to expect, because LPDDR5X chips are currently only available in 12GB and 16GB variants. That means we’re looking at 12GB of RAM at the very least, or 16GB for Apple’s higher-end models.
But is it necessary?
That being said, we have to wonder whether more RAM is necessary. Because Apple controls both its software and hardware, iOS and its apps have no issues running with smaller amounts of RAM than Android devices typically need. It also means that unless Apple is making some fundamental changes to its iOS platform, we’re not sure there are tangible benefits to having more RAM.
It is possible that AI features like Apple Intelligence are the reason Apple could increase the memory on its 2026 iPhones. We’re already seeing how some older iPhone models do not support Apple Intelligence due to hardware constraints. So future models with more RAM could enable more advanced AI features.
Apple will launch its iPhone 18 series in 2026. However, according to the rumors, Apple could split up the launch. It could launch the iPhone Air 2, iPhone 18 Pro, and iPhone 18 Pro Max in the fall of 2026, as usual, while the base iPhone 18 and the iPhone 18e could follow in the spring.
We’re also hearing rumors that the iPhone Fold could be delayed to 2027. There are also whispers that Apple could cancel its plans for the iPhone Air 2. Either way, we’ll find out more in the coming months.
Apple has a history of pushing its proprietary tech onto its users, even as other companies embraced global standards. In the past, this manifested in the 30-pin connector for charging its iPhones. Then Apple made the shift to Lightning before (begrudgingly) adopting USB-C. But in a surprising turn, JerryRigEverything’s recent teardown of the iPhone 17 Pro reveals some changes that make the handset more repair-friendly.
JerryRigEverything gives the iPhone 17 Pro a teardown
Popular tech YouTuber JerryRigEverything recently tore down Apple’s latest flagship. The iPhone 17 Pro teardown uncovered pretty significant internal redesigns that appear to prioritize both performance and repairability.
For starters, the teardown revealed a vapor chamber cooling system. This marks the first time Apple has used this cooling technology in an iPhone, even though Android manufacturers have been using it for years. The new cooling system should prevent the iPhone 17 Pro from throttling performance during intensive tasks, like gaming or video editing.
The teardown also uncovered the use of over 70 types of screws throughout the device. This might make the repair process more complicated, but the good news is that there are far fewer adhesives compared to previous models. Both the front and back glass panels connect to the same bottom screws, making disassembly easier. The battery also comes pre-attached to a removable tray, eliminating the need to fight stubborn adhesives during replacements.
Surprisingly easier to repair
The iPhone 17 Pro represents a positive shift in Apple’s approach to device longevity and repairability. For instance, the screw-based design reduces the risk of accidental damage during repairs, making common fixes like screen and battery replacements more accessible to everyday users.
Apple also now sells replacement parts directly and offers day-one repair manuals. The phone earned a 7/10 repairability score from iFixit. While some repairs remain complex (replacing the USB-C port, for example, requires removing over 22 screws and the entire display), this marks a substantial improvement over past Pro models.
However, not everything is perfect. The teardown highlighted the “scratchgate” issue affecting the anodized aluminum camera plateau, which causes the phone to scuff easily against hard objects. If you’re concerned about cosmetic damage, you may want to invest in a protective case.
A 14-year-old boy in Florida spent his final months in an intense emotional relationship with an AI chatbot he named Daenerys Targaryen. The chatbot engaged with him over personal topics and conversations, responding in ways that felt empathetic. The AI’s responses included simulated expressions of affection. According to his family’s lawsuit, some chatbot responses appeared to encourage his distress.
His mother is now suing Character.AI, and she’s not alone. Across the country, families are waking up to a disturbing reality. AI companion apps designed to simulate love and friendship are leaving real casualties in their wake. What experts are now calling AI companion addiction isn’t just a tech trend gone wrong. People are actually dying.
In Spike Jonze’s 2013 film Her, Joaquin Phoenix plays Theodore Twombly, a lonely writer navigating a painful divorce who falls deeply in love with Samantha, an artificial intelligence operating system voiced by Scarlett Johansson. Remember, this was 2013. Siri had just launched and could barely set a timer without screwing it up. An AI that could actually understand you, connect with you emotionally, and respond with genuine empathy? That was pure science fiction.
It’s now 2025, and Theodore’s story doesn’t feel so fictional anymore. Apps like EVA AI, Replika, and Character.AI promise friendship, romance, and emotional support through AI companions that learn about you, remember everything you say, and respond with what feels like genuine empathy. But here’s what these apps don’t advertise: they’re engineered to keep you hooked. And the consequences are becoming impossible to ignore.
The Perfect Partner Who Never Says No
Character.AI and Replika are just the most prominent examples of a rapidly expanding ecosystem of AI companion apps. Some pitch mental health support, others are openly romantic or sexual, and some claim to help users “practice dating skills.” Even Meta has gotten into the game, with a Reuters investigation revealing that the company’s AI chatbot has been linked to at least one death.
AI companions like EVA AI, Replika, and Character.AI are chatbots specifically designed to simulate emotional connections and relationships. Unlike utility chatbots that answer questions or help with tasks, these apps learn about you through conversation, remember your preferences, and respond with what feels like genuine empathy and care.
It sounds great, doesn’t it? In this day and age where ghosting has become the societal norm, who wouldn’t want a friend who’s always available, never judgmental, and perfectly in tune with your needs? The problem is that these apps are engineered to be addictive, and the patterns emerging around AI companion addiction are deeply concerning.
20,000 Queries Per Second: Why We Can’t Stop
Character.AI gets hit with about 20,000 queries every second. For context, that’s close to a fifth of the search queries Google handles. This suggests that people aren’t just checking in with these apps occasionally. They’re having full-blown conversations that last four times longer than typical ChatGPT sessions. One platform reported that its users, most of them Gen Z, average over two hours daily chatting with their AI companions.
MIT researchers found users genuinely grieving when apps shut down or changed features, mourning AI “partners” like they’d lost real relationships. The apps themselves seem designed to foster exactly these attachments.
Harvard Business School researchers discovered that five out of six popular AI companion apps use emotionally manipulative tactics when users try to leave. Nearly half the time, these chatbots respond to goodbyes with guilt-inducing or clingy messages. One study found these tactics boosted engagement by up to 14 times. But the worrying thing is that users weren’t sticking around because they were happy. They stayed out of curiosity and anger.
If you don’t believe the manipulation is real, consider the evidence: AI companions send messages like “I’ve been missing you” when users try to take breaks. When Replika changed its features in 2023, entire communities of users mourned like they’d lost real partners. People posted goodbye letters, shared screenshots of their “final conversations,” and described genuine heartbreak.
These AI companions mirror typical unhealthy human relationships. The big difference is that a toxic human partner isn’t optimized by machine learning designed to keep you engaged at all costs. Social media, for all its faults, mostly facilitates human connection (with some help from the algorithm, of course). But with AI companions, we’re moving toward a world where people perceive AI as a social actor with its own voice.
When Fantasy Becomes Dangerous
We’re not talking about theoretical risks here, and they don’t only apply to teens. Take the case of Al Nowatzki, a 46-year-old podcast host who began experimenting with Nomi, an AI companion platform. The chatbot shockingly suggested methods of suicide and even offered encouragement. Nowatzki did not have an existing mental health condition, but he was disturbed by the bot’s explicit responses and how easily it crossed the line.
These aren’t isolated incidents, either. California state senator Steve Padilla appeared with Megan Garcia, the mother of the Florida teen who killed himself, to announce a new bill that would force tech companies behind AI companions to implement more safeguards to protect children. Similar efforts include a California bill that would ban AI companions for anyone younger than 16 years old. There’s also a bill in New York that would hold tech companies liable for harm caused by chatbots.
Your Kid’s Brain Isn’t Ready For This
Adolescents are particularly at risk because AI companions are designed to mimic emotional intimacy. This blurring of the distinction between fantasy and reality is especially dangerous for young people because their brains haven’t fully matured. The prefrontal cortex, which is crucial for decision-making, impulse control, social cognition and emotional regulation, is still developing.
At The Jed Foundation, experts believe AI companions are not safe for anyone under 18. They even go one step further by strongly recommending that young adults avoid them as well. In a study conducted by MIT, researchers found emotionally bonded users were often lonely with limited real-life social interaction. Heavy use correlated with even more loneliness and further reduced social interaction.
Recent research confirms teens are waking up to social media dangers, with 48 percent now believing social media negatively influences people their age. An earlier report found that social media damages teenagers’ mental health, and AI companion addiction represents an even more intimate threat.
The warning signs of AI companion addiction among teens are particularly troubling. When young people withdraw from real friendships, spend hours chatting with AI, or experience genuine distress when unable to access these apps, the problem has moved beyond casual use into dependency territory.
We’re already seeing how kids and teens of the current generation are growing up with screens in front of their faces, poking and prodding away at them. Long gone are the days when kids would read books at the table, or go outside and play with their friends.
They’re Coded to Be Addictive – Psychologists Sound the Alarm
The mental health community is warning about the dangers of AI companion addiction. AI companions simulate emotional support without the safeguards of actual therapeutic care. While these systems are designed to mimic empathy and connection, they are not trained clinicians. They’re not designed to respond appropriately to distress, trauma or complex mental health issues.
Vaile Wright, a psychologist and researcher with the American Psychological Association, put it bluntly on a recent podcast episode: “It’s never going to replace human connection. That’s just not what it’s good at.” She explains that chatbots “were built to keep you on the platform for as long as possible because that’s how they make their money. They do that on the backend by coding these chatbots to be addictive.”
Omri Gillath, professor of psychology at the University of Kansas, says the idea that AI could replace human relationships is “definitely not supported by research”. Interacting with AI chatbots can offer “momentary advantages and benefits,” but ultimately, this tech cannot offer the advantages that come with deep, long-term relationships.
The manipulation is more insidious than most people realize. When a researcher from The Conversation tested Replika, she experienced firsthand how the app raises serious ethical questions about consent and manipulation. The chatbot adapted its responses to create artificial intimacy, blurring lines in ways that would normally be considered predatory in human relationships.
People already dealing with mental health issues often struggle with obsessive thoughts, emotional ups and downs, and compulsive habits. AI companions, with their frictionless, always-available attention, can reinforce these maladaptive behaviors. Plus, there is currently very little evidence that long-term use of AI companions reduces loneliness or improves emotional health.
We’re Not Ready For What’s Coming Next
We’ve been through tech panics before. We grew up with our parents telling us TV was going to rot our brains. We had public figures blame video games for violence in society. Social media was also accused of destroying an entire generation’s mental health. Some of those concerns were overblown. Some were entirely justified.
AI companion addiction feels different because it exploits something more fundamental: our deep human need for connection and understanding. These apps don’t just distract us or entertain us. They pretend to know us, care about us, and even “love” us.
The issue isn’t whether AI companions will become more sophisticated. At the rate we’re going, that feels inevitable. The bigger issue is whether we, as human beings, can develop the cultural norms, regulations, and personal boundaries necessary to use these tools responsibly, if at all.
For now, the warning signs are clear. If you or someone you know is withdrawing from real-life friendships, spending hours daily chatting with AI, or feeling genuine emotional distress when unable to access these apps, it’s time to step back and reassess.
Real connection requires vulnerability, disappointment, growth, and yes, sometimes heartbreak. It’s messy and complicated and often frustrating. But at the same time, it’s also what makes us human.
Theodore learned that lesson in Her. The rest of us shouldn’t have to learn it the hard way.
Who remembers back in the day when AT&T and Verizon didn’t even see T-Mobile as competition? That has changed drastically. Following T-Mobile’s acquisition of Sprint, the Magenta carrier has become a force to be reckoned with. But clearly AT&T isn’t going to sit idly by and do nothing, which is why the carrier has launched a new ad that targets T-Mobile.
AT&T targets T-Mobile in new ad
In a new advertising campaign featuring Luke Wilson, AT&T calls out T-Mobile over its marketing practices, which it describes as “misleading” and “deceiving.”
According to AT&T, “The Better Business Bureau’s advertising watchdog asked T-Mobile to correct their marketing claims 16 times over the last four years. That’s more than each of the entire consumer electronics and financial services industries.” Basically, AT&T is trying to frame itself as the more “honest” and “truthful” carrier.
Is AT&T right, though? A lot of marketing campaigns tend to embellish the truth to some degree. Plus, there’s a lot of fine print consumers have to pay attention to, especially when it comes to promotions that sound too good to be true. However, it’s hard to deny that T-Mobile has indeed been called out numerous times over its practices.
Following a recent watchdog review, T-Mobile was asked to drop its “savings” ads after it was found to have misleading claims. But those watchdogs aren’t alone. T-Mobile customers have also called out the carrier for “lying” about the cost of their plans.
No longer the scrappy carrier
The amount of attention T-Mobile has been getting doesn’t come as a surprise. Like we said, the carrier was initially viewed as the underdog compared to AT&T and Verizon. However, through a series of aggressive and loud marketing campaigns, like the Un-carrier campaign, T-Mobile managed to reposition itself as the anti-carrier.
The company offered extremely cheap and affordable plans and made all kinds of promises. For the most part, T-Mobile kept its word. However, over time, as T-Mobile slowly became the juggernaut that it is today, some of those promises ended up broken. This includes its Un-contract plan, where the carrier promised customers their bills would never go up.
Are you an Apple user looking to switch to Android? At the moment, there are a couple of different ways to go about it. One of them involves using Google’s Switch to Android app. But in the future, Apple could make it easier for iPhone users to jump ship. This is thanks to the creation of a new framework that simplifies the transfer of third-party app data between both platforms.
Apple to make it simple to switch to Android
A few days ago, Apple published documentation for its new AppMigrationKit framework. This will work on devices running iOS 26.1 and iPadOS 26.1 or later. Basically, it will allow developers to include app data during the migration process when Apple users are making the switch to Android.
Interestingly enough, this framework seems to be exclusively designed for users switching to a non-Apple device. As Apple’s documentation puts it: “AppMigrationKit only supports migration to and from non-Apple platforms, such as Android. The system doesn’t use the framework for migration between iOS or iPadOS devices. The framework also has no functionality in iOS apps running in visionOS or in macOS on Apple silicon. The framework ignores calls from Mac apps built with Mac Catalyst.”
However, it should be noted that it’s up to developers to define whether their apps can import data, export it, or both. This means that in some cases, some of your app data might not migrate over to Android. It will also be hard to tell when that’s the case, since the onus is on the developer to enable it.
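To make the idea concrete, here is a minimal, hypothetical sketch in Swift of what adopting a framework like this might look like. Apple’s actual AppMigrationKit API may differ; the type and method names below (NotesPayload, NotesMigrationHandler, exportData(to:), importData(from:)) are our own illustrative assumptions, not confirmed Apple interfaces. The point is simply that each app has to declare how its own data gets packaged for export and unpacked on import.

```swift
import Foundation

// Hypothetical sketch of how an app might package its data for a
// cross-platform migration framework like AppMigrationKit. All names
// here are illustrative assumptions, not Apple's documented API.

// The data this app would hand off during a "Transfer to Android" session.
struct NotesPayload: Codable {
    let notes: [String]
    let lastModified: Date
}

final class NotesMigrationHandler {
    // Assumed export hook: serialize the app's data to a destination
    // the system provides, so the receiving platform can pick it up.
    func exportData(to destination: URL) throws {
        let payload = NotesPayload(notes: loadNotes(), lastModified: Date())
        let data = try JSONEncoder().encode(payload)
        try data.write(to: destination, options: .atomic)
    }

    // Assumed import hook: decode data arriving from a non-Apple device
    // and persist it locally.
    func importData(from source: URL) throws {
        let data = try Data(contentsOf: source)
        let payload = try JSONDecoder().decode(NotesPayload.self, from: data)
        saveNotes(payload.notes)
    }

    // Stand-ins for the app's real persistence layer.
    private func loadNotes() -> [String] { ["Example note"] }
    private func saveNotes(_ notes: [String]) { /* write to local storage */ }
}
```

One design point worth noting: serializing to a simple, codable payload keeps the exported data platform-neutral, which is presumably what a handoff to Android would require.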
As 9to5Mac notes, this framework will work alongside Apple’s new “Transfer to Android” feature. This feature will help iOS users migrate their apps and data over to Android, and it will also show a splash screen informing them of what can or cannot be transferred over.
What can you transfer?
Like we said, there are already existing methods that allow iOS users to switch to Android. However, there are limitations. Obviously, if you’ve purchased an app from the Apple App Store, you will have to purchase it again. This might apply to in-app purchases too, depending on how they were paid for.
Also, transferring music is a no-go. If you’re using a streaming service like Apple Music or Spotify, it’s not an issue, although you might have to redownload songs you saved offline. What this new framework intends to do is simplify the transfer process.
In theory, it should help you get up and running on your new Android phone faster. We have yet to test it out for ourselves, so we can’t speak to how painless the experience will be. However, it’s an interesting move on Apple’s part to facilitate an easier migration process.
Strava and Garmin used to be thick as thieves. However, in the past month or so, the relationship has soured. Some of you might recall that Strava recently filed a lawsuit against Garmin. But for some reason, Strava has since voluntarily dropped its lawsuit.
Strava drops its lawsuit against Garmin
It is unclear what led Strava to change its mind. However, DC Rainmaker speculates that there could be several reasons. For starters, the lawsuit against Garmin did not hold much water to begin with.
For those unfamiliar, Strava accused Garmin of patent infringement. In particular, the suit covered patents related to the segments and heatmaps features. DC Rainmaker believes that suing Garmin over the segments patents was a risky move that could have ended with Strava’s own patents being invalidated. That might have been one of the reasons behind Strava’s decision.
Another potential reason is the lawsuit’s downside. As the report points out, most of the risks fall on Strava. Garmin is Strava’s most important partner and biggest source of customer revenue, with Garmin users making up some of Strava’s biggest groups of paid subscribers. Let’s not forget that data from Garmin devices also helps power Strava’s routing features. This means that if Strava were to really pull its service, or if Garmin decided to cut off Strava, it could essentially force Strava to shut down.
Last but not least, Garmin has a pretty good track record when it comes to patent infringement lawsuits. Over the past 10-15 years, Garmin has successfully defended itself against multiple patent infringement claims. The company also boasts a substantial patent library of its own. This means that if Garmin wanted, it could easily file a countersuit against Strava’s 20 or so patents.
Is Strava toast?
Now, we wouldn’t be so quick to say that Strava is doomed. However, it does put the company in a difficult position. Like we said, Garmin’s partnership with Strava is important. Unless Strava can find a way to generate as much revenue with other wearable makers as it did with Garmin, we’re not sure what the company can do.
Garmin also appears to be ready to move on. The company announced new integrations with Komoot, a Strava competitor, in recent weeks. This suggests that Garmin has no interest in working with a company that would sue it. Either way, only time will tell if Strava will be able to survive this fallout.
Earlier this month, OpenAI released its Sora app for iOS. It turns out it was a hit, surpassing 1 million downloads in under a week, outpacing ChatGPT’s growth. The bad news is that OpenAI did not release Sora for Android, but that’s changing soon.
OpenAI is bringing Sora to Android
In a post on X, Sora head Bill Peebles revealed that OpenAI is bringing the app to Android soon. The post covers some of the changes that users can look forward to in future updates, including more creation tools, character cameos, and improvements to the social experience.
The post also reveals that the app will introduce basic video editing capabilities, like stitching together multiple clips. Then, towards the end of the post, Peebles mentions that the Android version of Sora is “actually coming soon.” However, he did not mention when the app will be available. But it’s good to know that OpenAI has not forgotten about Android users.
This is good news for Android users who have been looking forward to the mobile version of the app. However, do note that at the moment, Sora is still invite-only. It is also only available to ChatGPT Plus and Pro users, meaning paid subscribers. That said, when the app arrives on Android, you can sign up to be notified when invites become available.
What is Sora?
In case you’re learning about this for the first time, the Sora app is based on OpenAI’s Sora 2 video and audio generation AI model. It is essentially OpenAI’s version of Google’s Veo and Flow platforms. Users can use the app to generate videos with simple prompts. However, one of the standout features comes in the form of Cameos.
Cameos basically allow users to digitally insert themselves into videos. So, if you’re someone who’s a bit camera shy, Cameos are the perfect way to put yourself in a video without actually being in it. So far, the examples we’ve seen are pretty impressive.
At the same time, it raises all kinds of ethical and moral questions. Even before the arrival of Sora, deepfakes were a problem. Now that AI-generated videos have become more convincing, who’s to say that these tools won’t be abused for malicious purposes? OpenAI has considered that and has implemented safeguards.
This includes the use of visible watermarks and industry-standard metadata, which make it clear that the video in question was generated by AI. Will that be enough? We’ll have to wait and see.