Apple is increasingly leaning on Google to power its revamped Siri in the cloud, and we now know some of the specs of the custom Gemini model expected to bolster the new assistant.
Mark Gurman: Apple is planning to use a custom Gemini AI model with 1.2 trillion parameters to power the revamped Siri
Bloomberg's Mark Gurman is now reporting that Apple plans to use a gigantic, albeit tailored, Gemini AI model to power its upcoming revamped Siri. With 1.2 trillion parameters under its belt, the customized Gemini model would "dwarf" the 1.5 billion-parameter, bespoke AI […]
Google Maps is adding new features to make navigation and exploring places easier and more useful. The updates use Google’s Gemini AI to give clearer directions and helpful information about locations.
One of the new features is landmark-based navigation. Instead of just saying "turn in 500 feet" or pointing to traffic lights and stop signs, Maps can now use nearby landmarks to guide you.
For example, it could say, “turn right after the Thai Siam Restaurant,” and highlight that landmark on your map. Google will only use landmarks that are easy to see from the street, so you don’t get confused.
Gemini’s AI looks at Street View images and compares them with Google Maps’ 250 million places to make sure the landmarks are accurate. This feature is rolling out now as the default on Android and iOS in the US.
Image via Google
Google Maps is also introducing traffic alerts that work even when you are not using navigation. The app can notify you about traffic jams, road closures, or delays on your regular routes. This feature is available now for Android users in the US.
Moreover, there’s also an improved Google Lens inside Maps. You can point Lens at a location and ask questions like, “What is this place?” “Why do people like it?” or “Do they accept walk-ins?” Gemini AI will provide quick answers using information from Google Maps.
The updated Google Lens in Google Maps will roll out gradually later this month on Android and iOS in the US. Stay tuned for more information.
While driving, users can now ask Gemini to answer questions about places of interest on their route, return results about other topics (like sports or news), and even perform tasks like adding events to their calendar.
With Gemini, Google Maps introduces hands-free, conversational AI, landmark navigation, and visual search, fundamentally transforming the driving and exploration experience.
Similar to other AI models like ChatGPT and Claude, Google’s Gemini has a Deep Research feature. This allows the AI model to dive deeper into topics to get you more advanced answers. But now, an APK teardown by Android Authority has uncovered upcoming changes that would allow Gemini Deep Research to look through your own personal sources like Gmail, Google Drive, or Google Chat.
Gemini Deep Research goes deeper
According to the teardown, it seems that Google is working on an update to Gemini Deep Research that gives users more control over their sources, like Gmail, for example. This means that if you're looking to find out more about a topic that's been discussed in emails, or within Google Drive, you can specify that.
Some of you might think it's a waste not to have Deep Research expand to the web, but it makes sense. For some businesses, discussions are held internally and over email. This could be for an upcoming product launch, or for internal metrics that are not otherwise available publicly.
So, by allowing Gemini Deep Research to take a closer look at your Gmail inbox or the contents of your Google Drive, you can still extract the same quality of information. However, in its current form, Android Authority was unable to generate a report from Google Drive or Gmail, presumably because the feature isn't completely live yet.
In any case, it’s hard to say when Google will be bringing this feature to the public. Sometimes APK teardowns hint at potential changes or new features, but they don’t always make the cut. Either way, we’ll have to wait and see.
Integrating Gemini across its services
This deeper integration of Google services in Gemini is kind of funny. In the past, we would see Google integrate Gemini across its services. Now, Google services are being integrated into Gemini.
We had previously seen how Gemini’s integration into Google Drive allowed the AI to analyze your videos. Gemini for Gmail also helps users summarize their emails and automatically schedule meetings and events. If you’re deep into the Google ecosystem, then this upcoming update might be worth checking out.
The exchange founded by Cameron and Tyler Winklevoss has discussed unveiling products in this area as soon as possible, according to a report on Tuesday.
Gemini is eyeing regulated prediction markets as it awaits approval from U.S. derivatives regulators, marking its latest push to expand beyond crypto trading into event-based financial products. Crypto exchange Gemini is preparing to launch prediction market contracts, according to…
Google’s Gemini is about to get a useful update that will make its Deep Research feature even better. New information reveals that Gemini will let you choose where it searches for information, such as Gmail, Google Drive, Chat, or even the web.
Previously, Gemini’s Deep Research could only look through files you uploaded or stored in Google Drive. But a recent APK teardown (via Android Authority) of the Google app for Android shows that you’ll soon be able to pick specific sources for your research.
You can have Gemini search your Gmail inbox for relevant emails or scan all your Google Drive files for documents. This gives you more flexibility and control over what Gemini looks at when gathering information.
Image via Android Authority
A new Sources button will allow you to select which Google services you want Gemini to search. If you don’t want it to search the web, you can turn off that option. This is helpful if you want to avoid information from websites that may not be as reliable.
You can still upload specific files for research, but now Gemini can also search through all your Drive files or emails without you needing to choose them one by one. This new feature will make it easier to use Gemini if you need information from multiple sources at the same time.
As Google continues to improve Gemini, we may see more sources added in the future, like Google Photos. Stay tuned for more information.
The end is officially near for Google Assistant. Google has already been slowly transitioning users to its more advanced Gemini AI model. Now, recent findings in the app’s code confirm the final stage of that transition: soon, users will lose the ability to switch back to the legacy Google Assistant entirely in favor of Gemini.
For the last year, Google offered a "safety net": a simple setting that allowed users who preferred the old experience to revert to Assistant. Those days are numbered. Google is now actively working to remove the choice screen from Gemini and the "Digital assistants from Google" settings menu. This move effectively locks users into the Gemini experience.
Google Assistant is dead: Gemini prepares to become your only AI option
Beyond the philosophical shift away from Assistant, Google is introducing several user interface improvements to make the Gemini experience cleaner and more efficient for everyday use (spotted by Android Authority). When the AI is working on a complex query, users will see a refreshed processing animation, which improves visual feedback by letting people know instantly that Google is working on their request.
Google is also addressing the clutter that often plagues long, in-depth AI chats. With that in mind, the firm is implementing conversation management tools. They are adding “expand” and “collapse” buttons for long user queries, a feature already available on the web version of Gemini. This helps declutter the mobile interface by hiding the full text of a long prompt once the AI has responded.
Furthermore, there will be a new “Jump to bottom” button to navigate extensive conversation histories. This simple addition is highly useful for quickly catching up on the latest response without endlessly scrolling through previous exchanges.
Fans of the classic Google Assistant may mourn the loss of the familiar voice model. However, the move to Gemini is inevitable. Google is betting that the enhanced AI capabilities and the streamlined new interface will prove far more valuable than clinging to the past. The final switch may happen via a server-side update soon, making Gemini your only option.
The question of how Google will monetize its new AI products just got clearer. Robbie Stein, Google’s VP of Product, recently confirmed the company is moving forward with ads in its Search features, specifically AI Mode and “other AI experiences.” Crucially, however, the executive did not explicitly mention the standalone Gemini chatbot service.
Some recent reports have mentioned that Gemini could receive ads soon. They cite Stein’s appearance on the “Silicon Valley Girl” podcast as the primary source. However, there have been some misinterpretations of his actual words.
Google VP confirms ads are under testing in Search’s AI Mode
In the podcast, Stein confirmed the ad integration is already underway, stating, "We've started some experiments on ads within AI Mode and within Google AI experiences" (via Live Mint). This confirms that the generative features integrated directly into the Search experience will become the next frontier for advertising revenue. However, Gemini was never mentioned as a product that will get ads.
Robbie Stein emphasized that the company’s primary focus has been on building great consumer products first. But he added that “users are starting to see some ad experiments there too.”
As mentioned before, the standalone Gemini chatbot service remains officially outside of this specific announcement. But the term “Google AI experiences” is purposefully broad. It suggests that any AI product connected to the main Google stream could eventually be included.
Exploring ad formats
Stein hinted that the ads we see may not look like the traditional sponsored links we know from decades of web searching. Google is exploring “new and novel ad formats” that could integrate more naturally into conversational interfaces. He sees this as an opportunity, suggesting that advertising in this new context could be “even more helpful for you, particularly in an advertising context.”
This confirms the direction of Google’s monetization strategy. The company will follow an AI-enhanced future that will still be driven by advertising revenue. While the exact format and location of these new AI ads are still taking shape, the experiments are in progress. This signals the next great evolution of search monetization for the company.
Google's latest attempt to integrate generative AI into our living spaces, Gemini for Home, is generating headlines—but perhaps not the kind Google intended. Rolling out as part of Google's paid Home subscription, the feature promises smart daily summaries and conversational insights from your Nest camera footage. However, early reports suggest that the Gemini for Home system frequently misidentifies common household events, even reporting deer or nonexistent people, raising doubts about its accuracy.
The new feature utilizes the Gemini AI model to process video clips, generating summaries and answering questions via an "Ask Home" chatbot. According to reports, it handles basic tasks like creating automations well. However, its video identification skills are proving unreliable at times, leading to some genuinely unsettling notifications.
Google Gemini for Home misidentifies dogs as deer, accuracy in question
As noted by Ars Technica, a recurring issue involves the AI reporting fictional events. One user was alerted that, "Unexpectedly, a deer briefly entered the family room," when the camera was simply looking at a dog. This isn't a one-off mistake; the system frequently confuses dogs and shadows with "deer," "cats," or even "a person" roaming around an empty room.
Source: Ryan Whitwam (Ars Technica)
The problem is particularly jarring when Gemini mislabels security-critical events. An alert that “A person was seen in the family room” can cause genuine alarm, only for the user to check the feed and find absolutely nothing. After a few false positives, users quickly learn to distrust the system. These failures defeat the entire purpose of a security monitoring system.
Google attributes these errors to the nature of large language models. The firm explains that Gemini can make “inferential mistakes” when it lacks enough visual detail or base-level common sense. In fact, the AI is reportedly great at recognizing car models and logos. However, it struggles with the simple, necessary context that a human observer would never miss.
The question of launch readiness
For the security monitoring system to be genuinely useful, these inferential errors must become far less frequent. Google says it is "investing heavily in improving accurate identification" and encourages users to correct the model through feedback. For now, however, the core issue remains.
Source: Ryan Whitwam (Ars Technica)
It is surprising that Google chose to launch a premium feature that requires this much “hand-holding” to function correctly out of the box. Gemini for Home is a new product, and it will likely improve significantly. The company must gather more data and refine the model. However, releasing a security feature that effectively cries wolf about intruders right at launch risks eroding user trust and makes it difficult to justify the $20-per-month Advanced subscription fee. Users may find the $10 per month subscription, which offers less video history but avoids the unreliable AI features, a much smarter bet for now.
Apple appears to have conceded defeat in its in-house Siri revamp strategy, and is now leaning on Google to design a custom Gemini-based large language model (LLM) to power the new Siri in the cloud.
Mark Gurman: Apple is paying Google to design a custom Gemini AI model for its Private Cloud Compute framework
The legendary Apple tipster, Bloomberg's Mark Gurman, reported in his latest 'Power On' newsletter that the Cupertino giant seems to have thrown in the proverbial towel when it comes to creating an in-house AI model to power the revamped Siri's upcoming features, all couched under the […]
Recently, Apple CEO Tim Cook confirmed that the company's long-awaited upgraded Siri will be coming in 2026. However, we expressed our skepticism. Looking at the state of Apple Intelligence, we can't say we have a lot of confidence in the company's ability to pull it off. Our fears may be put to rest, though: in his Power On newsletter, Bloomberg's Mark Gurman revealed that the new Siri could be powered by Google Gemini.
A little helping hand from Google
In the newsletter’s FAQ section, Gurman spoke more about the new Siri model and how Google Gemini will power it. According to Gurman, Apple has hired Google to create a custom Gemini-based model that will run on Apple’s own private cloud servers. This version of Google Gemini will help to power the new Siri model.
Apple had previously held a “bake-off” between Anthropic and Google. It seems that while Apple preferred Anthropic’s model, the company’s pre-existing relationship with Google made more sense. That being said, there are a couple of things that we should take note of.
While Apple will use a Google Gemini-based model for its new Siri, it won't actually be Gemini. This means that if you've downloaded the Gemini app on your iPhone, or used it on the web or Android, the version Apple will use won't be 100% the same. Its capabilities might be similar, since it is based on Gemini, but it will still be Siri at the end of the day.
This is actually good news
If Gurman’s report is accurate, we can breathe a sigh of relief. Like we said, Apple Intelligence is horrible to use and even calling it basic is an overstatement. In this regard, it’s good to see that Apple is quietly acknowledging that it cannot do everything itself.
Apple has already teamed up with OpenAI for more complex and advanced Apple Intelligence tasks. There have been rumors that Apple could offer Gemini as one of the model options for Apple Intelligence. We’re not sure if that will still happen, but maybe Siri powered by Gemini makes more sense.
In any case, take it with a grain of salt. 2026 is shaping up to be a massive year for Apple. If the company’s plans are on track, we should learn more about this upgraded version of Siri at WWDC 2026.