News from the AI & ML world

DeeperML - #aimobile

Alexey Shabanov@TestingCatalog //
Samsung is reportedly considering a significant shift in its AI strategy for the upcoming Galaxy S26 series, potentially replacing Google's Gemini with Perplexity AI as the default AI chatbot. According to Bloomberg, the company is nearing a deal with Perplexity to deeply integrate its AI services into next year's Android phones. The move could signal a departure from Samsung's heavy reliance on Google's AI technologies and make its devices less Google-centric. Samsung aims to preload the Perplexity app on its phones and incorporate Perplexity's search functionality directly into its internet browser.

Discussions between the two companies have also explored the possibility of integrating Perplexity's AI tech into Samsung's Bixby assistant. While Samsung has seemingly deprioritized Bixby in recent years, opting for Google's Gemini, a partnership with Perplexity could revitalize its alternative assistant. This integration could provide Bixby with more powerful search capabilities, supercharging its functionality. Furthermore, Samsung and Perplexity are reportedly in the early stages of discussing the development of a new "AI-infused" operating system.

The deal between Samsung and Perplexity could be finalized soon, although it's unlikely to be implemented in the Galaxy Z Fold 7 and Galaxy Z Flip 7, which are expected to launch in August. The Galaxy S26 series, anticipated to be revealed in early 2026, is the target for this integration. If Perplexity becomes the default on all Samsung devices, it could significantly boost the adoption of Perplexity's search tools, making it a major win for the AI company. Perplexity is also in the process of raising a $500 million round at a $14 billion valuation.

Recommended read:
References :
  • PCMag Middle East ai: Samsung's Galaxy S26 May Drop Google Gemini as Its Default AI Chatbot
  • TechCrunch: Samsung may incorporate Perplexity’s AI tech in its phones
  • www.pcmag.com: Samsung's Galaxy S26 May Drop Google Gemini as Its Default AI Chatbot
  • www.techradar.com: The Samsung Galaxy S26 series could have Perplexity AI baked in
  • Data Phoenix: Perplexity launches Labs, an AI tool that helps users create reports, dashboards, and web apps
  • Mark Gurman: Samsung is nearing wide-ranging deal with Perplexity on an investment and deep integration into devices, Bixby assistant and web browser, I’m told.
  • Dataconomy: Samsung may invest in Perplexity and integrate it into Galaxy phones
  • www.lifewire.com: Samsung + Perplexity Might Be the AI Power Couple That Could Redefine Your Phone
  • www.zdnet.com: If Perplexity's app and assistant get preloaded on upcoming Galaxies, what happens to Google Gemini integration?

Alexey Shabanov@TestingCatalog //
Google has launched the NotebookLM mobile app for Android and iOS, bringing its AI-powered research assistant to mobile devices. This release marks a significant step in expanding access to NotebookLM, which initially launched as a web-based tool in 2023 under the codename "Project Tailwind." The mobile app aims to offer personalized learning and efficient content synthesis, allowing users to interact with and process information on the go. After months of anticipation, the app is now officially available to everyone, offering NotebookLM's core features, with more functionality promised in future updates.

The NotebookLM mobile app focuses on audio-first experiences, with features like audio overviews that generate podcast-style summaries. These summaries can be played directly from the list view without opening a project, making it feel like a media player for casual content consumption. Users can also download audio overviews for offline playback and listen in the background, supporting learning during commutes or other activities. Moreover, the app supports interactive mode in audio sessions, where users can ask questions mid-playback, creating a live dialogue experience.

The mobile app retains the functionality of the web version, including the ability to create new notebooks and upload sources like PDFs, Google Docs, and YouTube videos. Users can add sources directly from their mobile devices by using the "Share" button in any app, making it easier to build and maintain research libraries. NotebookLM relies only on user-uploaded sources, ensuring reliable and verifiable information. The rollout underscores Google’s evolving strategy for NotebookLM, transitioning from a productivity assistant to a multimodal content platform, appealing to students, researchers, and content creators seeking flexible ways to absorb structured knowledge.
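
The "Share" integration rides on Android's standard share sheet. As a rough, hypothetical sketch of that mechanism (not NotebookLM's actual code), an app that wants to accept shared links or PDFs declares an ACTION_SEND intent filter and reads the shared text or stream when it is launched:

```kotlin
// Illustrative sketch: accepting content from the Android share sheet, the
// mechanism behind "add a source via the Share button". Class and helper
// names are hypothetical, not NotebookLM's code.
//
// AndroidManifest.xml (excerpt):
// <activity android:name=".AddSourceActivity" android:exported="true">
//   <intent-filter>
//     <action android:name="android.intent.action.SEND" />
//     <category android:name="android.intent.category.DEFAULT" />
//     <data android:mimeType="text/plain" />      <!-- shared URLs -->
//     <data android:mimeType="application/pdf" /> <!-- shared PDFs -->
//   </intent-filter>
// </activity>

import android.content.Intent
import android.net.Uri
import android.os.Bundle
import androidx.appcompat.app.AppCompatActivity

class AddSourceActivity : AppCompatActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        if (intent?.action == Intent.ACTION_SEND) {
            when (intent.type) {
                // A URL shared from a browser or YouTube arrives as plain text.
                "text/plain" ->
                    intent.getStringExtra(Intent.EXTRA_TEXT)?.let { addSource(it) }
                // A PDF shared from a files app arrives as a content:// URI.
                "application/pdf" ->
                    intent.getParcelableExtra<Uri>(Intent.EXTRA_STREAM)?.let { addSource(it.toString()) }
            }
        }
        finish() // hand the source off and return to the sharing app
    }

    // Placeholder: a real app would upload or index the new source here.
    private fun addSource(reference: String) {
        println("Queued new source: $reference")
    }
}
```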

Recommended read:
References :
  • AI News | VentureBeat: Google finally launches NotebookLM mobile app at I/O: hands-on, first impressions
  • www.laptopmag.com: An exclusive look at Google's NotebookLM app on Android and iOS
  • TestingCatalog: Google launches NotebookLM mobile app with audio-first features on mobile
  • www.tomsguide.com: NotebookLM just arrived on Android — and it can turn your notes into podcasts
  • THE DECODER: Google launches NotebookLM mobile app for Android and iOS
  • MarkTechPost: Google AI Releases Standalone NotebookLM Mobile App with Offline Audio and Seamless Source Integration
  • the-decoder.com: Google launches NotebookLM mobile app for Android and iOS
  • www.marktechpost.com: Google AI Releases Standalone NotebookLM Mobile App with Offline Audio and Seamless Source Integration
  • www.techradar.com: Google's free NotebookLM AI app is out now for Android and iOS – here's why it's a day-one download for me

Alexey Shabanov@TestingCatalog //
Google has officially launched the NotebookLM mobile app for both Android and iOS, extending the reach of its AI-powered research assistant. This release, anticipated before Google I/O 2025, allows users to leverage NotebookLM's capabilities directly from their smartphones and tablets. The app aims to help users understand information more effectively, regardless of their location, marking a step towards broader accessibility to AI tools.

The NotebookLM mobile app provides a range of features, including the ability to create new notebooks and add various content types, such as PDFs, websites, YouTube videos, and text. A key feature highlighted by Google is "Audio Overviews," which automatically generate audio summaries that can be played offline and in the background. Furthermore, users can interact with the AI hosts (in beta) to ask follow-up questions, enhancing the learning and research experience on the go. The app also integrates with the Android and iOS share sheets for quickly adding sources.

The initial release offers a straightforward user interface optimized for both phones and tablets. Navigation within the app includes a bottom bar providing easy access to Sources, Chat Q&A, and Studio. While it currently doesn't fully utilize Material 3 design principles, Google emphasizes this is an early version. Users can now download the NotebookLM app from the Google Play Store and the App Store, fulfilling a top feature request. Google has indicated that additional updates and features are planned for future releases.
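
For a sense of how a three-destination layout like this is typically built on Android, here is a minimal, hypothetical Jetpack Compose sketch of a bottom bar with Sources, Chat, and Studio tabs (it assumes the material-icons-extended artifact and is not NotebookLM's actual code):

```kotlin
// Hypothetical sketch of a three-destination Material 3 bottom bar,
// mirroring the Sources / Chat / Studio navigation described above.
import androidx.compose.material.icons.Icons
import androidx.compose.material.icons.filled.Chat
import androidx.compose.material.icons.filled.Folder
import androidx.compose.material.icons.filled.Mic
import androidx.compose.material3.Icon
import androidx.compose.material3.NavigationBar
import androidx.compose.material3.NavigationBarItem
import androidx.compose.material3.Text
import androidx.compose.runtime.Composable
import androidx.compose.runtime.getValue
import androidx.compose.runtime.mutableStateOf
import androidx.compose.runtime.remember
import androidx.compose.runtime.setValue

@Composable
fun NotebookBottomBar() {
    val tabs = listOf(
        "Sources" to Icons.Filled.Folder,
        "Chat" to Icons.Filled.Chat,
        "Studio" to Icons.Filled.Mic,
    )
    var selected by remember { mutableStateOf(0) }
    NavigationBar {
        tabs.forEachIndexed { index, (label, icon) ->
            NavigationBarItem(
                selected = selected == index,
                onClick = { selected = index },      // swap the visible screen here
                icon = { Icon(icon, contentDescription = label) },
                label = { Text(label) },
            )
        }
    }
}
```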

Recommended read:
References :
  • TestingCatalog: Google launches NotebookLM mobile app with audio-first features on mobile
  • The Official Google Blog: Understand anything, anywhere with the new NotebookLM app
  • www.laptopmag.com: An exclusive look at Google's NotebookLM app on Android and iOS
  • www.tomsguide.com: NotebookLM just arrived on Android — and it can turn your notes into podcasts
  • www.marktechpost.com: Google AI Releases Standalone NotebookLM Mobile App with Offline Audio and Seamless Source Integration.
  • THE DECODER: Google launches NotebookLM mobile app with audio-first features on mobile
  • 9to5Mac: Google launches NotebookLM mobile app for Android and iOS
  • TechCrunch: Google launches stand-alone NotebookLM app for Android
  • AI News | VentureBeat: Google finally launches NotebookLM mobile app at I/O: hands-on, first impressions
  • MarkTechPost: Google AI Releases Standalone NotebookLM Mobile App with Offline Audio and Seamless Source Integration
  • Dataconomy: Google brings NotebookLM to mobile with new standalone apps
  • The Tech Basic: Google launches NotebookLM apps letting users research on the go
  • the-decoder.com: Google launches NotebookLM mobile app for Android and iOS
  • www.techradar.com: Google's free NotebookLM AI app is out now for Android and iOS – here's why it's a day-one download for me
  • thetechbasic.com: Google Launches NotebookLM Apps Letting Users Research on the Go
  • www.techradar.com: Google I/O 2025 live blog covering all the major announcements, including Android XR updates, new Gemini features, and more.

Scott Webster@AndroidGuys //
Google is aggressively expanding its Gemini AI across a multitude of devices, signifying a major push to create a seamless AI ecosystem. The tech giant aims to integrate Gemini into everyday experiences by bringing the AI assistant to smartwatches running Wear OS, Android Auto for in-car assistance, Google TV for enhanced entertainment, and even upcoming XR headsets developed in collaboration with Samsung. This expansion aims to provide users with a consistent and powerful AI layer connecting all their devices, allowing for natural voice interactions and context-based conversations across different platforms.

Google's vision for Gemini extends beyond simple voice commands: the AI assistant will offer a range of features tailored to each device. On smartwatches, Gemini will provide convenient access to information and app interactions without needing to take out a phone. In Android Auto, Gemini will replace the current Google voice assistant, enabling more sophisticated tasks like planning routes with charging stops or summarizing messages. For Google TV, the AI will offer personalized content recommendations and educational answers, while on XR headsets, Gemini will facilitate immersive experiences like planning trips using videos, maps, and local information.

In addition to expanding Gemini's presence across devices, Google is also experimenting with its search interface. Reports indicate that Google is testing replacing the "I'm Feeling Lucky" button on its homepage with an "AI Mode" button. This move reflects Google's strategy to keep users engaged on its platform by offering direct access to conversational AI responses powered by Gemini. The AI Mode feature builds on the existing AI Overviews, providing detailed AI-generated responses to search queries on a dedicated results page, further emphasizing Google's commitment to integrating AI into its core services.

Recommended read:
References :

Scott Webster@AndroidGuys //
Google is significantly expanding the reach of its Gemini AI assistant, bringing it to a wider range of devices beyond smartphones. This expansion includes integration with Android Auto for vehicles, Wear OS smartwatches, Google TV, and even upcoming XR headsets developed in collaboration with Samsung. Gemini's capabilities will be tailored to each device, with functionality and connectivity requirements that differ by context. Material 3 Expressive will launch with Android 16 and Wear OS 6, starting with Google's own Pixel devices.

Google's integration of Gemini into Android Auto aims to enhance the driving experience by providing drivers with a natural language interface for various tasks. Drivers will be able to interact with Gemini to send messages, translate conversations, find restaurants, and play music, all through voice commands. While Gemini will require a data connection in Android Auto and Wear OS, cars with Google built-in will offer limited offline support. To limit distraction, Google says Gemini is being designed around quick, safety-conscious tasks.

Furthermore, Google has unveiled 'Material 3 Expressive', a new design language set to debut with Android 16 and Wear OS 6. The design language features vibrant colors, adaptive typography, and responsive animations, aiming to create a more personalized and engaging user interface. The expanded color palette includes purples, pinks, and corals, and integrates dynamic color theming that draws from personal elements such as the user's wallpaper. Customizable app icons, adaptive layouts, and refined quick settings tiles are some of the functional enhancements users can expect from this update.
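
Dynamic color theming of this kind already exists in Material 3 on Android 12 and later, where the palette is derived from the user's wallpaper; Material 3 Expressive builds on that foundation. As a hedged sketch of how an app opts into today's dynamic color (not the unreleased Expressive components themselves):

```kotlin
// Minimal sketch: opting into wallpaper-derived dynamic color with Material 3
// in Jetpack Compose. This is today's dynamic color API on Android 12+ (API 31),
// shown for illustration; Material 3 Expressive itself is not covered here.
import android.os.Build
import androidx.compose.foundation.isSystemInDarkTheme
import androidx.compose.material3.MaterialTheme
import androidx.compose.material3.darkColorScheme
import androidx.compose.material3.dynamicDarkColorScheme
import androidx.compose.material3.dynamicLightColorScheme
import androidx.compose.material3.lightColorScheme
import androidx.compose.runtime.Composable
import androidx.compose.ui.platform.LocalContext

@Composable
fun AppTheme(content: @Composable () -> Unit) {
    val dark = isSystemInDarkTheme()
    val context = LocalContext.current
    val colorScheme = when {
        // Android 12+ can derive a full color scheme from the user's wallpaper.
        Build.VERSION.SDK_INT >= Build.VERSION_CODES.S ->
            if (dark) dynamicDarkColorScheme(context) else dynamicLightColorScheme(context)
        // Older devices fall back to static baseline schemes.
        dark -> darkColorScheme()
        else -> lightColorScheme()
    }
    MaterialTheme(colorScheme = colorScheme, content = content)
}
```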

Recommended read:
References :
  • PCMag Middle East ai: The car version of Gemini will first be available on Android Auto in the coming months and later this year on Google built-in.
  • www.tomsguide.com: Google is taking Gemini beyond smartphones — here’s what’s coming
  • The Tech Portal: After it was leaked online, Google has now officially launched ‘Material 3 Expressive’ design language, set to debut with Android 16 and Wear OS 6

Scott Webster@AndroidGuys //
Google is aggressively expanding the reach of its Gemini AI model, aiming to integrate it into a wide array of devices beyond smartphones. The tech giant plans to bring Gemini to Wear OS smartwatches, Android Auto in vehicles, Google TV for televisions, and even XR headsets developed in collaboration with Samsung. This move seeks to provide users with AI assistance in various contexts, from managing tasks while cooking or exercising with a smartwatch to planning routes and summarizing information while driving using Android Auto. Gemini's integration into Google TV aims to offer educational content and answer questions, while its presence in XR headsets promises immersive trip planning experiences.

YouTube is also leveraging Gemini AI to revolutionize its advertising strategy with the introduction of "Peak Points," a new ad format designed to identify moments of high user engagement in videos. Gemini analyzes videos to pinpoint these peak moments, strategically placing ads immediately afterward to capture viewers' attention when they are most invested. While this approach aims to benefit advertisers by improving ad retention, it has raised concerns about potentially disrupting the viewing experience and irritating users by interrupting engaging content. An alternative ad format called Shoppable CTV, which allows users to browse and purchase items during an ad, is considered a more palatable option.
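
Google has not published how Peak Points actually selects these moments. Purely as a toy illustration of the general idea the reports describe (find local maxima in an engagement-over-time signal and slot an ad break immediately after each), a sketch might look like this:

```kotlin
// Toy illustration only; not YouTube's algorithm. It finds local peaks in a
// hypothetical per-second engagement signal and schedules an ad break right
// after each one, spacing breaks out with a cooldown.
import kotlin.math.sin

fun adBreaksAfterPeaks(
    engagement: List<Double>,   // hypothetical engagement score per second
    minScore: Double = 0.8,     // only consider strongly engaging moments
    cooldownSeconds: Int = 120, // keep ad breaks from stacking too closely
): List<Int> {
    val breaks = mutableListOf<Int>()
    var lastPeak = -cooldownSeconds
    for (t in 1 until engagement.size - 1) {
        val isLocalPeak = engagement[t] > engagement[t - 1] &&
            engagement[t] >= engagement[t + 1]
        if (isLocalPeak && engagement[t] >= minScore && t - lastPeak >= cooldownSeconds) {
            breaks += t + 1 // place the ad immediately after the peak
            lastPeak = t
        }
    }
    return breaks
}

fun main() {
    // Fabricated example signal, just to make the sketch runnable.
    val signal = List(300) { t -> 0.5 + 0.4 * sin(t / 20.0) }
    println(adBreaksAfterPeaks(signal)) // seconds at which ad breaks would start
}
```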

To further fuel AI innovation, Google has launched the AI Futures Fund. This program is designed to support early-stage AI startups with equity investment and hands-on technical support. The AI Futures Fund provides startups with access to advanced Google DeepMind models like Gemini, Imagen, and Veo, as well as direct collaboration with experts from Google DeepMind and Google Labs. Startups also receive Google Cloud credits and dedicated technical resources to help them build and scale efficiently. The fund aims to empower startups to "move faster, test bolder ideas," and bring ambitious AI products to life, fostering innovation in the field.

Recommended read:
References :
  • PCMag Middle East ai: The car version of Gemini will first be available on Android Auto in the coming months and later this year on Google built-in.
  • thetechbasic.com: Google is bringing its smart AI named Gemini to cars that use Android Auto. This update will let drivers talk to their cars like a friend, ask for help, and even plan trips.
  • www.tomsguide.com: Google is taking Gemini beyond smartphones — here’s what’s coming
  • www.tomsguide.com: YouTube has a new ad format fueled by Gemini — and it might be the worst thing I’ve ever heard
  • THE DECODER: Google brings Gemini AI to smartwatches, cars, TVs, and XR headsets
  • Shelly Palmer: YouTube’s Gemini AI Uses Peak Points to Target Ads at Moments of Maximum Engagement
  • the-decoder.com: Google brings Gemini AI to smartwatches, cars, TVs, and XR headsets

Scott Webster@AndroidGuys //
Google is expanding its Gemini AI assistant to a wider range of Android devices, moving beyond smartphones to include smartwatches, cars, TVs, and headsets. The tech giant aims to seamlessly integrate AI into users' daily routines, making it more accessible and convenient. This expansion promises a more user-friendly and productive experience across various aspects of daily life. The move aligns with Google's broader strategy to make AI ubiquitous, enhancing usability through conversational and hands-free features.

This integration, referred to as "Gemini Everywhere," seeks to enhance usability and productivity by making AI features more conversational and hands-free. For in-car experiences, Google is bringing Gemini AI to Android Auto and Google Built-in vehicles, promising smarter in-car experiences and hands-free task management for safer driving. Gemini's capabilities should allow for simpler task management and more personalized results across all these new platforms.

The rollout of Gemini on these devices is expected later in 2025, first on Android Auto, then Google Built-in vehicles, and Google TV, although the specific models slated for updates remain unclear. Gemini on Wear OS and Android Auto will require a data connection, while Google Built-in vehicles will have limited offline support. The ultimate goal is to offer seamless AI assistance across multiple device types, enhancing both convenience and productivity for Android users.

Recommended read:
References :
  • PCMag Middle East ai: Google Tests Swapping 'I'm Feeling Lucky' Button for 'AI Mode'
  • www.tomsguide.com: Google is taking Gemini beyond smartphones — here’s what’s coming
  • www.zdnet.com: Google's 'I'm feeling lucky' button might soon be replaced by AI mode
  • The Official Google Blog: Google is expanding its Gemini AI beyond smartphones, with the technology set to integrate with smartwatches, cars, TVs, and headsets. The rollout of these features is part of a wider strategy aimed at making AI more accessible and convenient for users in various aspects of their daily routine.
  • AndroidGuys: Google is expanding Gemini AI functionality to more Android devices, beyond smartphones, to include smartwatches, cars, TVs, and headsets. This is part of a broader effort to integrate AI seamlessly into various aspects of users' daily lives, making it more user-friendly and productive.
  • www.lifewire.com: Google is expanding its Gemini AI assistant to a wider range of Android devices, including smartwatches, cars, TVs, and headsets. The update aims to enhance usability and productivity by making AI features more conversational and hands-free.
  • The Rundown AI: Google's Gemini AI expands across devices
  • THE DECODER: Google is extending its Gemini AI capabilities to smartwatches, cars, televisions, and XR headsets.
  • PCMag Middle East ai: Gemini Everywhere: Google Expands Its AI to Cars, TVs, Headsets
  • Shelly Palmer: Who Will Be "Google for AI Search"? Google.
  • shellypalmer.com: In an unsurprising move, Google is putting generative AI at the center of its most valuable real estate. The company is redesigning its homepage to feature "AI Overviews," a mode that uses Gemini to synthesize information directly on the results page.
  • AndroidGuys: Google Brings Gemini AI to Android Auto and Google Built-in Vehicles
  • Dataconomy: Google is bringing Gemini, its generative AI, to cars that support Android Auto in the next few months, the company announced ahead of its 2025 I/O developer conference.
  • www.zdnet.com: Your Android devices are getting a major Gemini upgrade - cars and watches included
  • the-decoder.com: Google brings Gemini AI to smartwatches, cars, TVs, and XR headsets
  • The Official Google Blog: Gemini smarts are coming to more Android devices
  • Shelly Palmer: YouTube’s Gemini AI Uses Peak Points to Target Ads at Moments of Maximum Engagement
  • www.tomsguide.com: Google is adding more accessibility features to Chrome and Android — and they're powered by Gemini