Alexey Shabanov@TestingCatalog
Samsung is reportedly considering a significant shift in its AI strategy for the upcoming Galaxy S26 series, potentially replacing Google's Gemini with Perplexity AI as the default AI chatbot. According to Bloomberg, the company is nearing a deal with Perplexity to deeply integrate its AI services into next year's Android phones. This move could signal a departure from Samsung's heavy reliance on Google's AI technologies, potentially making its devices less Google-centric. Samsung aims to preload the Perplexity app on its phones and incorporate Perplexity's search functionality directly into its internet browser.
Discussions between the two companies have also explored integrating Perplexity's AI technology into Samsung's Bixby assistant. While Samsung has deprioritized Bixby in recent years in favor of Google's Gemini, a partnership with Perplexity could revitalize its alternative assistant by giving it far more powerful search capabilities. The two companies are also reportedly in the early stages of discussing a new "AI-infused" operating system.

The deal could be finalized soon, although it is unlikely to arrive in time for the Galaxy Z Fold 7 and Galaxy Z Flip 7, which are expected to launch in August. The Galaxy S26 series, anticipated in early 2026, is the target for this integration. If Perplexity becomes the default on Samsung devices, it could significantly boost adoption of Perplexity's search tools, a major win for the AI company, which is also in the process of raising a $500 million round at a $14 billion valuation.
Google has launched the NotebookLM mobile app for Android and iOS, bringing its AI-powered research assistant to mobile devices. This release marks a significant step in expanding access to NotebookLM, which initially launched as a web-based tool in 2023 under the codename "Project Tailwind." The mobile app aims to offer personalized learning and efficient content synthesis, allowing users to interact with and process information on the go. After months of waiting, the app is now officially available to everyone, offering NotebookLM's core features, with additional functionality promised in future updates.
The NotebookLM mobile app focuses on audio-first experiences, with features like Audio Overviews that generate podcast-style summaries. These summaries can be played directly from the list view without opening a project, making the app feel like a media player for casual content consumption. Users can also download Audio Overviews for offline playback and listen in the background, supporting learning during commutes or other activities. The app additionally supports an interactive mode in audio sessions, in which users can ask questions mid-playback, creating a live dialogue experience.

The mobile app retains the functionality of the web version, including the ability to create new notebooks and upload sources such as PDFs, Google Docs, and YouTube videos. Users can add sources directly from their mobile devices via the "Share" button in any app, making it easier to build and maintain research libraries. Because NotebookLM relies only on user-uploaded sources, its responses stay grounded in verifiable material. The rollout underscores Google's evolving strategy for NotebookLM, transitioning from a productivity assistant to a multimodal content platform that appeals to students, researchers, and content creators seeking flexible ways to absorb structured knowledge.
Google has officially launched the NotebookLM mobile app for both Android and iOS, extending the reach of its AI-powered research assistant. This release, anticipated before Google I/O 2025, allows users to leverage NotebookLM's capabilities directly from their smartphones and tablets. The app aims to help users understand information more effectively, regardless of their location, marking a step towards broader accessibility to AI tools.
The NotebookLM mobile app provides a range of features, including the ability to create new notebooks and add various content types, such as PDFs, websites, YouTube videos, and text. A key feature highlighted by Google is Audio Overviews, which generate audio summaries for offline and background playback. Users can also interact with AI hosts (in beta) to ask follow-up questions, enhancing the learning and research experience on the go. The app additionally integrates with the Android and iOS share sheets for quickly adding sources.

The initial release offers a straightforward user interface optimized for both phones and tablets. Navigation includes a bottom bar providing easy access to Sources, Chat Q&A, and Studio. While the app doesn't yet fully adopt Material 3 design principles, Google emphasizes that this is an early version. Users can now download NotebookLM from the Google Play Store and the App Store, fulfilling a top feature request, and Google has indicated that additional updates and features are planned for future releases.
Scott Webster@AndroidGuys
Google is aggressively expanding its Gemini AI across a multitude of devices, signifying a major push to create a seamless AI ecosystem. The tech giant aims to integrate Gemini into everyday experiences by bringing the AI assistant to smartwatches running Wear OS, Android Auto for in-car assistance, Google TV for enhanced entertainment, and even upcoming XR headsets developed in collaboration with Samsung. This expansion aims to provide users with a consistent and powerful AI layer connecting all their devices, allowing for natural voice interactions and context-based conversations across different platforms.
Google's vision for Gemini extends beyond simple voice commands; the assistant will offer a range of features tailored to each device. On smartwatches, Gemini will provide convenient access to information and app interactions without the need to take out a phone. In Android Auto, Gemini will replace the current Google voice assistant, enabling more sophisticated tasks like planning routes with charging stops or summarizing messages. On Google TV, the AI will offer personalized content recommendations and educational answers, while on XR headsets it will facilitate immersive experiences like planning trips using videos, maps, and local information.

In addition to expanding Gemini's presence across devices, Google is experimenting with its search interface. Reports indicate that Google is testing replacing the "I'm Feeling Lucky" button on its homepage with an "AI Mode" button. This move reflects Google's strategy to keep users engaged on its platform by offering direct access to conversational AI responses powered by Gemini. AI Mode builds on the existing AI Overviews, providing detailed AI-generated responses to search queries on a dedicated results page, further underscoring Google's commitment to integrating AI into its core services.
Google is significantly expanding the reach of its Gemini AI assistant, bringing it to a wider range of devices beyond smartphones. This expansion includes integration with Android Auto for vehicles, Wear OS smartwatches, Google TV, and even upcoming XR headsets developed in collaboration with Samsung. Gemini's capabilities will be tailored to each device context, with different functionalities and connectivity requirements to optimize the user experience. Alongside this expansion, Material 3 Expressive will launch with Android 16 and Wear OS 6, starting with Google's own Pixel devices.
Google's integration of Gemini into Android Auto aims to enhance the driving experience by giving drivers a natural language interface for various tasks. Drivers will be able to interact with Gemini to send messages, translate conversations, find restaurants, and play music, all through voice commands. While Gemini will require a data connection in Android Auto and on Wear OS, cars with Google built-in will offer limited offline support. Google plans to address potential distractions by designing Gemini to be safe and focused on quick tasks.

Furthermore, Google has unveiled Material 3 Expressive, a new design language set to debut with Android 16 and Wear OS 6. It features vibrant colors, adaptive typography, and responsive animations, aiming to create a more personalized and engaging user interface. The expanded color palette includes purples, pinks, and corals, and integrates dynamic color theming that draws from personal elements. Customizable app icons, adaptive layouts, and refined quick settings tiles are among the functional enhancements users can expect from this update.
Google is aggressively expanding the reach of its Gemini AI model, aiming to integrate it into a wide array of devices beyond smartphones. The tech giant plans to bring Gemini to Wear OS smartwatches, Android Auto in vehicles, Google TV for televisions, and even XR headsets developed in collaboration with Samsung. This move seeks to provide users with AI assistance in various contexts, from managing tasks while cooking or exercising with a smartwatch to planning routes and summarizing information while driving using Android Auto. Gemini's integration into Google TV aims to offer educational content and answer questions, while its presence in XR headsets promises immersive trip planning experiences.
YouTube is also leveraging Gemini to revolutionize its advertising strategy with "Peak Points," a new ad format designed to identify moments of high user engagement in videos. Gemini analyzes videos to pinpoint these peak moments, placing ads immediately afterward to capture viewers' attention when they are most invested. While this approach aims to improve ad retention for advertisers, it has raised concerns about disrupting the viewing experience and irritating users by interrupting engaging content. An alternative format, Shoppable CTV, which lets users browse and purchase items during an ad, is considered more palatable.

To further fuel AI innovation, Google has launched the AI Futures Fund, a program designed to support early-stage AI startups with equity investment and hands-on technical support. The fund gives startups access to advanced Google DeepMind models such as Gemini, Imagen, and Veo, as well as direct collaboration with experts from DeepMind and Google Labs. Startups also receive Google Cloud credits and dedicated technical resources to help them build and scale efficiently. The fund aims to empower startups to "move faster, test bolder ideas," and bring ambitious AI products to life, fostering innovation in the field.
Google is expanding its Gemini AI assistant to a wider range of Android devices, moving beyond smartphones to include smartwatches, cars, TVs, and headsets. The tech giant aims to seamlessly integrate AI into users' daily routines, making it more accessible and convenient. This expansion promises a more user-friendly and productive experience across various aspects of daily life. The move aligns with Google's broader strategy to make AI ubiquitous, enhancing usability through conversational and hands-free features.
This integration, referred to as "Gemini Everywhere," seeks to enhance usability and productivity by making AI features more conversational and hands-free. For in-car experiences, Google is bringing Gemini to both Android Auto and Google built-in vehicles, promising smarter in-car experiences and hands-free task management for safer driving. Gemini's capabilities should allow for simpler task management and more personalized results across all these new platforms.

The rollout is expected later in 2025, arriving first on Android Auto, then Google built-in vehicles and Google TV, although the specific models slated for updates remain unclear. Gemini on Wear OS and Android Auto will require a data connection, while Google built-in vehicles will have limited offline support. The ultimate goal is seamless AI assistance across multiple device types, enhancing both convenience and productivity for Android users.