References: TechCrunch, The Tech Portal
Google has launched a new Android application called "AI Edge Gallery" that allows users to download and run AI models directly on their devices without requiring an internet connection. This provides offline functionality and caters to users concerned about data privacy and latency. The app supports on-device AI capabilities, enabling features such as image creation, question answering, and code writing even without cloud connectivity. AI Edge Gallery integrates models from platforms like Hugging Face, facilitating on-device execution for enhanced privacy and faster processing, addressing the wariness some users have about sending sensitive data to remote data centers.
The AI Edge Gallery app also bundles tools like MediaPipe and TensorFlow Lite to optimize model performance on mobile devices, ensuring smooth operation even on hardware with limited resources. The app's interface includes categories such as "AI Chat" and "Ask Image" to guide users to the relevant tools, along with a "Prompt Lab" that acts as a sandbox for testing and refining single-turn prompts. One key model is Gemma 3 1B, a compact language model designed for mobile and web platforms that enables rapid content generation and interaction. The AI Edge Gallery is currently an experimental Alpha release, and Google encourages developers and users to provide feedback. The app is open source under the Apache 2.0 license, making it freely usable for both commercial and non-commercial purposes. An Android version is already available, and Google plans to launch an iOS version soon. Performance may vary with a phone's hardware capabilities: newer devices are expected to run models faster and more smoothly than older ones.
References: ZDNet
Google is expanding access to its AI-powered research assistant, NotebookLM, with the launch of a standalone mobile app for Android and iOS devices. This marks a significant step for NotebookLM, transitioning it from a web-based beta tool to a more accessible platform for mobile users. The app retains core functionalities like source-grounded summaries and interactive Q&A, while also introducing new audio-first features designed for on-the-go content consumption. This release aligns with Google's broader strategy to integrate AI into its products, offering users a flexible way to absorb and interact with structured knowledge.
The NotebookLM mobile app places a strong emphasis on audio interaction, featuring AI-generated podcast-style summaries that can be played directly from the project list. Users can generate these audio overviews with a quick action button, creating an experience akin to a media player. The app also supports interactive mode during audio sessions, allowing users to ask questions mid-playback and participate in live dialogue. This focus on audio content consumption and interaction differentiates the mobile app and suggests that passive listening and educational use are key components of the intended user experience. The mobile app mirrors the web-based layout, offering functionalities across Sources, Chat, and Interactive Assets, including Notes, Audio Overviews, and Mind Maps. Users can now add sources directly from their mobile devices by using the "Share" button in any app. The new NotebookLM app aims to be a research assistant that is accessible to students, researchers, and content creators, providing a mobile solution for absorbing structured knowledge.
References: Scott Webster, AndroidGuys
Google is aggressively expanding its Gemini AI across a multitude of devices, signifying a major push to create a seamless AI ecosystem. The tech giant aims to integrate Gemini into everyday experiences by bringing the AI assistant to smartwatches running Wear OS, Android Auto for in-car assistance, Google TV for enhanced entertainment, and even upcoming XR headsets developed in collaboration with Samsung. This expansion aims to provide users with a consistent and powerful AI layer connecting all their devices, allowing for natural voice interactions and context-based conversations across different platforms.
Google's vision for Gemini extends beyond simple voice commands: the AI assistant will offer a range of features tailored to each device. On smartwatches, Gemini will provide convenient access to information and app interactions without needing to take out a phone. In Android Auto, Gemini will replace the current Google voice assistant, enabling more sophisticated tasks like planning routes with charging stops or summarizing messages. For Google TV, the AI will offer personalized content recommendations and educational answers, while on XR headsets, Gemini will facilitate immersive experiences like planning trips using videos, maps, and local information. In addition to expanding Gemini's presence across devices, Google is also experimenting with its search interface. Reports indicate that Google is testing replacing the "I'm Feeling Lucky" button on its homepage with an "AI Mode" button. This move reflects Google's strategy to keep users engaged on its platform by offering direct access to conversational AI responses powered by Gemini. The AI Mode feature builds on the existing AI Overviews, providing detailed AI-generated responses to search queries on a dedicated results page, further emphasizing Google's commitment to integrating AI into its core services.
References: Scott Webster, AndroidGuys
References: PCMag Middle East, www.tomsguide.com
Google is significantly expanding the reach of its Gemini AI assistant, bringing it to a wider range of devices beyond smartphones. This expansion includes integration with Android Auto for vehicles, Wear OS smartwatches, Google TV, and even upcoming XR headsets developed in collaboration with Samsung. Gemini's capabilities will be tailored to each device context, with different functionalities and connectivity requirements optimized for each platform. Material 3 Expressive will launch with Android 16 and Wear OS 6, starting with Google's own Pixel devices.
Google's integration of Gemini into Android Auto aims to enhance the driving experience by providing drivers with a natural language interface for various tasks. Drivers will be able to interact with Gemini to send messages, translate conversations, find restaurants, and play music, all through voice commands. While Gemini will require a data connection in Android Auto and Wear OS, cars with Google built-in will offer limited offline support. Google plans to address potential distractions by designing Gemini to be safe and focusing on quick tasks. Furthermore, Google has unveiled "Material 3 Expressive", a new design language set to debut with Android 16 and Wear OS 6. This design language features vibrant colors, adaptive typography, and responsive animations, aiming to create a more personalized and engaging user interface. The expanded color palette includes purples, pinks, and corals, and integrates dynamic color theming that draws from personal elements. Customizable app icons, adaptive layouts, and refined quick settings tiles are some of the functional enhancements users can expect from this update.
References: Scott Webster, AndroidGuys
Google is aggressively expanding the reach of its Gemini AI model, aiming to integrate it into a wide array of devices beyond smartphones. The tech giant plans to bring Gemini to Wear OS smartwatches, Android Auto in vehicles, Google TV for televisions, and even XR headsets developed in collaboration with Samsung. This move seeks to provide users with AI assistance in various contexts, from managing tasks while cooking or exercising with a smartwatch to planning routes and summarizing information while driving using Android Auto. Gemini's integration into Google TV aims to offer educational content and answer questions, while its presence in XR headsets promises immersive trip planning experiences.
YouTube is also leveraging Gemini AI to revolutionize its advertising strategy with the introduction of "Peak Points," a new ad format designed to identify moments of high user engagement in videos. Gemini analyzes videos to pinpoint these peak moments, strategically placing ads immediately afterward to capture viewers' attention when they are most invested. While this approach aims to benefit advertisers by improving ad retention, it has raised concerns about potentially disrupting the viewing experience and irritating users by interrupting engaging content. An alternative ad format called Shoppable CTV, which allows users to browse and purchase items during an ad, is considered a more palatable option. To further fuel AI innovation, Google has launched the AI Futures Fund. This program is designed to support early-stage AI startups with equity investment and hands-on technical support. The AI Futures Fund provides startups with access to advanced Google DeepMind models like Gemini, Imagen, and Veo, as well as direct collaboration with Google experts from DeepMind and Google Lab. Startups also receive Google Cloud credits and dedicated technical resources to help them build and scale efficiently. The fund aims to empower startups to "move faster, test bolder ideas," and bring ambitious AI products to life, fostering innovation in the field.
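Google has not published how Peak Points actually selects its moments, but the core idea described above — find the peak of viewer engagement and place the ad slot immediately after it — can be sketched in a few lines. The function name, the retention-curve data, and the "highest point wins" rule are all invented for this illustration; they are not YouTube's algorithm.

```python
def ad_slot_after_peak(retention):
    """Given per-second viewer-retention scores for a video, return the
    timestamp (in seconds) just after the engagement peak, where an ad
    would land under the Peak Points idea sketched here."""
    if not retention:
        raise ValueError("retention curve is empty")
    # Index of the (first) maximum engagement value.
    peak = max(range(len(retention)), key=retention.__getitem__)
    # Place the ad in the slot immediately following the peak moment.
    return peak + 1

# Example: engagement peaks at second 3, so the ad slot lands at second 4.
curve = [0.2, 0.5, 0.7, 0.9, 0.6, 0.3]
print(ad_slot_after_peak(curve))  # → 4
```

A production system would presumably work on multimodal analysis of the video itself rather than a simple retention curve, but the placement rule — ad immediately after the detected peak — is the part the announcement describes.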
References: Scott Webster, AndroidGuys
Google is expanding its Gemini AI assistant to a wider range of Android devices, moving beyond smartphones to include smartwatches, cars, TVs, and headsets. The tech giant aims to seamlessly integrate AI into users' daily routines, making it more accessible and convenient. This expansion promises a more user-friendly and productive experience across various aspects of daily life. The move aligns with Google's broader strategy to make AI ubiquitous, enhancing usability through conversational and hands-free features.
This integration, referred to as "Gemini Everywhere," seeks to enhance usability and productivity by making AI features more conversational and hands-free. For in-car experiences, Google is bringing Gemini AI to Android Auto and Google Built-in vehicles, promising smarter in-car experiences and hands-free task management for safer driving. Gemini's capabilities should allow for simpler task management and more personalized results across all these new platforms. The rollout of Gemini on these devices is expected later in 2025, first on Android Auto, then Google Built-in vehicles, and Google TV, although the specific models slated for updates remain unclear. Gemini on Wear OS and Android Auto will require a data connection, while Google Built-in vehicles will have limited offline support. The ultimate goal is to offer seamless AI assistance across multiple device types, enhancing both convenience and productivity for Android users.
References: The Hacker News
Google is enhancing its defenses against online scams by integrating AI-powered systems across Chrome, Search, and Android platforms. The company announced it will leverage Gemini Nano, its on-device large language model (LLM), to bolster Safe Browsing capabilities within Chrome 137 on desktop computers. This on-device approach offers real-time analysis of potentially dangerous websites, enabling Google to safeguard users from emerging scams that may not yet be included in traditional blocklists or threat databases. Google emphasizes that this proactive measure is crucial, especially considering the fleeting lifespan of many malicious sites, often lasting less than 10 minutes.
The integration of Gemini Nano in Chrome allows for the detection of tech support scams, which commonly appear as misleading pop-ups designed to trick users into believing their computers are infected with a virus. These scams often involve displaying a phone number that directs users to fraudulent tech support services. The Gemini Nano model analyzes the behavior of web pages, including suspicious browser processes, to identify potential scams in real-time. The security signals are then sent to Google's Safe Browsing online service for a final assessment, determining whether to issue a warning to the user about the possible threat. Google is also expanding its AI-driven scam detection to identify other fraudulent schemes, such as those related to package tracking and unpaid tolls. These features are slated to arrive on Chrome for Android later this year. Additionally, Google revealed that its AI-powered scam detection systems have become significantly more effective, ensnaring 20 times more deceptive pages and blocking them from search results. This has led to a substantial reduction in scams impersonating airline customer service providers (over 80%) and those mimicking official resources like visas and government services (over 70%) in 2024.
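The two-stage design described above — an on-device model scores page behavior, and only suspicious pages are escalated to Safe Browsing for the final verdict — can be sketched with placeholder logic. Everything in this sketch is a hypothetical stand-in: the signal names, the weights, and the 0.6 cutoff are invented for illustration and are not Chrome's implementation.

```python
from dataclasses import dataclass

@dataclass
class PageSignals:
    # Illustrative on-device observations; the real feature extracts
    # richer behavioral data from the rendered page.
    shows_fullscreen_popup: bool
    displays_phone_number: bool
    claims_infection: bool

def on_device_score(signals: PageSignals) -> float:
    """Stand-in for the on-device (Gemini Nano-style) analysis:
    combine page behaviors into a suspicion score in [0, 1]."""
    score = 0.0
    if signals.shows_fullscreen_popup:
        score += 0.4
    if signals.displays_phone_number:
        score += 0.3
    if signals.claims_infection:
        score += 0.3
    return score

def safe_browsing_verdict(score: float, threshold: float = 0.6) -> str:
    """Stand-in for the server-side final assessment: only pages whose
    on-device score crosses the threshold trigger a user warning."""
    return "warn" if score >= threshold else "allow"

page = PageSignals(True, True, True)
print(safe_browsing_verdict(on_device_score(page)))  # → warn
```

The notable design point is the split: the heavyweight behavioral analysis stays on the device (privacy, and coverage of sites too new for blocklists), while the final decision still consults the online Safe Browsing service.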
References: The Hacker News
Google is integrating its Gemini Nano AI model into the Chrome browser to provide real-time scam protection for users. This enhancement focuses on identifying and blocking malicious websites and activities as they occur, addressing the challenge posed by scam sites that often exist for only a short period. The integration of Gemini Nano into Chrome's Enhanced Protection mode, available since 2020, allows for the analysis of website content to detect subtle signs of scams, such as misleading pop-ups or deceptive tactics.
When a user visits a potentially dangerous page, Chrome uses Gemini Nano to evaluate security signals and determine the intent of the site. This information is then sent to Safe Browsing for a final assessment. If the page is deemed likely to be a scam, Chrome will display a warning to the user, providing options to unsubscribe from notifications or view the blocked content while also allowing users to override the warning if they believe it's unnecessary. This system is designed to adapt to evolving scam tactics, offering a proactive defense against both known and newly emerging threats. The AI-powered scam detection system has already demonstrated its effectiveness, reportedly catching 20 times more scam-related pages than previous methods. Google also plans to extend this feature to Chrome on Android devices later this year, further expanding protection to mobile users. This initiative follows criticism regarding Gmail phishing scams that mimic law enforcement, highlighting Google's commitment to improving online security across its platforms and safeguarding users from fraudulent activities.
References: www.eweek.com
Google is expanding access to its AI tools, making Gemini Live's screen sharing feature free for all Android users. This reverses the previous requirement of a Google One AI Premium plan. Now, anyone with the Gemini app on an Android device can utilize Gemini's ability to view and interact with their screen or camera in real-time. This allows users to get assistance with anything from understanding complex app settings to translating foreign language signs. The rollout is gradual and will be available to all compatible Android devices within the coming weeks. Google seems to be prioritizing widespread adoption and gathering valuable AI training data from a larger user base.
Google is also extending the availability of its AI tools to college students in the U.S. Students can now access Google's AI Premium Plan at no cost until Spring 2026. This includes access to Gemini Advanced, NotebookLM Plus, and 2TB of Google One storage. Gemini Advanced provides assistance with exams and essays and includes Gemini 2.5 Pro, Google's most powerful AI model. This free access could assist students with research, writing, studying, and overall academic support. In the enterprise space, Google is improving its BigQuery data warehouse service with new AI capabilities. BigQuery Unified Governance helps organizations discover, understand, and trust their data assets. According to Google, BigQuery has five times the number of customers of both Snowflake and Databricks. Google Cloud is emphasizing the importance of having access to the correct data that meets business service level agreements (SLAs) for enterprise AI success.
References: cyberalerts.io
Google is rolling out AI-powered scam detection features for Android devices to protect users from conversational fraud. These features target scams that start harmlessly but evolve into harmful situations, where scammers often use spoofing techniques to disguise their real numbers. The AI models, developed in partnership with financial institutions, flag suspicious patterns and deliver real-time warnings during conversations, ensuring user privacy by running entirely on the device. Users can then dismiss, report, or block the sender. This enhancement builds upon existing protections, with over 1 billion Chrome users already benefiting from Safe Browsing's Enhanced Protection mode that uses AI to identify phishing and scam techniques.
This AI-driven security system scans texts from strangers and flags potentially dangerous messages with a "Likely Scam" alert. Real-time scam alerts are also being introduced for phone calls, analyzing speech patterns to detect fraudulent phrases and buzzing the device when one is detected. The feature is launching first in English in the U.S., the U.K., and Canada, with broader expansion planned. For Pixel 9+ users in the U.S., the call audio is processed, and Google plays a beep at the start of and during the call to notify participants that the feature is on. The company assures that users' conversations remain private, and reporting a chat as spam shares only sender details and recent messages with Google and carriers.
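The on-device flagging described above — scan texts from unknown senders, surface a "Likely Scam" alert, never upload the conversation — can be illustrated with a toy sketch. The phrase list, function name, and keyword matching are invented for this illustration; the real feature uses trained models developed with financial institutions, not keyword rules.

```python
import re

# Hypothetical phrase list for the sketch; real detection is model-based.
SCAM_PATTERNS = [
    r"gift card",
    r"wire transfer",
    r"act now",
    r"verify your account",
    r"unpaid toll",
]

def flag_message(sender_known: bool, text: str):
    """Return a 'Likely Scam' alert for suspicious texts from strangers.
    Known contacts are never flagged in this sketch, mirroring the
    stranger-only scanning described in the announcement."""
    if sender_known:
        return None
    lowered = text.lower()
    if any(re.search(pattern, lowered) for pattern in SCAM_PATTERNS):
        return "Likely Scam"
    return None

print(flag_message(False, "You have an unpaid toll. Act now!"))  # → Likely Scam
print(flag_message(True, "Act now, dinner's ready"))             # → None
```

In the real feature the alert would then offer dismiss, report, and block actions; only on a report do sender details and recent messages leave the device.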