Scott Webster@AndroidGuys
//
Google is aggressively expanding the reach of its Gemini AI model, aiming to integrate it into a wide array of devices beyond smartphones. The tech giant plans to bring Gemini to Wear OS smartwatches, Android Auto in vehicles, Google TV for televisions, and even XR headsets developed in collaboration with Samsung. This move seeks to provide users with AI assistance in various contexts, from managing tasks while cooking or exercising with a smartwatch to planning routes and summarizing information while driving using Android Auto. Gemini's integration into Google TV aims to offer educational content and answer questions, while its presence in XR headsets promises immersive trip planning experiences.
YouTube is also leveraging Gemini AI to revolutionize its advertising strategy with "Peak Points," a new ad format designed to identify moments of high user engagement in videos. Gemini analyzes videos to pinpoint these peak moments and places ads immediately afterward, capturing viewers' attention when they are most invested. While this approach aims to improve ad retention for advertisers, it has raised concerns about disrupting the viewing experience and irritating users by interrupting engaging content. An alternative format, Shoppable CTV, which lets users browse and purchase items during an ad, is considered a more palatable option.

To further fuel AI innovation, Google has launched the AI Futures Fund, a program designed to support early-stage AI startups with equity investment and hands-on technical support. The fund gives startups access to advanced Google DeepMind models such as Gemini, Imagen, and Veo, along with direct collaboration with Google experts from DeepMind and Google Labs. Startups also receive Google Cloud credits and dedicated technical resources to help them build and scale efficiently. The fund aims to empower startups to "move faster, test bolder ideas," and bring ambitious AI products to life, fostering innovation in the field.
References :
Jibin Joseph@PCMag Middle East ai
//
Google is experimenting with replacing its iconic "I'm Feeling Lucky" button with a new "AI Mode" tool. This represents a significant shift in how the search engine operates, moving away from providing a direct link to the first search result and towards offering AI-powered answers directly within the Google search interface. The "I'm Feeling Lucky" button, which has been a staple of Google's homepage since 1998, bypasses the search results page entirely, taking users directly to what Google deems the most relevant website. However, Google believes that most users now prefer browsing a range of links rather than immediately jumping to a single site.
The new AI Mode aims to provide a more interactive and informative search experience. When users ask questions, AI Mode leverages Google's AI chatbot to generate detailed responses instead of simply presenting a list of links. For example, if a user asks "Where can I purchase a camping chair under $100?", AI Mode may display images, prices, and store links directly within the search results. Users can then ask follow-up questions, such as "Is it waterproof?", and receive further details and recommendations. The AI also uses real-time information to display store hours, product availability, and other relevant data.

Testing of the AI Mode button is currently limited to a small percentage of U.S. users in Google's Search Labs program. Google is exploring different placements for the button, including inside the search bar next to the camera icon, or replacing the "I'm Feeling Lucky" button entirely. Some users have reported seeing a rainbow-colored glow around the button on hover. While this move aims to align with modern search habits, some users have expressed concern over the loss of the nostalgic "I'm Feeling Lucky" feature, as well as over the accuracy of AI Mode's answers.
References :
@felloai.com
//
References: felloai.com, www.techradar.com
Google is making significant strides in integrating artificial intelligence into healthcare, security, and user experience applications. One of the most groundbreaking developments is AMIE (Articulate Medical Intelligence Explorer), an AI model designed to revolutionize medical diagnostics. AMIE is capable of interpreting medical images, such as X-rays, MRIs, and CT scans, allowing it to engage in diagnostic conversations. This new multimodal AMIE addresses a significant gap in previous iterations by enabling the AI to understand visual medical information, potentially surpassing human capabilities in certain diagnostic areas.
Chrome 137 will feature an additional layer of protection against tech support scams using the on-device Gemini Nano large language model (LLM). The LLM generates signals that Safe Browsing uses to deliver higher-confidence verdicts about potentially dangerous sites such as tech support scams. This on-device approach allows Chrome to detect and block attacks that haven't been crawled before and to see threats the way users see them. When a user navigates to a potentially dangerous page, triggers characteristic of tech support scams cause Chrome to evaluate the page with the on-device Gemini Nano LLM.

Google is also debuting an AI-powered image-to-video tool on Honor smartphones. The tool, called AI Image to Video, can generate 5-second videos from pictures with a single tap, adding action to still images: it can make a person in a photo smile or blink, or simulate flowing water in a picture of a river. The feature is part of the Honor 400's Gallery app and requires no extra downloads, and Google says it will not use users' pictures to train its AI.
References :
Andrew Hutchinson@socialmediatoday.com
//
References: felloai.com, www.socialmediatoday.com
Google is aggressively expanding its AI capabilities across various platforms, aiming to enhance user experiences and maintain a competitive edge. One significant advancement is the launch of an AI-based system for generating 3D assets for shopping listings. This new technology simplifies the creation of high-quality, shoppable 3D product visualizations from as few as three product images, leveraging Google's Veo AI model to infer movement and infill frames, resulting in more responsive and logical depictions of 3D objects. This enhancement allows brands to include interactive 3D models of their products in Google Shopping displays, creating a more engaging online shopping experience and potentially feeding into VR models for virtual worlds depicting real objects.
Google is also leveraging AI to combat tech support scams in its Chrome browser. The new feature, launched with Chrome 137, utilizes the on-device Gemini Nano large language model (LLM) to detect and block potentially dangerous sites. When a user navigates to a suspicious page exhibiting characteristics of tech support scams, Chrome evaluates the page with the LLM to extract security signals, such as the intent of the page, and sends this information to Safe Browsing for a final verdict. This on-device approach allows threats to be detected as they appear to users, even on malicious sites that exist for less than 10 minutes, providing an additional layer of protection against cybercrime.

Furthermore, Google is exploring the potential of AI in healthcare with advancements to its Articulate Medical Intelligence Explorer (AMIE). The multimodal AMIE can now interpret visual medical information such as X-rays, CT scans, and MRIs, and engage in diagnostic conversations with remarkable accuracy. AMIE can request, interpret, and reason about visual medical data: it can look at a scan, discuss its findings, ask clarifying questions, and integrate that visual information into its overall diagnostic reasoning, potentially surpassing human capabilities in certain diagnostic areas. This development suggests a future where AI could play a more active and insightful role in diagnosing disease, revolutionizing healthcare as we know it.
References :
info@thehackernews.com (The Hacker News)
//
Google is significantly ramping up its efforts to combat online scams through the integration of advanced AI technologies across various platforms, including Search, Chrome, and Android. The company's intensified focus aims to safeguard users from the increasing sophistication of cybercriminals, who are constantly evolving their tactics. Google's strategy centers around deploying AI-powered defenses to detect and block fraudulent activities in real-time, providing a more secure online experience.
AI is now central to Google's anti-scam strategy, and the company reports a substantial increase in its ability to identify and block harmful content. Recent updates to its AI classifiers enable Google to detect 20 times more scam pages than before, significantly improving the quality of search results. The AI systems have proven especially effective against specific scam types: dedicated protections against sites impersonating airline customer-service providers have reduced related attacks by over 80%.

Beyond Search, Google is extending its AI-driven security measures to Chrome and Android to provide comprehensive protection across multiple surfaces. Chrome's Enhanced Protection mode now utilizes Gemini Nano, an on-device AI model, to instantly identify scams, even those previously unseen, and the Gemini Nano LLM will soon expand to Android devices as well. Android will also gain AI warnings that flag suspicious website notifications, plus scam detection in Google Messages and Phone to bolster defenses against deceptive calls and texts. This multi-layered approach demonstrates Google's commitment to staying ahead of scammers and ensuring a safer digital environment for its users.
References :
info@thehackernews.com (The Hacker News)
//
Google is integrating its Gemini Nano AI model into the Chrome browser to provide real-time scam protection for users. This enhancement focuses on identifying and blocking malicious websites and activities as they occur, addressing the challenge posed by scam sites that often exist for only a short period. The integration of Gemini Nano into Chrome's Enhanced Protection mode, available since 2020, allows for the analysis of website content to detect subtle signs of scams, such as misleading pop-ups or deceptive tactics.
When a user visits a potentially dangerous page, Chrome uses Gemini Nano to evaluate security signals and determine the intent of the site. This information is then sent to Safe Browsing for a final assessment. If the page is deemed likely to be a scam, Chrome displays a warning with options to unsubscribe from notifications or view the blocked content, while still allowing users to override the warning if they believe it is unnecessary. The system is designed to adapt to evolving scam tactics, offering a proactive defense against both known and newly emerging threats.

The AI-powered scam detection system has already demonstrated its effectiveness, reportedly catching 20 times more scam-related pages than previous methods. Google also plans to extend the feature to Chrome on Android later this year, expanding protection to mobile users. The initiative follows criticism over Gmail phishing scams that mimic law enforcement, and highlights Google's commitment to improving online security across its platforms and safeguarding users from fraudulent activities.
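The trigger-then-evaluate flow described above can be sketched in a few lines. This is a hypothetical illustration, not Chrome's actual implementation: the trigger phrases, signal structure, and function names are all invented stand-ins for the heuristic pre-filter, the on-device Gemini Nano call, and the server-side Safe Browsing verdict.

```python
from dataclasses import dataclass

# Illustrative trigger phrases; Chrome's real heuristics are not public.
TRIGGERS = ("virus detected", "call support now", "your computer is locked")

@dataclass
class PageSignals:
    url: str
    looks_like_tech_support_scam: bool

def heuristic_triggers(page_text: str) -> bool:
    """Cheap pre-filter: only invoke the on-device LLM when scam markers appear."""
    text = page_text.lower()
    return any(t in text for t in TRIGGERS)

def llm_extract_signals(url: str, page_text: str) -> PageSignals:
    """Stand-in for the on-device Gemini Nano call that infers page intent."""
    return PageSignals(url=url,
                       looks_like_tech_support_scam=heuristic_triggers(page_text))

def safe_browsing_verdict(signals: PageSignals) -> str:
    """Stand-in for the server-side Safe Browsing decision on the extracted signals."""
    return "warn" if signals.looks_like_tech_support_scam else "allow"

def evaluate_page(url: str, page_text: str) -> str:
    if not heuristic_triggers(page_text):
        return "allow"  # no LLM call needed for ordinary pages
    signals = llm_extract_signals(url, page_text)
    return safe_browsing_verdict(signals)
```

The key design point the article describes survives even in this toy version: the expensive model only runs when cheap triggers fire, and the final verdict stays with Safe Browsing rather than the local model.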
References :
erichs211@gmail.com (Eric@techradar.com)
//
Google's powerful AI model, Gemini 2.5 Pro, has achieved a significant milestone by completing the classic Game Boy game Pokémon Blue. This accomplishment, spearheaded by software engineer Joel Z, demonstrates the AI's enhanced reasoning and problem-solving abilities. Google CEO Sundar Pichai celebrated the achievement online, highlighting it as a substantial win for AI development. The project showcases how AI can learn to handle complex tasks, requiring long-term planning, goal tracking, and visual navigation, which are vital components in the pursuit of general artificial intelligence.
Joel Z facilitated Gemini's gameplay over several months, livestreaming the AI's progress. While Joel is not affiliated with Google, his efforts were supported by the company's leadership. To let Gemini navigate the game, Joel used the mGBA emulator to feed the model screenshots and game data, such as character position and map layout. He also incorporated smaller AI helpers, like a "Pathfinder" and a "Boulder Puzzle Solver," to tackle particularly challenging segments. These sub-agents, themselves versions of Gemini, were deployed strategically by the AI to manage complex situations, showcasing its ability to distinguish routine tasks from complicated ones.

Google is also experimenting with transforming its search engine into a Gemini-powered chatbot via an AI Mode. This feature, currently being tested with a small percentage of U.S. users, delivers conversational answers generated from Google's vast index, effectively turning Search into an answer engine. Instead of a list of links, AI Mode provides rich, visual summaries and remembers prior queries, directly competing with the search features of Perplexity and ChatGPT. While this shift could affect organic SEO tactics, it signals Google's commitment to integrating AI more deeply into its core products, offering users a more intuitive and informative search experience.
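The emulator-plus-sub-agents harness described for the Pokémon run can be sketched as a routing loop. Everything here is invented for illustration; Joel Z's actual harness is not public in this form, and the rule-based routing below merely stands in for the main Gemini model's decision about when to delegate.

```python
# Stand-ins for the specialist sub-agents mentioned in the article.
def pathfinder(state: dict) -> str:
    return "walk_route"          # hypothetical navigation sub-agent

def boulder_puzzle_solver(state: dict) -> str:
    return "push_boulder"        # hypothetical puzzle sub-agent

def main_agent(state: dict) -> str:
    # The real system sends a screenshot plus map/position data to Gemini;
    # here simple flags stand in for the model's routing decision.
    if state.get("stuck_on_puzzle"):
        return boulder_puzzle_solver(state)
    if state.get("needs_navigation"):
        return pathfinder(state)
    return "press_a"             # default: advance dialogue or battle

def game_loop(states: list[dict]) -> list[str]:
    """Feed successive emulator states to the agent and collect button inputs."""
    return [main_agent(s) for s in states]
```

The point this illustrates is the delegation pattern: a general agent handles routine frames itself and hands recognizably hard situations to narrower specialists.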
References :
@the-decoder.com
//
Google is integrating its Gemini AI model deeper into its search engine with the introduction of 'AI Mode'. This new feature, currently in a limited testing phase in the US, aims to transform the search experience into a conversational one. Instead of the traditional list of links, AI Mode delivers answers generated directly from Google’s index, functioning much like a Gemini-powered chatbot. The search giant is also dropping the Labs waitlist, allowing any U.S. user who opts in to try the new search function.
The AI Mode includes visual place and product cards, enhanced multimedia features, and a left-side panel for managing past searches, providing more organized results for destinations, products, and services. Users can ask contextual follow-up questions, and AI Mode populates a sidebar with cards referring to the sources it used to formulate its answers. It can also access Google's Shopping Graph and localized data from Maps. The move is seen as Google's direct response to AI-native upstarts that are recasting the search bar as a natural-language front end to the internet.

Google CEO Sundar Pichai also hopes to reach an agreement with Apple to offer Gemini as an option within Apple Intelligence by the middle of this year. The rise of AI in search raises concerns for marketers: organic SEO tactics built on blue links will erode, and content will need to be prepared for zero-click, AI-generated summaries.
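The follow-up behavior described above, where a question like "Is it waterproof?" is resolved against an earlier query, depends on the session carrying prior turns as context. A minimal sketch, with a stubbed answer generator in place of the real Gemini-backed engine:

```python
# Toy model of a conversational search session. The answer generation is a
# stub; the real AI Mode is backed by Gemini and Google's search index.
class SearchSession:
    def __init__(self) -> None:
        self.history: list[str] = []

    def ask(self, query: str) -> str:
        self.history.append(query)
        # A real system would pass the accumulated history to the model so
        # pronouns like "it" resolve; here we just show context carrying over.
        context = " | ".join(self.history)
        return f"answer for: {context}"

session = SearchSession()
first = session.ask("camping chair under $100")
follow_up = session.ask("Is it waterproof?")
```

Because the second call sees the first query in its context, the ambiguous "it" can be grounded, which is the essential difference from stateless keyword search.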
References :
Krishna Chytanya@AI & Machine Learning
//
Google is significantly enhancing its AI capabilities across its Gemini platform and various products, focusing on multilingual support and AI-assisted features. To address the needs of global users, Google is leveraging the Model Context Protocol (MCP), which enables the creation of chatbots capable of supporting multiple languages. This system uses Gemini, Gemma, and Translation LLM to provide quick and accurate answers in different languages. MCP is a standardized way for AI systems to interact with external data sources and tools, allowing AI agents to access information and execute actions outside their own models.
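The pattern described above, a chatbot reaching through a standardized tool interface to translate, answer, and translate back, can be sketched with an invented tool registry. This is not the actual MCP wire protocol (which runs JSON-RPC between clients and servers); the registry, tool names, and handlers below are illustrative stand-ins for Translation LLM and Gemini/Gemma calls.

```python
# Hypothetical tool registry standing in for an MCP-style tool interface.
TOOLS = {}

def tool(name: str):
    def register(fn):
        TOOLS[name] = fn
        return fn
    return register

@tool("translate")
def translate(text: str, target: str) -> str:
    # Stand-in for a Translation LLM call; tags output with the target language.
    return f"[{target}] {text}"

@tool("answer")
def answer(question: str) -> str:
    # Stand-in for a Gemini/Gemma call that answers in a pivot language.
    return f"answer to: {question}"

def multilingual_chat(question: str, user_lang: str) -> str:
    """Translate the question to English, answer it, translate the reply back."""
    english = TOOLS["translate"](question, "en")
    reply = TOOLS["answer"](english)
    return TOOLS["translate"](reply, user_lang)
```

The design point is that the chatbot never calls the translation or answering models directly; it discovers and invokes them through a uniform registry, which is what lets tools be swapped without changing agent logic.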
Google Gemini is also receiving AI-powered image editing features within its chat interface. Users can now tweak backgrounds, swap out objects, and make other adjustments to both AI-generated and personal photos, with support for over 45 languages in most countries. The editing tools are rolling out gradually on web and mobile. Google is also expanding access to its AI tools with a standalone app for NotebookLM, one of its best AI tools, making it easier to delve into notes on complex topics directly from a smartphone.

In a move toward monetization within the AI space, Google is testing AdSense ads inside AI chatbot conversations, expanding its AdSense for Search platform to support chatbots from startups as well as its own Gemini tools. This reflects a shift in how people find information online, as AI services increasingly provide direct answers and potentially reduce the need to visit traditional websites. Furthermore, Google is extending Gemini's reach to younger users, rolling out a version for children under 13 with parent-managed Google accounts through Family Link, with safety and privacy measures in place.
References :
@the-decoder.com
//
Google is enhancing its AI capabilities across several platforms. NotebookLM, the AI-powered research tool, is expanding its "Audio Overviews" feature to approximately 75 languages, including less common ones such as Icelandic, Basque, and Latin. This enhancement will enable users worldwide to listen to AI-generated summaries of documents, web pages, and YouTube transcripts, making research more accessible. The audio for each language is generated by AI agents using metaprompting, with the Gemini 2.5 Pro language model as the underlying system, moving towards audio production technology based entirely on Gemini’s multimodality.
These Audio Overviews distill a mix of documents into a scripted conversation between two synthetic hosts. Users can direct the tone and depth through prompts, and then download an MP3 or keep playback within the notebook. The expansion rebuilds the speech stack and language detection while maintaining a one-click flow. Early testers report that multilingual voices make long reading lists easier to digest and provide an alternative channel for blind or low-vision audiences.

In addition to the NotebookLM enhancements, Google Gemini is receiving AI-assisted image editing capabilities. Users will be able to modify backgrounds, swap objects, and make other adjustments to both AI-generated and personal photos directly within the chat interface. The editing tools are being introduced gradually on web and mobile, with support for over 45 languages in most countries; on phones, the new features require the latest version of the Gemini app.
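The two-host format described above, where source material is recast as a scripted back-and-forth, can be illustrated with a toy scripter. The real Audio Overviews pipeline uses Gemini 2.5 Pro with metaprompting plus speech synthesis; this sketch only shows the alternating-turn structure, with host names invented for the example.

```python
# Toy sketch: interleave discussion points between two synthetic hosts.
def script_overview(points: list[str], hosts=("Host A", "Host B")) -> list[str]:
    """Assign each point to alternating hosts, producing a dialogue script."""
    return [f"{hosts[i % 2]}: {point}" for i, point in enumerate(points)]

lines = script_overview([
    "Today we're digging into the source documents.",
    "Right, and the key theme is multilingual support.",
    "Let's start with how the audio is generated.",
])
```

In the real system the points themselves are generated by the model from the user's documents and steered by the user's prompt; only the alternation shown here is structural.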
References :
@cloud.google.com
//
Google is significantly expanding the AI and ML capabilities within its BigQuery and Vertex AI platforms. BigQuery is receiving a boost with the integration of the TimesFM forecasting model, a state-of-the-art, pre-trained model from Google Research designed to simplify forecasting problems. This managed and scalable engine enables users to generate forecasts for both single and millions of time series within a single query. Additionally, BigQuery now supports structured data extraction and generation using large language models (LLMs) through the AI.GENERATE_TABLE function, alongside new row-wise inference functions, expanded model choice with Gemini and OSS models, and the general availability of the Contribution Analysis feature.
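The two BigQuery capabilities described above, TimesFM-backed time-series forecasting and LLM-driven structured extraction via AI.GENERATE_TABLE, are both invoked from SQL. The queries below are illustrative shapes only: the dataset, table, and model names are placeholders, and the exact argument lists should be checked against the BigQuery documentation before use.

```python
# Illustrative query text; build-only, nothing is executed against BigQuery here.
FORECAST_SQL = """
SELECT *
FROM AI.FORECAST(
  TABLE my_dataset.sales,          -- placeholder table of (day, revenue) rows
  data_col => 'revenue',
  timestamp_col => 'day',
  horizon => 30)                   -- forecast 30 steps ahead
"""

EXTRACT_SQL = """
SELECT *
FROM AI.GENERATE_TABLE(
  MODEL my_dataset.gemini_model,   -- placeholder remote Gemini model
  TABLE my_dataset.support_tickets,
  STRUCT('product STRING, severity STRING' AS output_schema))
"""

def build_queries() -> dict[str, str]:
    """Collect the sketched statements, e.g. for submission via a BigQuery client."""
    return {"forecast": FORECAST_SQL, "extract": EXTRACT_SQL}
```

In practice these strings would be passed to a BigQuery client (for example `google.cloud.bigquery.Client.query`); the appeal the article highlights is that forecasting and extraction become single-query operations rather than separate ML pipelines.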
NotebookLM is also expanding, with the "Audio Overviews" feature now available in approximately 75 languages. Powered by Gemini, it lets users listen to AI-generated summaries of documents, slides, web pages, and YouTube transcripts, distilling any mix of sources into a scripted back-and-forth between two synthetic hosts. Users can direct tone and depth through a prompt, then download an MP3 or keep playback inside the notebook. Early testers report that multilingual voices make long reading lists easier to digest on commutes and provide an alternative channel for blind or low-vision audiences.

Furthermore, Google is experimenting with AI-powered language-learning formats through its "Little Language Lessons," integrated directly into NotebookLM and running on Gemini. These tools support situational learning, generating content dynamically based on user-described scenarios rather than relying on fixed vocabulary lists. Google is also preparing new Gemini AI subscription tiers, potentially including a "Gemini Ultra" plan, as evidenced by code discoveries in the Gemini web interface referencing distinct tiers with varying capabilities and usage limits.
References :
@cloud.google.com
//
Google is reportedly developing new subscription tiers for its Gemini AI service, potentially introducing a "Gemini Ultra" plan. Code discoveries within the Gemini web interface suggest that these additional tiers will offer varying capabilities and usage limits beyond the existing "Gemini Advanced" tier, which is available through the Google One AI Premium plan at $19.99 per month. These plans could offer increased or unlimited access to specific features, with users potentially encountering upgrade prompts when reaching usage limits on lower tiers.
References to "Gemini Pro" and "Gemini Ultra" indicate that Google is planning distinct tiers with differing capabilities. The strategy mirrors Google's broader shift towards a subscription-based model, as evidenced by the growth of Google One and YouTube Premium. By offering tiered access, Google can cater to a wider range of users, from casual consumers to professionals requiring advanced AI capabilities.

In other news, Alphabet CEO Sundar Pichai testified in court regarding the Justice Department's antitrust case against Google. Pichai defended Google against the DOJ's proposals, calling them "extraordinary" and akin to a "de facto divestiture" of the company's search engine. He also expressed optimism about integrating Gemini into iPhones this fall, revealing conversations with Apple CEO Tim Cook and expressing hope for a deal by mid-year. Separately, BigQuery is adding the TimesFM forecasting model, structured data extraction and generation with LLMs, and row-wise (scalar) LLM functions to simplify data analysis.
References :
@techradar.com
//
Google has officially launched its AI-powered NotebookLM app on both Android and iOS platforms, expanding the reach of this powerful research tool beyond the web. The app, which leverages AI to summarize and analyze documents, aims to enhance productivity and learning by enabling users to quickly extract key insights from large volumes of text. The release of the mobile app coincides with Google I/O 2025, where further details about the app's features and capabilities are expected to be unveiled. Users can now pre-order the app on both the Google Play Store and Apple App Store, ensuring automatic download upon its full launch on May 20th.
NotebookLM provides users with an AI-powered workspace to collate information from multiple sources, including documents, webpages, and more. The app offers smart summaries and allows users to ask questions about the data, making it a helpful alternative to Google Gemini for focused research tasks. The mobile version retains most of the web app's features, including the ability to create and browse notebooks, add sources, and converse with the AI about the content. Users can also generate audio overviews or "podcasts" of their notes, which can be interrupted for follow-up questions.

In addition to the mobile app launch, Google has significantly expanded the language support for NotebookLM's "Audio Overviews" feature. Originally available only in English, the AI-generated summaries can now be accessed in approximately 75 languages, including Spanish, French, Hindi, Turkish, Korean, Icelandic, Basque, and Latin. This expansion allows researchers, students, and content creators worldwide to benefit from NotebookLM's audio summarization, making it easier to digest long reading lists and providing an alternative channel for blind or low-vision users.
References :