Dr. Hura @ Digital Information World
References: gHacks Technology News, Digital Information World
Google is reportedly developing a child-friendly version of its Gemini AI chatbot. An APK teardown by Android Authority has revealed strings within the Google app that hint at an experience optimized for kids. The code includes mentions of a welcome screen tailored for younger users and outlines key functionalities of the upcoming version. Among other things, Gemini for kids will let children create stories and get help with homework.
This new iteration is expected to ship with additional safeguards, ensuring safer interactions for children while adhering to Google's strict privacy policies around data processing. Google already enforces robust content policies for teenagers using the original Gemini app, automatically onboarding them with guidance on responsible AI use. Given Google's history of child-centric features, it is plausible that the company will move forward with a dedicated version of Gemini aimed at children.
Evelyn Blake @ The Tech Basic
References: The Tech Basic, gHacks Technology News
Google has begun rolling out real-time interaction features to its AI assistant, Gemini, enabling live video and screen sharing. These enhancements, powered by Project Astra, allow users to engage more intuitively with their devices, marking a significant advancement in AI-assisted technology. These features are available to Google One AI Premium subscribers.
The new live video feature lets users point their smartphone cameras at their surroundings and interact with Gemini visually in real time, with the AI instantly answering questions about what it observes. The screen-sharing feature lets Gemini analyze the content displayed on screen and provide insights, which is useful for navigating complex applications or troubleshooting issues. Google plans to expand access to more users soon.
Evelyn Blake @ The Tech Basic
References: The Tech Basic, gHacks Technology News
Google has started rolling out new AI tools for Gemini, allowing the assistant to analyze your phone screen or camera feed in real time. These features are powered by Project Astra and are available to Google One AI Premium subscribers. The update transforms Gemini into a visual helper, enabling users to point their camera at an object and receive descriptions or suggestions from the AI.
These features are part of Google's Project Astra initiative, which aims to improve AI's ability to understand and interact with the real world in real time. Gemini can now analyze your screen through a "Share screen with Live" button, as well as your phone's camera feed. Early adopters have tested the screen-reading tool, and Google plans to expand access to more users soon. With Gemini's live video and screen-sharing functionalities, Google is positioning itself ahead in the competitive landscape of AI assistants.
Google DeepMind Blog
References: Google DeepMind Blog, The Tech Basic
Google has launched Gemini 2.0, its most capable AI model yet, designed for the new agentic era. This model introduces advancements in multimodality, including native image and audio output, and native tool use, enabling the development of new AI agents. Gemini 2.0 is being rolled out to developers and trusted testers initially, with plans to integrate it into Google products like Gemini and Search. Starting today, the Gemini 2.0 Flash experimental model is available to all Gemini users.
New features powered by Project Astra are now accessible to Google One AI Premium subscribers, enabling live video analysis and screen sharing. This update transforms Gemini into a more interactive visual helper, capable of instantly answering questions about what it sees through the device's camera. Users can point their camera at an object, and Gemini will describe it or offer suggestions, providing a more contextual understanding of the real world. These advanced tools will also enhance AI Overviews in Google Search.
tomsguide.com
Google is enhancing its AI capabilities by integrating Gemini into Google Calendar and introducing Gemini Embedding, its most advanced text embedding model. The Calendar integration gives users a more efficient way to manage their schedules: natural-language prompts can create events, check schedules, and recall event details.
The experimental Gemini-based embedding model offers state-of-the-art performance, increased language support, and improved efficiency for AI-powered search, classification, and retrieval tasks. It supports over 100 languages and achieves a mean score of 68.32 on the MTEB Multilingual leaderboard, outperforming competitors.
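To illustrate the retrieval task such a model targets: documents and queries are embedded as vectors, and results are ranked by cosine similarity. A minimal sketch, using toy four-dimensional vectors and made-up document names standing in for real embedding output (in practice the vectors would come from the embedding model itself):

```python
import math

def cosine_similarity(a, b):
    # Dot product divided by the product of the vector norms.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy vectors standing in for real embedding output.
query = [0.9, 0.1, 0.0, 0.1]
documents = {
    "doc_about_search": [0.8, 0.2, 0.1, 0.0],
    "doc_about_cooking": [0.0, 0.1, 0.9, 0.3],
}

# Rank documents by similarity to the query, most similar first.
ranked = sorted(documents, key=lambda d: cosine_similarity(query, documents[d]), reverse=True)
print(ranked)
```

The same ranking step underlies embedding-powered search and classification; only the source of the vectors changes.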
Matthias Bastian @ THE DECODER
Google is enhancing its Gemini AI assistant with the ability to access users' Google Search history to deliver more personalized and relevant responses. This opt-in feature allows Gemini to analyze a user's search patterns and incorporate that information into its responses. The update is powered by the experimental Gemini 2.0 Flash Thinking model, which the company launched in late 2024.
This new capability, known as personalization, requires explicit user permission. Google is emphasizing transparency: users can turn the feature on or off at any time, and Gemini clearly indicates which data sources inform its personalized answers. To test the feature, Google suggests asking about vacation spots, YouTube content ideas, or potential new hobbies; the system then draws on individual search histories to make tailored suggestions.
Chris McKay @ Maginative
Google is currently navigating the "innovator’s dilemma" by experimenting with AI-driven search solutions to disrupt its core search business before competitors do. The company is testing and developing AI versions of Google Search, including a new experimental "AI Mode" powered by Gemini 2.0. This new mode transforms the search engine into a chatbot-like interface, providing more nuanced and multi-step answers to user queries. It allows users to interact with the AI, ask follow-up questions, and even compare products directly within the search page.
AI Mode delivers a full-page AI-generated response and runs on a custom version of Gemini 2.0; it is currently available to Google One AI Premium subscribers. The move comes as Google faces increasing competition from AI chatbots such as OpenAI's ChatGPT and Perplexity AI, which are rethinking the search experience. The goal is to provide immediate, conversational answers and a more comprehensive search experience, though some experts caution that traditional link-based search may eventually disappear as a result.
Dataconomy
Google has enhanced the iOS experience by integrating Gemini AI with new lock screen widgets and control center access. iPhone users can now interact with Gemini directly from their lock screen, gaining quick access to Gemini Live and other tools without needing to unlock their devices. This update simplifies AI interactions on Apple's mobile platform, making it more accessible and convenient for users.
The new Gemini app widget gives instant access to the AI's voice chat feature, Gemini Live: add the widget to the lock screen and tap it. Beyond voice chats, the update introduces three additional widgets: Camera Upload, which lets users snap photos and send them to Gemini for analysis; Reminders & Calendar, for quickly setting events or tasks; and Text Chat, for immediate typed conversations. These widgets aim to streamline basic AI interactions and reduce the need to unlock the device.
Carl Franzen @ AI News | VentureBeat
Google has recently launched a Gemini-powered Data Science Agent on its Colab Python platform, aiming to revolutionize data analysis. This AI agent automates various routine data science tasks, including importing libraries, cleaning data, running exploratory data analysis (EDA), and generating code. By handling these tedious processes, the agent allows data scientists to focus on more strategic and insightful aspects of their work, such as uncovering patterns and building predictive models.
The Data Science Agent, accessible within Google Colab, operates as an intelligent assistant that executes tasks autonomously, including error handling. Users describe their analysis objectives in plain language, and the agent generates a Colab notebook, executes it, and simplifies the machine learning workflow. Separately, Google is expanding Gemini's capabilities so that users will soon be able to ask questions about content displayed on their screens. This enhancement, part of Project Astra, enables real-time interaction by identifying screen elements and responding to user queries through voice.
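The routine steps the agent automates, cleaning the data and then running exploratory summaries, amount to boilerplate of roughly this shape. A stdlib-only sketch over a made-up toy dataset (an actual agent-generated notebook would typically use pandas):

```python
import statistics

# Toy records standing in for a user-uploaded dataset.
rows = [
    {"price": 10.0, "units": 3},
    {"price": 12.5, "units": 5},
    {"price": None, "units": 2},  # missing value to be cleaned out
    {"price": 9.0, "units": 4},
]

# Routine cleaning: drop any record with a missing field.
clean = [r for r in rows if all(v is not None for v in r.values())]

# Basic exploratory summary: mean and spread per numeric column.
summary = {
    col: {
        "mean": statistics.mean(r[col] for r in clean),
        "stdev": statistics.stdev(r[col] for r in clean),
    }
    for col in ("price", "units")
}
print(len(clean), summary)
```

Automating this kind of scaffolding is what frees the analyst to focus on patterns and modeling rather than setup.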
workspaceupdates.googleblog.com
Google is expanding language support for Gemini in the side panel of Workspace applications. Starting February 19, 2025, users can access Gemini in seventeen additional languages: Arabic, Chinese, Czech, Danish, Dutch, Finnish, Hebrew, Hungarian, Norwegian, Polish, Romanian, Russian, Swedish, Thai, Turkish, Ukrainian, and Vietnamese. This update allows users to summarize, brainstorm, and generate content from their emails, documents, and more in their preferred language.
Gemini is now available in the side panel of Google Docs, Google Sheets, Google Drive, and Gmail, and image generation is also supported in these languages. End users can access Gemini by clicking "Ask Gemini" in the top right corner of these applications. The side panel follows the language set in your Google account.
support.google.com
References: AI GPT Journal
Google is integrating its Gemini AI into YouTube to enhance user experience by providing video summarization and quick access to key information. This new feature, designed for busy learners and content creators, allows users to bypass lengthy introductions and irrelevant sections, focusing on the content that matters most. Gemini analyzes YouTube videos, identifies critical moments with timestamps, and answers specific questions based on the video's content, ultimately saving users time and increasing productivity.
Gemini is not intended to replace human judgment but to help users extract information efficiently. For example, it can pull recipe steps from cooking tutorials, summarize lectures for students, or analyze competitor videos for content creators. Users control Gemini's outputs, choosing between detailed summaries or bullet points. Google's Gemini AI is now also capable of recalling previous conversations for paying subscribers, building on its earlier ability to remember user preferences. This feature, available to Gemini Advanced users, remembers entire discussions, streamlining interactions and removing the need to revisit old chats. Users can manage their chat histories and privacy settings, and Google plans to extend the memory feature to other languages and Google Workspace customers.
the-decoder.com
Google's Gemini AI now possesses the ability to remember past conversations, allowing for more personalized and context-aware interactions. This feature is currently available to paying subscribers and enables Gemini to recall user preferences and past discussions, enhancing its capacity to provide relevant and coherent responses. Users can now ask Gemini to provide a summary of past discussions, eliminating the need to start from scratch or search for previous threads.
The new memory feature, which extends beyond remembering preferences to recalling entire conversations, is currently available in English for Gemini Advanced subscribers on the web and mobile. Google says users must "check responses for accuracy." Users can review, delete, or adjust how long Gemini retains their chats, and can disable all Gemini app activity through the MyActivity tab. In the coming weeks, the feature will expand to more languages and to Google Workspace Business and Enterprise customers.