Carl Franzen@AI News | VentureBeat
//
Google is enhancing Android development with its Gemini AI model, launching Gemini in Android Studio for Businesses to streamline the development of work applications. This new subscription-based offering aims to meet growing demand for secure, privacy-conscious, and customizable AI integration within large organizations and development teams. By leveraging Gemini, Android developers can more easily build workplace apps within the Android ecosystem, with features tailored for managing sensitive codebases and workflows. The move brings AI-assisted coding into enterprise-grade environments without compromising data governance or intellectual property protection.
Visual AI in Gemini Live is also bringing AI-powered vision to devices like the Samsung Galaxy S25. The upgrade lets users grant Gemini Live access to their camera and screen sharing, enabling the AI to hold real-time conversations about what it sees. Samsung says the upgrade means the AI can "have a real-time conversation with users about what it sees – making everyday tasks easier." For Galaxy S25 users, the update is already rolling out as a free upgrade, underscoring the deepening partnership between Google and Samsung in the AI space. In addition to benefiting developers and end users, Gemini is also being integrated into other Google services, such as Google Chat. Gemini in Google Chat can now help users catch up on unread conversations with summaries, extending this ability to direct messages and to conversations that have already been read. The feature is already available and has been expanded to three additional languages: Spanish, Portuguese, and German. These enhancements across different platforms show Google's commitment to leveraging AI to improve productivity and user experience across its suite of products.
References :
@laptopmag.com
//
Microsoft is celebrating its 50th anniversary on April 4th, 2025, a milestone the company is framing around its AI-driven future. Top executives highlighted the company's vision for AI, centered on Copilot, which will be integrated more deeply into Microsoft products. Mustafa Suleyman, CEO of Microsoft AI, emphasized that Copilot will understand users in the context of their lives and show up in the right way at the right time, calling it "a new kind of relationship with technology."
Microsoft is launching Researcher and Analyst AI agents in Microsoft 365 Copilot. Researcher integrates OpenAI's deep research model with Microsoft 365 Copilot's orchestration and search capabilities to perform complex, multi-step research workflows. Analyst, powered by OpenAI's o3-mini reasoning model, helps turn raw data into actionable insights within minutes and can run Python code to process complex data queries. These agents, accessible through the "Frontier" program, aim to provide on-demand assistance with data analysis and general research tasks, enhancing user capabilities across various applications.
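Microsoft has not published examples of the code Analyst generates, but the kind of analysis the paragraph above describes — running Python over raw data to surface an insight — can be sketched with a short, self-contained script. The sales figures and field names here are invented purely for illustration:

```python
import io
import csv
import statistics

# Hypothetical raw export of the kind a user might hand to an AI analyst.
raw = """month,region,revenue
2025-01,EMEA,120000
2025-01,NA,98000
2025-02,EMEA,134000
2025-02,NA,91000
2025-03,EMEA,151000
2025-03,NA,112000
"""

# Aggregate revenue per month across regions.
totals = {}
for row in csv.DictReader(io.StringIO(raw)):
    totals[row["month"]] = totals.get(row["month"], 0) + int(row["revenue"])

months = sorted(totals)
# Month-over-month growth rates between consecutive months.
growth = [(totals[b] - totals[a]) / totals[a] for a, b in zip(months, months[1:])]

print({m: totals[m] for m in months})
print(f"average month-over-month growth: {statistics.mean(growth):.1%}")
```

An agent like Analyst would generate and execute this sort of script behind the scenes, then translate the numeric result into a plain-language finding for the user.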
References :
Alexey Shabanov@TestingCatalog
//
Microsoft is supercharging its Copilot assistant with new capabilities, transforming it into a companion for all. The company is equipping Copilot with new features designed to make it more responsive and helpful, including memory recall and personalization. This will allow the AI assistant to better understand and remember user preferences, complete tasks, analyze surroundings, and keep life organized. Microsoft aims to make AI work for everyone and wants Copilot to become the AI companion people want, tailored just for them.
Microsoft launched two AI reasoning agents for 365 Copilot: Researcher and Analyst. Researcher handles complex research drawing on multiple sources, while Analyst functions as a data scientist, transforming raw data into insights. Both agents will roll out this month as part of a new program called "Frontier". The company is also adding new mobile and web features, personalization options, and exclusive tools for Surface devices.
References :
Mike Wheatley@SiliconANGLE
//
Microsoft is enhancing its Copilot AI assistant with advanced reasoning and agent capabilities aimed at boosting productivity across various Microsoft 365 applications. Key updates include deep reasoning within Copilot Studio and the introduction of specialized AI agents like Researcher and Analyst, designed to tackle complex problems and provide more nuanced insights. These agents leverage advanced reasoning models, such as OpenAI's o1 and o3-mini, to perform detailed analysis, methodical thinking, and data-driven decision-making, simulating the approach of skilled professionals.
Microsoft is also integrating Security Copilot with AI agents to automate tasks. New agents being added to Security Copilot include ones designed to sort through large volumes of security information, analyze notifications, and fix insecure user-access rules. Microsoft also released Knowledge Base-Augmented Language Models (KBLaM), a more efficient way to incorporate external knowledge into language models. All of these tools are designed to deliver the help users need more quickly.
References :
@Google DeepMind Blog
//
References :
Google DeepMind Blog, The Tech Basic
Google has launched Gemini 2.0, its most capable AI model yet, designed for the new agentic era. This model introduces advancements in multimodality, including native image and audio output, and native tool use, enabling the development of new AI agents. Gemini 2.0 is being rolled out to developers and trusted testers initially, with plans to integrate it into Google products like Gemini and Search. Starting today, the Gemini 2.0 Flash experimental model is available to all Gemini users.
New features powered by Project Astra are now accessible to Google One AI Premium subscribers, enabling live video analysis and screen sharing. This update transforms Gemini into a more interactive visual helper, capable of instantly answering questions about what it sees through the device's camera. Users can point their camera at an object, and Gemini will describe it or offer suggestions, providing a more contextual understanding of the real world. These advanced tools will also enhance AI Overviews in Google Search.
References :
Facebook@Meta
//
References :
Meta, TechInformed
Meta is expanding its AI assistant, Meta AI, to 41 European countries, including those in the European Union, and 21 overseas territories. This marks Meta's largest global AI rollout to date. However, the European version will initially offer limited capabilities, starting with an "intelligent chat function" available in six languages: English, French, Spanish, Portuguese, German, and Italian.
This rollout comes after regulatory challenges with European privacy authorities. Notably, the European version of Meta AI has not been trained on local users' data, addressing previous concerns about user consent and data privacy. Users can access the assistant through a blue circle icon across Meta's apps, including Facebook, Instagram, WhatsApp, and Messenger, with plans to expand the feature to group chats, starting with WhatsApp. Meta is also mulling ad-free subscriptions for users in the UK following legal challenges to its personalized advertising practices.
References :
Andrew Liszewski@The Verge
//
Amazon has announced Alexa+, a new, LLM-powered version of its popular voice assistant. This upgraded version will cost $19.99 per month, but will be included at no extra cost for Amazon Prime subscribers. Alexa+ boasts enhanced AI agent capabilities, enabling users to perform tasks like booking Ubers, creating study plans, and sending texts via voice command. These new features are intended to provide a more seamless and natural conversational experience. Early access to Alexa+ will begin in late March 2025 for customers with eligible Echo Show devices in the United States.
Amazon emphasizes that Alexa+ uses a "model agnostic" system, drawing on Amazon Bedrock and employing various AI models, including Amazon Nova and models from Anthropic, to optimize performance. This approach allows Alexa+ to choose the best model for each task, leveraging specialized "experts" to orchestrate services. With integration into tens of thousands of devices and services, including news sources such as Time, Reuters, and the Associated Press, Alexa+ provides accurate, real-time information.
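Amazon has not published the routing logic behind this "model agnostic" design, but the core idea — selecting a model per task rather than hard-wiring a single one — can be sketched in a few lines. The model IDs and task categories below are illustrative assumptions, not Alexa's actual configuration:

```python
# Illustrative sketch of task-based model routing. The registry below is
# invented for demonstration and is not Amazon's actual configuration.
ROUTES = {
    "casual_chat": "amazon.nova-lite",            # hypothetical: fast, low-cost model
    "deep_reasoning": "anthropic.claude-sonnet",  # hypothetical: stronger reasoning model
    "summarization": "amazon.nova-pro",           # hypothetical: mid-tier general model
}
DEFAULT_MODEL = "amazon.nova-lite"

def pick_model(task_type: str) -> str:
    """Return the model ID registered for a task, falling back to a default."""
    return ROUTES.get(task_type, DEFAULT_MODEL)

print(pick_model("deep_reasoning"))  # anthropic.claude-sonnet
print(pick_model("unknown_task"))    # amazon.nova-lite
```

In a real system the router would also weigh latency, cost, and capability signals, but the design benefit is the same: swapping or adding a model is a registry change, not a rewrite of the assistant.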
References :
@Techmeme
//
Perplexity has introduced Perplexity Assistant, an AI agent now available within their Android application. This assistant is designed to aid users with daily tasks by utilizing reasoning, search, and the ability to perform actions across multiple apps. Examples include hailing a ride or searching for music, and it leverages Perplexity's search engine for access to real-time web information. The assistant is also multimodal, meaning it can use the phone's camera to identify objects or answer questions about items on the screen. Perplexity Assistant can maintain context across multiple interactions, allowing users to perform complex tasks such as researching and booking restaurant reservations.
The initial release of Perplexity Assistant is free for users and available in 15 languages. Perplexity acknowledges that certain actions may not always work correctly but aims to address these issues in future updates. Alongside the assistant, Perplexity has launched Sonar Pro, a GenAI search API that the company claims delivers the best answer quality in its benchmark tests.
References :