News from the AI & ML world

DeeperML - #integration

@www.searchenginejournal.com //
References: Search Engine Journal, WhatIs
Google is aggressively expanding its artificial intelligence capabilities across its platforms, integrating the Gemini AI model into Search and Android XR smart glasses. The tech giant unveiled the rollout of "AI Mode" in Search for all U.S. users after initial testing in Labs. This move signifies a major shift in how people interact with the search engine, offering a conversational experience akin to consulting an expert.

Google is feeding its latest AI model, Gemini 2.5, into its search algorithms, enhancing features like "AI Overviews," which are now available in over 200 countries and 40 languages and reach 1.5 billion monthly users. In addition, Gemini 2.5 Pro introduces enhanced reasoning through Deep Think, enabling deeper and more thorough responses in AI Mode with Deep Search. Google is also testing new AI-powered features, including the ability to conduct searches through live video feeds with Search Live.

Google is also re-entering the smart glasses market with Android XR-powered spectacles featuring a hands-free camera and a voice-powered AI assistant. Built on Project Astra, the glasses let users talk back and forth with Search about what they see in real time through the camera. These advancements aim to create more personalized and efficient user experiences, marking a new phase in the AI platform shift and solidifying AI's position in search.

Recommended read:
References:
  • Search Engine Journal: Google Expands AI Features in Search: What You Need to Know
  • WhatIs: Google expands Gemini model, Search as AI rivals encroach
  • www.theguardian.com: Google unveils ‘AI Mode’ in the next phase of its journey to change search

@about.fb.com //
References: about.fb.com, techxplore.com
Meta has launched the Meta AI app, a standalone AI assistant designed to compete with ChatGPT. This marks Meta's initial step in building a more personalized AI experience for its users. The app is built with Llama 4, a model Meta touts as more cost-efficient than its competitors. Users can access Meta AI through voice conversations and interactions will be personalized over time as the AI learns user preferences and contexts across Meta's various apps.

The Meta AI app includes a Discover feed, enabling users to share and explore how others are utilizing AI. It also replaces Meta View as the companion app for Ray-Ban Meta smart glasses, allowing for seamless conversations across glasses, mobile app, and desktop interfaces. According to Meta CEO Mark Zuckerberg, the app is designed to be a personal AI, starting with basic context about user interests and evolving to incorporate more comprehensive knowledge from across Meta's platforms.

With the introduction of the Meta AI app, Meta aims to give users a direct path to its generative AI models, similar to the approach OpenAI has taken with ChatGPT. The release comes as OpenAI leads the straight-to-user AI market with its ChatGPT assistant, which is regularly updated with new capabilities. Zuckerberg noted that a billion people are already using Meta AI across Meta's apps.

Recommended read:
References:
  • about.fb.com: We're launching the Meta AI app, our first step in building a more personal AI.
  • techxplore.com: Meta releases standalone AI app, competing with ChatGPT
  • Antonio Pequeño IV: Here’s What To Know About Meta’s Standalone AI App To Rival ChatGPT

Facebook@Meta Newsroom //
Meta has launched its first dedicated AI application, directly challenging ChatGPT in the burgeoning AI assistant market. The Meta AI app, built on the Llama 4 large language model (LLM), aims to offer users a more personalized AI experience. The application is designed to learn user preferences, remember context from previous interactions, and provide seamless voice-based conversations, setting it apart from competitors. This move is a significant step in Meta's strategy to establish itself as a major player in the AI landscape, offering a direct path to its generative AI models.

The new Meta AI app features a 'Discover' feed, a social component allowing users to explore how others are utilizing AI and share their own AI-generated creations. The app also replaces Meta View as the companion application for Ray-Ban Meta smart glasses, enabling a fluid experience across glasses, mobile, and desktop platforms. Users will be able to initiate conversations on one device and continue them seamlessly on another. To use the application, a Meta products account is required, though users can sign in with their existing Facebook or Instagram profiles.

CEO Mark Zuckerberg emphasized that the app is designed to be a user’s personal AI, highlighting the ability to engage in voice conversations. The app begins with basic information about a user's interests and evolves over time to incorporate more detailed knowledge about the user and their network. The launch comes as rival companies develop their own AI models; Meta is seeking to demonstrate the power and flexibility of its in-house Llama 4 models to both consumers and third-party software developers.

Recommended read:
References:
  • The Register - Software: Meta bets you want a sprinkle of social in your chatbot
  • THE DECODER: Meta launches AI assistant app and Llama API platform
  • Analytics Vidhya: Latest Features of Meta AI Web App Powered by Llama 4
  • www.techradar.com: Meta AI is here to take on ChatGPT and give your Ray-Ban Meta Smart Glasses a fresh AI upgrade
  • Meta Newsroom: Meta's launch of a new AI app is covered.
  • techxplore.com: Meta releases standalone AI app, competing with ChatGPT
  • AI News | VentureBeat: Meta’s first dedicated AI app is here with Llama 4 — but it’s more consumer than productivity or business oriented
  • Antonio Pequeño IV: Meta's new AI app is designed to rival ChatGPT.
  • venturebeat.com: Meta partners with Cerebras to launch its new Llama API, offering developers AI inference speeds up to 18 times faster than traditional GPU solutions, challenging OpenAI and Google in the fast-growing AI services market.
  • about.fb.com: We're launching the Meta AI app, our first step in building a more personal AI.
  • siliconangle.com: Meta announces standalone AI app for personalized assistance
  • www.tomsguide.com: Meta takes on ChatGPT with new standalone AI app — here's what makes it different
  • Data Phoenix: Meta launched a dedicated Meta AI app
  • techstrong.ai: Can Meta’s New AI App Top ChatGPT?
  • the-decoder.com: Meta launches AI assistant app and Llama API platform
  • SiliconANGLE: Meta Platforms Inc. today announced a new standalone Meta AI app that houses an artificial intelligence assistant powered by the company’s Llama 4 large language model to provide a more personalized experience for users.
  • techstrong.ai: Meta Previews Llama API to Streamline AI Application Development
  • TestingCatalog: Meta tests new AI features including Reasoning and Voice Personalization
  • www.windowscentral.com: Mark Zuckerberg says Meta is developing AI friends to beat "the loneliness epidemic" — after Bill Gates claimed AI will replace humans for most things
  • Ken Yeung: IN THIS ISSUE: Meta hosts its first-ever event around its Llama model, launching a standalone app to take on Microsoft’s Copilot and ChatGPT. The company also plans to soon open its LLM up to developers via an API. But can Meta’s momentum match its ambition?
  • www.marktechpost.com: Meta AI Introduces First Version of Its Llama 4-Powered AI App: A Standalone AI Assistant to Rival ChatGPT
  • MarkTechPost: Meta AI Introduces First Version of Its Llama 4-Powered AI App: A Standalone AI Assistant to Rival ChatGPT

@techstrong.ai //
References: techstrong.ai, www.eweek.com
President Donald Trump has signed an executive order aimed at integrating artificial intelligence (AI) into the K-12 education system. The order, titled "Advancing Artificial Intelligence Education for American Youth," directs the Education and Labor Departments to foster AI training for students and collaborate with states to promote AI education. This initiative seeks to equip American students with the skills necessary to use and advance AI technology, ensuring the U.S. remains a global leader in this rapidly evolving field.

The executive order establishes a White House Task Force on AI Education, which will include Education Secretary Linda McMahon and Labor Secretary Lori Chavez-DeRemer, and be chaired by Michael Kratsios, director of the White House Office of Science and Technology Policy. This task force will be responsible for creating a "Presidential AI Challenge" to highlight and encourage AI use in classrooms. It will also work to establish public-private partnerships to provide resources for AI education in K-12 schools. Private sector AI companies like Elon Musk's xAI and OpenAI may be asked to participate, helping develop programs for schools.

Beyond the task force, Trump's order directs the Department of Education to prioritize AI-related teacher training grants and the National Science Foundation to prioritize research on AI in education. The Labor Department is also instructed to expand AI-related apprenticeships. According to a draft of the order, AI is described as "driving innovation across industries, enhancing productivity, and reshaping the way we live and work." The move underscores bipartisan concerns about integrating AI into teaching, with the goal of preparing students for a future increasingly shaped by AI technologies.

Recommended read:
References:
  • techstrong.ai: President Donald Trump signed an executive order on Wednesday that makes artificial intelligence (AI) part of the K-12 school curriculum.
  • www.eweek.com: AI Education in K-12 Classes Might Become US Policy, Per Trump EO Draft
  • www.usatoday.com: President Trump signs executive order boosting AI in K-12 schools.

Carl Franzen@AI News | VentureBeat //
Google is enhancing Android development with its Gemini AI model, launching Gemini in Android Studio for Businesses to streamline the design of work applications. This new offering is a subscription-based service that aims to meet the growing demand for secure, privacy-conscious, and customizable AI integration within large organizations and development teams. By leveraging Gemini, Android developers can now more easily create workplace apps within the Android ecosystem, with enhanced features tailored for managing sensitive codebases and workflows. This move brings AI-assisted coding into enterprise-grade environments without compromising data governance or intellectual property protection.

Visual AI in Gemini Live is also bringing AI-powered vision to devices like the Samsung Galaxy S25. The upgrade allows users to grant Gemini Live access to their camera and screen sharing, enabling the AI to provide real-time conversational interactions about what it sees. Samsung states the new upgrade to Gemini Live means the AI can 'have a real-time conversation with users about what it sees – making everyday tasks easier.' For Galaxy S25 users, this update is already rolling out as a free upgrade, demonstrating the deepening partnership between Google and Samsung in the AI space.

In addition to benefiting developers and end users, Gemini is being integrated into other Google services, such as Google Chat. Gemini in Google Chat can now help users catch up with summaries of unread conversations, an ability that also extends to direct messages and already-read conversations. This functionality, already available, has been expanded to three additional languages: Spanish, Portuguese, and German. These enhancements across different platforms show Google's commitment to leveraging AI to improve productivity and user experience across its suite of products.

Recommended read:
References:
  • AI News | VentureBeat: Google launches Gemini in Android Studio for Businesses, making it easier for devs to design work apps
  • www.techradar.com: Your Samsung Galaxy S25 just got a huge free Gemini upgrade that gives your AI assistant eyes
  • www.tomsguide.com: Google Gemini Live brings AI-powered vision to Galaxy S25 and Pixel 9 — here's how it works
  • www.eweek.com: Samsung’s Galaxy S25 Now Talks to You — And Sees What You See — Thanks to Real-Time AI
  • Android Developers Blog: Gemini in Android Studio for businesses: Develop with confidence, powered by AI
  • Developer Tech News: Google enhances Android Studio with enterprise Gemini AI tools
  • www.developer-tech.com: Google enhances Android Studio with enterprise Gemini AI tools
  • cloud.google.com: Delivers an application-centric, AI-powered cloud for developers and operators.

John Werner,@John Werner //
OpenAI is making a strategic shift by releasing its first open-weight AI model since 2019, a move influenced by the rising economic pressures from competitors like DeepSeek and Meta. This marks a significant reversal for the company, known for its proprietary AI systems. CEO Sam Altman announced the plan on X, stating the model will allow developers to run it on their own hardware, diverging from OpenAI's cloud-based subscription model. This decision comes after Altman admitted OpenAI was "on the wrong side of history" regarding open-source AI.

Alongside this strategic shift, OpenAI also announced it secured $40 billion in new funding at a $300 billion valuation, the largest fundraise in its history. This infusion of capital will support the company's AI research. The announcement of the open-source model also coincided with the release of image generation capabilities within ChatGPT, enabling users to transform images into various art styles, including the style of Studio Ghibli. This feature has gained popularity online, with users converting personal photos and memes into animated images, sparking both creative expression and ethical considerations.

Recommended read:
References:
  • venturebeat.com: OpenAI to release open-source model as AI economics force strategic shift
  • John Werner: Picture This: Big Changes With ChatGPT’s Image Release

Cierra Choucair@thequantuminsider.com //
References: The Register - On-Prem, ...
NVIDIA is establishing the Accelerated Quantum Research Center (NVAQC) in Boston to integrate quantum hardware with AI supercomputers. The aim of the NVAQC is to enable accelerated quantum supercomputing, addressing quantum computing challenges such as qubit noise and error correction. Commercial and academic partners will work with NVIDIA, with collaborations involving industry leaders like Quantinuum, Quantum Machines, and QuEra, as well as researchers from Harvard's HQI and MIT's EQuS.

NVIDIA's GB200 NVL72 systems and the CUDA-Q platform will power research on quantum simulations, hybrid quantum algorithms, and AI-driven quantum applications. The center will support the broader quantum ecosystem, accelerating the transition from experimental to practical quantum computing. Despite NVIDIA CEO Jensen Huang's recent statement that practical quantum systems are likely still 20 years away, the investment shows confidence in the technology's long-term potential.

Recommended read:
References:
  • The Register - On-Prem: Nvidia invests in quantum computing weeks after CEO said it's decades from being useful
  • NVIDIA Launches Boston-Based Quantum Research Center to Integrate AI Supercomputing with Quantum Computing
  • AI News | VentureBeat: Nvidia will build accelerated quantum computing research center
  • NVIDIA’s Quantum Strategy: Not Building the Computer, But the World That Enables It
  • Quantum Machines Announces NVIDIA DGX Quantum Early Access Program

mpesce@Windows Copilot News //
Microsoft is expanding the reach of its AI capabilities by integrating them into everyday applications. Notepad, the classic text editor, is now receiving AI-powered text editing features. The "Rewrite" feature, currently rolling out in preview to Windows Insiders, will enable users to rephrase sentences, adjust tone, and modify the length of their content directly within Notepad. This marks a significant update for an application known for its simplicity since its introduction in 1983, bringing advanced AI tools to a wider audience.

Furthermore, Microsoft's Copilot AI assistant is receiving updates and expanding its availability. It is now accessible within the Viber app, allowing users to leverage its capabilities for tasks like answering questions, generating images, and getting advice. The Copilot app is also available on macOS, with sign-in options expanded to include Apple ID for iOS users. File upload support has been added for tabular data formats such as XLSX, CSV, and JSON, and Copilot will now offer follow-up suggestions to enhance conversation flow.

Recommended read:
References: