News from the AI & ML world

DeeperML - #aiintegration

Allison Siu@NVIDIA Blog //
Amazon is currently testing a new feature called "Buy for Me" within its mobile shopping app. This innovative tool allows users to purchase products from third-party brand websites that are not directly sold by Amazon, all without ever leaving the Amazon app environment. The feature leverages AI agents to seamlessly complete the purchase process on these external sites. "Buy for Me" is in a limited beta release for select iOS and Android users in the U.S.

When a customer searches for an item not available on Amazon, the app will display qualifying products from external brand sites in a dedicated section titled "Shop brand sites directly". Tapping on one of these items opens a product detail page within the Amazon app. From this page, users can select the "Buy for Me" option, granting Amazon permission to complete the transaction. Amazon's AI, combined with Anthropic's Claude, securely enters the payment and shipping information, while the brand handles fulfillment, customer service, and any potential returns.

This initiative showcases the potential of narrowly scoped, highly specialized AI agents in providing useful services. It keeps customers within Amazon's ecosystem while extending functionality beyond its own inventory. By tapping into AI agents, retailers can deepen customer engagement, enhance their offerings, and maintain a competitive edge in a rapidly shifting digital marketplace.

Recommended read:
References:
  • Data Phoenix: Amazon's Nova Act joins OpenAI and Anthropic's computer using AI agents
  • NVIDIA Newsroom: From Browsing to Buying: How AI Agents Enhance Online Shopping
  • Shelly Palmer: Amazon is testing a new feature in its mobile shopping app that lets users buy products Amazon doesn’t sell—without leaving the app.
  • gHacks Technology News: Amazon is taking artificial intelligence to the next level with its newly announced “Buy for Me” feature.
  • Maginative: Amazon Tests AI Shopping Agent That Can Make Purchases from Other Retailers for You

Alex Wawro@tomsguide.com //
Microsoft is actively integrating AI across its product lines to enhance both functionality and security. One significant development involves the use of AI-powered Security Copilot to identify vulnerabilities in open-source bootloaders. This expedited discovery process has revealed 20 previously unknown vulnerabilities in GRUB2, U-Boot, and Barebox, which could impact systems relying on Unified Extensible Firmware Interface (UEFI) Secure Boot. These vulnerabilities, particularly in GRUB2, could allow threat actors to bypass Secure Boot and install stealthy bootkits, potentially granting them complete control over affected devices.

Microsoft is also expanding AI capabilities on Copilot Plus PCs, particularly those powered by Intel and AMD processors. Features like Live Captions, which translates audio into English subtitles in real time, as well as creative tools like Cocreator in Paint, Restyle Image, and Image Creator in Photos, are becoming more widely available. The company is additionally testing a new tool called Quick Machine Recovery, designed to remotely restore unbootable Windows 11 devices by automatically diagnosing and deploying fixes through Windows Update, preventing widespread outages similar to those experienced in the past.

Recommended read:
References:
  • Microsoft Security Blog: Analyzing open-source bootloaders: Finding vulnerabilities faster with AI
  • The Verge: Microsoft is making its AI features widely available on Copilot Plus PCs equipped with Intel and AMD chips. One of the most notable of these features will be Live Captions, which translates audio to English subtitles from dozens of different languages in real time. Microsoft first started testing Live Captions on Intel and AMD devices …
  • Microsoft Security Blog: Transforming public sector security operations in the AI era
  • gHacks Technology News: Microsoft expands Snapdragon-exclusive Copilot+ features to Intel and AMD PCs
  • Techmeme: Microsoft used its AI-powered Security Copilot to discover 20 previously unknown vulnerabilities in the GRUB2, U-Boot, and Barebox open-source bootloaders.
  • Ken Yeung: Microsoft’s AI Skills Fest Seeks World Record for Most Online AI Learners
  • eWEEK: During Microsoft’s AI Skills Fest, You Can Attempt to Set a Guinness World Records Title
  • Data Phoenix: Microsoft launches the Researcher and Analyst AI agents in Microsoft 365 Copilot
  • TestingCatalog: AI Agents and Deep Research among major Copilot upgrades for Microsoft’s 50th anniversary
  • www.techrepublic.com: After Detecting 30B Phishing Attempts, Microsoft Adds Even More AI to Its Security Copilot
  • Ken Yeung: Microsoft’s Copilot Just Got a Brain Boost—And It’s Ready to Work for You
  • www.tomsguide.com: Copilot is now at its smartest yet with a smarter brain, better memory, and lots of new tricks
  • www.techradar.com: Microsoft Copilot has received a significant upgrade that will make it more of a proactive AI companion.
  • www.tomsguide.com: I just went hands-on with the new Microsoft Copilot — 3 features that impress me most
  • www.laptopmag.com: Microsoft hopes Clippy will make you like Copilot more
  • TestingCatalog: Microsoft expands Copilot features to rival ChatGPT and Gemini
  • www.techradar.com: I didn’t care about Copilot, but this massive upgrade could make Microsoft’s AI the personal assistant I’ve always wanted
  • The Tech Portal: Microsoft has now enhanced its AI assistant - Copilot with a new…
  • The Tech Basic: Microsoft Copilot Now Books Trips and Remembers Your Preferences
  • www.techradar.com: Microsoft is expanding Copilot's Vision capabilities.
  • PCMag Middle East ai: Hands On With Copilot Pages: Using AI to Organize and Enhance Notes in Real Time
  • Search Engine Journal: Microsoft Launches Copilot Search in Bing, offering AI-powered summaries with clear links to sources for fast, conversational answers.
  • PCMag Middle East ai: Review of Microsoft Copilot's new features and capabilities.
  • www.windowscentral.com: Analysis of the new Copilot features.
  • analyticsindiamag.com: Microsoft Used AI to Recreate Quake II
  • Analytics India Magazine: Microsoft used its AI model for video games to build a gameplay demo of Quake II.
  • Dataconomy: Microsoft Copilot got an amazing update you should not miss
  • Digital Information World: Microsoft updates Copilot with exciting new features that are more in line with alternatives like ChatGPT and Claude
  • blogs.microsoft.com: Your AI Companion

Maximilian Schreiner@THE DECODER //
OpenAI is set to integrate Anthropic's Model Context Protocol (MCP) across its product line, according to an announcement by CEO Sam Altman on X. MCP is an open-source standard that allows AI models to connect directly to various data systems, enabling them to access, query, and interact with business tools, repositories, and software in real time. This integration will begin with the Agents SDK, followed by the ChatGPT desktop app and Responses API, effectively standardizing how AI assistants access and utilize external data sources.

This move addresses one of AI's biggest limitations, its isolation from real-world data, and eliminates the need for custom integrations for each data source. Anthropic's Chief Product Officer Mike Krieger noted that MCP has become "a thriving open standard with thousands of integrations and growing" since its open-source release. Companies like Block, Apollo, Replit, Codeium, and Sourcegraph have also adopted MCP.

A recent study by MIT and OpenAI suggests excessive dependency on ChatGPT can lead to loneliness and a "loss of confidence" in decision-making. The study, involving over 1,000 ChatGPT users over four weeks, revealed that overreliance on the tool for advice and explanations can result in unhealthy emotional dependency and potentially "addictive behaviors." Researchers plan to investigate whether this overdependence negatively impacts users' urgency and confidence in problem-solving.

Recommended read:
References:
  • THE DECODER: OpenAI adopts competitor Anthropic's standard for AI data access
  • Analytics Vidhya: How to Use OpenAI MCP Integration for Building Agents?
  • www.windowscentral.com: OpenAI says an excessive dependency on ChatGPT can lead to loneliness and a "loss of confidence" in decision-making

Maximilian Schreiner@THE DECODER //
OpenAI has announced it will adopt Anthropic's Model Context Protocol (MCP) across its product line. This surprising move involves integrating MCP support into the Agents SDK immediately, followed by the ChatGPT desktop app and Responses API. MCP is an open standard introduced last November by Anthropic, designed to enable developers to build secure, two-way connections between their data sources and AI-powered tools. This collaboration between rivals marks a significant shift in the AI landscape, as competitors typically develop proprietary systems.

MCP aims to standardize how AI assistants access, query, and interact with business tools and repositories in real time, overcoming the limitation of AI being isolated from the systems where work happens. It allows AI models like ChatGPT to connect directly to the systems where data lives, eliminating the need for custom integrations for each data source. Other companies, including Block, Apollo, Replit, Codeium, and Sourcegraph, have already added MCP support, and Anthropic's Chief Product Officer Mike Krieger welcomes OpenAI's adoption, highlighting MCP as a thriving open standard with growing integrations.

Recommended read:
References:
  • Shelly Palmer: OpenAI and Anthropic Play Nice – It’s A Big Deal For Agents
  • THE DECODER: OpenAI adopts competitor Anthropic's standard for AI data access
  • AI News | VentureBeat: Model Context Protocol (MCP)—a rising open standard designed to help AI agents interact seamlessly with tools, data, and interfaces—just hit a significant milestone.
  • Runtime: Model Context Protocol (MCP) was introduced last November by Anthropic, which called it "an open standard that enables developers to build secure, two-way connections between their data sources and AI-powered tools."
  • The Tech Basic: OpenAI has formed a partnership with its competitor, Anthropic, to implement the Model Context Protocol (MCP) tool.

Maximilian Schreiner@THE DECODER //
OpenAI has announced its support for Anthropic’s Model Context Protocol (MCP), an open-source standard. The move is designed to streamline the integration between AI assistants and various data systems. MCP is an open standard that facilitates connections between AI models and external repositories and business tools, eliminating the need for custom integrations.

The integration is already available in OpenAI's Agents SDK, with support coming soon to the ChatGPT desktop app and Responses API. The aim is to create a unified framework for AI applications to access and utilize external data sources effectively. This collaboration marks a pivotal step towards enhancing the relevance and accuracy of AI-generated responses by enabling real-time data retrieval and interaction.

Anthropic’s Chief Product Officer Mike Krieger welcomed the development, noting MCP has become “a thriving open standard with thousands of integrations and growing.” Since Anthropic released MCP as open source, multiple companies have adopted the standard for their platforms. CEO Sam Altman confirmed on X that OpenAI will integrate MCP support into its Agents SDK immediately, with the ChatGPT desktop app and Responses API following soon.

Recommended read:
References:
  • AI News | VentureBeat: The open source Model Context Protocol was just updated — here’s why it’s a big deal
  • Runtime: Why AI infrastructure companies are lining up behind Anthropic's MCP
  • THE DECODER: OpenAI adopts competitor Anthropic's standard for AI data access
  • Simon Willison's Weblog: OpenAI Agents SDK: You can now connect your Model Context Protocol servers to Agents. We’re also working on MCP support for the OpenAI API and ChatGPT desktop app—we’ll share some more news in the coming months.
  • Analytics Vidhya: To improve AI interoperability, OpenAI has announced its support for Anthropic’s Model Context Protocol (MCP), an open-source standard designed to streamline the integration between AI assistants and various data systems. This collaboration marks a pivotal step in creating a unified framework for AI applications to access and utilize external data sources effectively.
  • THE DECODER: Anthropic and Databricks close 100 million dollar deal for AI agents
  • Analytics India Magazine: Databricks and Anthropic Partner to Bring AI Models to Businesses
  • www.itpro.com: Databricks and Anthropic are teaming up on agentic AI development – here’s what it means for customers
  • Runtime: Model Context Protocol (MCP) was introduced last November by Anthropic, which called it "an open standard that enables developers to build secure, two-way connections between their data sources and AI-powered tools."
  • www.techrepublic.com: OpenAI Agents Now Support Rival Anthropic’s Protocol, Making Data Access ‘Simpler, More Reliable’
  • Techzine Global: OpenAI is adding support for MCP, an open-source technology that lets large language models (LLMs) perform tasks in external systems. OpenAI CEO Sam Altman announced the move this week, SiliconANGLE reports. This development is notable in part because MCP was developed by Anthropic PBC, the ChatGPT developer’s best-funded startup rival.

Maximilian Schreiner@THE DECODER //
OpenAI and Anthropic, competitors in the AI space, are joining forces to standardize AI-data integration through the Model Context Protocol (MCP). Introduced by Anthropic last November, MCP is an open standard designed to enable developers to build secure, two-way connections between data sources and AI-powered tools. The protocol allows AI systems like ChatGPT to access digital documents and other data, enhancing the quality and relevance of AI-generated responses. MCP functions as a "USB-C port for AI applications," offering a universal method for connecting AI models to diverse data sources and supporting secure, bidirectional interactions between AI applications (MCP clients) and data sources (MCP servers).

With OpenAI's support, MCP is gaining momentum as a vendor-neutral way to simplify the implementation of AI agents. Microsoft and Cloudflare have already announced support for MCP, with Microsoft adding it to Copilot Studio. This collaboration aims to improve AI interoperability by providing a standard way for AI agents to access and retrieve data, streamlining the process of building and maintaining agents. The goal is to enable AI agents to take actions based on real-time data, making them more practical for everyday business use, with companies like Databricks aiming to improve the accuracy of AI agents to above 95 percent.
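The client-server split described above can be sketched without the official SDK. The toy Python below is a hypothetical illustration, not the real MCP implementation: it only mimics MCP's JSON-RPC 2.0 convention, in which a client lists the tools a server exposes (`tools/list`) and then invokes one (`tools/call`). The `get_order_status` tool is invented for the example.

```python
import json

# Toy "server" side: expose tools the way an MCP server would.
# The tool itself is hypothetical, invented for this sketch.
TOOLS = {
    "get_order_status": {
        "description": "Look up the status of an order by id.",
        "handler": lambda args: {"order_id": args["order_id"], "status": "shipped"},
    },
}

def handle_request(raw: str) -> str:
    """Dispatch one JSON-RPC 2.0 request to the matching method."""
    req = json.loads(raw)
    if req["method"] == "tools/list":
        result = [{"name": name, "description": tool["description"]}
                  for name, tool in TOOLS.items()]
    elif req["method"] == "tools/call":
        tool = TOOLS[req["params"]["name"]]
        result = tool["handler"](req["params"]["arguments"])
    else:
        return json.dumps({"jsonrpc": "2.0", "id": req["id"],
                           "error": {"code": -32601, "message": "Method not found"}})
    return json.dumps({"jsonrpc": "2.0", "id": req["id"], "result": result})

# Toy "client" side: discover the server's tools, then call one.
listing = json.loads(handle_request(json.dumps(
    {"jsonrpc": "2.0", "id": 1, "method": "tools/list"})))
call = json.loads(handle_request(json.dumps(
    {"jsonrpc": "2.0", "id": 2, "method": "tools/call",
     "params": {"name": "get_order_status", "arguments": {"order_id": "A-17"}}})))
print(listing["result"][0]["name"])  # get_order_status
print(call["result"]["status"])      # shipped
```

In the real protocol the client and server are separate processes speaking over stdio or HTTP, and servers also advertise resources and prompts; the point here is only the shape of the exchange.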

Recommended read:
References:
  • Analytics Vidhya: To improve AI interoperability, OpenAI has announced its support for Anthropic’s Model Context Protocol (MCP), an open-source standard designed to streamline the integration between AI assistants and various data systems.
  • Simon Willison's Weblog: MCP 🤝 OpenAI Agents SDK You can now connect your Model Context Protocol servers to Agents: We’re also working on MCP support for the OpenAI API and ChatGPT desktop app—we’ll share some more news in the coming months.
  • THE DECODER: OpenAI adopts competitor Anthropic's standard for AI data access
  • The Tech Basic: OpenAI has formed a partnership with its competitor, Anthropic, to implement the Model Context Protocol (MCP) tool.
  • www.techrepublic.com: OpenAI Agents Now Support Rival Anthropic’s Protocol, Making Data Access ‘Simpler, More Reliable’

staff@insidehpc.com //
Nvidia's GTC 2025 event showcased the company's advancements in AI, particularly highlighting the integration of AI into various industries. CEO Jensen Huang emphasized that every industry is adopting AI and it is becoming critical for future revenue. Nvidia also unveiled an open Physical AI dataset to advance robotics and autonomous vehicle development. The dataset is claimed to be the world’s largest unified and open dataset for physical AI development, enabling the pretraining and post-training of AI models.

Central to Nvidia’s ambitions for Physical AI is its Omniverse platform, a digital development platform connecting spatial computing, 3D design, and physics-based workflows. Originally designed as a simulation and visualization tool, Omniverse has evolved into more of an operating system for Physical AI, allowing users to train autonomous systems before physical deployment. In quantum computing, SEEQC and Nvidia announced they have completed an end-to-end, fully digital quantum-classical interface protocol demo between a QPU and GPU.

Recommended read:
References:
  • BigDATAwire: The Rise of Intelligent Machines: Nvidia Accelerates Physical AI Progress

matthewthomas@Microsoft Industry Blogs //
Microsoft is emphasizing both AI security and advancements in quantum computing. The company is integrating AI features across its products and services, including Microsoft 365, while also highlighting the critical intersection of AI innovation and security. Microsoft will host Microsoft Secure on April 9th, an online event designed to help professionals discover AI innovations for the security lifecycle. Attendees can learn how to harden their defenses, secure AI investments, and discover AI-first tools and best practices.

Microsoft is also continuing its work in quantum computing, recently defending its topological qubit claims at the American Physical Society (APS) meeting. While Microsoft maintains confidence in its results, skepticism remains within the scientific community regarding the verification methods used, particularly the reliability of the topological gap protocol (TGP) in detecting Majorana quasiparticles. Chetan Nayak, a leading theoretical physicist at Microsoft, presented the company’s findings, acknowledging the skepticism but insisting that the team is confident.

Recommended read:
References:
  • Source: AI innovation requires AI security: Hear what’s new at Microsoft Secure
  • The Quantum Insider: Microsoft defends topological qubit claims at APS Meeting Amid Skepticism

Matthias Bastian@THE DECODER //
Google has announced significant upgrades to its Gemini app, focusing on enhanced functionality, personalization, and accessibility. A key update is the rollout of the upgraded 2.0 Flash Thinking Experimental model, now supporting file uploads and boasting a 1 million token context window for processing large-scale information. This model aims to improve reasoning and response efficiency by breaking down prompts into actionable steps. The Deep Research feature, powered by Flash Thinking, allows users to create detailed multi-page reports with real-time insights into its reasoning process and is now available globally in over 45 languages, accessible for free or with expanded access for Gemini Advanced users.

Another major addition is the experimental "Personalization" feature, integrating Gemini with Google apps like Search to deliver tailored responses based on user activity. Gemini is also strengthening its integration with Google apps such as Calendar, Notes, Tasks, and Photos, enabling users to handle complex multi-app requests in a single prompt. Google is also putting Gemini 2.0 AI into robots through its DeepMind AI team, which has developed two new Gemini models specifically designed to work with robots. The first, Gemini Robotics, is an advanced vision-language-action (VLA) model that uses physical motion to respond to prompts. The second, Gemini Robotics-ER, is a vision-language model (VLM) with advanced spatial understanding, enabling robots to navigate changing environments. Google is partnering with robotics companies to further develop humanoid robots.

Google will replace its long-standing Google Assistant with Gemini on mobile devices later this year. The classic Google Assistant will no longer be accessible on most mobile devices, marking the end of an era. The shift represents Google's pivot toward generative AI, believing that Gemini's advanced AI capabilities will deliver a more powerful and versatile experience. Gemini will also come to tablets, cars, and connected devices like headphones and watches. The company also introduced Gemini Embedding, a novel embedding model initialized from the powerful Gemini Large Language Model, aiming to enhance embedding quality across diverse tasks.
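Embedding models like the Gemini Embedding model mentioned above map text to vectors so that semantic similarity becomes a geometric comparison. The snippet below is a generic, self-contained illustration of that idea rather than a call to the Gemini API; the four-dimensional vectors are made up for the example (real embeddings have hundreds or thousands of dimensions).

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Made-up 4-dimensional "embeddings" standing in for model output.
query = [0.1, 0.8, 0.3, 0.0]
doc_related = [0.2, 0.7, 0.4, 0.1]
doc_unrelated = [0.9, 0.0, 0.1, 0.8]

# The semantically closer document scores higher.
print(cosine_similarity(query, doc_related) > cosine_similarity(query, doc_unrelated))  # True
```

Tasks such as retrieval, clustering, and deduplication all reduce to comparisons like this one, which is why embedding quality matters across so many downstream uses.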

Recommended read:
References:
  • The Official Google Blog: Over the coming months, we’ll be upgrading users on mobile devices from Google Assistant to Gemini.
  • Android Faithful: Google's AI tool Gemini gets a boost by working with deeper insight about you through personalization and app connections.
  • Search Engine Journal: Google Gemini's integration of Search history blurs the line between traditional Search and AI assistants
  • Maginative: Google will replace its long-standing Google Assistant with Gemini on mobile devices later this year, marking the end of an era for the company's original voice assistant.
  • MarkTechPost: Google AI Introduces Gemini Embedding: A Novel Embedding Model Initialized from the Powerful Gemini Large Language Model
  • www.tomsguide.com: Google is taking Gemini to the next level and giving users more with major upgrades aimed to make Gemini even more personal, plus many of the upgrades are free.
  • PCMag Middle East ai: RIP Google Assistant? Gemini AI Poised to Replace It This Year
  • The Tech Basic: Android’s New AI Era: Gemini Replaces Google Assistant This Year
  • Search Engine Land: Google to replace Google Assistant with Gemini
  • www.tomsguide.com: Google Assistant is losing features to make way for Gemini — here's what's just been axed
  • The Official Google Blog: Gemini gets personal, with tailored help from your Google apps
  • Analytics Vidhya: Google's Gemini models are undergoing significant updates, now featuring faster models, longer context lengths, and integrated AI agents.
  • Google DeepMind Blog: Gemini breaks new ground: a faster model, longer context and AI agents

Matthias Bastian@THE DECODER //
Google is enhancing its Gemini AI assistant with the ability to access users' Google Search history to deliver more personalized and relevant responses. This opt-in feature allows Gemini to analyze a user's search patterns and incorporate that information into its responses. The update is powered by the experimental Gemini 2.0 Flash Thinking model, which the company launched in late 2024.

This new capability, known as personalization, requires explicit user permission. Google is emphasizing transparency by allowing users to turn the feature on or off at any time, and Gemini will clearly indicate which data sources inform its personalized answers. To test the new feature, Google suggests users ask about vacation spots, YouTube content ideas, or potential new hobbies. The system then draws on individual search histories to make tailored suggestions.

Recommended read:
References:
  • Android Faithful: Google's AI tool Gemini gets a boost by working with deeper insight about you through personalization and app connections.
  • Google DeepMind Blog: Experiment with Gemini 2.0 Flash native image generation
  • THE DECODER: Google adds native image generation to Gemini language models
  • THE DECODER: Google's Gemini AI assistant can now tap into users' search histories to provide more personalized responses, marking a significant expansion of the chatbot's capabilities.
  • TestingCatalog: Discover the latest updates to Google's Gemini app, featuring the new 2.0 Flash Thinking model, enhanced personalization, and deeper integration with Google apps.
  • The Official Google Blog: Gemini gets personal, with tailored help from your Google apps
  • Search Engine Journal: Google Search History Can Now Power Gemini AI Answers
  • www.zdnet.com: Gemini might soon have access to your Google Search history - if you let it
  • The Official Google Blog: The Assistant experience on mobile is upgrading to Gemini
  • www.zdnet.com: Google launches Gemini with Personalization, beating Apple to personal AI
  • Maginative: Google to Replace Google Assistant with Gemini on Android Phones
  • www.tomsguide.com: Google is giving away Gemini's best paid features for free — here's the tools you can try now
  • MacSparky: This article reports on Google's integration of Gemini AI into its search engine and discusses the implications for users and creators.
  • Search Engine Land: This change will roll out to most devices except Android 9 or earlier (and some other devices).
  • www.zdnet.com: Gemini's new features are now available for free, extending beyond its previous paid subscriber model.
  • www.techradar.com: Discusses how Google is giving Gemini a superpower by allowing it to access your Search history, raising excitement and concerns.
  • PCMag Middle East ai: This article discusses Google's plan to replace Google Assistant with Gemini AI, highlighting the timeline for the transition and requirements for the devices.
  • The Tech Basic: This article announces Google’s plan to replace Google Assistant with Gemini, focusing on the company’s focus on advancing AI and integrating Gemini into its mobile product ecosystem.
  • Verdaily: Google Announces New Update for its AI Wizard, Gemini: Improves User Experience
  • Windows Copilot News: Google is prepping Gemini to take action inside of apps
  • www.techradar.com: Worried about DeepSeek? Well, Google Gemini collects even more of your personal data
  • Maginative: Gemini App Gets a Major Upgrade: Canvas Mode, Audio Overviews, and More
  • TestingCatalog: Google launches Canvas and Audio Overview for all Gemini users
  • Android Faithful: Google Gemini Gets A Powerful Collaborative Upgrade: Canvas and Audio Overviews Now Available

Charles Lamanna@Microsoft 365 Blog //
Microsoft is enhancing Copilot Studio with new capabilities to build autonomous agents, set to be in public preview at Microsoft Ignite 2024. These agents are designed to understand the nature of users' work and act on their behalf, offering support across business roles, teams, and functions. The goal is to transform business operations by automating complex tasks and streamlining workflows.

These autonomous agents can be configured, secured, and tested, automating tasks across apps and data sources for entire teams. Organizations are already using Copilot Studio to create agents for specific business workflows; Pets at Home, for example, developed an agent for its profit protection team that could potentially drive seven-figure annual savings. Copilot Studio plays a crucial role in customizing Copilot and creating agents for an entire company, enhancing efficiency and customer experience while driving growth.
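Under the hood, autonomous agents of this kind are typically structured as a loop that inspects state, selects an applicable action, and repeats until a goal is reached. The Python below is a deliberately simplified, hypothetical sketch of such a loop, not Copilot Studio's actual implementation; the profit-protection "tools" are invented to echo the Pets at Home example.

```python
# Hypothetical sense-decide-act loop for an autonomous agent
# (conceptual sketch only; not Copilot Studio's implementation).
def check_margin(state):
    """Flag sales whose margin falls below a 5% threshold."""
    state["flagged"] = [s for s in state["sales"] if s["margin"] < 0.05]
    return state

def file_report(state):
    """Summarize the flagged sales and mark the goal as reached."""
    state["report"] = f"{len(state['flagged'])} low-margin sale(s) flagged"
    state["done"] = True
    return state

# (precondition, action) pairs: the agent runs the first action whose
# precondition holds for the current state.
ACTIONS = [
    (lambda s: "flagged" not in s, check_margin),
    (lambda s: "flagged" in s and not s.get("done"), file_report),
]

def run_agent(state, max_steps=10):
    for _ in range(max_steps):
        if state.get("done"):
            break
        for precondition, action in ACTIONS:
            if precondition(state):
                state = action(state)
                break
    return state

result = run_agent({"sales": [{"id": 1, "margin": 0.02}, {"id": 2, "margin": 0.30}]})
print(result["report"])  # 1 low-margin sale(s) flagged
```

Production agents replace the hand-written preconditions with an LLM that chooses the next tool, and add the security, testing, and audit layers the announcement describes; the loop structure is the common core.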

Recommended read:
References:
  • Microsoft 365 Blog: Announcement of autonomous agent capabilities in Microsoft Copilot Studio.
  • Insight Partners: AI Agents: Disrupting automation and reimagining productivity
  • Source: Details on Microsoft 365 Copilot for Finance.
  • Bernard Marr: Details major banks investing in AI agents for financial services.
  • Windows Copilot News: The next step for the technology is AI agents – chatbots that perform a series of linked tasks based on instructions.
  • www.laptopmag.com: Microsoft's Copilot for Gaming uses AI to solve a problem every gamer faces
  • Pivot to AI: Microsoft Copilot for Games: an AI Clippy for your Xbox
  • blogs.microsoft.com: From questions to discoveries: NASA’s new Earth Copilot brings Microsoft AI capabilities to democratize access to complex data