News from the AI & ML world

DeeperML - #microsoftai

@www.microsoft.com //
Microsoft is pushing the boundaries of AI with advancements in both model efficiency and novel applications. The company recently commemorated the one-year anniversary of Phi-3 by introducing three new small language models: Phi-4-reasoning, Phi-4-reasoning-plus, and Phi-4-mini-reasoning. These models are designed to deliver complex reasoning capabilities that rival much larger models while maintaining efficiency for diverse computing environments. According to Microsoft, "Phi-4-reasoning generates detailed reasoning chains that effectively leverage additional inference-time compute," demonstrating that high-quality synthetic data and careful curation can lead to smaller models that perform comparably to their more powerful counterparts.

The 14-billion-parameter Phi-4-reasoning and its enhanced version, Phi-4-reasoning-plus, have shown outstanding performance on numerous benchmarks, outperforming larger models. Notably, they achieve better results than OpenAI's o1-mini and DeepSeek-R1-Distill-Llama-70B on mathematical reasoning and PhD-level science questions. Furthermore, Phi-4-reasoning-plus surpasses the full 671-billion-parameter DeepSeek-R1 model on the AIME and HMMT evaluations. These results highlight the efficiency and competitive edge of the new models.

In addition to pushing efficiency, Microsoft Research has introduced ARTIST (Agentic Reasoning and Tool Integration in Self-improving Transformers), a framework that combines agentic reasoning, reinforcement learning, and dynamic tool use to enhance LLMs. ARTIST enables models to autonomously decide when, how, and which tools to use. This framework aims to address the limitations of static internal knowledge and text-only reasoning, especially in tasks requiring real-time information or domain-specific expertise. The integration of reinforcement learning allows the models to adapt dynamically and interact with external tools and environments during the reasoning process, ultimately improving their performance in real-world applications.

Recommended read:
References :
  • Microsoft Research: In this issue: New research on compound AI systems and causal verification of the Confidential Consortium Framework; release of Phi-4-reasoning; enriching tabular data with semantic structure, and more.
  • www.microsoft.com: Research Focus: Week of May 7, 2025
  • learn.aisingapore.org: Phi-4-reasoning, a 14-billion parameter model, has been released by Microsoft. The model has shown promise in achieving competitive performance with larger models through supervised fine-tuning and synthetic data curation.
  • Source: Microsoft Fusion Summit explores how AI can accelerate fusion research

@www.marktechpost.com //
Microsoft Research has unveiled ARTIST (Agentic Reasoning and Tool Integration in Self-improving Transformers), a reinforcement learning framework designed to enhance Large Language Models (LLMs) with agentic reasoning and dynamic tool use. This framework addresses the limitations of current RL-enhanced LLMs, which often rely on static internal knowledge and text-only reasoning, making them unsuitable for tasks requiring real-time information, domain-specific expertise, or precise computations. ARTIST enables models to autonomously decide when, how, and which tools to use, allowing for more effective reasoning strategies that adapt dynamically to a task’s complexity.
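The control loop ARTIST describes, in which a model interleaves reasoning with tool calls and folds tool results back into its context, can be sketched in a few lines. Everything below (the stub model, the tool name, the message format) is hypothetical and merely stands in for an RL-trained policy; it is not Microsoft's implementation.

```python
# Illustrative agentic reasoning loop in the spirit of ARTIST: the model emits
# either a tool call or a final answer, and tool results are appended to the
# context so later reasoning steps can use them. All names are hypothetical.

def calculator(expression: str) -> str:
    """A toy 'external tool' the agent may invoke."""
    return str(eval(expression, {"__builtins__": {}}))  # demo only, not safe eval

TOOLS = {"calculator": calculator}

def stub_model(context: list) -> dict:
    """Stands in for an RL-trained LLM policy: decides whether to call a
    tool or answer, based on what is already in the context."""
    if not any(m["role"] == "tool" for m in context):
        return {"action": "tool", "name": "calculator", "input": "37 * 43"}
    result = next(m["content"] for m in context if m["role"] == "tool")
    return {"action": "answer", "content": f"37 * 43 = {result}"}

def agent_loop(model, task: str, max_steps: int = 5) -> str:
    context = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        step = model(context)
        if step["action"] == "answer":                 # model chose to stop
            return step["content"]
        tool_out = TOOLS[step["name"]](step["input"])  # model chose a tool
        context.append({"role": "tool", "content": tool_out})
    return "step budget exhausted"

print(agent_loop(stub_model, "What is 37 * 43?"))  # → 37 * 43 = 1591
```

The point of the sketch is the decision boundary: the policy, not a hand-written rule, picks when to reach for a tool, which is what the reinforcement learning in ARTIST is meant to train.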

Microsoft researchers have also conducted a comparison of API-based and GUI-based AI agents, revealing the distinct advantages of each approach. API agents, which interact with software through programmable interfaces, are found to be faster, more stable, and less error-prone as they complete tasks via direct function calls. GUI agents, on the other hand, mimic human interactions with software interfaces, navigating menus and clicking buttons on a screen. While GUI agents may require multiple actions to accomplish the same goal, their versatility allows them to control almost any software with a visible interface, even without an API.
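The trade-off can be made concrete with a toy example: an API agent reaches the goal in a single stable function call, while a GUI agent simulates the click sequence a human would perform. The mail app and both agents below are entirely hypothetical.

```python
class MailApp:
    """Hypothetical mail client exposing both an API and a GUI."""
    def __init__(self):
        self.sent = []
        self._draft = {}

    # --- programmable interface (what an API agent uses) ---
    def send_message(self, to: str, body: str) -> None:
        self.sent.append((to, body))

    # --- screen-level interface (what a GUI agent simulates) ---
    def type_text(self, field: str, text: str) -> None:
        self._draft[field] = text

    def click(self, button: str) -> None:
        if button == "Send":
            self.send_message(self._draft["To"], self._draft["Body"])

def api_agent(app: MailApp) -> int:
    """One direct call: fast, stable, less error-prone."""
    app.send_message("ada@example.com", "Hello")
    return 1  # actions taken

def gui_agent(app: MailApp) -> int:
    """Same outcome via simulated clicks: more steps, but needs no API."""
    actions = [
        lambda: app.click("Compose"),
        lambda: app.type_text("To", "ada@example.com"),
        lambda: app.type_text("Body", "Hello"),
        lambda: app.click("Send"),
    ]
    for act in actions:
        act()
    return len(actions)
```

Both agents leave the app in the same state, but the GUI path takes four actions to the API path's one, which is the speed-versus-versatility trade the researchers describe.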

In a move to foster interoperability across platforms, Microsoft has announced support for the open Agent2Agent (A2A) protocol. This protocol empowers multi-agent applications by enabling structured agent communication, including the exchange of goals, management of state, invocation of actions, and secure return of results. A2A is set to be integrated into Azure AI Foundry and Copilot Studio, allowing developers to build agents that interoperate across clouds and frameworks while maintaining enterprise-grade security and compliance. Microsoft aims to empower both pro and citizen developers to create agents that can orchestrate tasks across diverse environments.

Recommended read:
References :
  • the-decoder.com: Microsoft finds API agents are faster but GUI agents more flexible
  • www.marktechpost.com: Microsoft Researchers Introduce ARTIST: A Reinforcement Learning Framework That Equips LLMs with Agentic Reasoning and Dynamic Tool Use
  • www.microsoft.com: Empowering multi-agent apps with the open Agent2Agent (A2A) protocol

Chris McKay@Maginative //
Microsoft is formally partnering with Google on the Agent2Agent (A2A) protocol, signaling a major step toward interoperability in AI systems. The move will enable AI agents to communicate and collaborate across different platforms and organizational boundaries, addressing a significant limitation of current AI technology. Microsoft is integrating the A2A protocol into its Azure AI Foundry and Copilot Studio, allowing enterprise customers to build multi-agent workflows that span partner tools and production infrastructure. This partnership reflects the industry's recognition that seamless AI collaboration requires shared standards and open protocols.

The endorsement of Google DeepMind's A2A protocol and Anthropic’s Model Context Protocol (MCP) by Microsoft CEO Satya Nadella is expected to accelerate the development and adoption of agentic AI. Nadella's support is seen as a catalyst for further collaboration within the AI community, potentially leading to the creation of new agentic-based applications and platforms. He highlighted the importance of open protocols like A2A and MCP for enabling an agentic web, emphasizing that these standards allow customers to build agentic systems that interoperate by design. This commitment to open architectures aligns with Nadella’s long-standing belief that open standards are crucial for driving the adoption of new AI technologies.

Microsoft is joining over 50 technology partners, including Salesforce, Oracle, and SAP, in supporting the A2A standard created by Google. The company is also actively contributing to the A2A working group on GitHub (a Microsoft subsidiary) to further develop the specification and tooling. The A2A protocol allows AI agents to securely exchange goals, manage state, invoke actions, and return results, facilitating complex workflows across multiple specialized agents. Microsoft's commitment to A2A underscores a shift toward an "open garden" approach, in which AI systems can work together effectively regardless of their origin, marking a foundational change in how software is built and decisions are made.
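The protocol mechanics described above (exchanging goals, managing state, invoking actions, returning results) ride on JSON-RPC over HTTP. A minimal client-side request might look like the sketch below; the method name and field layout follow the early public A2A draft and may differ from the version Microsoft ships in Azure AI Foundry, so treat them as assumptions to verify against the specification.

```python
import json
import uuid

# Illustrative A2A-style request: a client agent sends a task containing a
# message to a remote agent over JSON-RPC. Field names follow the early
# public A2A draft and are a sketch, not a normative example.

def build_a2a_task(goal_text: str) -> dict:
    return {
        "jsonrpc": "2.0",
        "id": str(uuid.uuid4()),          # correlates request and response
        "method": "tasks/send",
        "params": {
            "id": str(uuid.uuid4()),      # task id lets both sides manage state
            "message": {
                "role": "user",
                "parts": [{"type": "text", "text": goal_text}],
            },
        },
    }

request = build_a2a_task("Schedule a 30-minute meeting with the design team")
print(json.dumps(request, indent=2))
```

The task id is what makes the exchange stateful: both agents can refer back to it as the remote agent streams status updates and eventually returns results.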

Recommended read:
References :
  • AI News | VentureBeat: The walled garden cracks: Nadella bets Microsoft’s Copilots—and Azure’s next act—on A2A/MCP interoperability
  • Maginative: Microsoft formalizes partnership with Google on Agent2Agent protocol, enabling AI systems to communicate across platforms and organizational boundaries.
  • the-decoder.com: Microsoft is adding support for the open Agent2Agent (A2A) protocol to Azure AI Foundry and Copilot Studio, aiming to enable AI agents to work together across different platforms.
  • www.microsoft.com: Microsoft Fusion Summit explores how AI can accelerate fusion research
  • CIO Dive - Latest News: The cloud giant joins more than 50 technology partners that are supporting the Agent2Agent standard, including Salesforce, Oracle and SAP.

Ellie Ramirez-Camara@Data Phoenix //
Microsoft is expanding its AI capabilities with enhancements to its Phi-4 family and the integration of the Agent2Agent (A2A) protocol. The company's new Phi-4-Reasoning and Phi-4-Reasoning-Plus models are designed to deliver strong reasoning performance with low latency. In addition, Microsoft is embracing interoperability by adding support for the open A2A protocol to Azure AI Foundry and Copilot Studio. This move aims to facilitate seamless collaboration between AI agents across various platforms, fostering a more connected and efficient AI ecosystem.

Microsoft's integration of the A2A protocol into Azure AI Foundry and Copilot Studio will empower AI agents to work together across platforms. The A2A protocol defines how agents formulate tasks and execute them, enabling them to delegate tasks, share data, and act together. With A2A support, Copilot Studio agents can call on external agents, including those outside the Microsoft ecosystem and built with tools like LangChain or Semantic Kernel. Microsoft reports that over 230,000 organizations are already utilizing Copilot Studio, with 90 percent of the Fortune 500 among them. Developers can now access sample applications demonstrating automated meeting scheduling between agents.

Independent developer Simon Willison has been testing the phi4-reasoning model and reported that the 11GB download (available via Ollama) may well overthink things. Willison noted that it produced 56 sentences of reasoning output in response to a prompt of "hi". Microsoft is actively contributing to the A2A specification work on GitHub and intends to play a role in driving its future development. A public preview of A2A in Azure AI Foundry and Copilot Studio is anticipated to launch soon. Microsoft envisions protocols like A2A as the bedrock of a novel software architecture in which interconnected agents automate daily workflows and collaborate across platforms with auditability and control.
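Willison's experiment is straightforward to repeat against a local Ollama server using its standard `/api/generate` endpoint. The model tag `phi4-reasoning` is an assumption (check `ollama list` for the exact name on your machine); the network call is kept out of the top level so the payload builder can be inspected on its own.

```python
import json
import urllib.request

# Sketch of reproducing the "56 sentences for 'hi'" experiment against a
# local Ollama server on its default port. The model tag is an assumption.

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(prompt: str, model: str = "phi4-reasoning") -> dict:
    # stream=False asks Ollama for a single JSON response instead of chunks
    return {"model": model, "prompt": prompt, "stream": False}

def generate(prompt: str) -> str:
    payload = json.dumps(build_request(prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# With a server running and the model pulled: print(generate("hi"))
# Expect a long chain of reasoning before any greeting, per Willison's notes.
```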

Recommended read:
References :
  • bsky.app: Microsoft's phi4-reasoning model, an 11GB download (via Ollama) which may well overthink things
  • Simon Willison: Simon Willison published some notes on Microsoft's phi4-reasoning model
  • the-decoder.com: Microsoft leverages Google's open A2A protocol for interoperable AI agents
  • the-decoder.com: Microsoft's Phi 4 responds to a simple "Hi" with 56 thoughts
  • Data Phoenix: Microsoft has introduced three new small language models—Phi-4-reasoning, Phi-4-reasoning-plus, and Phi-4-mini-reasoning—that reportedly deliver complex reasoning capabilities comparable to much larger models while maintaining efficiency for deployment across various computing environments.
  • www.microsoft.com: In this issue: New research on compound AI systems and causal verification of the Confidential Consortium Framework; release of Phi-4-reasoning; enriching tabular data with semantic structure, and more.

Mels Dees@Techzine Global //
Microsoft is reportedly preparing to host Elon Musk's Grok AI model within its Azure AI Foundry platform, signaling a potential shift in its AI strategy. The move, stemming from discussions with xAI, Musk's AI company, could make Grok accessible to a broad user base and integrate it into Microsoft's product teams via the Azure cloud service. Azure AI Foundry serves as a generative AI development hub, providing developers with the necessary tools and models to host, run, and manage AI-driven applications, potentially positioning Microsoft as a neutral platform supporting multiple AI models. This follows reports indicating Microsoft is exploring third-party AI models like DeepSeek and Meta for its Copilot service.

Microsoft's potential hosting of Grok comes amid reports that its partnership with OpenAI may be evolving. While Microsoft remains quiet about any deal with xAI, sources indicate that Grok will be available on Azure AI Foundry, providing developers with access to the model. However, Microsoft reportedly intends only to host the Grok model and will not be involved in training future xAI models. This collaboration with xAI could strengthen Microsoft's position as an infrastructure provider for AI models, offering users more freedom of choice in selecting which AI models they want to use within their applications.

Alongside these developments, Microsoft is enhancing its educational offerings with Microsoft 365 Copilot Chat agents. These specialized AI assistants can personalize student support and provide instructor assistance. Copilot Chat agents can be tailored to offer expertise in instructional design, cater to unique student preferences, and analyze institutional data. These agents are designed to empower educators and students alike, transforming education experiences through customized support and efficient access to resources.

Recommended read:
References :
  • www.microsoft.com: Discover how Microsoft 365 Copilot Chat agents in education can enhance learning with personalized student support, instructor assistance, and more.
  • Techzine Global: Microsoft is preparing to host the Grok AI model from xAI, Elon Musk’s AI company, within its Azure AI Foundry platform.
  • www.windowscentral.com: Microsoft is reportedly planning to host Elon Musk's Grok AI model. However, it won't host xAI's servers to train any of its future AI models.
  • thetechbasic.com: Microsoft is getting ready to add Elon Musk’s Grok AI model to its Azure cloud service. This move could help developers build new apps using Grok’s technology.

Joey Snow@Microsoft Security Blog //
Microsoft is significantly expanding its AI capabilities across various sectors, with innovations impacting cloud services, business applications, and even education. CEO Satya Nadella revealed that Microsoft's AI model performance is "doubling every six months," driven by advancements in pre-training, inference, and system design. Microsoft is also betting on Musk's Grok AI to challenge OpenAI's dominance, planning to host Grok on Azure and potentially giving developers more options for building AI applications. These advancements are designed to accelerate AI innovation and business transformation for organizations.

The company's Dynamics 365 and Power Platform are receiving substantial updates with the 2025 release wave 1, integrating AI-driven Copilot and agent capabilities to enhance business processes and customer engagement. These new features will empower the workforce to automate tasks across sales, service, finance, and supply chain functions. Microsoft is also launching pre-built agents for Dynamics 365, accelerating time to value for businesses looking to leverage AI in their operations. Microsoft 365 Copilot is being implemented into education to enhance learning. Copilot Chat is being used for personalized student support and instructor assistance.

Security remains a key priority, with Microsoft sharing secure coding tips from experts at Microsoft Build. These tips, including securing AI from the start and locking down data with Purview APIs, are designed to help developers build safer applications. With AI playing a bigger role, secure coding is now a necessity. Microsoft is releasing a set of Purview APIs (plus an SDK) that will allow partners and customers to integrate their custom AI apps with the Microsoft Purview ecosystem for enterprise-grade data security and compliance outcomes.

Recommended read:
References :
  • Microsoft Security Blog: 14 secure coding tips: Learn from the experts at Microsoft Build
  • www.microsoft.com: 2025 release wave 1 brings hundreds of updates to Microsoft Dynamics 365 and Power Platform
  • www.microsoft.com: Setting a strong cloud foundation is paramount for organizations striving to achieve superior differentiation.

@zdnet.com //
Microsoft is rolling out a wave of new AI-powered features for Windows 11 and Copilot+ PCs, aiming to enhance user experience and streamline various tasks. A key addition is an AI agent designed to assist users in navigating and adjusting Windows 11 settings. This agent will understand user intent through natural language, allowing them to simply describe the setting they wish to change, such as adjusting mouse pointer size or enabling voice control. With user permission, the AI agent can then automate and execute the necessary adjustments. This feature, initially available to Windows Insiders on Snapdragon X Copilot+ PCs, seeks to eliminate the frustration of searching for and changing settings manually.

Microsoft is also enhancing Copilot with new AI skills, including the ability to act on screen content. One such action, "Ask Copilot," will enable users to draft content in Microsoft Word based on on-screen information, or create bulleted lists from selected text. These capabilities aim to boost productivity by leveraging generative AI to quickly process and manipulate information. Furthermore, the Windows 11 Start menu is undergoing a revamp, offering easier access to apps and a phone companion panel for quick access to information from synced iPhones or Android devices. The updated Start menu, along with the new AI features, will first be available to Windows Insiders running Snapdragon X Copilot Plus PCs.

In a shift toward passwordless security, Microsoft is removing the password autofill feature from its Authenticator app, encouraging users to transition to Microsoft Edge for password management. Starting in June 2025, users will no longer be able to save new passwords in the Authenticator app, with autofill functionality being removed in July 2025. By August 2025, saved passwords will no longer be accessible in the app. Microsoft argues that this change streamlines the process, as passwords will be synced with the Microsoft account and accessible through Edge. However, users who do not use Edge may find this transition less seamless, as they will need to install Edge and make it the default autofill provider to maintain access to their saved passwords.

Recommended read:
References :
  • cyberinsider.com: Microsoft to Retire Password Autofill in Authenticator by August 2025
  • www.bleepingcomputer.com: Microsoft ends Authenticator password autofill, moves users to Edge
  • Davey Winder: You Have Until June 1 To Save Your Passwords, Microsoft Warns App Users
  • The DefendOps Diaries: Microsoft's Strategic Shift: Transitioning Password Management to Edge
  • www.ghacks.net: Microsoft removes Authenticator App feature to promote Microsoft Edge
  • Tech Monitor: Microsoft to phase out Authenticator autofill by August 2025
  • Davey Winder: You won't be able to save new passwords after June 1, Microsoft warns all authenticator app users. Here's what you need to do.
  • PCWorld: If you use Microsoft’s Authenticator app on your mobile phone as a password manager, here’s some bad news: Microsoft is discontinuing the “autofill” password management functionality in Authenticator.
  • securityaffairs.com: Microsoft announced that all new accounts will be “passwordless by default” to increase their level of security.
  • heise Security: Microsoft Authenticator: back from password manager to authenticator. Microsoft's Authenticator app can manage passwords in addition to serving as a second authentication factor. That capability is now ending.
  • PCMag Middle East ai: Microsoft Tests Using Copilot AI to Adjust Windows 11 Settings for You
  • PCMag UK security: Microsoft Is Dropping A Useful Feature From Its Authenticator App
  • www.zdnet.com: Microsoft's new AI skills are coming to Copilot+ PCs - including some for all Windows 11 users
  • Dataconomy: Microsoft is revamping the Windows 11 Start menu and introducing several new AI features this month, initially available to Windows Insiders running Snapdragon X Copilot Plus PCs, including the newly announced Surface devices.
  • www.windowscentral.com: Microsoft just announced major Windows 11 and Copilot+ PC updates, adding a bunch of exclusive features and AI capabilities.
  • Microsoft Copilot Blog: Welcome to Microsoft’s Copilot Release Notes. Here we’ll provide regular updates on what’s happening with Copilot, from new features to firmware updates and more.
  • shellypalmer.com: Microsoft is officially going passwordless by default. On the surface, it’s a welcome step toward a safer, simpler future.
  • www.techradar.com: Microsoft has a big new AI settings upgrade for Windows 11 on Copilot+ PCs – plus 3 other nifty tricks
  • www.engadget.com: Microsoft introduces agent for AI-powered settings controls in Copilot+ PCs
  • www.ghacks.net: Finally! Microsoft is making AI useful in Windows by introducing AI agents
  • www.cybersecurity-insiders.com: Cybersecurity Insiders reports Microsoft is saying NO to passwords and to shut down Authenticator App
  • FIDO Alliance: PC Mag: RIP Passwords: Microsoft Moves to Passkeys as the Default on New Accounts

Carl Franzen@AI News | VentureBeat //
Microsoft has recently launched its Phi-4 reasoning models, marking a significant stride in the realm of small language models (SLMs). This expansion of the Phi series includes three new variants: Phi-4-reasoning, Phi-4-reasoning-plus, and Phi-4-mini-reasoning, designed to excel in advanced reasoning tasks like mathematics and coding. The new models are optimized for complex problems, handling them through structured reasoning and internal reflection while remaining lightweight enough to run on lower-end hardware, including mobile devices.

Microsoft asserts that these models demonstrate that smaller AI can achieve impressive results, rivaling much larger models while operating efficiently on devices with limited resources. CEO Satya Nadella says Microsoft's AI model performance is "doubling every 6 months" due to pre-training, inference, and system design. The Phi-4-reasoning model contains 14 billion parameters and was trained via supervised fine-tuning using reasoning paths from OpenAI's o3-mini. A more advanced version, Phi-4-reasoning-plus, adds reinforcement learning and processes 1.5 times more tokens than the base model.

These new models leverage distillation, reinforcement learning, and high-quality data to achieve their performance. In a demonstration, the Phi-4-reasoning model correctly solved a wordplay riddle by recognizing patterns and applying local reasoning, showcasing its ability to identify patterns, understand riddles, and perform mathematical operations. Despite having just 14 billion parameters, the Phi-4 reasoning models match or outperform significantly larger systems, including the 70B parameter DeepSeek-R1-Distill-Llama. On the AIME-2025 benchmark, the Phi models also surpass DeepSeek-R1, which has 671 billion parameters.

Recommended read:
References :
  • Ken Yeung: Microsoft is doubling down on small language models with new Phi-4 variants that aim to prove a bold idea: small AI can think big.
  • www.windowscentral.com: Microsoft just launched expanded small language models (SLMs) based on its own Phi-4 AI.
  • THE DECODER: Microsoft is expanding its Phi series of compact language models with three new variants designed for advanced reasoning tasks.
  • the-decoder.com: Microsoft's Phi 4 responds to a simple "Hi" with 56 thoughts
  • Data Phoenix: Microsoft launches Phi-4 'reasoning' models to celebrate Phi-3's first anniversary

Carl Franzen@AI News | VentureBeat //
Microsoft has announced the release of Phi-4-reasoning-plus, a new small, open-weight language model designed for advanced reasoning tasks. Building upon the architecture of the previously released Phi-4, this 14-billion parameter model integrates supervised fine-tuning and reinforcement learning to achieve strong performance on complex problems. According to Microsoft, the Phi-4 reasoning models outperform larger language models on several demanding benchmarks, despite their compact size. This new model pushes the limits of small AI, demonstrating that carefully curated data and training techniques can lead to impressive reasoning capabilities.

The Phi-4 reasoning family, consisting of Phi-4-reasoning, Phi-4-reasoning-plus, and Phi-4-mini-reasoning, is specifically trained to handle complex reasoning tasks in mathematics, scientific domains, and software-related problem solving. Phi-4-reasoning-plus, in particular, extends supervised fine-tuning with outcome-based reinforcement learning, which is targeted for improved performance in high-variance tasks such as competition-level mathematics. All models are designed to enable reasoning capabilities, especially on lower-performance hardware such as mobile devices.

Microsoft CEO Satya Nadella revealed that AI now contributes roughly 30% of Microsoft's code. The open-weight models were released with transparent training details and evaluation logs, including benchmark design, and are hosted on Hugging Face for reproducibility and public access. The models are released under a permissive MIT license, enabling broad commercial and enterprise use, as well as fine-tuning or distillation, without restriction.
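Since the checkpoints are hosted on Hugging Face under an MIT license, a local experiment is straightforward in principle. The repo id `microsoft/Phi-4-reasoning-plus` and the chat-style call below are assumptions to verify against the model card; the heavyweight download is deferred into a function because a 14-billion-parameter model needs substantial GPU memory (or quantization).

```python
# Sketch of querying the open-weight checkpoint with Hugging Face
# transformers. Repo id and chat format are assumptions; check the model
# card before relying on either.

def build_messages(problem: str) -> list:
    return [
        {"role": "system", "content": "You are a careful step-by-step reasoner."},
        {"role": "user", "content": problem},
    ]

def generate(problem: str, repo_id: str = "microsoft/Phi-4-reasoning-plus") -> str:
    from transformers import pipeline  # deferred: triggers a multi-GB download
    pipe = pipeline("text-generation", model=repo_id, device_map="auto")
    out = pipe(build_messages(problem), max_new_tokens=1024)
    # recent transformers versions return the chat with the reply appended
    return out[0]["generated_text"][-1]["content"]

# generate("If 3x + 7 = 25, what is x?")  # requires GPU and the full download
```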

Recommended read:
References :
  • the-decoder.com: Microsoft's Phi-4-reasoning models outperform larger models and run on your laptop or phone
  • MarkTechPost: Microsoft AI Released Phi-4-Reasoning: A 14B Parameter Open-Weight Reasoning Model that Achieves Strong Performance on Complex Reasoning Tasks
  • AI News | VentureBeat: The release demonstrates that with carefully curated data and training techniques, small models can deliver strong reasoning performance.
  • Maginative: Microsoft’s Phi-4 Reasoning Models Push the Limits of Small AI
  • www.tomshardware.com: Microsoft's CEO reveals that AI writes up to 30% of its code — some projects may have all of its code written by AI
  • Ken Yeung: Microsoft’s New Phi-4 Variants Show Just How Far Small AI Can Go
  • www.tomsguide.com: Microsoft just unveiled new Phi-4 reasoning AI models — here's why they're a big deal
  • Techzine Global: Microsoft is launching three new advanced small language models as an extension of the Phi series. These models have reasoning capabilities that enable them to analyze and answer complex questions effectively.
  • Analytics Vidhya: Microsoft Launches Two Powerful Phi-4 Reasoning Models
  • www.windowscentral.com: Microsoft Introduces Phi-4 Reasoning SLM Models — Still "Making Big Leaps in AI" While Its Partnership with OpenAI Frays
  • Towards AI: Phi-4 Reasoning Models
  • the-decoder.com: Microsoft's Phi 4 responds to a simple "Hi" with 56 thoughts
  • Data Phoenix: Microsoft has introduced three new small language models—Phi-4-reasoning, Phi-4-reasoning-plus, and Phi-4-mini-reasoning—that reportedly deliver complex reasoning capabilities comparable to much larger models while maintaining efficiency for deployment across various computing environments.
  • AI News: Microsoft has introduced three new small language models, Phi-4-reasoning, Phi-4-reasoning-plus, and Phi-4-mini-reasoning, that reportedly deliver complex reasoning capabilities comparable to much larger models while maintaining efficiency for deployment across various computing environments.

@blogs.microsoft.com //
Microsoft is expanding its digital commitments to Europe, focusing on open access to AI and cloud infrastructure. The company's efforts are aimed at fostering mutual economic growth and prosperity across the Atlantic, and they emphasize the importance of sustaining customer trust. This initiative includes datacenter operations across 16 countries and a Digital Resilience Commitment, promising to stand by European customers regardless of geopolitical shifts. Microsoft’s commitment signals a willingness to fight for its European customers in U.S. courts if necessary, drawing on a history of prior legal actions when deemed essential.

Satya Nadella, Microsoft CEO, revealed that AI is now responsible for a significant portion of the company's code. AI writes between 20% and 30% of the code in Microsoft's repositories and projects. This advancement highlights the transformative impact of AI on software development, enhancing efficiency and cutting down on entry-level jobs by automating repetitive tasks. Microsoft is observing that AI produces better results with Python code compared to C++, primarily due to Python's simpler syntax and dynamic typing style.

Microsoft is also enhancing its AI Access Principles, ensuring open access to its AI and cloud platform across Europe. While there are differing opinions on the sufficiency of these commitments to achieve a "sovereign Europe," the tech community generally acknowledges Microsoft's efforts. Some believe government-funded European Digital Public Infrastructure is needed, while others feel Microsoft deserves credit for its initiatives. The company hopes that ongoing talks can resolve tariff issues and reduce non-tariff barriers, aligning with the recommendations in the recent Draghi report.

Recommended read:
References :
  • ai fray: Microsoft makes digital commitments to Europe: open access to AI, cloud infrastructure, pledges to fight for European customers in U.S. court
  • The Microsoft Cloud Blog: Microsoft announces new European digital commitments

Hassam@tomshardware.com //
Microsoft CEO Satya Nadella has revealed that Artificial Intelligence is playing an increasingly significant role in the company's software development. Speaking at Meta's LlamaCon conference, Nadella stated that AI now writes between 20% and 30% of the code in Microsoft's repositories and projects. This underscores the growing influence of AI in revolutionizing software creation, especially for repetitive and data-heavy tasks, leading to efficiency gains. Nadella mentioned that AI is showing more promise in generating Python code compared to C++, due to Python's simpler syntax and better memory management.

Microsoft's embrace of AI in coding aligns with similar trends observed at other tech giants like Google, where AI is reported to generate over 30% of new code. The use of AI in code generation also brings forth concerns about job displacement for new programmers. Despite these anxieties, industry experts highlight the importance of software developers adapting to and leveraging AI tools, rather than ignoring them. Nadella emphasized that while AI can produce code, senior developer oversight remains critical to ensure the stability and reliability of the production environment.

Beyond its internal use, Microsoft is also making strategic moves to expand its cloud and AI infrastructure in Europe. This commitment to the European market includes pledges to fight for its European customers in U.S. courts if necessary, highlighting the importance of trans-Atlantic ties and digital resilience. Microsoft is dedicated to ensuring open access to its AI and cloud platform across Europe, and will be enhancing its AI Access Principles in the coming months. Furthermore, Microsoft is releasing the 2025 Work Trend Index, designed to help leaders and employees navigate the shifting landscape brought about by AI.

Recommended read:
References :
  • news.microsoft.com: Microsoft Releases 2025 Work Trend Index: The Frontier Firm Emerges in Singapore
  • The Microsoft Cloud Blog: Accelerate AI innovation and business transformation: Scaling AI transformation with strategic cloud partnership
  • www.tomshardware.com: Satya Nadella revealed that AI writes as much as 20% to 30% of the code in Microsoft's repositories and projects.
  • TechCrunch: Microsoft CEO says up to 30% of the company’s code was written by AI.
  • Entrepreneur: AI is already writing about 30% of code at Microsoft, Google, and Meta.
  • PCWorld: Microsoft's CEO claims 30% of its new code is written by AI.
  • blogs.microsoft.com: Microsoft is announcing five digital commitments to Europe, starting with an expansion of our cloud and AI infrastructure in Europe.
  • CIO Dive - Latest News: Microsoft expands European footprint amid global trade tensions
  • PCMag Middle East ai: Microsoft Says Up to 30% of Its Code Now Written by AI, Meta Aims For 50% in 2026
  • SiliconANGLE: Satya Nadella says AI is now writing 30% of Microsoft’s code but real change is still many years away
  • The Register - Software: 30 percent of some Microsoft code now written by AI - especially the new stuff
  • siliconangle.com: Satya Nadella says AI is now writing 30% of Microsoft’s code but real change is still many years away
  • MarkTechPost: Microsoft AI Released Phi-4-Reasoning: A 14B Parameter Open-Weight Reasoning Model that Achieves Strong Performance on Complex Reasoning Tasks
  • Analytics Vidhya: Microsoft Launches Two Powerful Phi-4 Reasoning Models
  • www.windowscentral.com: Satya Nadella says AI already writes 30% of Microsoft's code — but Bill Gates claims software development is too complex to be fully automated
  • The Next Platform: AI Steady, Cloud Accelerating Gives Microsoft A Big Datacenter Boost

Isaac Sacolick@drive.starcio.com //
Microsoft is significantly expanding its AI infrastructure and coding capabilities. CEO Satya Nadella recently revealed that Artificial Intelligence now writes between 20% and 30% of the code powering Microsoft's software. In some projects, AI may even write the entirety of the code. This adoption of AI in coding highlights its transformative impact on software development, streamlining repetitive and data-heavy tasks to boost corporate efficiency.

The increasing reliance on AI for code generation is not without its concerns, particularly for new programmers. While AI excels at handling predictable tasks, senior developer oversight remains crucial to ensure the stability and accuracy of the code. Microsoft is reporting better results with AI-generated Python code compared to C++, partly attributed to Python's simpler syntax and memory management features.

In addition to enhancing its coding capabilities, Microsoft is also focusing on expanding its digital commitments and infrastructure in Europe. Furthermore, Appian is transforming low-code app development through AI agents. These agents are making app creation easier and more scalable, fostering collaboration and innovation in the development process. Microsoft has also released its 2025 Work Trend Index, highlighting the emergence of the "Frontier Firm" in Singapore, where businesses are embracing AI agents to enhance workforce capabilities and address capacity gaps.

Recommended read:
References :
  • drive.starcio.com: How Appian is Inspiring with AI Agents and Transforming Low-Code App Development
  • www.tomshardware.com: Microsoft's CEO reveals that AI writes up to 30% of its code — some projects may have all of its code written by AI
  • news.microsoft.com: Microsoft Releases 2025 Work Trend Index: The Frontier Firm Emerges in Singapore

@techstrong.ai //
Microsoft has unveiled the public preview of Azure MCP Server, an open-source tool designed to empower AI agents with enhanced capabilities. This server implements the Model Context Protocol (MCP), establishing a standardized communication bridge between AI agents and Azure cloud resources. By issuing natural language instructions, AI systems can now seamlessly interact with various Azure services, marking a significant step towards AI-driven business transformation. Because MCP defines a universal interface, it enables a "write once" approach to integrating AI systems with data sources.

The Azure MCP Server provides AI agents with access to core Azure services, including Azure Cosmos DB (NoSQL), Azure Storage, Azure Monitor (Log Analytics), Azure App Configuration, and Azure Resource Groups. Agents can perform tasks such as querying databases, managing storage blobs, configuring monitoring settings, and managing resource groups. The momentum behind MCP is substantial, with over 1,000 MCP servers built since its launch. The server's current capabilities enable agents to list accounts, databases, and containers; execute SQL queries; access container properties; query logs using Kusto Query Language (KQL); and manage key-value pairs.
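Under the hood, MCP is built on JSON-RPC 2.0, and a tool invocation travels as a `tools/call` request. The sketch below illustrates the shape of such a message; the tool name and arguments are hypothetical placeholders, not the Azure MCP Server's actual schema:

```python
import json

def make_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Build a JSON-RPC 2.0 request in the general shape MCP uses for tool calls."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# Hypothetical tool name and arguments, for illustration only.
msg = make_tool_call(1, "cosmosdb-query", {
    "account": "my-account",
    "database": "orders",
    "query": "SELECT TOP 5 * FROM c",
})
print(msg)
```

Because every server speaks this same envelope, an agent that can emit `tools/call` requests can drive any MCP server without bespoke integration code.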

Furthermore, Microsoft's AI Red Team (AIRT) has released a comprehensive guide to failure modes in agentic AI systems. The report categorizes failure modes across two dimensions: security and safety, each comprising novel and existing types. Novel security failures include agent compromise, agent injection, and multi-agent jailbreaks. Novel safety failures cover issues such as biases in resource allocation and prioritization risks. By publishing this detailed taxonomy, Microsoft aims to provide practitioners with a critical foundation for designing and maintaining resilient agentic systems.


@www.microsoft.com //
Microsoft is aggressively integrating AI into its services to boost productivity and user experience. A key development is the rollout of Microsoft Copilot for Judges in UK courts, alongside updated guidelines for GenAI usage. These efforts are part of a broader strategy to embrace human-agent collaboration, as seen in the latest Microsoft 365 Copilot upgrades. The goal is to harness AI's capabilities while ensuring responsible and secure implementation across various sectors.

The UK Courts and Tribunals Judiciary are encouraging judges to utilize Microsoft’s ‘Copilot Chat’ genAI capability through their eJudiciary platform. Updated guidance emphasizes that while useful, judges must use genAI cautiously and understand its limitations, particularly concerning the accuracy and sources of information. Judges are warned that public AI chatbots do not provide answers from authoritative databases and are not necessarily the most accurate source. Microsoft has assured that ‘Copilot Chat’ offers enterprise data protection and operates within Microsoft 365's security frameworks when accessed via the eJudiciary account.

Microsoft is also working to enhance safety and security in AI agent systems. The Microsoft AI Red Team has released a whitepaper outlining the taxonomy of failure modes in AI agents, to help security professionals and machine learning engineers understand potential risks. This effort involved cataloging failures from internal red teaming, collaboration with various Microsoft teams, and interviews with external practitioners. The taxonomy identifies failure modes across security and safety pillars, addressing issues from data exfiltration to biased service delivery.

Recommended read:
References :
  • THE DECODER: Microsoft adds reasoning agents and company search to 365 Copilot
  • Ken Yeung: Microsoft Pushes Into Human-Agent Collaboration Era with Latest M365 Copilot Upgrades
  • venturebeat.com: Microsoft just launched powerful AI ‘agents’ that could completely transform your workday — and challenge Google’s workplace dominance
  • Maginative: Microsoft 365 Copilot Redesign: The New Face of Human-Agent Collaboration
  • The Register - Software: The latest update to Microsoft 365 Copilot brings AI-powered search, so-called reasoning agents, and a new Agent Store. Some users already have access to certain features, while others may have to wait through May.
  • www.artificiallawyer.com: UK Courts Roll Out Microsoft Copilot For Judges, Update GenAI Rules
  • www.microsoft.com: Experience the future of customer service with AI agents
  • Ken Yeung: Microsoft: Companies Are Going AI-First, Turning to Digital Labor to Scale
  • the-decoder.com: Microsoft adds reasoning agents and company search to 365 Copilot
  • thetechbasic.com: Microsoft has a new vision for the future of work.
  • blogs.microsoft.com: The 2025 Annual Work Trend Index: The Frontier Firm is born
  • Towards AI: The Future of Work: How Microsoft 365 Office Solutions Are Evolving with AI Integration
  • www.marktechpost.com: Microsoft Releases a Comprehensive Guide to Failure Modes in Agentic AI Systems
  • Source Asia: Microsoft blog post on agentic AI driving AI-first business transformation.

@techxplore.com //
Microsoft is making strides in artificial intelligence with the introduction of a new AI model designed to run efficiently on regular CPUs, rather than the more power-hungry GPUs traditionally required. Developed by computer scientists at Microsoft Research, in collaboration with the University of Chinese Academy of Sciences, this innovative model utilizes a 1-bit architecture, processing data using only three values: -1, 0, and 1. This allows for simplified computations that rely on addition and subtraction, significantly reducing memory usage and energy consumption compared to models that use floating-point numbers. Testing has shown that this CPU-based model can compete with and even outperform some GPU-based models in its class, marking a significant step towards more sustainable AI.
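The addition-and-subtraction trick can be made concrete with a tiny pure-Python sketch (illustrative only, not the model's actual kernel): applying a matrix of -1/0/+1 weights to a vector requires no multiplications at all.

```python
def ternary_matvec(weights, x):
    """Multiply a ternary weight matrix (entries -1, 0, +1) by a vector
    using only additions and subtractions -- no multiplications."""
    out = []
    for row in weights:
        acc = 0.0
        for w, v in zip(row, x):
            if w == 1:
                acc += v
            elif w == -1:
                acc -= v
            # w == 0 contributes nothing, so it is skipped entirely
        out.append(acc)
    return out

W = [[1, 0, -1],
     [-1, 1, 1]]
x = [2.0, 3.0, 5.0]
print(ternary_matvec(W, x))  # [2.0 - 5.0, -2.0 + 3.0 + 5.0] = [-3.0, 6.0]
```

Since zeros are skipped and the remaining entries cost one add or subtract each, the energy and memory savings over floating-point multiply-accumulate follow directly.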

Alongside advancements in AI model efficiency, Microsoft is also enhancing user accessibility across its platforms. The company's Dynamics 365 Field Service is receiving a new Exchange Integration feature, designed to seamlessly synchronize work order bookings with Outlook and Teams calendars. This feature allows technicians to view their work assignments, personal appointments, and other work meetings in one centralized location. With a one-way sync from Dynamics 365 to Exchange that takes a maximum of 15 minutes, technicians can operate within Outlook, reducing scheduling confusion and creating a more streamlined workflow.

However, the rapid expansion of AI also raises concerns about energy consumption and resource management. OpenAI CEO Sam Altman has revealed that user politeness, specifically the use of "please" and "thank you" when interacting with ChatGPT, is costing the company millions of dollars in electricity. This highlights the immense energy requirements of AI chatbots, which consume significantly more power than traditional Google searches. These insights underscore the importance of developing energy-efficient AI solutions, as well as considering the broader environmental impact of increasingly complex AI systems.


Megan Crouse@techrepublic.com //
Microsoft has unveiled BitNet b1.58 2B4T, a groundbreaking AI model designed for exceptional efficiency. Developed by Microsoft's General Artificial Intelligence group, this model constrains each neural network weight to one of only three discrete values (-1, 0, or +1). This approach, called ternary quantization, means each weight carries just 1.58 bits of information, drastically reducing memory usage. The result is an AI model that can operate on standard CPUs without the need for specialized, energy-intensive GPUs.
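Both the "1.58 bits" figure (log2 of three values) and the rounding scheme can be sketched in a few lines. The snippet below follows the absmean quantization described in the BitNet papers: scale by the mean absolute weight, then round and clip to {-1, 0, +1}. The sample weights are made up for illustration.

```python
import math

def absmean_ternary(weights, eps=1e-8):
    """Quantize real-valued weights to {-1, 0, +1}: scale by the mean
    absolute value, then round and clip (absmean scheme, simplified)."""
    gamma = sum(abs(w) for w in weights) / len(weights)
    q = [max(-1, min(1, round(w / (gamma + eps)))) for w in weights]
    return q, gamma

q, gamma = absmean_ternary([0.8, -0.05, -1.2, 0.3])
print(q)             # ternary codes for the four sample weights
print(math.log2(3))  # ~1.585 bits of information per ternary weight
```

A weight near zero rounds to 0 and drops out of the computation entirely, which is where much of the efficiency comes from.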

Unlike conventional AI models that rely on 16- or 32-bit floating-point numbers, BitNet's unique architecture allows it to run smoothly on hardware like Apple's M2 chip, requiring only 400MB of memory. To compensate for its low-precision weights, BitNet b1.58 2B4T was trained on a massive dataset of four trillion tokens, the equivalent of approximately 33 million books. This extensive training enables the model to perform on par with, and in some cases even better than, other leading models of similar size, such as Meta's Llama 3.2 1B, Google's Gemma 3 1B, and Alibaba's Qwen 2.5 1.5B.

To facilitate the deployment and adoption of this innovative model, Microsoft has released a custom software framework called bitnet.cpp, optimized to take full advantage of BitNet's ternary weights. This framework is available for both GPU and CPU execution, including a lightweight C++ version. The model has demonstrated strong performance across a variety of tasks including math and common sense reasoning in benchmark tests. Microsoft plans to expand BitNet to support longer texts, additional languages, and multimodal inputs like images, while also working on the Phi series, another family of efficient AI models.

Recommended read:
References :
  • the-decoder.com: BitNet: Microsoft shows how to put AI models on a diet The article appeared first on .
  • TechSpot: Microsoft's BitNet shows what AI can do with just 400MB and no GPU
  • www.techrepublic.com: Microsoft’s model BitNet b1.58 2B4T is available on Hugging Face but doesn’t run on GPU and requires a proprietary framework.
  • www.tomshardware.com: Microsoft researchers developed a 1-bit AI model that's efficient enough to run on traditional CPUs without needing specialized chips like NPUs or GPUs.

Megan Crouse@techrepublic.com //
Microsoft has unveiled BitNet b1.58, a groundbreaking language model designed for ultra-efficient operation. Unlike traditional language models that rely on 16- or 32-bit floating-point numbers, BitNet utilizes a mere 1.58 bits per weight. This innovative approach significantly reduces memory requirements and energy consumption, enabling the deployment of powerful AI on devices with limited resources. The model is based on the standard transformer architecture, but incorporates modifications aimed at efficiency, such as BitLinear layers and 8-bit activation functions.

The BitNet b1.58 2B4T model contains two billion parameters and was trained on a massive dataset of four trillion tokens, roughly equivalent to the contents of 33 million books. Despite its reduced precision, BitNet reportedly performs comparably to models that are two to three times larger. In benchmark tests, it outperformed other compact models and performed competitively with significantly larger and less efficient systems. Its memory footprint is just 400MB, making it suitable for deployment on laptops or in cloud environments.

Microsoft has released dedicated inference tools for both GPU and CPU execution, including a lightweight C++ version, to facilitate adoption. The model is available on Hugging Face. Future development plans include expanding the model to support longer texts, additional languages, and multimodal inputs such as images. Microsoft is also working on another efficient model family under the Phi series. The company demonstrated that this model can run on an Apple M2 chip.

Recommended read:
References :
  • www.techrepublic.com: Microsoft Releases Largest 1-Bit LLM, Letting Powerful AI Run on Some Older Hardware
  • medium.com: Microsoft has released a new language model, BitNet, designed for energy efficiency, minimizing the computational and memory requirements for use on older hardware. This strategy aims to make advanced AI more accessible to a wider range of users.
  • THE DECODER: Microsoft's new model, BitNet b1.58 2B4T, is intended to operate with reduced memory and energy consumption. The model demonstrates an effort to expand access and reduce computational burdens for AI applications.
  • www.zdnet.com: Microsoft introduces BitNet b1.58 2B4T, a new small language model designed to run efficiently on older hardware without GPUs.
  • the-decoder.com: Microsoft Shows How to Put AI Models on a Diet
  • arstechnica.com: This article details Microsoft researchers creating a super-efficient AI that uses up to 96% less energy.
  • www.tomshardware.com: Microsoft researchers build 1-bit AI LLM, model small enough to run on some CPUs
  • TechSpot: Microsoft's BitNet shows what AI can do with just 400MB and no GPU
  • www.sciencedaily.com: Researchers developed a more efficient way to control the outputs of a large language model, guiding it to generate text that adheres to a certain structure, like a programming language, and remains error free.

Alexey Shabanov@TestingCatalog //
Microsoft is significantly expanding the integration of artificial intelligence across its platforms, aiming to enhance productivity and user experience. Key initiatives include advancements in Copilot Studio, Dynamics 365 Field Service, and Azure AI Foundry, along with a focus on cybersecurity and AI safety. These efforts demonstrate Microsoft's commitment to embedding AI into various aspects of its software and services, transforming how users interact with technology.

Copilot Studio is gaining a "computer use" tool, available as an early access research preview, allowing agents to interact with graphical user interfaces across websites and desktop applications. This feature enables the automation of tasks, such as data entry and market research, even for systems lacking direct API integration, marking an evolution in robotic process automation (RPA). Moreover, Microsoft is launching the Copilot Merchant Program, integrating third-party retailers into its AI ecosystem for real-time product suggestions and seamless purchases.

Microsoft is also actively addressing cybersecurity concerns related to AI through its Secure Future Initiative (SFI). This initiative focuses on improving Microsoft's security posture and working with governments and industry to enhance the security of the entire digital ecosystem. Additionally, Microsoft Research is exploring AI systems as "Tools for Thought" at CHI 2025, examining how AI can support critical thinking, decision-making, and creativity, as part of its ongoing effort to reimagine AI’s role in human thinking and knowledge work.

Recommended read:
References :
  • TestingCatalog: Details Copilot Studio gains early access computer use tool to automate complex GUIs.
  • THE DECODER: Microsoft has launched "Computer Use" for Copilot Studio as an early research preview.
  • The Lalit Blogs: Greetings, readers! Lalit Mohan here, diving into one of the most exciting announcements from Microsoft—the introduction of Microsoft 365 Copilot Chat.
  • Microsoft Copilot Blog: Release Notes: April 16, 2025
  • www.techrepublic.com: Microsoft’s New Copilot Studio Feature Offers More User-Friendly Automation
  • Analytics India Magazine: Microsoft has released the model weights on Hugging Face, along with open-source code for running it.
  • THE DECODER: BitNet b1.58 2B4T is a new language model from Microsoft designed to operate with minimal energy and memory usage.
  • TestingCatalog: Discover Microsoft Copilot, including a new avatar and voices. Get insights on upcoming features and how they enhance your AI experience.
  • www.microsoft.com: Describes the Exchange Integration feature in Dynamics 365 Field Service, syncing work order bookings with Outlook and Teams calendars.
  • The Microsoft Cloud Blog: How real-world businesses are transforming with AI – with 50 new stories
  • The Register - Software: The Register article: Microsoft 365 Copilot gets a new crew, including Researcher and Analyst bots.
  • the-decoder.com: Microsoft is adding reasoning agents and company search to 365 Copilot
  • THE DECODER: Microsoft adds reasoning agents and company search to 365 Copilot

@www.microsoft.com //
Microsoft researchers have unveiled Debug-Gym, a novel environment designed to enhance the debugging skills of AI coding tools. Debug-Gym allows AI agents to actively engage in the debugging process by setting breakpoints, navigating codebases, and examining runtime variable values. This interactive approach enables AI to gain a deeper understanding of code execution flow, mimicking how human developers identify and resolve errors. The platform supports both Python and Java programming languages, providing approximately 3,000 real-world debugging tasks for AI agents to learn from.

The need for Debug-Gym stems from the observation that while AI coding tools are increasingly adept at code generation, they often struggle with debugging, a critical and time-consuming aspect of software development. Current AI models often lack the ability to explore and gather additional information when initial solutions fail, leaving bugs unaddressed. Debug-Gym addresses this limitation by providing AI agents with the tools and environment to actively seek information and refine their approach, similar to how human developers use interactive debuggers like Python's pdb to inspect variables and trace execution.
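The tool-interface idea can be sketched as a tiny command interpreter over a captured frame's variables. The command set below is modeled loosely on pdb's `p` and `where`; it is a hypothetical illustration, not Debug-Gym's actual interface.

```python
def run_debug_session(frame_vars, commands):
    """Interpret a minimal pdb-like command language against a snapshot
    of a frame's variables: 'p <name>' prints one value, 'where' lists
    all variable names. Returns the transcript an agent would observe."""
    transcript = []
    for cmd in commands:
        if cmd.startswith("p "):
            name = cmd[2:].strip()
            transcript.append(f"{name} = {frame_vars.get(name, '<undefined>')}")
        elif cmd == "where":
            transcript.append(", ".join(sorted(frame_vars)))
        else:
            transcript.append(f"unknown command: {cmd}")
    return transcript

frame = {"total": 0, "items": [1, 2, 3]}
for line in run_debug_session(frame, ["where", "p total", "p missing"]):
    print(line)
```

An agent in an environment like this learns by issuing commands, reading the transcript, and deciding what to inspect next, rather than guessing a fix from the source text alone.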

Debug-Gym offers buggy Python programs with known faults, spanning syntax, runtime, and logical errors, along with a tool interface exposing debugging commands. This environment is designed to evaluate how AI agents perform in realistic code-repair tasks. The intent is to facilitate the training of AI models to perform interactive debugging more effectively, potentially leading to significant time savings for developers. Microsoft intends to open-source Debug-Gym to encourage further research in this area.

Recommended read:
References :
  • MarkTechPost: Can LLMs Debug Like Humans? Microsoft Introduces Debug-Gym for AI Coding Agents
  • www.microsoft.com: Developers spend a lot of time debugging code.
  • www.techradar.com: Microsoft study claims AI is still struggling to debug software
  • www.marktechpost.com: Can LLMs Debug Like Humans? Microsoft Introduces Debug-Gym for AI Coding Agents
  • Microsoft Research: Debug-gym: an environment for AI coding tools to learn how to debug code like programmers
  • www.developer-tech.com: Microsoft Research teaches AI tools how to debug code
  • www.laptopmag.com: Microsoft Recall is gradually rolling out — will new privacy features get you to try Windows AI?
  • Developer Tech News: Microsoft Research teaches AI tools how to debug code
  • www.microsoft.com: FYAI: How agents will transform business and daily work with Business and Industry Copilot Corporate Vice President Charles Lamanna
  • TechSpot: Microsoft Research shows AI coding tools fall short in key debugging tasks
  • arstechnica.com: Researchers find AI is pretty bad at debugging—but they’re working on it