News from the AI & ML world

DeeperML - #azureai

@devblogs.microsoft.com //
Microsoft is aggressively pushing AI innovation to the edge, with a series of announcements highlighting the company's vision for an AI-powered future where humans partner with autonomous agents. At the Build developer conference, Microsoft unveiled tools designed to help developers build this agentic future, embedding bots into browsers, websites, operating systems, and everyday workflows. Unlike previous Copilot-centric approaches, Microsoft is placing greater emphasis on dynamic agents, powered by integrations with third-party systems through the Model Context Protocol (MCP), shifting from single-use AI assistants to broader, integrated ecosystems.

Microsoft is also introducing the Agent Store for Microsoft 365 Copilot, a centralized, curated marketplace designed to help automate tasks, streamline workflows, and boost productivity. The Agent Store offers a new experience within Microsoft 365 Copilot that enables users to browse, install, and try agents tailored to their needs, and features agents built by Microsoft, trusted partners, and customers. With over 70 agents available at launch, the Agent Store aims to make it easier to discover, share, and deploy agents across teams and organizations, using both low-code and pro-code tools.

Furthermore, Microsoft's agentic AI platform, Azure AI Foundry, is powering key healthcare advances with Stanford. Beyond healthcare, Microsoft is exploring ways to bring AI to web apps in the Edge browser and to let developers deploy bots directly on Windows, as the company recognizes that the full potential of its AI agent ecosystem is still unfolding.

Recommended read:
References :
  • Ken Yeung: Microsoft Pushes AI to the Edge
  • Source Asia: Introducing the Agent Store: Build, publish and discover agents in Microsoft 365 Copilot
  • John Werner: Stanford’s Use Of Microsoft Agentic Platform Leads To Better Analysis
  • blogs.microsoft.com: Microsoft Build 2025: The age of AI agents and building the open agentic web
  • news.microsoft.com: Microsoft Build 2025: The age of AI agents and building the open agentic web
  • MarkTechPost: Microsoft Releases NLWeb: An Open Project that Allows Developers to Easily Turn Any Website into an AI-Powered App with Natural Language Interfaces
  • news.microsoft.com: From sea to sky: Microsoft’s Aurora AI foundation model goes beyond weather forecasting

Kevin Okemwa@windowscentral.com //
Microsoft is strategically prioritizing AI model accessibility through Azure, with CEO Satya Nadella emphasizing that the company will sell whichever AI models customers want, rather than tying itself to any single provider, in pursuit of maximum profit. This approach involves internal restructuring, including job cuts, to free up investment in AI and streamline operations. The goal is to build a robust, subscription-based AI operating system that leverages advancements like ChatGPT, ensuring Microsoft remains competitive in the rapidly evolving AI landscape.

Microsoft is actively working on improving integrations with external data sources using the Model Context Protocol (MCP). This initiative has led to a collaboration with Twilio to enhance conversational AI capabilities for enterprise customer communication. Twilio's technology helps deliver the "last mile" of AI conversations, enabling businesses to integrate Microsoft's conversational intelligence capabilities into their existing communication channels. This partnership gives Twilio greater visibility among Microsoft's enterprise customers, exposing its developer tools to large firms looking to build extensible custom communication solutions.

Meanwhile, in the open-source community, Meta has released Pyrefly, a faster Python type checker written in Rust. Developed initially for Instagram's codebase, Pyrefly is now available for the broader Python community to use, helping developers catch type errors before runtime rather than in production.
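
For a sense of what a fast type checker buys you in practice, here is a minimal, illustrative snippet containing the kind of bug Pyrefly is built to flag before the code ever runs; the install and check commands in the comment follow the project's public docs and should be treated as assumptions rather than verified output.

```python
# buggy.py -- a classic mistake a static type checker catches before runtime.
# Installing and running the checker (commands assumed from the project's docs):
#   pip install pyrefly
#   pyrefly check buggy.py

def greet(name: str) -> str:
    return "Hello, " + name

# Passing an int where a str is annotated: without a type checker this only
# fails at runtime with a TypeError; a checker reports it as a static error.
print(greet(42))
```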

Recommended read:
References :
  • engineering.fb.com: Open-sourcing Pyrefly: A faster Python type checker written in Rust
  • www.windowscentral.com: Microsoft's allegiance isn't to OpenAI's pricey models — Satya Nadella's focus is selling any AI customers want for maximum profits

@the-decoder.com //
Microsoft is embracing interoperability in the AI agent space by integrating Google's open Agent2Agent (A2A) protocol into its Azure AI Foundry and Copilot Studio platforms. This move aims to enable AI agents to seamlessly collaborate across diverse platforms and ecosystems. A2A defines how a client agent formulates tasks and a remote agent executes them, supporting both synchronous and asynchronous task handling with status updates exchanged via the protocol. By adopting A2A, Microsoft is fostering a future where AI agents can work together regardless of the underlying framework or vendor, promoting cross-platform compatibility and enhancing AI application development efficiency.
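
The specification itself is being developed in the open on GitHub; the sketch below only illustrates the interaction pattern described here, in which a client agent sends a JSON-RPC task to a remote agent over HTTP and then reads back its status. The endpoint URL, method name, and field names are illustrative assumptions, not quotes from the spec.

```python
import requests  # third-party HTTP client: pip install requests

# Hypothetical remote agent endpoint; real A2A agents advertise themselves via
# a published agent card, but this URL is made up for illustration.
A2A_ENDPOINT = "https://remote-agent.example.com/a2a"

# A JSON-RPC 2.0 request carrying a task from a client agent to a remote agent.
# The method and parameter names below are illustrative, not taken from the spec.
task_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tasks/send",
    "params": {
        "id": "task-123",  # client-chosen task identifier
        "message": {
            "role": "user",
            "parts": [{"type": "text", "text": "Summarize this week's open support tickets."}],
        },
    },
}

response = requests.post(A2A_ENDPOINT, json=task_request, timeout=30)
# A synchronous task returns its result directly; an asynchronous one is
# polled or streamed for status updates, as described above.
print(response.json())
```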

Microsoft's A2A support will allow Copilot Studio agents to interact with external agents, including agents built with frameworks such as LangChain or Semantic Kernel and agents running entirely outside the Microsoft ecosystem. This means agents can delegate tasks, share data, and act together to automate daily workflows. Microsoft promises full integration with existing security and governance systems, including Microsoft Entra and audit logging. Over 230,000 organizations already use Copilot Studio, including 90 percent of the Fortune 500. Developers can access sample applications, such as automated meeting scheduling between two agents.

Google introduced the A2A protocol in April with more than 50 technology partners; it is designed to let agents work together using standardized interfaces such as HTTP and JSON-RPC. Microsoft is contributing to the specification work on GitHub and plans to help drive further development. A public preview of A2A in Azure AI Foundry and Copilot Studio is set to launch soon. Microsoft sees protocols like A2A as the foundation for a new kind of software architecture, in which connected agents automate daily workflows and collaborate across platforms, without vendor lock-in but with auditability and control.

Recommended read:
References :
  • huggingface.co: Microsoft introduces two new additions to its Phi-4 family: Phi-4-Reasoning and Phi-4-Reasoning-Plus
  • the-decoder.com: Microsoft leverages Google's open A2A protocol for interoperable AI agents
  • techcrunch.com: Microsoft adopts Google’s standard for linking up AI agents.
  • AI News | VentureBeat: Microsoft CEO Satya Nadella’s endorsement of Google DeepMind’s Agent2Agent (A2A) open protocol and Anthropic’s Model Context Protocol (MCP) will immediately accelerate agentic AI-based collaboration and interdependence, leading to rapid gains in agentic-based apps and platforms.
  • Data Phoenix: Microsoft launches Phi-4 'reasoning' models to celebrate Phi-3's first anniversary
  • Analytics India Magazine: Microsoft Backs Google’s Open Agent2Agent Protocol to Power Multi-Agent AI Apps
  • THE DECODER: Microsoft leverages Google's open A2A protocol for interoperable AI agents
  • analyticsindiamag.com: Microsoft Backs Google’s Open Agent2Agent Protocol to Power Multi-Agent AI Apps
  • Maginative: Microsoft formalizes partnership with Google on Agent2Agent protocol, enabling AI systems to communicate across platforms and organizational boundaries.
  • CIO Dive - Latest News: Microsoft commits to Google’s interoperability protocol for AI agents
  • www.microsoft.com: Empowering multi-agent apps with the open Agent2Agent (A2A) protocol
  • www.microsoft.com: A new era in business processes: Autonomous agents for ERP

Joey Snow@Microsoft Security Blog //
Microsoft is significantly expanding its AI capabilities across various sectors, with innovations spanning cloud services, business applications, and even education. CEO Satya Nadella revealed that Microsoft's AI model performance is "doubling every six months," driven by advancements in pre-training, inference, and system design. Microsoft is also betting on Musk's Grok AI to challenge OpenAI's dominance, planning to host Grok models on Azure and potentially giving developers more options for building AI applications. These advancements are designed to accelerate AI innovation and business transformation for organizations.

The company's Dynamics 365 and Power Platform are receiving substantial updates with the 2025 release wave 1, integrating AI-driven Copilot and agent capabilities to enhance business processes and customer engagement. These new features will help the workforce automate tasks across sales, service, finance, and supply chain functions. Microsoft is also launching pre-built agents for Dynamics 365, accelerating time to value for businesses looking to apply AI in their operations. In education, Microsoft 365 Copilot is being rolled out to enhance learning, with Copilot Chat used for personalized student support and instructor assistance.

Security remains a key priority, with Microsoft sharing secure coding tips from experts at Microsoft Build. These tips, including securing AI from the start and locking down data with Purview APIs, are designed to help developers build safer applications. With AI playing a bigger role, secure coding is now a necessity. Microsoft is releasing a set of Purview APIs (plus an SDK) that will allow partners and customers to integrate their custom AI apps with the Microsoft Purview ecosystem for enterprise-grade data security and compliance outcomes.

Recommended read:
References :
  • Microsoft Security Blog: 14 secure coding tips: Learn from the experts at Microsoft Build
  • www.microsoft.com: 2025 release wave 1 brings hundreds of updates to Microsoft Dynamics 365 and Power Platform
  • www.microsoft.com: Setting a strong cloud foundation is paramount for organizations striving to achieve superior differentiation.

Alexey Shabanov@TestingCatalog //
Microsoft is significantly expanding the integration of artificial intelligence across its platforms, aiming to enhance productivity and user experience. Key initiatives include advancements in Copilot Studio, Dynamics 365 Field Service, and Azure AI Foundry, along with a focus on cybersecurity and AI safety. These efforts demonstrate Microsoft's commitment to embedding AI into various aspects of its software and services, transforming how users interact with technology.

Copilot Studio is gaining a "computer use" tool, available as an early access research preview, allowing agents to interact with graphical user interfaces across websites and desktop applications. This feature enables the automation of tasks, such as data entry and market research, even for systems lacking direct API integration, marking an evolution in robotic process automation (RPA). Moreover, Microsoft is launching the Copilot Merchant Program, integrating third-party retailers into its AI ecosystem for real-time product suggestions and seamless purchases.

Microsoft is also actively addressing cybersecurity concerns related to AI through its Secure Future Initiative (SFI). This initiative focuses on improving Microsoft's security posture and working with governments and industry to enhance the security of the entire digital ecosystem. Additionally, Microsoft Research is exploring AI systems as "Tools for Thought" at CHI 2025, examining how AI can support critical thinking, decision-making, and creativity, as part of its ongoing effort to reimagine AI’s role in human thinking and knowledge work.

Recommended read:
References :
  • TestingCatalog: Copilot Studio gains an early-access "computer use" tool to automate complex GUIs.
  • THE DECODER: Microsoft has launched "Computer Use" for Copilot Studio as an early research preview.
  • The Lalit Blogs: Greetings, readers! Lalit Mohan here, diving into one of the most exciting announcements from Microsoft—the introduction of Microsoft 365 Copilot Chat.
  • Microsoft Copilot Blog: Release Notes: April 16, 2025
  • www.techrepublic.com: Microsoft’s New Copilot Studio Feature Offers More User-Friendly Automation
  • Analytics India Magazine: Microsoft has released the model weights on Hugging Face, along with open-source code for running it.
  • THE DECODER: BitNet b1.58 2B4T is a new language model from Microsoft designed to operate with minimal energy and memory usage.
  • TestingCatalog: Discover Microsoft Copilot, including a new avatar and voices. Get insights on upcoming features and how they enhance your AI experience.
  • www.microsoft.com: Describes the Exchange Integration feature in Dynamics 365 Field Service, syncing work order bookings with Outlook and Teams calendars.
  • The Microsoft Cloud Blog: How real-world businesses are transforming with AI – with 50 new stories
  • The Register - Software: Microsoft 365 Copilot gets a new crew, including Researcher and Analyst bots.
  • the-decoder.com: Microsoft is adding reasoning agents and company search to 365 Copilot
  • THE DECODER: Microsoft adds reasoning agents and company search to 365 Copilot

Chris McKay@Maginative //
OpenAI has unveiled its latest advancements in AI technology with the launch of the GPT-4.1 family of models. This new suite includes GPT-4.1, GPT-4.1 mini, and GPT-4.1 nano, all accessible via API, and represents a significant leap forward in coding capabilities, instruction following, and context processing. Notably, these models feature an expanded context window of up to 1 million tokens, enabling them to handle larger codebases and extensive documents. The GPT-4.1 family aims to cater to a wide range of developer needs by offering different performance and cost profiles, with the goal of creating more advanced and efficient AI applications.
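
Since the new models are API-only, the most direct way to try them is through OpenAI's SDK. Below is a minimal, hedged sketch using the Python SDK's chat completions endpoint; the model identifiers mirror the names reported here, and the prompt is purely illustrative.

```python
from openai import OpenAI  # pip install openai; reads OPENAI_API_KEY from the environment

client = OpenAI()

# Swap in "gpt-4.1-mini" or "gpt-4.1-nano" for the cheaper, lower-latency variants.
response = client.chat.completions.create(
    model="gpt-4.1",
    messages=[
        {"role": "system", "content": "You are a concise code reviewer."},
        {"role": "user", "content": "Review this function for bugs:\n\ndef add(a, b):\n    return a - b"},
    ],
)

print(response.choices[0].message.content)
```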

These models demonstrate superior results on various benchmarks compared to their predecessors, GPT-4o and GPT-4o mini. GPT-4.1 scores 54.6% on the SWE-bench Verified coding test and 38.3% on Scale's MultiChallenge benchmark for instruction following, both marked improvements over GPT-4o. Each model is designed with a specific purpose in mind: GPT-4.1 excels in high-level cognitive tasks like software development and research, GPT-4.1 mini offers balanced performance with reduced latency and cost, while GPT-4.1 nano provides the quickest and most affordable option for tasks such as classification. All three models have knowledge updated through June 2024.

The introduction of the GPT-4.1 family also brings about changes in OpenAI's existing model offerings. The GPT-4.5 Preview model in the API is set to be deprecated on July 14, 2025, due to GPT-4.1 offering comparable or better utility at a lower cost. In terms of pricing, GPT-4.1 is 26% less expensive than GPT-4o for median queries, along with increased prompt caching discounts. Early testers have already noted positive outcomes, with improvements in code review suggestions and data retrieval from large documents. OpenAI emphasizes that many underlying improvements are being integrated into the current GPT-4o version within ChatGPT.

Recommended read:
References :
  • TestingCatalog: OpenAI debuts GPT-4.1 family offering 1M token context window
  • venturebeat.com: OpenAI slashes prices for GPT-4.1, igniting AI price war among tech giants
  • Interconnects: OpenAI's latest models optimizing on intelligence per dollar.
  • THE DECODER: OpenAI launches GPT-4.1: New model family to improve agents, long contexts and coding
  • Simon Willison's Weblog: OpenAI three new models this morning: GPT-4.1, GPT-4.1 mini and GPT-4.1 nano. These are API-only models right now, not available through the ChatGPT interface (though you can try them out in OpenAI's ).
  • Analytics Vidhya: All About OpenAI’s Latest GPT 4.1 Family
  • pub.towardsai.net: TAI #148: New API Models from OpenAI (4.1) & xAI (grok-3); Exploring Deep Research’s Scaling Laws
  • Towards AI: The GPT-4.1 models, accessible via API, provide a significant advancement in AI capabilities and offer an intriguing alternative for developers looking for high performance at lower cost.
  • Towards AI: TAI #148: New API Models from OpenAI (4.1) & xAI (grok-3); Exploring Deep Research’s Scaling Laws
  • venturebeat.com: OpenAI’s new GPT-4.1 models can process a million tokens and solve coding problems better than ever
  • techstrong.ai: Just days after announcing its plans to retire GPT-4 in ChatGPT, OpenAI on Monday launched a new set of flagship models named GPT-4.1. The release, which The Verge anticipated in an article last week, included the standard version GPT-4.1 model, along with two smaller models — GPT-4.1 mini, and GPT-4.1 nano which OpenAI touts as […]
  • the-decoder.com: OpenAI launches GPT-4.1: New model family to improve agents, long contexts and coding
  • www.tomsguide.com: OpenAI's latest model is here but it isn't GPT-5, it's 4.1, a model all about coding
  • shellypalmer.com: Shelly Palmer discusses the launch of GPT-4.1 and its improved capabilities.
  • felloai.com: OpenAI Quietly Launched GPT‑4.1 – A GPT-4o Successor That’s Crushing Benchmarks
  • thezvi.wordpress.com: The Zvi discusses the mini upgrade from GPT-4.1.
  • bdtechtalks.com: GPT-4.1: OpenAI’s most confusing model
  • Fello AI: OpenAI Quietly Launched GPT‑4.1 – A GPT-4o Successor That’s Crushing Benchmarks
  • www.eweek.com: eWeek reports on the pros and cons of OpenAI's new GPT-4.1 model.
  • Last Week in AI: Last Week in AI discusses the new GPT 4.1 model release by OpenAI
  • Fello AI: OpenAI’s language models have become part of everyday life for millions of people—whether you’re using ChatGPT to get quick answers, brainstorm ideas, or even generate code. With each new version, the models get faster, smarter, and more capable.
  • thezvi.wordpress.com: Yesterday’s news alert, nevertheless: The verdict is in. GPT-4.1-Mini in particular is an excellent practical model, offering strong performance at a good price. The full GPT-4.1 is an upgrade to OpenAI’s more expensive API offerings, it is modestly better but …
  • composio.dev: GPT-4.1 vs. Deepseek v3 vs. Sonnet 3.7 vs. GPT-4.5
  • hackernoon.com: OpenAI announced GPT-4.1, featuring a staggering 1M-token context window and perfect needle-in-a-haystack accuracy.
  • Shelly Palmer: OpenAI has launched GPT-4.1, along with GPT-4.1 Mini and GPT-4.1 Nano. These models are for developers and will not show up in your ChatGPT model picker.
  • eWEEK: OpenAI is releasing new language models, GPT-4.1, GPT-4.1 mini, and GPT-4.1 nano.

Chris McKay@Maginative //
OpenAI has launched a new series of GPT-4.1 models, including GPT-4.1, GPT-4.1 mini, and GPT-4.1 nano. These API-only models are not accessible via the ChatGPT interface but offer significant improvements in coding, instruction following, and context handling. All three models support a massive 1 million token context window and carry a knowledge cutoff of May 31, 2024.

GPT-4.1 demonstrates enhanced performance in coding benchmarks, surpassing GPT-4o by 21.4% on industry benchmarks. The models are also more cost-effective, with GPT-4.1 being 26% cheaper than GPT-4o and offering better latency. The GPT-4.1 nano model is OpenAI's cheapest model yet, priced at $0.10 per million input tokens and $0.40 per million output tokens. As a result of GPT-4.1's improved performance, OpenAI will be deprecating GPT-4.5 Preview on July 14, 2025.
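
As a quick worked example of those prices, the sketch below estimates the cost of a single GPT-4.1 nano call from the per-million-token rates quoted above; the token counts are illustrative made-up values.

```python
# GPT-4.1 nano list prices quoted above, in USD per one million tokens.
PRICE_PER_M_INPUT = 0.10
PRICE_PER_M_OUTPUT = 0.40

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of one GPT-4.1 nano request."""
    input_cost = (input_tokens / 1_000_000) * PRICE_PER_M_INPUT
    output_cost = (output_tokens / 1_000_000) * PRICE_PER_M_OUTPUT
    return input_cost + output_cost

# Example: classifying a 2,000-token document with a 50-token response.
print(f"${estimate_cost(2_000, 50):.6f}")  # ≈ $0.000220
```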

The GPT-4.1 series excels in several key areas, including coding capabilities and instruction following. The models have achieved impressive scores on benchmarks like SWE-bench Verified and Scale’s MultiChallenge, demonstrating real-world software engineering skills and enhanced adherence to requested formats. Several companies have reported significant improvements in their specialized applications, with GPT-4.1 scoring higher on internal coding benchmarks, providing better code review suggestions, and improving the extraction of granular financial data from complex documents.

Recommended read:
References :
  • Simon Willison's Weblog: Simon Willison reports on three new million token input models from OpenAI, including their cheapest model yet.
  • Maginative: OpenAI has rolled out GPT-4.1, GPT-4.1 mini, and GPT-4.1 nano—faster, cheaper models with sharper coding, better instruction following, and support for 1 million-token context windows.
  • bsky.app: New release of my llm-openai plugin supporting today's three new GPT-4.1 models from OpenAI: llm install -U llm-openai-plugin; llm -m openai/gpt-4.1 "Generate an SVG of a pelican riding a bicycle"
  • TestingCatalog: OpenAI debuts GPT-4.1 family offering 1M token context window
  • venturebeat.com: VentureBeat's report on the launch of GPT-4.1.
  • Interconnects: OpenAI's GPT-4.1 and separating the API from ChatGPT
  • the-decoder.com: OpenAI launches GPT-4.1: New model family to improve agents, long contexts and coding
  • THE DECODER: OpenAI launches GPT-4.1: New model family to improve agents, long contexts and coding
  • venturebeat.com: OpenAI’s new GPT-4.1 models can process a million tokens and solve coding problems better than ever
  • Analytics Vidhya: All About OpenAI’s Latest GPT 4.1 Family
  • pub.towardsai.net: TAI #148: New API Models from OpenAI (4.1) & xAI (grok-3); Exploring Deep Research’s Scaling Laws
  • Latent.Space: GPT 4.1: The New OpenAI Workhorse
  • www.tomsguide.com: OpenAI launches another model before GPT 5 — here’s what this one can do
  • techstrong.ai: Details the launch of a new set of flagship models named GPT-4.1 which included the standard version GPT-4.1 model, along with two smaller models.
  • Towards AI: Details the launch of GPT-4.1 models, emphasizing coding and instruction-following capabilities.
  • Towards AI: The GPT-4.1 model series release through Azure AI Foundry represents a major step forward in AI capabilities.
  • techstrong.ai: OpenAI Introduces GPT-4.1 with Improved Coding
  • www.analyticsvidhya.com: OpenAI's new models show improvement in multiple benchmarks, excelling in long-context processing (up to 1 million tokens).
  • thezvi.wordpress.com: GPT-4.1 Is a Mini Upgrade
  • felloai.com: OpenAI has just launched a brand-new series of GPT models—GPT‑4.1, GPT‑4.1 mini, and GPT‑4.1 nano—that promise major advances in coding, instruction following, and the ability to handle incredibly long contexts.
  • shellypalmer.com: While admitting that they "suck at naming their models," OpenAI has launched GPT-4.1, along with GPT-4.1 Mini and GPT-4.1 Nano.
  • thezvi.wordpress.com: GPT-4.1 Is a Mini Upgrade
  • www.analyticsvidhya.com: How to Build Agentic RAG Using GPT-4.1?
  • felloai.com: Ultimate Comparison of GPT-4.1 vs GPT-4o: Which One Should You Use?
  • www.eweek.com: OpenAI released GPT-4.1, the newest successor to its GPT-4o series of AI language models.
  • Fello AI: OpenAI has just launched a brand-new series of GPT models—GPT‑4.1, GPT‑4.1 mini, and GPT‑4.1 nano—that promise major advances in coding, instruction following, and the ability to handle incredibly long contexts.
  • Shelly Palmer: While admitting that they "suck at naming their models," OpenAI has launched GPT-4.1, along with GPT-4.1 Mini and GPT-4.1 Nano. These models are for developers and will not show up in your ChatGPT model picker.
  • eWEEK: OpenAI announced on Monday the release of GPT-4.1, the newest successor to its GPT-4o series of AI language models.
  • bdtechtalks.com: GPT-4.1: OpenAI’s most confusing model
  • composio.dev: GPT-4.1 vs. Deepseek v3 vs. Sonnet 3.7 vs. GPT-4.5
  • thezvi.wordpress.com: OpenAI has upgraded its entire suite of models. By all reports, they are back in the game for more than images. GPT-4.1 and especially GPT-4.1-mini are their new API non-reasoning models. All reports are that GPT-4.1-mini especially is very good.
  • thezvi.wordpress.com: Greg Brockman (OpenAI): Just released o3 and o4-mini! These models feel incredibly smart.
  • Last Week in AI: Analyzes OpenAI's new AI models, focusing on their enhanced coding capabilities and concerns about reduced resources for safety testing.
  • techcrunch.com: OpenAI's new GPT-4.1 AI models focus on coding, Google’s newest Gemini AI model focuses on efficiency, and more!
  • TheSequence: The Sequence Radar #526: The OpenAI Blitz: From GPT-4.1 to Windsurf

Kailyn Sylvester@Microsoft Security Blog //
Microsoft is making significant strides in the AI landscape, expanding its Copilot features to directly compete with leading AI models like ChatGPT and Gemini. These enhancements include web browsing capabilities, allowing users to task Copilot with booking tickets, making reservations, or shopping online. Furthermore, Copilot now boasts enhanced multimodal functionality, capable of analyzing live video feeds from mobile devices and responding to questions based on visual content. A key addition is the memory retention feature, enabling personalized interactions by remembering user preferences, although users retain control over managing or deleting these memories for privacy.

Microsoft is also investing in nurturing local AI start-ups through initiatives like the Cyberport x Microsoft AI Partnership Programme in Hong Kong. This program provides resources such as solution support, technical expert guidance, and business matching opportunities, aimed at enabling these start-ups to develop innovative AI solutions across various sectors. Six companies were selected to launch solutions on the Microsoft Azure Marketplace, addressing healthcare, insurance claims, risk management, and corporate sustainability. These companies showcased their technologies at a DEMO Day, attracting significant interest and fostering business collaborations.

In addition to software advancements, Microsoft is pushing the boundaries of AI in gaming with WHAMM (World and Human Action MaskGIT Model). This generative AI model creates game visuals in real-time, representing a significant upgrade from its predecessor, WHAM-1.6B. WHAMM boasts faster visual output, generating images at over 10 frames per second with enhanced resolution. Trained on Quake II using intentionally curated data, WHAMM demonstrates enhanced performance in tracking existing environments and responding to user input, despite some limitations regarding stat accuracy, input lag, and context length.

Recommended read:
References :
  • Source Asia: Microsoft Hong Kong and Cyberport nurtured local start-ups through AI Partnership Programme
  • Source: Tech Accelerator: Azure security and AI adoption
  • eWEEK: Microsoft’s WHAMM Offers an Interactive Real-Time Gameplay Experience – Though It Has Limitations
  • TestingCatalog: Microsoft expands Copilot features to rival ChatGPT and Gemini

Kailyn Sylvester@Microsoft Security Blog //
Microsoft is actively enhancing its AI integration across Azure and Copilot, introducing new features and programs to support both enterprise security and AI innovation. The company is now offering the "Llama 4 herd" within Azure AI Foundry and Azure Databricks, providing users with more AI tools and resources. Simultaneously, Microsoft is working to improve Copilot's capabilities by integrating enhanced security measures and features designed to facilitate AI adoption within organizations. These advancements reflect Microsoft's commitment to making AI more accessible and secure for its users.

Microsoft is also working with external partners to foster AI development. Microsoft Hong Kong has collaborated with Cyberport to launch the "Cyberport x Microsoft AI Partnership Programme," aimed at nurturing local start-ups. This program provides benefits like solution support, expert guidance, and business matching opportunities to promising Hong Kong-based companies. Six companies were selected to participate, showcasing innovative AI solutions in healthcare, insurance, risk management, and corporate sustainability.

In addition to external partnerships, Microsoft is focused on internal security enhancements related to AI. Microsoft Copilot utilizes classification labels as part of Microsoft Information Protection (MIP) to safeguard sensitive information, ensuring data security and regulatory compliance. These labels, whether applied manually, applied automatically, or suggested by Copilot, categorize data by sensitivity level, such as public, internal, or confidential. Furthermore, Microsoft is hosting a "Tech Accelerator: Azure Security and AI Adoption" event on April 22, 2025, designed to equip developers and cloud architects with the essential guidance and resources needed to securely plan, build, manage, and optimize their Azure deployments and AI projects.

Recommended read:
References :
  • Source Asia: Microsoft Hong Kong and Cyberport nurtured local start-ups through AI Partnership Programme
  • Source: Tech Accelerator: Azure security and AI adoption
  • John Werner: Real People Using AI And More From Microsoft’s Copilot 50th Anniversary Event
  • www.sharepointeurope.com: How Microsoft Copilot Respects and Uses Classification Labels: Enhancing Security and Productivity
  • Rashi Shrivastava: The Prompt: Microsoft Bets Big On Its AI Future
  • Source Asia: Microsoft AI Boardroom 2025: Ushering in the new era of Banking, Financial Services and Insurance (BFSI) with AI