References: www.ghacks.net, www.artificialintelligence-new, gHacks Technology News
Microsoft is enhancing its Windows 11 operating system with the introduction of the "Hey, Copilot!" wake word, allowing users to activate the AI assistant hands-free. This new feature enables users to initiate conversations and tasks through voice commands, streamlining interaction with Windows 11. By simply saying "Hey, Copilot!", users can invoke the AI and begin a dialogue without needing to use a mouse or keyboard. This aims to provide a more intuitive and accessible way to engage with the operating system, especially when users are occupied with other tasks.
The "Hey, Copilot!" feature is an opt-in setting that users can enable within the Copilot app's settings. Once activated, a microphone icon will appear on the screen, indicating that Copilot is listening. The system uses an on-device wake word spotter that is designed to only detect the "Hey, Copilot!" phrase. To address privacy concerns, Microsoft states that the spotter has a 10-second audio buffer that isn't recorded or stored locally. Instead, audio from when the wake word is detected is sent to the cloud to answer the user's question. Currently, the "Hey, Copilot!" feature is rolling out to Windows Insiders with Copilot app version 1.25051.10.0 and higher, and is limited to users who have set their display language to English. Microsoft is also making strategic shifts internally, including cutting approximately 7,000 jobs, or 3% of its workforce, to increase investment in artificial intelligence. The job cuts primarily affect middle management and non-technical staff, reflecting a broader trend in the tech industry to streamline operations and prioritize AI development. This realignment includes a significant investment of up to $80 billion in AI infrastructure, such as data centers, to support the computational demands of training and running AI models. Recommended read:
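The on-device flow described above, where a short rolling buffer stays in memory and audio only leaves the device after the wake phrase is spotted, can be sketched in a few lines of Python. This is purely illustrative and is not Microsoft's implementation; the sample rate, function names, and placeholder detector are assumptions.

```python
from collections import deque

SAMPLE_RATE = 16_000        # samples per second (assumed for illustration)
BUFFER_SECONDS = 10         # mirrors the 10-second buffer described above

# Rolling in-memory buffer: old audio falls off the left end and is never
# written to disk or transmitted.
ring = deque(maxlen=SAMPLE_RATE * BUFFER_SECONDS)

def wake_word_detected(audio) -> bool:
    """Placeholder for a small on-device spotter tuned to a single phrase."""
    return False  # a real spotter would run a local model over `audio`

def send_to_cloud(audio) -> None:
    """Placeholder for handing post-detection audio to the assistant backend."""
    pass

def on_audio_chunk(chunk) -> None:
    ring.extend(chunk)                 # keep only the most recent 10 seconds
    if wake_word_detected(ring):
        send_to_cloud(list(ring))      # audio leaves the device only after detection
```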
References: Rowan Cheung@The Rundown AI, www.lifewire.com
Google is significantly broadening the reach of its Gemini AI, integrating it into a variety of devices beyond smartphones. This expansion includes Wear OS smartwatches, such as the Samsung Galaxy Watch 7 and Google Pixel Watch 3, Google TV, Android Auto, and even upcoming XR headsets like the Samsung XR headset. The move aims to create a more consistent and accessible AI experience across a user's digital life, allowing for interactions and assistance regardless of the device in use. This positions Gemini as a potential central AI layer connecting various devices within the Android ecosystem.
This integration allows users to interact with Gemini through voice commands on smartwatches, eliminating the need to take out their phones. Gemini will also connect with phone apps, providing relevant information from emails or texts directly on the user's wrist. In Android Auto, Gemini will manage in-car requests like finding destinations and reading messages. On Google TV, Gemini will recommend content and answer questions. The focus is on making AI assistance more readily available and naturally integrated into daily routines. The expansion of Gemini to more devices is viewed as a strategic move in the competitive AI assistant landscape. While other companies like Apple have been slower to integrate advanced AI into consumer products, Google aims to accelerate the adoption of AI-powered capabilities by embedding Gemini across its ecosystem. The widespread integration aims to give Google an edge by providing a seamless AI experience, potentially increasing user engagement and loyalty across its range of products. The majority of Gemini’s most helpful features, such as Gemini Live’s camera and screen sharing capabilities, will be available to “billions of Android devices,” with no subscription required. Recommended read:
References: engineering.fb.com
Microsoft is making significant moves in the realm of agentic AI and open-source technologies. In a strategic shift to invest more in AI-centric solutions and streamline operations, the company has been actively involved in initiatives ranging from embracing open development tools to integrating AI into enterprise platforms. One notable example from the wider ecosystem is Pyrefly, a faster Python type checker written in Rust that was recently open-sourced by Meta, aimed at helping developers catch errors before runtime. Tooling like this aligns with Microsoft's broader efforts to enhance developer productivity and engage with the open-source community.
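As a concrete illustration of what a static checker such as Pyrefly catches before runtime, the deliberately buggy snippet below passes a string where a float is expected. The project's documentation describes installing via pip and running a check command over the file; treat the exact commands in the comments as assumptions.

```python
# prices.py -- contains an intentional type error for a checker to find.
# Assumed usage: pip install pyrefly && pyrefly check prices.py

def total_price(prices: list[float], tax_rate: float) -> float:
    """Sum the prices and apply a tax rate."""
    return sum(prices) * (1 + tax_rate)

# A type checker flags this call before the code ever runs: "0.08" is a str,
# not a float. At runtime it would only fail once this line executes.
order_total = total_price([9.99, 4.50], "0.08")
```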
Microsoft is also focusing on integrating AI into enterprise solutions, highlighted by Salesforce CEO Marc Benioff's comments on the Agentforce platform. The company is also cutting around 7,000 jobs, mostly in middle management and non-technical roles. This decision reflects a strategic reallocation of resources towards AI infrastructure. Microsoft plans to invest heavily in data centers designed for training and running AI models, signaling a major push into AI-driven technologies. Despite strong earnings, Microsoft is trimming its workforce to free up resources for AI investments. The layoffs, primarily affecting middle managers and support staff, are part of a broader industry trend where companies are streamlining operations to accelerate product cycles and reduce bureaucracy. Meanwhile, as Microsoft shuts off its Bing Search APIs and recommends switching to AI-based alternatives, Twilio has partnered with Microsoft to expand AI capabilities.
References: Kevin Okemwa@windowscentral.com, engineering.fb.com, www.windowscentral.com
Microsoft is strategically prioritizing AI model accessibility through Azure, with CEO Satya Nadella emphasizing making AI solutions available to customers for maximum profit. This approach involves internal restructuring, including job cuts, to facilitate increased investment in AI and streamline operations. The goal is to build a robust, subscription-based AI operating system that leverages advancements like ChatGPT, ensuring that Microsoft remains competitive in the rapidly evolving AI landscape.
Microsoft is actively working on improving integrations with external data sources using the Model Context Protocol (MCP). This initiative has led to a collaboration with Twilio to enhance conversational AI capabilities for enterprise customer communication. Twilio's technology helps deliver the "last mile" of AI conversations, enabling businesses to integrate Microsoft's conversational intelligence capabilities into their existing communication channels. This partnership gives Twilio greater visibility among Microsoft's enterprise customers, exposing its developer tools to large firms looking to build extensible custom communication solutions. In addition to these strategic partnerships, the open-source community has gained Pyrefly, a faster Python type checker written in Rust. Developed at Meta for Instagram's codebase and now open-sourced, Pyrefly is available for the broader Python community to use, helping developers catch errors before runtime. Its release reflects the industry-wide push, which Microsoft shares, to foster innovation and support the development of AI-related tools and technologies.
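To make the MCP integration point concrete, below is a minimal sketch of a server that exposes one tool through the open-source MCP Python SDK, following the SDK's quickstart pattern. The import path, the FastMCP helper, and the toy tool are assumptions for illustration, not Microsoft-specific code.

```python
# Requires the MCP Python SDK: pip install mcp  (import path assumed from the SDK docs)
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("inventory")

@mcp.tool()
def lookup_stock(sku: str) -> int:
    """Return the on-hand quantity for a SKU (toy data, for illustration only)."""
    return {"A-100": 42, "B-200": 7}.get(sku, 0)

if __name__ == "__main__":
    # Serves the tool over stdio so any MCP-capable client (an assistant or
    # agent runtime) can discover and call it.
    mcp.run()
```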
References: thetechbasic.com
Microsoft has announced major layoffs affecting approximately 6,000 employees, which is equivalent to 3% of its global workforce. This move is part of a broader strategic shift aimed at streamlining operations and boosting the company's focus on artificial intelligence (AI) and cloud computing. The layoffs are expected to impact various divisions, including LinkedIn, Xbox, and overseas offices. The primary goal of the restructuring is to position Microsoft for success in a "dynamic marketplace" by reducing management layers and increasing agility.
The decision to implement these layoffs comes despite Microsoft reporting strong financial results for FY25 Q3, with $70.1 billion in revenue and a net income of $25.8 billion. According to Microsoft CFO Amy Hood, the company is focused on "building high-performing teams and increasing our agility by reducing layers with fewer managers". The cuts also align with a recurring trend across the industry, with firms eliminating staff who do not meet expectations. Microsoft's move to prioritize AI investments is costing the company a significant number of jobs. Microsoft is following other technology companies that are investing heavily in AI; it has been pouring billions into AI tools and cloud services. The company's cloud service, Azure, is expanding at a rapid rate, and Microsoft aims to inject more money into this area.
References: Kevin Okemwa@windowscentral.com
OpenAI and Microsoft are reportedly engaged in high-stakes negotiations to revise their existing partnership, a move prompted by OpenAI's aspirations for an initial public offering (IPO). The discussions center around redefining the terms of their strategic alliance, which has seen Microsoft invest over $13 billion in OpenAI since 2019. A key point of contention is Microsoft's desire to secure guaranteed access to OpenAI's AI technology beyond the current contractual agreement, set to expire in 2030. Microsoft is reportedly willing to sacrifice some equity in OpenAI to ensure long-term access to future AI models.
These negotiations also entail OpenAI potentially restructuring its for-profit arm into a Public Benefit Corporation (PBC), a move that requires Microsoft's approval as the startup's largest financial backer. The PBC structure would allow OpenAI to pursue commercial goals and attract further capital, paving the way for a potential IPO. However, the non-profit entity would retain overall control. OpenAI reportedly aims to reduce Microsoft's revenue share from 20% to a share of 10% by 2030, a year when the company forecasts $174B in revenue. Tensions within the partnership have reportedly grown as OpenAI pursues agreements with Microsoft competitors and targets overlapping enterprise customers. One senior Microsoft executive expressed concern over OpenAI's attitude, stating that they seem to want Microsoft to "give us money and compute and stay out of the way." Despite these challenges, Microsoft remains committed to the partnership, recognizing its importance in the rapidly evolving AI landscape. Recommended read:
References: cyberalerts.io
A new malware campaign is exploiting the hype surrounding artificial intelligence to distribute the Noodlophile Stealer, an information-stealing malware. Morphisec researcher Shmuel Uzan discovered that attackers are enticing victims with fake AI video generation tools advertised on social media platforms, particularly Facebook. These fake tools masquerade as legitimate AI services for creating videos, logos, images, and even websites, attracting users eager to leverage AI for content creation.
Posts promoting these fake AI tools have garnered significant attention, with some reaching over 62,000 views. Users who click on the advertised links are directed to bogus websites, such as one impersonating CapCut AI, where they are prompted to upload images or videos. Instead of receiving the promised AI-generated content, users are tricked into downloading a malicious ZIP archive named "VideoDreamAI.zip," which contains an executable file designed to initiate the infection chain. The "Video Dream MachineAI.mp4.exe" file within the archive launches a legitimate binary associated with ByteDance's CapCut video editor, which is then used to execute a .NET-based loader. This loader, in turn, retrieves a Python payload from a remote server, ultimately leading to the deployment of the Noodlophile Stealer. This malware is capable of harvesting browser credentials, cryptocurrency wallet information, and other sensitive data. In some instances, the stealer is bundled with a remote access trojan like XWorm, enabling attackers to gain entrenched access to infected systems. Recommended read:
References: www.microsoft.com
Microsoft is pushing the boundaries of AI with advancements in both model efficiency and novel applications. The company recently commemorated the one-year anniversary of Phi-3 by introducing three new small language models: Phi-4-reasoning, Phi-4-reasoning-plus, and Phi-4-mini-reasoning. These models are designed to deliver complex reasoning capabilities that rival much larger models while maintaining efficiency for diverse computing environments. According to Microsoft, "Phi-4-reasoning generates detailed reasoning chains that effectively leverage additional inference-time compute," demonstrating that high-quality synthetic data and careful curation can lead to smaller models that perform comparably to their more powerful counterparts.
The 14-billion parameter Phi-4-reasoning and its enhanced version, Phi-4-reasoning-plus, have shown outstanding performance on numerous benchmarks, outperforming larger models. Notably, they achieve better results than OpenAI's o1-mini and DeepSeek-R1-Distill-Llama-70B on mathematical reasoning and PhD-level science questions. Furthermore, Phi-4-reasoning-plus surpasses the massive 671-billion parameter DeepSeek-R1 model on AIME and HMMT evaluations. These results highlight the efficiency and competitive edge of the new models. In addition to pushing efficiency, Microsoft Research has introduced ARTIST (Agentic Reasoning and Tool Integration in Self-improving Transformers), a framework that combines agentic reasoning, reinforcement learning, and dynamic tool use to enhance LLMs. ARTIST enables models to autonomously decide when, how, and which tools to use. This framework aims to address the limitations of static internal knowledge and text-only reasoning, especially in tasks requiring real-time information or domain-specific expertise. The integration of reinforcement learning allows the models to adapt dynamically and interact with external tools and environments during the reasoning process, ultimately improving their performance in real-world applications.
References :
kevinokemwa@outlook.com (Kevin@windowscentral.com
//
Microsoft CEO Satya Nadella is betting big on the future of Agentic AI, predicting a disruptive shift that could reshape the entire software landscape. Nadella envisions an "Agentic AI era" where traditional Software as a Service (SaaS) applications are rendered obsolete. In a recent discussion, Nadella questioned the necessity of conventional software like Excel, suggesting that Agentic AIs will replace business logic, fundamentally altering how organizations operate. Microsoft's CEO believes AI agents will take over tasks, interacting across multiple repositories, and ultimately collapsing the need for individual business applications.
Nadella's commitment to open architectures is evident through Microsoft's endorsement of Google DeepMind's Agent2Agent (A2A) open protocol and Anthropic's Model Context Protocol (MCP). This move signifies a shift from proprietary ecosystems towards cross-platform, agentic AI collaboration. By supporting these open protocols in Copilot Studio and Foundry, Microsoft aims to enable customers to build agentic systems that can seamlessly interoperate. This initiative is expected to accelerate the development and adoption of agentic-based apps and platforms, fostering innovation and interdependence within the AI community. Beyond software, Microsoft is exploring AI's potential to revolutionize energy production. The company believes that AI can accelerate the development of nuclear fusion as a practical energy source. Microsoft Research held its inaugural Fusion Summit to find ways to utilize AI to push the boundaries of nuclear fusion. Given the enormous energy needs of AI, the fusion research is intended both to bring clean energy to the world and to ensure the capacity to keep expanding AI capabilities. This would hasten our understanding of how to power AI, addressing the sustainability challenges associated with its growing energy consumption.
References: Tom Dotan@Newcomer
OpenAI is facing an identity crisis, according to former research scientist Steven Adler, stemming from its history, culture, and contentious transition from a non-profit to a for-profit entity. Adler's insights, shared in a recent discussion, delve into the company's early development of GPT-3 and GPT-4, highlighting internal cultural and ethical disagreements. This comes as OpenAI's enterprise adoption accelerates, seemingly at the expense of its rivals, signaling a significant shift in the AI landscape.
OpenAI's recent $3 billion acquisition of Windsurf, an AI-native integrated development environment (IDE), underscores its urgent need to defend its territory in AI-powered coding against growing competition from Google and Anthropic. The move reflects OpenAI's imperative to equip developers with superior coding capabilities and secure a dominant position in the emerging agentic AI world. This deal is seen as a defensive maneuver as OpenAI finds itself on the back foot, needing to counter challenges from competitors who are making significant inroads in AI-assisted coding. Meanwhile, tensions are reportedly simmering between OpenAI and Microsoft, its key partner. Negotiations are shaky, with Microsoft seeking a larger equity stake and retention of IP rights to OpenAI's models, while OpenAI aims to claw those rights back. These issues, along with disagreements over an AGI provision that allows OpenAI an out once it develops artificial general intelligence, have complicated OpenAI's plans for a for-profit conversion and the current effort to become a public benefit corporation. Furthermore, venture capitalists and limited partners are offloading shares in secondaries, which may come at a steep loss compared to 2021 valuations, adding another layer of complexity to OpenAI's current situation. Recommended read:
References: www.marktechpost.com, the-decoder.com
Microsoft Research has unveiled ARTIST (Agentic Reasoning and Tool Integration in Self-improving Transformers), a reinforcement learning framework designed to enhance Large Language Models (LLMs) with agentic reasoning and dynamic tool use. This framework addresses the limitations of current RL-enhanced LLMs, which often rely on static internal knowledge and text-only reasoning, making them unsuitable for tasks requiring real-time information, domain-specific expertise, or precise computations. ARTIST enables models to autonomously decide when, how, and which tools to use, allowing for more effective reasoning strategies that adapt dynamically to a task’s complexity.
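A generic decide-then-act loop of the kind ARTIST trains with reinforcement learning might look like the sketch below: the model either emits a final answer or requests a tool, and tool results are fed back into the context for the next step. The JSON convention, tool names, and the model callable are illustrative assumptions, not ARTIST's actual interface.

```python
import json

# Toy tool registry; a real system would expose search, code execution, etc.
TOOLS = {
    "calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),  # toy only: never eval untrusted input
    "search": lambda query: f"(pretend search results for {query!r})",
}

def run_agent(model, question: str, max_steps: int = 5) -> str:
    """`model` is any callable that maps a message history to a reply string."""
    history = [{"role": "user", "content": question}]
    for _ in range(max_steps):
        reply = model(history)                  # e.g. '{"tool": "calculator", "input": "17*23"}'
        try:
            request = json.loads(reply)         # structured reply -> the model wants a tool
        except json.JSONDecodeError:
            return reply                        # plain text -> treat as the final answer
        result = TOOLS[request["tool"]](request["input"])
        history.append({"role": "tool", "content": result})
    return result                               # give up after max_steps tool calls
```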
Microsoft researchers have also conducted a comparison of API-based and GUI-based AI agents, revealing the distinct advantages of each approach. API agents, which interact with software through programmable interfaces, are found to be faster, more stable, and less error-prone as they complete tasks via direct function calls. GUI agents, on the other hand, mimic human interactions with software interfaces, navigating menus and clicking buttons on a screen. While GUI agents may require multiple actions to accomplish the same goal, their versatility allows them to control almost any software with a visible interface, even without an API. In a move to foster interoperability across platforms, Microsoft has announced support for the open Agent2Agent (A2A) protocol. This protocol empowers multi-agent applications by enabling structured agent communication, including the exchange of goals, management of state, invocation of actions, and secure return of results. A2A is set to be integrated into Azure AI Foundry and Copilot Studio, allowing developers to build agents that interoperate across clouds and frameworks while maintaining enterprise-grade security and compliance. Microsoft aims to empower both pro and citizen developers to create agents that can orchestrate tasks across diverse environments. Recommended read:
References: industrialcyber.co, Industrial Cyber, NCSC News Feed
The UK's National Cyber Security Centre (NCSC) has issued a warning that critical systems in the United Kingdom face increasing risks due to AI-driven vulnerabilities. The agency highlighted a growing 'digital divide' between organizations capable of defending against AI-enabled threats and those that are not, exposing the latter to greater cyber risk. According to a new report, developments in AI are expected to accelerate the exploitation of software vulnerabilities by malicious actors, intensifying cyber threats by 2027.
The report, presented at the NCSC's CYBERUK conference, predicts that AI will significantly enhance the efficiency and effectiveness of cyber intrusions. Paul Chichester, NCSC director of operations, stated that AI is transforming the cyber threat landscape by expanding attack surfaces, increasing the volume of threats, and accelerating malicious capabilities. He emphasized the need for organizations to implement robust cybersecurity practices across their AI systems and dependencies, ensuring up-to-date defenses. The NCSC assessment emphasizes that by 2027, AI-enabled tools will almost certainly improve threat actors' ability to exploit known vulnerabilities, leading to a surge in attacks against systems lacking security updates. With the time between vulnerability disclosure and exploitation already shrinking, AI is expected to further reduce this timeframe. The agency urges organizations to adopt its guidance on securely implementing AI tools while maintaining strong cybersecurity measures across all systems. Recommended read:
References: www.microsoft.com, The Register - Software
Microsoft is actively exploring the potential of artificial intelligence to revolutionize fusion energy research. This initiative aims to accelerate the development of a clean, scalable, and virtually limitless energy source. The first Microsoft Research Fusion Summit recently convened global experts to discuss and explore how AI can play a pivotal role in unlocking the secrets of fusion power. This summit fostered collaborations with leading institutions and researchers, with the ultimate goal of expediting progress toward practical fusion energy generation.
The summit showcased ongoing efforts to apply AI in various aspects of fusion research. Experts from the DIII-D National Fusion Program, North America's largest fusion facility, demonstrated how AI is already being used to advance reactor design and operations. These applications include using AI for active plasma control to prevent disruptive instabilities, implementing AI-controlled trajectories to avoid tearing modes, and utilizing machine learning-derived density limits for safer, high-density operations. Microsoft believes that AI can significantly shorten the timeline for realizing nuclear fusion as a viable energy source. This advancement, in turn, could provide the immense power required to fuel the growing demands of AI itself. Ashley Llorens, Corporate Vice President and Managing Director of Microsoft Research Accelerator, envisions a self-reinforcing system where AI drives sustainability, including the development of fusion energy. The challenge now lies in harnessing the combined power of AI and high-performance computing, along with international collaboration, to model and optimize future fusion reactor designs. Recommended read:
References: Chris McKay@Maginative
Microsoft is formally partnering with Google on the Agent2Agent (A2A) protocol, signaling a major step towards interoperability in AI systems. This move will enable AI agents to communicate and collaborate across different platforms and organizational boundaries, a previously significant limitation of current AI technology. Microsoft is integrating the A2A protocol into its Azure AI Foundry and Copilot Studio, allowing enterprise customers to build multiagent workflows that span partner tools and production infrastructure. This partnership reflects the industry’s recognition that achieving seamless AI collaboration requires shared standards and open protocols.
The endorsement of Google DeepMind's A2A protocol and Anthropic’s Model Context Protocol (MCP) by Microsoft CEO Satya Nadella is expected to accelerate the development and adoption of agentic AI. Nadella's support is seen as a catalyst for further collaboration within the AI community, potentially leading to the creation of new agentic-based applications and platforms. He highlighted the importance of open protocols like A2A and MCP for enabling an agentic web, emphasizing that these standards allow customers to build agentic systems that interoperate by design. This commitment to open architectures aligns with Nadella’s long-standing belief that open standards are crucial for driving the adoption of new AI technologies. Microsoft is joining over 50 technology partners, including Salesforce, Oracle, and SAP, in supporting the A2A standard created by Google. The company is also actively contributing to the A2A working group on GitHub, a Microsoft subsidiary, to further develop the specification and tooling. The A2A protocol allows AI agents to securely exchange goals, manage state, invoke actions, and return results, facilitating complex workflows across multiple specialized agents. Microsoft’s commitment to A2A underscores the shift towards an "open garden" approach, where AI systems can effectively work together regardless of their origin, marking a foundational shift in how software is built and decisions are made. Recommended read:
References: the-decoder.com
Microsoft is embracing interoperability in the AI agent space by integrating Google's open Agent2Agent (A2A) protocol into its Azure AI Foundry and Copilot Studio platforms. This move aims to enable AI agents to seamlessly collaborate across diverse platforms and ecosystems. A2A defines how a client agent formulates tasks and a remote agent executes them, supporting both synchronous and asynchronous task handling with status updates exchanged via the protocol. By adopting A2A, Microsoft is fostering a future where AI agents can work together regardless of the underlying framework or vendor, promoting cross-platform compatibility and enhancing AI application development efficiency.
Microsoft's A2A support will allow Copilot Studio agents to interact with external agents, including agents outside the Microsoft ecosystem built with tools like LangChain or Semantic Kernel. This means agents can delegate tasks, share data, and act together to automate daily workflows. Microsoft promises full integration with existing security and governance systems, including Microsoft Entra and audit logging. Over 230,000 organizations already use Copilot Studio, including 90 percent of the Fortune 500. Developers can access sample applications, such as automated meeting scheduling between two agents. Google introduced the A2A protocol in April with more than 50 technology partners; the protocol is designed to let agents work together using standardized interfaces like HTTP and JSON-RPC. Microsoft is contributing to the specification work on GitHub and plans to help drive further development. A public preview of A2A in Azure Foundry and Copilot Studio is set to launch soon. Microsoft sees protocols like A2A as the foundation for a new kind of software architecture, where connected agents automate daily workflows and collaborate across platforms, without vendor lock-in, but with auditability and control.
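On the wire, an A2A task submission is a JSON-RPC call over HTTP. The sketch below shows roughly what a client agent might send to a remote agent; the endpoint URL is hypothetical, and the method and field names only approximate the protocol's published examples, so treat them as assumptions.

```python
import json
import uuid

import requests  # third-party: pip install requests

REMOTE_AGENT_URL = "https://agents.example.com/a2a"  # hypothetical remote agent endpoint

payload = {
    "jsonrpc": "2.0",
    "id": str(uuid.uuid4()),
    "method": "tasks/send",                    # task-submission method (assumed from A2A examples)
    "params": {
        "id": str(uuid.uuid4()),               # task id the client can poll for status updates
        "message": {
            "role": "user",
            "parts": [{"type": "text", "text": "Find a 30-minute slot next week that works for both teams."}],
        },
    },
}

response = requests.post(REMOTE_AGENT_URL, json=payload, timeout=30)
print(json.dumps(response.json(), indent=2))   # the remote agent returns task state and any artifacts
```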
References: Ellie Ramirez-Camara@Data Phoenix
Microsoft is expanding its AI capabilities with enhancements to its Phi-4 family and the integration of the Agent2Agent (A2A) protocol. The company's new Phi-4-Reasoning and Phi-4-Reasoning-Plus models are designed to deliver strong reasoning performance with low latency. In addition, Microsoft is embracing interoperability by adding support for the open A2A protocol to Azure AI Foundry and Copilot Studio. This move aims to facilitate seamless collaboration between AI agents across various platforms, fostering a more connected and efficient AI ecosystem.
Microsoft's integration of the A2A protocol into Azure AI Foundry and Copilot Studio will empower AI agents to work together across platforms. The A2A protocol defines how agents formulate tasks and execute them, enabling them to delegate tasks, share data, and act together. With A2A support, Copilot Studio agents can call on external agents, including those outside the Microsoft ecosystem and built with tools like LangChain or Semantic Kernel. Microsoft reports that over 230,000 organizations are already utilizing Copilot Studio, with 90 percent of the Fortune 500 among them. Developers can now access sample applications demonstrating automated meeting scheduling between agents. Independent developer Simon Willison has been testing the phi4-reasoning model and reported that the 11GB download (available via Ollama) may well overthink things. Willison noted that it produced 56 sentences of reasoning output in response to a prompt of "hi". Microsoft is actively contributing to the A2A specification work on GitHub and intends to play a role in driving its future development. A public preview of A2A in Azure Foundry and Copilot Studio is anticipated to launch soon. Microsoft envisions protocols like A2A as the bedrock of a novel software architecture where interconnected agents automate daily workflows and collaborate across platforms with auditability and control.
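Willison's local test is straightforward to reproduce in principle. Assuming the model tag is phi4-reasoning on Ollama, as the article implies, and using the ollama Python client, a minimal check could look like this:

```python
# Prerequisites (assumed): `ollama pull phi4-reasoning` has completed the ~11 GB download,
# and the Python client is installed with `pip install ollama`.
import ollama

response = ollama.chat(
    model="phi4-reasoning",
    messages=[{"role": "user", "content": "hi"}],
)

# Recent client versions also expose response.message.content; either way, expect
# a long chain of reasoning even for a trivial greeting.
print(response["message"]["content"])
```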
References: zdnet.com
Microsoft has introduced a new AI-powered agent for settings control in Copilot+ PCs, designed to simplify how users adjust their computer settings. The agent utilizes on-device AI to understand natural language queries, allowing users to ask questions like "how to control my PC by voice" or "my mouse pointer is too small." The AI will then either provide an answer or automatically make the requested changes, streamlining the user experience and eliminating the need to navigate through complex menus. Initially, this feature will support English language queries and is being rolled out to Copilot+ PCs equipped with Snapdragon chips, with plans to expand support to Intel and AMD-powered computers in the near future.
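Conceptually, the settings agent maps a natural-language request to a concrete settings action and asks for permission before applying it. The toy sketch below illustrates that mapping with keyword rules and made-up setting names; the real agent relies on an on-device language model, so everything here is an assumption for illustration only.

```python
# Made-up setting identifiers and trigger phrases -- illustration only.
SETTING_ACTIONS = {
    "mouse pointer is too small": ("accessibility.pointer_size", "increase"),
    "control my pc by voice":     ("accessibility.voice_access", "enable"),
    "text is hard to read":       ("display.text_scaling", "increase"),
}

def handle_request(utterance: str):
    """Return the (setting, action) pair the agent would propose, if any."""
    lowered = utterance.lower()
    for phrase, (setting, action) in SETTING_ACTIONS.items():
        if phrase in lowered:
            print(f"With your permission, I will {action} '{setting}'.")
            return setting, action
    print("No direct match; the agent would answer the question instead.")
    return None

handle_request("My mouse pointer is too small")
```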
Microsoft is also enhancing the capabilities of its Click to Do feature for Copilot AI assistance. This feature, accessible while a computer screen is active, will now be able to act on text or images. Examples include creating bulleted lists from selected text or drafting copy into Microsoft Word, improving efficiency in content creation. Additionally, new actions will include scheduling meetings, sending messages via Microsoft Teams, and transferring data to Microsoft Excel. The agent will also support a computer's Reading Coach and Immersive Reader modes. These AI enhancements aim to seamlessly integrate AI into everyday computing tasks. Beyond the settings control agent and Click to Do improvements, Windows search is receiving AI-driven upgrades, enabling users to find files using natural language. Copilot will also gain support for screen sharing through Copilot Vision on Windows. Microsoft will also add enhanced search to its Photos app, showcasing Microsoft's commitment to leveraging AI to improve the overall Windows 11 and Copilot+ PC user experience. Recommended read:
References: Alexey Shabanov@TestingCatalog
Microsoft is aggressively expanding the AI capabilities within its Copilot ecosystem, incorporating task automation and enhanced content creation tools. The company is currently testing "Agent Actions" in Microsoft Copilot, a feature designed to automate daily computing tasks. This capability, initially limited to select testers or Copilot Pro subscribers, is intended to allow users to delegate tasks during brief sessions. Furthermore, Copilot now includes native image generation powered by OpenAI’s GPT-4o model, replacing DALL-E 3. This upgrade allows users across various platforms to generate higher-quality visuals directly within the app, negating the need for third-party integrations.
Microsoft is also refining the visual identity of Copilot, evolving the appearances of its AI personas. The fourth character, resembling a bubblegum or cloud, is undergoing further design changes. These characters, which serve as a branding layer, are expected to be further refined before their full release. These changes align with Microsoft's focus on seamlessly integrating productivity, assistance, and personality within the Copilot AI environment. Copilot for Sales is receiving significant updates aimed at streamlining sales workflows and improving CRM integration. These improvements include improved extensibility for third-party insights in email summaries within Outlook, providing partners the ability to surface richer sales insights. Additionally, sellers can now directly save AI-generated meeting summaries to CRM systems such as Microsoft Dynamics 365 and Salesforce from Teams, eliminating the need for manual logging. Microsoft CEO Satya Nadella has stated that the company's AI model performance is doubling every six months due to improvements in pre-training, inference, and system design. Recommended read:
References: Mels Dees@Techzine Global
Microsoft is reportedly preparing to host Elon Musk's Grok AI model within its Azure AI Foundry platform, signaling a potential shift in its AI strategy. The move, stemming from discussions with xAI, Musk's AI company, could make Grok accessible to a broad user base and integrate it into Microsoft's product teams via the Azure cloud service. Azure AI Foundry serves as a generative AI development hub, providing developers with the necessary tools and models to host, run, and manage AI-driven applications, potentially positioning Microsoft as a neutral platform supporting multiple AI models. This follows reports indicating Microsoft is exploring third-party AI models like DeepSeek and Meta for its Copilot service.
Microsoft's potential hosting of Grok comes amid reports that its partnership with OpenAI may be evolving. While Microsoft remains quiet about any deal with xAI, sources indicate that Grok will be available on Azure AI Foundry, providing developers with access to the model. However, Microsoft reportedly intends only to host the Grok model and will not be involved in training future xAI models. This collaboration with xAI could strengthen Microsoft's position as an infrastructure provider for AI models, offering users more freedom of choice in selecting which AI models they want to use within their applications. Alongside these developments, Microsoft is enhancing its educational offerings with Microsoft 365 Copilot Chat agents. These specialized AI assistants can personalize student support and provide instructor assistance. Copilot Chat agents can be tailored to offer expertise in instructional design, cater to unique student preferences, and analyze institutional data. These agents are designed to empower educators and students alike, transforming education experiences through customized support and efficient access to resources. Recommended read:
References: Joey Snow@Microsoft Security Blog, Microsoft Security Blog, www.microsoft.com
Microsoft is significantly expanding its AI capabilities across various sectors, with innovations impacting cloud services, business applications, and even education. CEO Satya Nadella revealed that Microsoft's AI model performance is "doubling every six months," driven by advancements in pre-training, inference, and system design. Microsoft is also betting on Musk's Grok AI to challenge OpenAI's dominance, planning to host Grok on Azure and potentially providing developers with more options for building AI applications. These advancements are designed to accelerate AI innovation and business transformation for organizations.
The company's Dynamics 365 and Power Platform are receiving substantial updates with the 2025 release wave 1, integrating AI-driven Copilot and agent capabilities to enhance business processes and customer engagement. These new features will empower the workforce to automate tasks across sales, service, finance, and supply chain functions. Microsoft is also launching pre-built agents for Dynamics 365, accelerating time to value for businesses looking to leverage AI in their operations. Microsoft 365 Copilot is being implemented into education to enhance learning. Copilot Chat is being used for personalized student support and instructor assistance. Security remains a key priority, with Microsoft sharing secure coding tips from experts at Microsoft Build. These tips, including securing AI from the start and locking down data with Purview APIs, are designed to help developers build safer applications. With AI playing a bigger role, secure coding is now a necessity. Microsoft is releasing a set of Purview APIs (+SDK) that will allow partners and customers to integrate their custom AI apps with the Microsoft Purview ecosystem for enterprise grade Data Security and Compliance outcomes. Recommended read:
References: zdnet.com
Microsoft is rolling out a wave of new AI-powered features for Windows 11 and Copilot+ PCs, aiming to enhance user experience and streamline various tasks. A key addition is an AI agent designed to assist users in navigating and adjusting Windows 11 settings. This agent will understand user intent through natural language, allowing them to simply describe the setting they wish to change, such as adjusting mouse pointer size or enabling voice control. With user permission, the AI agent can then automate and execute the necessary adjustments. This feature, initially available to Windows Insiders on Snapdragon X Copilot+ PCs, seeks to eliminate the frustration of searching for and changing settings manually.
Microsoft is also enhancing Copilot with new AI skills, including the ability to act on screen content. One such action, "Ask Copilot," will enable users to draft content in Microsoft Word based on on-screen information, or create bulleted lists from selected text. These capabilities aim to boost productivity by leveraging generative AI to quickly process and manipulate information. Furthermore, the Windows 11 Start menu is undergoing a revamp, offering easier access to apps and a phone companion panel for quick access to information from synced iPhones or Android devices. The updated Start menu, along with the new AI features, will first be available to Windows Insiders running Snapdragon X Copilot Plus PCs. In a shift toward passwordless security, Microsoft is removing the password autofill feature from its Authenticator app, encouraging users to transition to Microsoft Edge for password management. Starting in June 2025, users will no longer be able to save new passwords in the Authenticator app, with autofill functionality being removed in July 2025. By August 2025, saved passwords will no longer be accessible in the app. Microsoft argues that this change streamlines the process, as passwords will be synced with the Microsoft account and accessible through Edge. However, users who do not use Edge may find this transition less seamless, as they will need to install Edge and make it the default autofill provider to maintain access to their saved passwords. Recommended read:
References: www.nextplatform.com, blogs.microsoft.com, CIO Dive - Latest News
Microsoft is significantly expanding its investment in Europe, underscoring its commitment to the continent's digital future amid rising global trade tensions. The company announced new digital commitments, including a promise to uphold Europe’s digital resilience regardless of geopolitical and trade volatility. As part of this initiative, Microsoft will boost its regional computing capacity by 40% over the next two years, expanding its data center operations across 16 European countries. This strategic move aims to provide assurance and continuity to EU customers navigating an uncertain international environment, addressing concerns about the impact of potential tariffs and trade disputes.
This expansion demonstrates Microsoft's belief in strong trans-Atlantic ties that promote mutual economic growth and prosperity. Microsoft is dedicated to supporting European digital resilience, ensuring open access to its AI and cloud platform across the region. To further protect its European customers, the company has pledged to pursue litigation in support of its contracts should the need arise, drawing on its history of challenging government actions when necessary. The move is aimed at helping companies in Europe "navigate the uncertain geopolitical and trade environment," as enterprises in the region look for ways to manage risks. Microsoft's commitment includes enhancing its AI Access Principles to facilitate broader access to its AI and cloud infrastructure across Europe. In addition to its infrastructure investments, Microsoft continues to drive innovation and business transformation through strategic cloud partnerships, helping organizations modernize their technology stacks and capitalize on advances like generative AI. Microsoft recognizes the importance of providing assurances to the EU. These combined efforts solidify Microsoft's position as a key player in Europe's digital ecosystem, ready to defend its customers and contribute to the region's ongoing technological advancement. Recommended read:
References: Carl Franzen@AI News | VentureBeat
Microsoft has recently launched its Phi-4 reasoning models, marking a significant stride in the realm of small language models (SLMs). This expansion of the Phi series includes three new variants: Phi-4-reasoning, Phi-4-reasoning-plus, and Phi-4-mini-reasoning, designed to excel in advanced reasoning tasks like mathematics and coding. The new models are optimized for complex problems, handling them through structured reasoning and internal reflection while remaining lightweight enough to run on lower-end hardware, including mobile devices.
Microsoft asserts that these models demonstrate that smaller AI can achieve impressive results, rivaling much larger models while operating efficiently on devices with limited resources. CEO Satya Nadella says Microsoft's AI model performance is "doubling every 6 months" due to pre-training, inference, and system design. The Phi-4-reasoning model contains 14 billion parameters and was trained via supervised fine-tuning using reasoning paths from OpenAI's o3-mini. A more advanced version, Phi-4-reasoning-plus, adds reinforcement learning and processes 1.5 times more tokens than the base model. These new models leverage distillation, reinforcement learning, and high-quality data to achieve their performance. In a demonstration, the Phi-4-reasoning model correctly solved a wordplay riddle by recognizing patterns and applying local reasoning, showcasing its ability to identify patterns, understand riddles, and perform mathematical operations. Despite having just 14 billion parameters, the Phi-4 reasoning models match or outperform significantly larger systems, including the 70B parameter DeepSeek-R1-Distill-Llama. On the AIME-2025 benchmark, the Phi models also surpass DeepSeek-R1, which has 671 billion parameters. Recommended read:
References: Matthias Bastian@THE DECODER
Microsoft has launched three new additions to its Phi series of compact language models: Phi-4-reasoning, Phi-4-reasoning-plus, and Phi-4-mini-reasoning. These models are designed to excel in complex reasoning tasks, including mathematical problem-solving, algorithmic planning, and coding, demonstrating that smaller AI models can achieve significant performance. The models are optimized to handle complex problems through structured reasoning and internal reflection, while also being efficient enough to run on lower-end hardware, including mobile devices, making advanced AI accessible on resource-limited devices.
Phi-4-reasoning, a 14-billion parameter model, was trained using supervised fine-tuning with reasoning paths from OpenAI's o3-mini. Phi-4-reasoning-plus enhances this with reinforcement learning and processes more tokens, leading to higher accuracy, although with increased computational cost. Notably, these models outperform larger systems, such as the 70B parameter DeepSeek-R1-Distill-Llama, and even surpass DeepSeek-R1 with 671 billion parameters on the AIME-2025 benchmark, a qualifier for the U.S. Mathematical Olympiad, highlighting the effectiveness of Microsoft's approach to efficient, high-performing AI. The Phi-4 reasoning models show strong results in programming, algorithmic problem-solving, and planning tasks, with improvements in logical reasoning positively impacting general capabilities such as following prompts and answering questions based on long-form content. Microsoft employed a data-centric training strategy, using structured reasoning outputs marked with special tokens to guide the model's intermediate reasoning steps. The open-weight models have been released with transparent training details and are hosted on Hugging Face, allowing for public access, fine-tuning, and use in various applications under a permissive MIT license. Recommended read:
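Because the weights are openly hosted, trying the models locally is straightforward with the Hugging Face transformers library. The repository id below follows Microsoft's naming for the Phi family but should be verified on the Hub, and running a 14-billion-parameter model this way assumes a GPU with sufficient memory or a quantized variant.

```python
# pip install transformers accelerate torch  (environment assumed; verify the repo id on the Hub)
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-4-reasoning-plus"   # assumed Hugging Face repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user",
             "content": "A train covers 120 km in 1.5 hours. What is its average speed?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
# Print only the newly generated tokens (the model's reasoning and answer).
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```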
References: cyberscoop.com
North Korean operatives have infiltrated hundreds of Fortune 500 companies, posing a significant and growing threat to IT infrastructure and sensitive data. Security leaders at Mandiant and Google Cloud have indicated that nearly every major company has either hired or received applications from North Korean nationals working on behalf of the regime. These individuals primarily aim to earn salaries that are then sent back to Pyongyang, contributing to the country's revenue stream. Cybersecurity experts warn that this issue is more pervasive than previously understood, with organizations often unaware of the extent of the infiltration.
Hundreds of Fortune 500 organizations have unknowingly hired these North Korean IT workers, and nearly every CISO interviewed has admitted to hiring at least one, if not several, of these individuals. Google has also detected North Korean technical workers within its talent pipeline, though the company states that none have been hired to date. The risk of North Korean nationals working for large organizations has become so prevalent that security professionals now assume it is happening unless actively detected. Security analysts continue to raise alarms and highlight the expansive ecosystem of tools, infrastructure, and specialized talent North Korea has developed to support this illicit activity. The FBI and cybersecurity experts are actively working to identify and remove these remote workers. According to Adam Meyers, Head of Country Adversary Operations at CrowdStrike, there have been over 90 incidents in the past 90 days, resulting in millions of dollars flowing to the North Korean regime through high-paying developer jobs. Microsoft is tracking thousands of personas and identities used by these North Korean IT workers, indicating a high-volume operation. Uncovering one North Korean IT worker scam often leads to the discovery of many others, as demonstrated by CrowdStrike's investigation that revealed 30 victim organizations. Recommended read:
References: Carl Franzen@AI News | VentureBeat
Microsoft has announced the release of Phi-4-reasoning-plus, a new small, open-weight language model designed for advanced reasoning tasks. Building upon the architecture of the previously released Phi-4, this 14-billion parameter model integrates supervised fine-tuning and reinforcement learning to achieve strong performance on complex problems. According to Microsoft, the Phi-4 reasoning models outperform larger language models on several demanding benchmarks, despite their compact size. This new model pushes the limits of small AI, demonstrating that carefully curated data and training techniques can lead to impressive reasoning capabilities.
The Phi-4 reasoning family, consisting of Phi-4-reasoning, Phi-4-reasoning-plus, and Phi-4-mini-reasoning, is specifically trained to handle complex reasoning tasks in mathematics, scientific domains, and software-related problem solving. Phi-4-reasoning-plus, in particular, extends supervised fine-tuning with outcome-based reinforcement learning, which is targeted for improved performance in high-variance tasks such as competition-level mathematics. All models are designed to enable reasoning capabilities, especially on lower-performance hardware such as mobile devices. Microsoft CEO Satya Nadella revealed that AI is now contributing to 30% of Microsoft's code. The open weight models were released with transparent training details and evaluation logs, including benchmark design, and are hosted on Hugging Face for reproducibility and public access. The model has been released under a permissive MIT license, enabling its use for broad commercial and enterprise applications, and fine-tuning or distillation, without restriction. Recommended read: