References: @hbr.org, The Dataiku Blog, Smashing Frames
SAS has unveiled its roadmap for agentic AI at SAS Innovate 2025 in Orlando, positioning itself as a company deeply rooted in intelligent decision automation. Agentic AI, defined as AI systems capable of acting autonomously to achieve goals without constant human intervention, has gained significant traction. SAS CTO Bryan Harris emphasized that the key metric isn't the quantity of AI agents deployed, but the quality and value of the decisions they facilitate within an enterprise. SAS's approach integrates reasoning, analytics, and embedded governance into AI systems.
SAS defines agentic AI as more than simple automation, focusing on systems that make decisions through a blend of reasoning, analytics, and embedded governance. The SAS Viya platform supports this by unifying deterministic models, machine learning algorithms, and large language models. This orchestration enables the deployment of intelligent agents capable of autonomous action when appropriate, while providing transparency and human oversight when the stakes are high. Udo Sglavo, VP of applied AI and modeling R&D, describes this as a natural progression from SAS's consulting-driven history, aiming to turn repeated problem-solving IP into scalable software. Rising comfort with LLMs has accelerated the shift toward prepackaged models and agent-based systems, though both Harris and Sglavo caution that LLMs are just one element of a larger ensemble.

Agentic AI is also transforming the retail sector, enhancing personalization, optimizing supply chains, and accelerating product innovation. AI agents can serve as marketing assistants, delivering anticipatory, dynamically personalized recommendations by tracking changing consumer preferences and shopper browsing patterns and by adapting to real-time factors, keeping marketing strategies individualized and effective. Recommended read:
References: @siliconangle.com, aithority.com, SiliconANGLE
Vectara has announced the launch of its Hallucination Corrector, a new guardian agent integrated within the Vectara platform designed to significantly improve the reliability and accuracy of AI agents and assistants. This innovative approach aims to reduce AI hallucinations to below 1%, addressing a critical challenge in enterprise AI deployments where accuracy is paramount. The Hallucination Corrector builds upon Vectara's existing leadership in hallucination detection and mitigation, offering organizations a solution that not only identifies unreliable responses but also provides explanations and options for correction.
The Hallucination Corrector functions as a 'guardian agent,' actively monitoring and protecting agentic workflows. It leverages the Hughes Hallucination Evaluation Model (HHEM), a widely used tool with 4 million downloads on Hugging Face, to compare AI-generated responses against source documents and identify inaccuracies. When inconsistencies are detected, the Corrector provides detailed explanations and corrected versions of the responses, making only the minimal changes needed to restore accuracy. This capability is particularly beneficial for smaller language models, enabling them to achieve accuracy levels comparable to larger models from Google and OpenAI.

According to Vectara, initial testing of the Hallucination Corrector has shown promising results, reducing hallucination rates in enterprise AI systems to approximately 0.9%. The system not only identifies and corrects factual inconsistencies but also explains why a statement is a hallucination. While the corrected output is automatically used in summaries for end users, experts can use the detailed explanations and suggested fixes to refine their models and guardrails, further mitigating the risk of hallucinations in AI applications. The Hallucination Corrector represents a significant step toward building trusted AI and unlocking the full potential of generative AI for enterprises. Recommended read:
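For teams that want to experiment with the detection side of this pipeline, Vectara publishes an open HHEM checkpoint on Hugging Face. The snippet below is a minimal sketch, assuming the `vectara/hallucination_evaluation_model` checkpoint and the paired-text scoring helper described on its model card; it illustrates detection only, not the managed Hallucination Corrector itself, and the threshold is an arbitrary illustrative choice.

```python
# Minimal sketch: score a generated answer against its source passage with the
# open HHEM checkpoint. Assumes the Hugging Face model id and the predict()
# helper documented on the model card; the hosted Hallucination Corrector adds
# explanation and correction on top of detection like this.
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    "vectara/hallucination_evaluation_model", trust_remote_code=True
)

source = "Initial testing reduced hallucination rates to roughly 0.9 percent."
answer = "Testing showed hallucinations were eliminated entirely in all deployments."

# predict() returns a factual-consistency score in [0, 1]; low scores flag
# likely hallucinations that a guardian agent would explain and correct.
score = float(model.predict([(source, answer)])[0])
if score < 0.5:  # illustrative threshold, not a Vectara default
    print(f"Likely hallucination (consistency score {score:.2f}), route to corrector")
else:
    print(f"Consistent with source (score {score:.2f})")
```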
References: @techcrunch.com, venturebeat.com, Last Week in AI
OpenAI is making a bold move to defend its leadership in the AI space with a reported $3 billion acquisition of Windsurf, an AI-native integrated development environment (IDE). This strategic maneuver, dubbed the "Windsurf initiative," comes as the company faces increasing competition from Google and Anthropic, particularly in the realm of AI-powered coding. The acquisition aims to strengthen OpenAI's position and provide developers with superior coding capabilities, while also securing its role as a primary interface for autonomous AI agents.
The enterprise AI landscape is becoming increasingly competitive, with Google and Anthropic making significant strides. Google, leveraging its infrastructure and the expertise of Gemini head Josh Woodward, has been updating its Gemini models to enhance their coding abilities. Anthropic has also gained traction with its Claude series, which are becoming defaults on popular AI coding platforms like Cursor. These platforms, including Windsurf, Replit, and Lovable, are where developers increasingly turn to generate code from high-level prompts in agentic environments.

In addition to the Windsurf acquisition, OpenAI is enhancing its API with new integration capabilities. These improvements are designed to boost the performance of large language models (LLMs) and image generators, offering updated functionality and improved user interfaces. The updates reflect OpenAI's commitment to providing developers with advanced tools and to staying competitive in the rapidly evolving AI landscape. Recommended read:
References: Tom Dotan@Newcomer
OpenAI is facing an identity crisis, according to former research scientist Steven Adler, stemming from its history, culture, and contentious transition from a non-profit to a for-profit entity. Adler's insights, shared in a recent discussion, delve into the company's early development of GPT-3 and GPT-4, highlighting internal cultural and ethical disagreements. This comes as OpenAI's enterprise adoption accelerates, seemingly at the expense of its rivals, signaling a significant shift in the AI landscape.
OpenAI's recent $3 billion acquisition of Windsurf, an AI-native integrated development environment (IDE), underscores its urgent need to defend its territory in AI-powered coding against growing competition from Google and Anthropic. The move reflects OpenAI's imperative to equip developers with superior coding capabilities and secure a dominant position in the emerging agentic AI world. The deal is seen as a defensive maneuver: OpenAI finds itself on the back foot, needing to counter competitors who are making significant inroads in AI-assisted coding.

Meanwhile, tensions are reportedly simmering between OpenAI and Microsoft, its key partner. Negotiations are shaky, with Microsoft seeking a larger equity stake and retention of IP rights to OpenAI's models, while OpenAI aims to claw those rights back. These issues, along with disagreements over an AGI provision that gives OpenAI an out once it develops artificial general intelligence, have complicated OpenAI's plans for a for-profit conversion and its current effort to become a public benefit corporation. Furthermore, venture capitalists and limited partners are offloading shares in secondaries, potentially at a steep loss compared to 2021 valuations, adding another layer of complexity to OpenAI's situation. Recommended read:
References: @www.artificialintelligence-news.com
ServiceNow is making significant strides in the realm of artificial intelligence with the unveiling of Apriel-Nemotron-15b-Thinker, a new reasoning model optimized for enterprise-scale deployment and efficiency. The model, consisting of 15 billion parameters, is designed to handle complex tasks such as solving mathematical problems, interpreting logical statements, and assisting with enterprise decision-making. This release addresses the growing need for AI models that combine strong performance with efficient memory and token usage, making them viable for deployment in practical hardware environments.
ServiceNow is betting on unified AI to untangle enterprise complexity, providing businesses with a single, coherent way to integrate various AI tools and intelligent agents across the entire company. This ambition was unveiled at Knowledge 2025, where the company showcased its new AI platform and deepened relationships with tech giants like NVIDIA, Microsoft, Google, and Oracle. The aim is to help businesses orchestrate their operations with genuine intelligence, as evidenced by adoption by industry leaders such as Adobe, Aptiv, the NHL, Visa, and Wells Fargo.

To further broaden its reach, ServiceNow has introduced the Core Business Suite, an AI-driven solution aimed at the mid-market. The suite connects employees, suppliers, systems, and data in one place, enabling organizations of all sizes to work faster and more efficiently across critical business processes such as HR, procurement, finance, facilities, and legal affairs. ServiceNow aims for rapid implementation, suggesting deployment within a few weeks, and integrates functionalities from different divisions into a single, uniform experience. Recommended read:
References: @siliconangle.com, thequantuminsider.com
SAS and Intel are collaborating to redefine AI architecture through optimized intelligence, moving away from a GPU-centric approach. The partnership focuses on aligning hardware and software roadmaps to deliver smarter performance, lower costs, and greater trust across environments. Optimized intelligence lets businesses tailor their AI infrastructure to specific use cases, pairing efficient, ethical AI practices with human-centered design to build confidence in real-world outcomes. SAS and Intel have a 25-year relationship built around this concept, with deep investments in technical alignment so that hardware and software co-evolve.
SAS is integrating Intel silicon innovations, such as AMX acceleration and Gaudi AI accelerators, into its Viya platform to provide cost-effective performance. This collaboration enables clients to deploy advanced models without overspending on infrastructure, with Viya demonstrating significant performance improvements on the latest Intel platforms. SAS is also working with companies like Procter & Gamble and with quantum hardware providers including D-Wave, IBM, and QuEra to develop hybrid quantum-classical solutions for real-world problems across industries like life sciences, finance, and manufacturing.

A recent global SAS survey revealed that over 60% of business leaders are actively investing in or exploring quantum AI, although concerns remain regarding high costs, a lack of understanding, and unclear use cases. SAS aims to make quantum AI more accessible by working on pilot projects and research and by providing guidance to businesses on applying quantum technologies. SAS Principal Quantum Architect Bill Wisotsky says quantum technologies allow companies to analyze more data and get fast answers to complex questions, and SAS wants to simplify this research for its customers. Recommended read:
References: @www.aiwire.net, insideAI News
The rise of AI agents is rapidly transforming the business landscape, with companies like IBM and Oracle leading the charge in integrating these intelligent tools into the workforce. IBM kicked off its annual Think conference in Boston, highlighting generative AI and agentic AI tools as central themes. CEO Arvind Krishna noted the expectation that a billion new applications will be built using generative AI, emphasizing the need to address the challenges of AI deployment, execution, and return on investment. IBM is touting its watsonx enterprise AI platform and rolling out new features, many designed to tame those deployment, execution, and ROI issues.
IBM and Oracle are expanding their partnership to bring IBM's watsonx, a portfolio of AI products, to Oracle Cloud Infrastructure (OCI). This collaboration aims to usher in a new era of multi-agentic, AI-driven productivity and efficiency across enterprises. Greg Pavlik, executive vice president at Oracle Cloud Infrastructure, emphasized the importance of seamless AI integration across businesses, stating that the expanded partnership will give customers new ways to transform their operations with AI. IBM is making its watsonx Orchestrate AI agent offerings available on OCI in July.

Furthermore, the integration of AI agents is expected to significantly impact human resources. Salesforce research indicates that HR leaders are planning to redeploy a quarter of their workforce to focus on agentic AI-related tasks, as AI agent adoption is projected to grow by 327% by 2027. This shift highlights the increasing importance of digital labor and the need to reskill employees for the changing demands of the modern workforce. Eighty-one percent of HR chiefs plan to reskill their employees for better job opportunities in the digital labor era. Recommended read:
References: @aithority.com, Blocks and Files
Nutanix is expanding its cloud capabilities with a focus on cloud-native technologies, external storage enhancements, and generative AI integrations, unveiled at its .NEXT 2025 conference. The company is introducing Cloud Native AOS, offering general availability of Dell PowerFlex support, integrating with Pure Storage FlashArray and FlashStack, and launching its Nutanix Enterprise AI initiative with NVIDIA. These updates aim to create a generalized software platform where users can run applications anywhere, addressing the growing need for flexibility and scalability in modern IT environments.
Nutanix is also deepening its integration with NVIDIA AI Enterprise to accelerate the deployment of Agentic AI applications within enterprises. The latest version of Nutanix Enterprise AI (NAI) includes NVIDIA NIM microservices and the NVIDIA NeMo framework, simplifying the building, running, and managing of AI models and inferencing services across various environments, including edge, data centers, and public clouds. This integration aims to provide a streamlined foundation for building and running secure AI agents.

The enhanced NAI solution features shared LLM endpoints, allowing customers to reuse existing deployed model endpoints for multiple applications, reducing hardware and storage costs. The platform incorporates NVIDIA's NeMo Guardrails to filter out non-approved content, ensuring compliance, privacy, and security within AI applications. Nutanix's Cloud Infrastructure solution, combined with NVIDIA's AI Data Platform, is designed to convert data into actionable insights, providing an optimized stack for GPU data processing and deployment across HCI, bare-metal, and cloud Infrastructure-as-a-Service. Recommended read:
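Because NIM microservices expose OpenAI-compatible HTTP endpoints, a shared LLM endpoint deployed through NAI can typically be called with a standard chat-completions client. The sketch below is illustrative only: the base URL, model id, and API key are placeholders for whatever an administrator has actually deployed, not values taken from the Nutanix or NVIDIA documentation.

```python
# Illustrative call to a shared, NIM-backed LLM endpoint (NIM services expose an
# OpenAI-compatible API). The base_url, model id, and key below are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="https://nai.example.internal/v1",  # hypothetical NAI endpoint
    api_key="NAI_API_KEY_PLACEHOLDER",
)

response = client.chat.completions.create(
    model="meta/llama-3.1-8b-instruct",  # example NIM model id; use whatever is deployed
    messages=[{"role": "user", "content": "Summarize last night's failed backup jobs."}],
    temperature=0.2,
)
print(response.choices[0].message.content)
```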
References: @www.techmeme.com
A recent report from Amazon Web Services (AWS) indicates a significant shift in IT spending priorities for 2025. Generative AI has overtaken cybersecurity as the primary focus for global IT leaders, with 45% now prioritizing AI investments. This change underscores the increasing emphasis on implementing AI strategies and acquiring the necessary talent, even amid ongoing skills shortages. The AWS Generative AI Adoption Index surveyed 3,739 senior IT decision makers across nine countries: the United States, Brazil, Canada, France, Germany, India, Japan, South Korea, and the United Kingdom.
This move to prioritize generative AI doesn't suggest a neglect of security, according to Rahul Pathak, Vice President of Generative AI and AI/ML Go-to-Market at AWS. Pathak stated that customers' security remains a massive priority, and that the surge in AI investment reflects widespread recognition of AI's diverse applications and the pressing need to accelerate its adoption. The survey revealed that 90% of organizations are already deploying generative AI in some capacity, with 44% moving beyond experimental phases to production deployment, indicating a critical inflection point in AI adoption.

The survey also highlights the emergence of new leadership roles to manage AI initiatives. Sixty percent of companies have already appointed a Chief AI Officer (CAIO) or equivalent, and an additional 26% plan to do so by 2026. This executive-level commitment reflects the growing strategic importance of AI, although the study cautions that nearly a quarter of organizations may still lack formal AI transformation strategies by 2026. Many of these companies plan to bridge the generative AI talent gap this year by creating training programs to upskill their workforces. Recommended read:
References: Rowan Cheung@The Rundown AI
AI is rapidly transforming how businesses operate, particularly in streamlining data processing and automation. Mid-sized enterprises are leveraging AI data processing capabilities to automate repetitive tasks, extract valuable insights from extensive datasets, and minimize errors associated with manual processes. UiPath has launched Agentic Automation, a platform that expands the role of digital workers beyond routine tasks to intelligent AI agents capable of reasoning, adapting, and collaborating more like humans. This shift enables intelligent collaboration between AI, robots, and people, accelerating decision-making and boosting productivity gains across various enterprise environments.
UiPath's platform, featuring Maestro, coordinates AI agents, robots, and humans across business processes, transforming static workflows into dynamic streams of events that adapt to changing conditions in real-time. According to UiPath CEO Daniel Dines, agentic automation combines Robotic Process Automation (RPA), AI models, and human expertise into cohesive workflows. This integration allows for understanding, improving, and automating diverse workflows, thereby driving significant enterprise efficiency. The goal is to empower people to focus on meaningful work by freeing them from mundane tasks.

FutureHouse has also entered the scene with a new platform featuring four "superintelligent" AI agents, Crow, Falcon, Owl, and Phoenix, designed to assist scientists in navigating the vast amount of research literature. These agents are reportedly more accurate and precise than major frontier search models and even PhD-level researchers in literature search tasks. The AI agents can identify unexplored mechanisms, find contradictions in literature, analyze experimental methods, customize research pipelines, and reason about chemical compounds. This innovation promises to accelerate scientific discovery by automating and enhancing the research process. Recommended read:
References: Coen van@Techzine Global
ServiceNow has announced the launch of AI Control Tower, a centralized control center designed to manage, secure, and optimize AI agents, models, and workflows across an organization. Unveiled at Knowledge 2025 in Las Vegas, this platform provides a holistic view of the entire AI ecosystem, enabling enterprises to monitor and manage both ServiceNow and third-party AI agents from a single location. The AI Control Tower aims to address the growing complexity of managing AI deployments, giving users a central point to see all AI systems, their deployment status, and ensuring governance and understanding of their activities.
The AI Control Tower offers enterprise-wide AI visibility, built-in compliance and AI governance, end-to-end lifecycle management of agentic processes, real-time reporting, and improved alignment. It is designed to help AI systems administrators and other stakeholders monitor and manage every AI agent, model, and workflow in their environment, with views broken down by provider and type to improve risk and compliance management.

In addition to the AI Control Tower, ServiceNow introduced AI Agent Fabric, which facilitates communication between AI agents and partner integrations. ServiceNow has also partnered with NVIDIA to engineer an open-source model, Apriel Nemotron 15B, designed to drive advancements in enterprise large language models (LLMs) and power AI agents that support various enterprise workflows. Developed using NVIDIA NeMo and ServiceNow domain-specific data, Apriel Nemotron 15B is engineered for reasoning, drawing inferences, weighing goals, and navigating rules in real time, making it efficient and scalable for concurrent enterprise workflows. Recommended read:
References: @infoworld.com
IBM is expanding its artificial intelligence offerings with a major initiative focused on agentic AI, unveiled at the THINK 2025 conference. The company is introducing a suite of domain-specific AI agents and tools designed to help enterprises move beyond basic AI assistants and embrace more sophisticated, autonomous AI agents. These agents can be integrated using watsonx Orchestrate, a framework added to IBM's integration portfolio. The goal is to make it easier for businesses to build, deploy, and benefit from AI agents in real-world applications.
IBM's new agentic AI capabilities include an AI Agent Catalog, offering a centralized hub for pre-built agents, and Agent Connect, a partner program for third-party developers. Domain-specific agent templates for sales, procurement, and HR are also being provided, along with a no-code agent builder for business users and an agent development toolkit for developers. A multi-agent orchestrator enables agent-to-agent collaboration, and Agent Ops (in private preview) offers telemetry and observability. The core aim is to bridge the gap between AI experimentation and tangible business benefits.

IBM CEO Arvind Krishna believes that over a billion new applications will be built with generative AI in the coming years, emphasizing AI's potential to drive productivity, cost savings, and revenue scaling. IBM's initiative directly addresses the challenges enterprises face in achieving a return on investment from their AI projects, including data silos and hybrid infrastructure complexities. These new tools and integration capabilities intend to facilitate AI agent adoption across various vendors and platforms. Recommended read:
References: Tor Constantino@Tor Constantino
The rise of AI agents is gaining significant momentum, attracting substantial interest and creating new job opportunities across various industries. Recent publications and industry initiatives highlight the transformative potential of AI agents in automating complex tasks and optimizing existing workflows. IBM, for instance, has launched a major agentic AI initiative, introducing a suite of domain-specific AI agents that can be integrated using the watsonx Orchestrate framework and aiming to provide observability across the entire agent lifecycle. UiPath, meanwhile, has launched a next-generation platform for agentic automation designed to orchestrate AI agents, robots, and humans on a single intelligent system that autonomously manages complex tasks across enterprise environments.
AI agents are evolving from simple tools into sophisticated systems capable of reasoning, adapting, and collaborating in more human-like ways. IBM is providing a range of tools that enable organizations to build their own agents in minutes. Local AI agents are also gaining traction, offering customization and enhanced privacy by allowing users to run powerful, customizable AI models on their own computers. Tools like Ollama and Langflow are simplifying the process of building and deploying local AI agents, making it accessible to individuals without extensive coding expertise. Outshift by Cisco has achieved a 10x productivity boost with its Agentic AI Platform Engineer, demonstrating the potential of AI agents to significantly improve operational efficiency and reduce turnaround times by automating commonly requested developer tasks.

These advancements are paving the way for a new era of intelligent automation, where AI agents can seamlessly integrate into existing business processes and augment human capabilities. The evolution of AI agents is not only transforming enterprise automation but also unlocking new possibilities for innovation and productivity across sectors. As demand for AI agents continues to grow, professionals with expertise in their design, deployment, and orchestration will be highly sought after, making it essential to understand the foundational concepts and advanced implementation strategies of agentic AI. Recommended read:
References: @siliconangle.com, AI – SiliconANGLE
UiPath has officially launched its platform for building and orchestrating AI agents, marking its entry into the agentic AI market. The platform, unveiled on April 30, 2025, is designed to help companies automate complex tasks with minimal supervision, addressing the growing trend of businesses turning to AI agents. UiPath, a robotic process automation company, aims to leverage its expertise in automating digital processes to overcome barriers to agent adoption, such as security concerns, compliance risks, and scalability limitations. The platform is seen as vital for UiPath to regain its historically high growth rates amid shifting customer preferences towards AI agents.
Maestro, UiPath's orchestration platform, is central to the new offering. It dynamically orchestrates across agents, robots, and humans to execute long-running workflows. Mark Geene, senior vice president and general manager of AI products and platform at UiPath, describes Maestro as automating, modeling, and optimizing intricate business processes with built-in process intelligence and real-time key performance indicator (KPI) monitoring, enabling continuous optimization of agent fleets. Maestro uses process intelligence to understand process execution, identify bottlenecks, and provide adaptive recommendations at runtime, offering both predefined KPIs and customizable options.

Key features of the UiPath platform include built-in governance to ensure AI agents operate within defined security parameters, along with real-time vulnerability assessments and data access controls. Developers can prototype agents in UiPath Studio using low-code tools and Python. The platform integrates with third-party agent frameworks from LangChain, Anthropic, and Microsoft, and supports orchestration of processes defined in Business Process Model and Notation (BPMN). UiPath emphasizes the importance of human oversight in agent workflows, incorporating escalations that bring in human intervention when an agent's confidence falls below a threshold. Recommended read:
References: Carl Franzen@AI News | VentureBeat
Microsoft has announced the release of Phi-4-reasoning-plus, a new small, open-weight language model designed for advanced reasoning tasks. Building upon the architecture of the previously released Phi-4, this 14-billion-parameter model integrates supervised fine-tuning and reinforcement learning to achieve strong performance on complex problems. According to Microsoft, the Phi-4 reasoning models outperform larger language models on several demanding benchmarks despite their compact size. The new model pushes the limits of small AI, demonstrating that carefully curated data and training techniques can yield impressive reasoning capabilities.
The Phi-4 reasoning family, consisting of Phi-4-reasoning, Phi-4-reasoning-plus, and Phi-4-mini-reasoning, is specifically trained to handle complex reasoning tasks in mathematics, scientific domains, and software-related problem solving. Phi-4-reasoning-plus, in particular, extends supervised fine-tuning with outcome-based reinforcement learning, targeting improved performance in high-variance tasks such as competition-level mathematics. All of the models are designed to enable reasoning capabilities, especially on lower-performance hardware such as mobile devices.

Microsoft CEO Satya Nadella revealed that AI now contributes to 30% of Microsoft's code. The open-weight models were released with transparent training details and evaluation logs, including benchmark design, and are hosted on Hugging Face for reproducibility and public access. They are released under a permissive MIT license, enabling broad commercial and enterprise use, as well as fine-tuning or distillation, without restriction. Recommended read:
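Because the weights are hosted on Hugging Face under an MIT license, the model can be pulled down with standard tooling. The following is a minimal sketch, assuming the `microsoft/Phi-4-reasoning-plus` repository name and the usual transformers text-generation flow; adjust the model id if the published repo differs.

```python
# Minimal sketch: run Phi-4-reasoning-plus locally with the transformers pipeline.
# The model id is assumed from the Hugging Face release announcement.
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="microsoft/Phi-4-reasoning-plus",  # assumed repo name
    torch_dtype=torch.bfloat16,
    device_map="auto",  # requires accelerate; maps layers across available devices
)

messages = [
    {"role": "user", "content": "A train leaves at 09:10 and arrives at 11:45. How long is the trip?"},
]

# Reasoning models tend to emit a long chain of thought before the final answer,
# so leave generous headroom for new tokens.
output = generator(messages, max_new_tokens=1024)
print(output[0]["generated_text"][-1]["content"])
```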
References: @zdnet.com
Salesforce is tackling the challenge of "jagged intelligence" in AI, aiming to enhance the reliability and consistency of enterprise AI agents. The company's AI Research division has introduced new benchmarks, models, and guardrails designed to make these agents more intelligent, trusted, and versatile for business applications. This initiative seeks to bridge the gap between an AI system's potential intelligence and its ability to perform consistently in unpredictable real-world enterprise environments. Salesforce is focusing on "Enterprise General Intelligence" (EGI), which prioritizes consistency alongside capability for AI agents in complex business settings.
Salesforce AI Research is addressing AI's inconsistency problem by introducing the SIMPLE dataset, a public benchmark with 225 reasoning questions to measure the "jaggedness" of AI systems. They have also introduced ContextualJudgeBench, which evaluates an agent's ability to maintain accuracy and faithfulness in context-specific answers, emphasizing factual correctness and the ability to abstain from answering when appropriate, especially in sensitive fields like law, finance, and healthcare. These tools are essential for diagnosing and mitigating the erratic behavior of AI agents across tasks of similar complexity.

A recent Salesforce survey of 2,552 U.S. consumers reveals a growing acceptance of AI agents, with roughly half (53%) wanting AI to simplify complex information. Furthermore, Salesforce is expanding its Trust Layer with new safeguards, including the SFR-Guard model family, to detect prompt injections, toxic outputs, and hallucinations in both open-domain and CRM-specific data. Overall, the survey makes it clear that AI agents are already starting to have a societal impact. Recommended read:
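The intuition behind measuring "jaggedness" can be approximated with a simple harness: ask a model the same reasoning question several times and track how often its answers agree. The sketch below is an illustrative stand-in, not the SIMPLE benchmark or Salesforce's scoring code; the client setup, model id, and question set are placeholders.

```python
# Illustrative consistency check in the spirit of "jaggedness" benchmarks:
# repeat each question N times and report the agreement rate. Not Salesforce's
# SIMPLE dataset or evaluation code, just a sketch of the idea.
from collections import Counter
from openai import OpenAI

client = OpenAI()  # placeholder: any chat-completions-compatible endpoint works

QUESTIONS = [
    "If a meeting starts at 14:50 and runs 75 minutes, when does it end?",
    "Which is larger: 7/9 or 0.79?",
]

def ask(question: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model id
        messages=[{"role": "user", "content": question + " Answer in one short phrase."}],
        temperature=1.0,
    )
    return resp.choices[0].message.content.strip().lower()

for q in QUESTIONS:
    answers = [ask(q) for _ in range(5)]
    top, count = Counter(answers).most_common(1)[0]
    # A consistent system converges on one answer; a "jagged" one spreads out.
    print(f"{q}\n  modal answer: {top!r}  agreement: {count}/5\n")
```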
References: @www.bigdatawire.com
Dataminr and IBM are making significant strides in leveraging agentic AI to enhance security operations. Dataminr has introduced Dataminr Intel Agents, an autonomous AI capability designed to provide contextual analysis of emerging events, threats, and risks. These Intel Agents are part of a broader AI roadmap aimed at improving real-time decision-making by providing continuously updated insights derived from public and proprietary data. This allows organizations to respond faster and more effectively to dynamic situations, sorting through the noise to understand what matters most in real-time.
IBM is also delivering autonomous security operations through agentic AI, with new capabilities designed to transform cybersecurity operations. This includes driving efficiency and precision in threat hunting, detection, investigation, and response. IBM is launching Autonomous Threat Operations Machine (ATOM), an agentic AI system designed for autonomous threat triage, investigation, and remediation with minimal human intervention. ATOM is powered by IBM's Threat Detection and Response (TDR) services, leveraging an AI agentic framework and orchestration engine to augment existing security analytics solutions. These advancements are critical as cybersecurity faces a unique moment where AI-enhanced threat intelligence can give defenders an advantage over evolving threats.

Agentic AI is redefining the cybersecurity landscape, creating new opportunities and demanding a rethinking of how to secure AI. By automating threat hunting and improving detection and response processes, companies like Dataminr and IBM are helping organizations unlock new value from security operations and free up valuable security resources, enabling them to focus on high-priority threats. Recommended read:
References: @techstrong.ai
Microsoft has unveiled the public preview of Azure MCP Server, an open-source tool designed to empower AI agents with enhanced capabilities. This innovative server implements the Model Context Protocol (MCP), establishing a standardized communication bridge between AI agents and Azure cloud resources. By utilizing natural language instructions, AI systems can now seamlessly interact with various Azure services, marking a significant step towards AI-driven business transformation. This allows a "write once" approach to integration between AI systems and data sources by creating a universal interface.
The Azure MCP Server provides AI agents with access to core Azure services, including Azure Cosmos DB (NoSQL), Azure Storage, Azure Monitor (Log Analytics), Azure App Configuration, and Azure Resource Groups. Agents can perform tasks such as querying databases, managing storage blobs, configuring monitoring settings, and managing resource groups. The momentum behind MCP is substantial, with over 1,000 MCP servers built since its launch. The server's current capabilities enable agents to list accounts, databases, and containers; execute SQL queries; access container properties; query logs using Kusto Query Language (KQL); and manage key-value pairs.

Furthermore, Microsoft's AI Red Team (AIRT) has released a comprehensive guide to failure modes in agentic AI systems. The report categorizes failure modes across two dimensions, security and safety, each comprising novel and existing types. Novel security failures include agent compromise, agent injection, and multi-agent jailbreaks. Novel safety failures cover issues such as biases in resource allocation and prioritization risks. By publishing this detailed taxonomy, Microsoft aims to provide practitioners with a critical foundation for designing and maintaining resilient agentic systems. Recommended read:
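Under the hood, MCP is a JSON-RPC 2.0 protocol: a client first lists the tools a server exposes and then invokes them by name. The snippet below sketches the shape of those messages for a hypothetical tool; the tool name and arguments are illustrative and not taken from the Azure MCP Server's actual catalog.

```python
# Sketch of the JSON-RPC 2.0 messages an MCP client exchanges with a server.
# The tool name and arguments here are hypothetical, for illustration only.
import json

list_tools = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",  # ask the server which tools it exposes
}

call_tool = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",  # invoke one tool by name
    "params": {
        "name": "storage_list_blobs",            # hypothetical tool name
        "arguments": {"container": "invoices"},  # hypothetical arguments
    },
}

# A real MCP client frames these over stdio or HTTP; printed here to show the shape.
print(json.dumps(list_tools, indent=2))
print(json.dumps(call_tool, indent=2))
```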
References: @docs.llamaindex.ai, Blog on LlamaIndex
LlamaIndex is advancing agentic systems design by focusing on the optimal blend of autonomy and structure, particularly through its innovative Workflows system. Workflows provide an event-based mechanism for orchestrating agent execution, connecting individual steps implemented as vanilla functions. This approach enables developers to create chains, branches, loops, and collections within their agentic systems, aligning with established design patterns for effective agents. The system, available in both Python and TypeScript frameworks, is fundamentally simple yet powerful, allowing for complex orchestration of agentic tasks.
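A minimal Workflow illustrates the pattern: steps are plain async functions, and typed events carry data between them. The sketch below assumes the current `llama_index.core.workflow` package layout; the custom event and step names are illustrative.

```python
# Minimal LlamaIndex Workflow sketch: two steps chained by a custom event.
# Assumes the llama_index.core.workflow module; event/step names are illustrative.
import asyncio
from llama_index.core.workflow import Event, StartEvent, StopEvent, Workflow, step


class DraftEvent(Event):
    draft: str


class DraftThenPolish(Workflow):
    @step
    async def draft(self, ev: StartEvent) -> DraftEvent:
        # First step: consume the StartEvent payload and emit a custom event.
        return DraftEvent(draft=f"Draft notes about: {ev.topic}")

    @step
    async def polish(self, ev: DraftEvent) -> StopEvent:
        # Second step: whatever goes into StopEvent(result=...) is the workflow output.
        return StopEvent(result=ev.draft.upper())


async def main():
    wf = DraftThenPolish(timeout=30)
    result = await wf.run(topic="agentic systems design")
    print(result)


if __name__ == "__main__":
    asyncio.run(main())
```

Branches and loops follow the same shape: a step can return different event types depending on its logic, and any step that accepts those types picks up execution.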
LlamaIndex Workflows support hybrid systems by allowing decisions about control flow to be made by LLMs, traditional imperative programming, or a combination of both. This flexibility is crucial for building robust and adaptable AI solutions. Furthermore, Workflows not only facilitate the implementation of agents but also enable the use of sub-agents within each step. This hierarchical agent design can be leveraged to decompose complex tasks into smaller, more manageable units, enhancing the overall efficiency and effectiveness of the system.

The introduction of Workflows underscores LlamaIndex's commitment to providing developers with the tools they need to build sophisticated knowledge assistants and agentic applications. By offering a system that balances autonomy with structured execution, LlamaIndex is addressing the need for design principles when building agents. The company draws from its experience with LlamaCloud and its collaboration with enterprise customers to offer a system that integrates agents, sub-agents, and flexible decision-making capabilities. Recommended read:
References: Derek Egan@AI & Machine Learning
Google Cloud is enhancing its MCP Toolbox for Databases to provide simpler and more secure access to enterprise data for AI agents. Announced at Google Cloud Next 2025, this update includes support for Model Context Protocol (MCP), an emerging open standard developed by Anthropic, which aims to standardize how AI systems connect to various data sources. The MCP Toolbox for Databases, formerly known as the Gen AI Toolbox for Databases, acts as an open-source MCP server, allowing developers to connect GenAI agents to enterprise databases like AlloyDB for PostgreSQL, Spanner, and Cloud SQL securely and efficiently.
The enhanced MCP Toolbox for Databases reduces boilerplate code, improves security through OAuth2 and OIDC, and offers end-to-end observability via OpenTelemetry integration. These features simplify the development process, allowing developers to build agents with the Agent Development Kit (ADK). The ADK, an open-source framework, supports the full lifecycle of intelligent agent development, from prototyping and evaluation to production deployment. ADK provides deterministic guardrails, bidirectional audio and video streaming capabilities, and a direct path to production deployment via Vertex AI Agent Engine.

This update represents a significant step forward in creating secure and standardized methods for AI agents to communicate with one another and access enterprise data. Because the Toolbox is fully open-source, it includes contributions from third-party databases such as Neo4j and Dgraph. By supporting MCP, the Toolbox enables developers to leverage a single, standardized protocol to query a wide range of databases, enhancing interoperability and streamlining the development of agentic applications. New customers can also leverage Google Cloud's offer of $300 in free credit to begin building and testing their AI solutions. Recommended read:
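Developers picking this up typically start with a small ADK agent and wire in Toolbox-backed database tools afterwards. The sketch below is a minimal example assuming the `google-adk` Python package's `Agent` interface; the tool function, agent name, and model id are illustrative placeholders rather than values from the Google documentation.

```python
# Minimal Agent Development Kit (ADK) sketch: one agent with a plain-Python tool.
# Assumes the google-adk package's Agent interface; tool and model id are illustrative.
from google.adk.agents import Agent


def lookup_order_status(order_id: str) -> dict:
    """Illustrative tool: return the status of an order (stubbed data)."""
    return {"order_id": order_id, "status": "shipped"}


root_agent = Agent(
    name="order_assistant",
    model="gemini-2.0-flash",  # example model id
    description="Answers questions about customer orders.",
    instruction="Use the lookup tool before answering questions about order status.",
    tools=[lookup_order_status],
)
# The ADK CLI (e.g. `adk run` / `adk web`) can then serve this agent; an MCP
# Toolbox server could be wired in as additional database-backed tools.
```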
References: Thomas Claburn@The Register
Microsoft is significantly advancing human-agent collaboration with the latest upgrades to its Microsoft 365 Copilot. The tech giant is rolling out new updates, including Researcher and Analyst agents designed to enhance workplace productivity by providing in-depth research and data analysis. These AI agents, powered by OpenAI's deep reasoning models, are envisioned as digital colleagues capable of performing complex workplace tasks, helping professionals tackle intricate challenges with advanced reasoning capabilities. This aligns with Microsoft’s broader AI-first strategy, aiming to scale digital labor and drive substantial productivity gains for companies adopting these technologies.
Microsoft is also unveiling a redesigned Microsoft 365 Copilot app featuring AI-powered search and an Agent Store, positioning it as the central hub for human-agent collaboration. The new AI-powered enterprise search tool, Copilot Search, organizes data across the enterprise, providing rich, context-aware answers from first-party and third-party apps like ServiceNow, Google Drive, and Jira. The Agent Store provides access to Microsoft's Researcher and Analyst agents, initially introduced in March and available to those enrolled in Microsoft's Frontier program.

Moreover, Microsoft is adding new capabilities to its Control System feature to assist IT professionals in overseeing and measuring bot usage effectively. These updates are part of a broader effort to integrate AI across the Microsoft ecosystem, addressing the increasing demand for AI-powered solutions in the enterprise. According to Microsoft's 2025 Work Trend Index report, a significant majority of companies are rethinking their strategies to leverage AI, signaling a decisive move toward full-scale AI transformation. Other additions include personalized memory capabilities alongside the enterprise search, the specialized reasoning agents, and the Agent Store; the Researcher agent handles multi-step research tasks, while the Analyst agent provides data science capabilities. Recommended read:
References: John Werner@John Werner
Companies are rapidly adopting AI agents to enhance business operations. Salesforce, for example, has integrated AI agents into its Agentforce platform, drawing insights from over 500,000 customer conversations and showcasing how AI can drive both empathy and efficiency. Interest in leveraging agentic AI designs to optimize processes is growing as businesses seek to get the most out of intelligent automation.
Agent Architect is emerging as a critical role, tasked with designing and implementing intelligent agent workflows using low-code tools. These architects bridge the gap between business goals and AI automation, mapping processes to workflows and designing agent behaviors that adapt and evolve with minimal human intervention. Lyzr is helping to train Agent Architects with the aim of accelerating automation strategies. This new role is essential, as analysts, developers, and system engineers were not originally hired to manage agentic AI.

Furthermore, partnerships are forming to extend the reach of AI agents into government sectors. Leidos and Moveworks are collaborating to provide agentic AI solutions to government agencies in the U.S., U.K., and Australia, aiming to improve efficiency for government workers. Moveworks has received security certifications from Leidos, showing its capacity to support government agencies with secure AI solutions. Additionally, Zero Networks is promoting automated microsegmentation to enhance cybersecurity through zero trust policies, isolating assets within networks to limit the impact of cyberattacks, with automation seen as key to practical, real-world network security. Recommended read: