News from the AI & ML world

DeeperML - #efficiency

@www.microsoft.com //
Microsoft and Towards AI are addressing the AI skills gap with a new course, "AI for Business Professionals," designed to help professionals move beyond basic AI usage to achieve significant improvements in work quality and innovation. This initiative comes as organizations increasingly recognize generative AI as a game-changer but struggle with effective adoption due to a lack of strategic knowledge and technical skills among their teams. The course aims to transform individuals from merely AI-curious to AI-skilled collaborators, enabling them to use AI not just for speed, but to generate innovative ideas and achieve exceptional quality in their work.

The "AI for Business Professionals" course offers practical training tailored for diverse roles, including software engineers and investment research analysts. It provides modules and actionable strategies designed to optimize coding, streamline administrative tasks, and identify groundbreaking opportunities using AI. The self-paced course includes short videos, hands-on exercises, real-world demos, and expert guidance, all designed for busy, non-technical professionals. Participants will learn how to use AI deeply, effectively, and strategically in their daily work, addressing common issues such as distrust in AI output and uncertainty about time savings.

Microsoft emphasized the importance of continuous learning and career development in keeping up with evolving business needs. Nearly half (47%) of businesses say their top workforce strategy over the next 12 to 18 months is to train their existing workforce in AI skills. The course aims to give individuals the opportunity to develop the AI skills they need to build confidence, establish fluency, and thrive in the new AI economy. A free lesson is available to preview the course's content and learn how to use AI effectively and strategically in daily work. The full course is available for $399.

@www.searchenginejournal.com //
Google is aggressively expanding its artificial intelligence capabilities across its platforms, integrating the Gemini AI model into Search and into Android XR smart glasses. The tech giant unveiled the rollout of "AI Mode" in Search for U.S. users, making it accessible to all after initial testing in the Labs division. This move signifies a major shift in how people interact with the search engine, offering a conversational experience akin to consulting with an expert.

Google is feeding its latest AI model, Gemini 2.5, into its search algorithms, enhancing features like "AI Overviews," which are now available in over 200 countries and 40 languages and are used by 1.5 billion monthly users. In addition, Gemini 2.5 Pro introduces enhanced reasoning through Deep Think, while AI Mode with Deep Search delivers deeper, more thorough responses. Google is also testing new AI-powered features, including the ability to conduct searches through live video feeds with Search Live.

Google is also re-entering the smart glasses market with Android XR-powered spectacles featuring a hands-free camera and a voice-powered AI assistant. Built on Project Astra, the glasses let users talk back and forth with Search about what they see in real time through their cameras. These advancements aim to create more personalized and efficient user experiences, marking a new phase in the AI platform shift and solidifying AI's position in search.

Recommended read:
References :
  • Search Engine Journal: Google Expands AI Features in Search: What You Need to Know
  • WhatIs: Google expands Gemini model, Search as AI rivals encroach
  • www.theguardian.com: Google unveils ‘AI Mode’ in the next phase of its journey to change search

@Salesforce //
Agentic AI is rapidly transforming various sectors, from government operations to small businesses. Salesforce executives highlight the potential of agentic AI to assist overstretched government workers by automating routine tasks and improving efficiency. The focus is shifting from automating basic tasks to creating intelligent systems that adapt and learn, providing personalized and efficient support. This evolution promises to reshape how work is done, streamlining processes and enhancing productivity.

Companies are quickly adopting AI agents to enhance customer support and streamline operations, leading to a new competitive landscape. Microsoft has launched powerful AI agents designed to transform the workday and challenge Google’s workplace dominance. These agents, such as the 'Researcher' and 'Analyst' agents, are powered by OpenAI’s deep reasoning models and can handle complex tasks, such as research and data analysis, that previously required specialized human expertise. This increased productivity across sectors signifies a major shift in how businesses operate.

Dynamics 365 Customer Service now offers three AI service agents in public preview: Case Management, Customer Intent, and Customer Knowledge Management agents. These agents learn to address emerging issues, uncover new knowledge, and automate manual processes to boost business efficiency and reduce costs. The Case Management Agent automates key tasks throughout the lifecycle of a case, while the Customer Intent Agent uses generative AI to analyze past interactions and provide tailored solutions. This represents a significant step towards autonomous systems that improve customer experiences and reduce the burden on human agents.

Recommended read:
References :
  • Salesforce: Salesforce execs on agentic AI for government workers.
  • Salesforce: AI Agents: A New Competitive Edge for SMBs

@www.microsoft.com //
Microsoft is at the forefront of a workplace revolution, driven by the rapid advancement of AI. According to their 2025 Work Trend Index, AI agents are transforming how businesses operate, particularly in customer service and security operations. These agents, powered by sophisticated AI, are designed to augment human capabilities, enabling companies to scale rapidly, operate with agility, and generate value faster. The report highlights the emergence of "Frontier Firms," organizations built around on-demand AI and human-agent teams, where employees act as "agent bosses."

Microsoft envisions a future where every employee will have an AI assistant, and AI agents will join teams as "digital colleagues," taking on specific tasks. Eventually, humans will set directions for these agents, who will then execute business processes and workflows independently, with their human supervisors checking in as needed. This shift represents a move from simple coding assistance to AI agents capable of handling complex tasks, such as end-to-end logistics in a supply chain, while humans guide the system and manage relationships with suppliers. This transformation is expected to impact various knowledge work professions, including scientists, academics, and lawyers.

The company also introduced AI service agents for Dynamics 365 Customer Service and Contact Center. These agents are available in public preview and include Case Management, Customer Intent, and Customer Knowledge Management agents. These AI agents learn to address emerging issues, uncover new knowledge, and automate manual processes to boost business efficiency and reduce costs. The Case Management Agent simplifies case management, reduces handling time, and improves customer satisfaction, while the Customer Intent Agent uses generative AI to analyze past interactions and provide tailored solutions. Microsoft is also emphasizing the importance of securing, managing, and measuring agent workstreams with the Copilot Control System, ensuring that businesses can effectively mitigate risks and track the ROI of their AI agent deployments.

@www.analyticsvidhya.com //
OpenAI recently unveiled its groundbreaking o3 and o4-mini AI models, representing a significant leap in visual problem-solving and tool-using artificial intelligence. These models can manipulate and reason with images, integrating them directly into their problem-solving process. This unlocks a new class of problem-solving that blends visual and textual reasoning, allowing the AI to not just see an image, but to "think with it." The models can also autonomously utilize various tools within ChatGPT, such as web search, code execution, file analysis, and image generation, all within a single task flow.
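
To make that multimodal workflow concrete, here is a minimal sketch of sending an image alongside a question to o3 through the OpenAI Python SDK. The client setup, placeholder image URL, and prompt are our own illustrative assumptions, not code from OpenAI's announcement.

```python
# Minimal sketch (illustrative, not from OpenAI's announcement): asking o3 to
# reason over an image and a question in a single request. Assumes the `openai`
# Python SDK and an OPENAI_API_KEY in the environment; the URL is a placeholder.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="o3",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What trend does this chart show, and what might explain it?"},
            {"type": "image_url", "image_url": {"url": "https://example.com/chart.png"}},
        ],
    }],
)
print(response.choices[0].message.content)
```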

Alongside these reasoning models, OpenAI's GPT-4.1 series, comprising GPT-4.1, GPT-4.1 mini, and GPT-4.1 nano, is designed to improve coding capabilities. GPT-4.1 delivers stronger performance at lower prices, scoring 54.6% on SWE-bench Verified, a 21.4 percentage point jump over GPT-4o and a substantial gain in practical software engineering capability. Most notably, GPT-4.1 offers up to one million tokens of input context, compared to GPT-4o's 128k tokens, making it suitable for processing large codebases and extensive documentation. GPT-4.1 mini and nano also offer performance boosts at reduced latency and cost.
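
As a rough illustration of what a million-token window makes possible, the hedged sketch below concatenates a small repository and asks GPT-4.1 a codebase-wide question. The load_repo helper and the ./my_project path are hypothetical; at roughly four characters per token, the window holds a few megabytes of source text.

```python
# Minimal sketch (hypothetical helper, not from the article): stuffing a small
# repository into GPT-4.1's long context window for a codebase-wide question.
# Assumes the `openai` Python SDK and an OPENAI_API_KEY in the environment.
from pathlib import Path
from openai import OpenAI

def load_repo(root: str, suffix: str = ".py") -> str:
    """Concatenate source files with path headers so the model can cite them."""
    parts = []
    for path in sorted(Path(root).rglob(f"*{suffix}")):
        parts.append(f"# FILE: {path}\n{path.read_text(errors='ignore')}")
    return "\n\n".join(parts)

client = OpenAI()
codebase = load_repo("./my_project")  # keep total size well under the context limit

response = client.chat.completions.create(
    model="gpt-4.1",
    messages=[
        {"role": "system", "content": "You are reviewing the entire codebase supplied below."},
        {"role": "user", "content": codebase + "\n\nQuestion: where is the retry logic implemented?"},
    ],
)
print(response.choices[0].message.content)
```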

The new models are available to ChatGPT Plus, Pro, and Team users, with Enterprise and education users gaining access soon. While reasoning alone isn't a silver bullet, it reliably improves model accuracy and problem-solving on challenging tasks, and together with Deep Research products, o3 and o4-mini make AI-assisted, search-based research genuinely effective.

Recommended read:
References :
  • bdtechtalks.com: What to know about o3 and o4-mini, OpenAI’s new reasoning models
  • TestingCatalog: OpenAI’s o3 and o4‑mini bring smarter tools and faster reasoning to ChatGPT
  • thezvi.wordpress.com: OpenAI has finally introduced us to the full o3 along with o4-mini. These models feel incredibly smart.
  • venturebeat.com: OpenAI launches groundbreaking o3 and o4-mini AI models that can manipulate and reason with images, representing a major advance in visual problem-solving and tool-using artificial intelligence.
  • www.techrepublic.com: OpenAI’s o3 and o4-mini models are available now to ChatGPT Plus, Pro, and Team users. Enterprise and education users will get access next week.
  • the-decoder.com: OpenAI's o3 achieves near-perfect performance on long context benchmark
  • the-decoder.com: Safety assessments show that OpenAI's o3 is probably the company's riskiest AI model to date
  • www.unite.ai: Inside OpenAI’s o3 and o4‑mini: Unlocking New Possibilities Through Multimodal Reasoning and Integrated Toolsets
  • thezvi.wordpress.com: Discusses the release of OpenAI's o3 and o4-mini reasoning models and their enhanced capabilities.
  • Simon Willison's Weblog: OpenAI o3 and o4-mini System Card
  • Interconnects: OpenAI's o3: Over-optimization is back and weirder than ever. Tools, true rewards, and a new direction for language models.
  • techstrong.ai: Nobody’s Perfect: OpenAI o3, o4 Reasoning Models Have Some Kinks
  • bsky.app: It's been a couple of years since GPT-4 powered Bing, but with the various Deep Research products and now o3/o4-mini I'm ready to say that AI assisted search-based research actually works now
  • www.analyticsvidhya.com: o3 vs o4-mini vs Gemini 2.5 pro: The Ultimate Reasoning Battle
  • pub.towardsai.net: TAI#149: OpenAI’s Agentic o3; New Open Weights Inference Optimized Models (DeepMind Gemma, Nvidia Nemotron-H) Also, Grok-3 Mini Shakes Up Cost Efficiency, Codex, Cohere Embed 4, PerceptionLM & more.
  • Last Week in AI: Last Week in AI #307 - GPT 4.1, o3, o4-mini, Gemini 2.5 Flash, Veo 2
  • composio.dev: OpenAI o3 vs. Gemini 2.5 Pro vs. o4-mini
  • Towards AI: Details about OpenAI's agentic o3 models

@arstechnica.com //
Microsoft researchers have achieved a significant breakthrough in AI efficiency with the development of a 1-bit large language model (LLM) called BitNet b1.58 2B4T. This model, boasting two billion parameters and trained on four trillion tokens, stands out due to its remarkably low memory footprint and energy consumption. Unlike traditional AI models that rely on 16- or 32-bit floating-point formats for storing numerical weights, BitNet utilizes only three distinct weight values: -1, 0, and +1. This "ternary" architecture dramatically reduces complexity, enabling the AI to run efficiently on a standard CPU, even an Apple M2 chip, according to TechCrunch.
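
For intuition about how a network can get by with only three weight values, here is a small sketch in the spirit of the absmean quantization described in the BitNet papers: scale each weight matrix by its mean absolute value, then round and clip to {-1, 0, +1}. It is an illustration of the idea, not Microsoft's training code.

```python
# Rough sketch of ternary ("1.58-bit") weight quantization in the spirit of
# BitNet: scale by the mean absolute value, then round and clip to {-1, 0, +1}.
# Illustrative only; the real model bakes this into training.
import numpy as np

def ternary_quantize(w: np.ndarray, eps: float = 1e-5):
    """Return ternary weights in {-1, 0, +1} plus the per-matrix scale."""
    scale = np.mean(np.abs(w)) + eps                      # absmean scaling factor
    w_q = np.clip(np.round(w / scale), -1, 1).astype(np.int8)
    return w_q, scale

w = np.random.randn(4, 4).astype(np.float32)
w_q, scale = ternary_quantize(w)
print(w_q)                              # only -1, 0, +1 entries
print(np.abs(w - w_q * scale).mean())   # quantization error vs. the original weights
```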

The development of BitNet b1.58 2B4T represents a significant advancement in the field of AI, potentially paving the way for more accessible and sustainable AI applications. The model, available on Hugging Face, stores each weight as one of just three values, roughly 1.58 bits per weight rather than a full-precision number. While this simplification can lead to a slight reduction in accuracy compared to larger, more complex models, BitNet b1.58 2B4T compensates through its massive training dataset, a corpus equivalent to over 33 million books. The reduction in memory usage is substantial, with the model requiring only 400MB of non-embedding memory, significantly less than comparable models.
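
A quick back-of-the-envelope calculation (ours, not from the paper) shows why three-valued weights land a two-billion-parameter model near the reported 400MB figure, while a conventional fp16 copy would need roughly ten times as much:

```python
# Back-of-the-envelope check: ternary weights (~1.58 bits each, i.e. log2(3))
# versus 16-bit floats for a 2B-parameter model, ignoring embeddings and overhead.
params = 2_000_000_000
bits_per_weight_ternary = 1.58
bits_per_weight_fp16 = 16

ternary_mb = params * bits_per_weight_ternary / 8 / 1e6
fp16_mb = params * bits_per_weight_fp16 / 8 / 1e6
print(f"ternary: ~{ternary_mb:.0f} MB, fp16: ~{fp16_mb:.0f} MB")  # ~395 MB vs ~4000 MB
```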

Comparisons against leading mainstream models like Meta’s LLaMa 3.2 1B, Google’s Gemma 3 1B, and Alibaba’s Qwen 2.5 1.5B have shown that BitNet b1.58 2B4T performs competitively across various benchmarks, and in some instances it has even outperformed them. However, to achieve its promised performance and efficiency, the LLM must be used with the bitnet.cpp inference framework; the model does not yet run on GPUs and depends on this purpose-built runtime, which remains a limitation for now. Despite this, the creation of such a lightweight and efficient LLM marks a crucial step toward future AI that may not require supercomputers.

Recommended read:
References :
  • arstechnica.com: Microsoft Researchers Create Super‑Efficient AI That Uses Up to 96% Less Energy
  • www.techrepublic.com: Microsoft Releases Largest 1-Bit LLM, Letting Powerful AI Run on Some Older Hardware
  • www.tomshardware.com: Microsoft researchers build 1-bit AI LLM with 2B parameters — model small enough to run on some CPUs