News from the AI & ML world

DeeperML

@Techmeme //
Microsoft is doubling down on its commitment to artificial intelligence, particularly through its Copilot platform. The company is positioning Copilot as the central AI assistant for Windows users and is planning to roll out new features. A new memory feature is undergoing testing for Copilot Pro users, enabling the AI to retain contextual information about users, mirroring similar functionality in ChatGPT. This personalization feature, accessible via the "Privacy" tab in Copilot's settings, allows the AI to remember user preferences and prior tasks, enhancing its utility for tasks like drafting documents or scheduling.

Microsoft is also making strategic moves concerning its Office 365 and Microsoft 365 suites in response to an EU antitrust investigation. To address concerns about anti-competitive bundling practices related to its Teams communication app, Microsoft plans to offer these productivity suites without Teams at a lower price point. Teams will also be available as a standalone product. This initiative aims to provide users with more choice and address complaints that the inclusion of Teams unfairly disadvantages competitors. Microsoft has also committed to improving interoperability, enabling rival software to integrate more effectively with its services.

Satya Nadella, Microsoft's CEO, is focused on making AI models accessible to customers through Azure, regardless of their origin. Microsoft's strategy is to offer a broad range of AI models, including those developed outside the company, to maximize profits. Nadella emphasizes that Microsoft's allegiance isn't tied exclusively to OpenAI's models but encompasses a broader approach to AI accessibility. Microsoft views ChatGPT and Copilot as similar; however, the company is working hard to encourage users to adopt Copilot by adding features such as its new memory function and by not supporting training of the ChatGPT model.

Recommended read:
References :
  • www.laptopmag.com: The big show could see some major changes for Microsoft.
  • TestingCatalog: Discover Microsoft's new memory feature in Copilot, improving personalization by retaining personal info. Users can now explore this update!
  • www.windowscentral.com: Microsoft says Copilot is ChatGPT, but better with a powerful user experience and enhanced security.
  • www.windowscentral.com: Microsoft Copilot gets OpenAI's GPT-4o image generation support to transform an image's style, generate photorealistic images, and follow complex directions.
  • blogs.microsoft.com: Microsoft Build 2025: The age of AI agents and building the open agentic web
  • AI News | VentureBeat: VentureBeat discusses Microsoft's AI agents' ability to communicate with each other.
  • The Microsoft Cloud Blog: Microsoft Build 2025: The age of AI agents and building the open agentic web
  • SiliconANGLE: Microsoft launches tools to streamline AI agent development
  • www.zdnet.com: Microsoft unveils new AI agent customization and oversight features at Build 2025
  • Techmeme: Microsoft Corp. today unveiled a major expansion of its artificial intelligence security and governance offerings with the introduction of new capabilities designed to secure the emerging “agentic workforce,” a world where AI agents and humans collaborate and work together. Announced at the company’s annual Build developer conference, Microsoft is expanding Entra, Defender and Purview, embedding these capabilities directly into Azure AI Foundry and Copilot Studio to help organizations secure AI apps and agents.
  • The Tech Portal: Microsoft Build 2025: GitHub Copilot gets AI boost with new coding agent
  • AI News | VentureBeat: GitHub Copilot evolves into autonomous agent with asynchronous code testing
  • Latest from ITPro in News: GitHub just unveiled a new AI coding agent for Copilot – and it’s available now
  • AI News | VentureBeat: Microsoft announces over 50 AI tools to build the 'agentic web' at Build 2025
  • www.zdnet.com: Copilot's Coding Agent brings automation deeper into GitHub workflows
  • WhatIs: New GitHub Copilot agent edges into DevOps
  • siliconangle.com: Microsoft launches tools to streamline AI agent development
  • Source Asia: Microsoft Build 2025: The age of AI agents and building the open agentic web
  • techvro.com: Microsoft Launches AI Coding Agent With Windows Getting Support for the ‘USB-C of AI Apps’
  • Ken Yeung: Microsoft Opens Teams to Smarter AI Agents With A2A Protocol and New Developer Tools
  • Techzine Global: Microsoft envisions a future in which AI agents from different companies can collaborate and better remember their previous interactions.
  • Ken Yeung: AI Agents Are Coming to Windows—Here’s How Microsoft Is Making It Happen

Brian Fagioli@BetaNews //
Microsoft has officially open-sourced the Windows Subsystem for Linux (WSL), a feature allowing developers to run Linux distributions directly on Windows without needing virtual machines or dual-boot setups. This move, announced at Build 2025, marks a significant shift after nearly a decade of closed development, and fulfills the very first feature request on the Microsoft/WSL GitHub repository: "Will this be open source?". Developers now have full access to the WSL code on GitHub, inviting them to inspect, improve, and contribute to seamlessly integrate Linux with Windows.

This open-source release includes core components that power WSL 2, such as command-line tools, background services, init processes, networking daemons, and the Plan9-based file sharing system. Users can now build WSL from source, fork it, or contribute directly on GitHub, fostering community involvement in shaping the future of Linux-on-Windows integration. The move also helps in customization, allowing developers to tailor WSL to their specific needs.

While not every part of WSL is open, as some legacy elements like the WSL 1 kernel driver remain proprietary, this release is a meaningful step towards bridging the gap between Windows and Linux ecosystems. By making WSL open source, Microsoft aims to leverage community expertise and accelerate the evolution of the platform, allowing for more seamless cross-platform app development. This decision makes Windows a stronger contender as a go-to dev platform, alongside the open-sourcing of other development tools such as the command-line text editor Edit.

Recommended read:
References :
  • BetaNews: Well, it finally happened, folks. Microsoft has open-sourced the Windows Subsystem for Linux (WSL), giving developers full access to its code on GitHub!
  • Changelog: Microsoft finally opens the source of WSL, Paolo Scanferla describes an inherent trade-off in TypeScript's type system, Alberto Fortin is taking a step back from heavy LLM use while coding, a pseudonymous hacker spent two weeks coding from their Android phone, and NLWeb might become the HTML of the open agentic web.
  • borncity.com: Windows Subsystem for Linux is now Open Source
  • cyberinsider.com: Windows Subsystem for Linux Finally Becomes Open Source
  • www.computerworld.com: Windows Subsystem for Linux becomes open source
  • OMG! Ubuntu: Microsoft Open-Sources Windows Subsystem for Linux
  • www.windowscentral.com: Microsoft open sources the Windows Subsystem for Linux — invites developers to help more seamlessly integrate Linux with Windows
  • arstechnica.com: Microsoft takes Windows Subsystem for Linux open source after nearly a decade
  • The Register - Software: Microsoft open sources the Windows Subsystem for Linux — well, most of it
  • bsky.app: The Windows Subsystem for Linux is now open source
  • Help Net Security: Microsoft has officially open-sourced the Windows Subsystem for Linux (WSL), closing the very first issue ever filed on the Microsoft/WSL GitHub repository: "Will this be open source?"
  • securityonline.info: WSL Goes Open Source: Microsoft Opens Up Windows Subsystem for Linux
  • www.helpnetsecurity.com: Microsoft has officially open-sourced the Windows Subsystem for Linux (WSL), closing the very first issue ever filed on the Microsoft/WSL GitHub repository: “Will this be open source?”
  • linuxsecurity.com: For years, Windows Subsystem for Linux (WSL) has closed the gap between Windows and Linux ecosystems.
  • Ars OpenForum: Microsoft takes Windows Subsystem for Linux open source after nearly a decade
  • Daily CyberSecurity: WSL Goes Open Source: Microsoft Opens Up Windows Subsystem for Linux
  • BleepingComputer: Microsoft open-sources Windows Subsystem for Linux at Build 2025
  • thenewstack.io: The Windows Subsystem for Linux Is Now Open Source
  • Techlore: Microsoft Releases Windows Subsystem for Linux Code!
  • thenewstack.io: Microsoft Releases Windows Subsystem for Linux Code!

Nicole Kobie@itpro.com //
The FBI has issued a warning about a rise in scams targeting U.S. government officials. Cybercriminals are using AI-generated voice clones and text messages to impersonate senior officials. This campaign, which started in April 2025, aims to trick current and former federal and state officials, as well as their contacts, into divulging sensitive information or granting unauthorized access to accounts. These tactics are referred to as "smishing" (malicious SMS messages) and "vishing" (fraudulent voice calls). The FBI is advising the public that if you receive a message claiming to be from a senior U.S. official, do not assume it is authentic.

The attackers use AI to create realistic voice deepfakes, making it difficult to distinguish between real and fake messages. They also leverage publicly available data to make their messages more convincing, exploiting human trust to infiltrate broader networks. The FBI has found that one method attackers use to gain access is by sending targeted individuals a malicious link under the guise of transitioning to a separate messaging platform. The use of AI-generated audio has increased sharply, as large language models have proliferated and improved their abilities to create lifelike audio.

Once an account is compromised, it can be used in future attacks to target other government officials, their associates, and contacts by using trusted contact information they obtain. Stolen contact information acquired through social engineering schemes could also be used to impersonate contacts to elicit information or funds. The FBI advises that the scammers are using software to generate phone numbers that are not attributed to specific phones, making them more difficult to trace. Individuals should be vigilant and follow standard security advice, such as not trusting unsolicited messages and verifying requests through official channels.

Recommended read:
References :
  • Threats | CyberScoop: Texts or deepfaked audio messages impersonate high-level government officials and were sent to current or former senior federal or state government officials and their contacts, the bureau says.
  • Talkback Resources: Deepfake voices of senior US officials used in scams: FBI [social]
  • thecyberexpress.com: TheCyberExpress reports FBI Warns of AI Voice Scam
  • www.itpro.com: The FBI says hackers are using AI voice clones to impersonate US government officials
  • The Register - Software: The FBI has warned that fraudsters are impersonating "senior US officials" using deepfakes as part of a major fraud campaign.
  • www.cybersecuritydive.com: Hackers are increasingly using vishing and smishing for state-backed espionage campaigns and major ransomware attacks.
  • Tech Monitor: FBI warns of AI-generated audio deepfakes targeting US officials
  • cyberinsider.com: Senior U.S. Officials Impersonated in AI-Powered Vishing Campaign
  • cyberscoop.com: Texts or deepfaked audio messages impersonate high-level government officials and were sent to current or former senior federal or state government officials and their contacts, the bureau says.
  • thecyberexpress.com: The Federal Bureau of Investigation (FBI) has released a public service announcement to warn individuals about a growing involving text and voice messaging scams. Since April 2025, malicious actors have been impersonating senior U.S. government officials to target individuals, especially current or former senior federal and state officials, as well as their contacts.
  • BleepingComputer: FBI: US officials targeted in voice deepfake attacks since April
  • Talkback Resources: FBI warns of deepfake technology being used in a major fraud campaign targeting government officials, advising recipients to verify authenticity through official channels.
  • www.techradar.com: The FBI is warning about ongoing smishing and vishing attacks impersonating senior US officials.
  • securityaffairs.com: US Government officials targeted with texts and AI-generated deepfake voice messages impersonating senior U.S. officials
  • The DefendOps Diaries: The Rising Threat of Voice Deepfake Attacks: Understanding and Mitigating the Risks
  • PCWorld: Fake AI voice scammers are now impersonating government officials
  • hackread.com: FBI Warns of AI Voice Scams Impersonating US Govt Officials
  • Security Affairs: US Government officials targeted with texts and AI-generated deepfake voice messages impersonating senior U.S. officials
  • iHLS: The FBI has flagged a concerning wave of cyber activity involving AI-generated content used to impersonate high-ranking U.S. government officials.
  • arstechnica.com: FBI warns of ongoing scam campaign that uses AI-generated audio to impersonate government officials
  • Popular Science: That weird call or text from a senator is probably an AI scam

Ross Kelly@Latest from ITPro //
OpenAI has launched Codex, a new AI agent designed for software engineering, integrated within ChatGPT. This cloud-based coding agent represents a significant advancement in AI-assisted software development, going beyond simple code completion to autonomously perform various programming tasks. Codex is built upon codex-1, a fine-tuned version of OpenAI's reasoning model, specifically optimized for software engineering workflows. It enables users to delegate tasks such as writing features, fixing bugs, answering questions about the codebase, and proposing pull requests, with each task running in its own cloud sandbox environment preloaded with the repository.

The Codex agent is accessible through the ChatGPT interface and is available to Pro, Team, and Enterprise users, with broader access planned. Developers can interact with Codex by typing simple prompts, and the agent will handle the coding behind the scenes, surfacing results for review and feedback. This integration allows for parallel tasking, enabling users to delegate different coding operations without disrupting their local development environment. The activities of the tool can also be monitored in real-time and upon completion, Codex provides verifiable evidence of its actions, including citations of terminal logs and test outputs.

Sam Altman, OpenAI's CEO, has expressed an ambition for OpenAI to become the "Microsoft of AI," envisioning a subscription-based operating system built on ChatGPT. The company could develop a core AI subscription centered on ChatGPT's user experience and extend it to other surfaces, such as future devices, much as an operating system spans hardware. One OpenAI team member who has used Codex internally for a few months says it has cut days or weeks off several projects, adding that "software engineering will truly never be the same".

Recommended read:
References :
  • bsky.app: i’ve used codex internally for a few months and have cut days or weeks off several projects on the API team. software engineering will truly never be the same https://openai.com/index/introducing-codex/
  • Latest from ITPro in News: OpenAI just launched 'Codex', a new AI agent for software engineering
  • AI News | VentureBeat: OpenAI's new coding agent, Codex, is available as a research preview for ChatGPT Pro, Enterprise, and Team users.
  • MarkTechPost: OpenAI introduces Codex, a cloud-based coding agent inside ChatGPT, signaling a new era in AI-assisted software development.
  • AI News | VentureBeat: OpenAI brings GPT-4.1 and 4.1 mini to ChatGPT — what enterprises should know
  • github.com: The OpenAI's Codex product documentation.
  • www.analyticsvidhya.com: OpenAI released Codex, a cloud‑native software agent designed to work alongside developers. Codex is not a single product but a family of agents powered by codex‑1, OpenAI's […]
  • Latent.Space: ChatGPT Codex is here - the first cloud hosted Autonomous Software Engineer (A-SWE) from OpenAI. Josh Ma and Alexander Embiricos tell us how to WHAM every codebase like a power user.
  • www.marktechpost.com: OpenAI Introduces Codex, a Cloud-Based Coding Agent Inside ChatGPT
  • BetaNews: Codex, OpenAI's new coding agent, is now available in ChatGPT.
  • THE DECODER: OpenAI is rolling out Codex, a cloud-based AI agent for software development that automates tasks like bug fixes and feature implementation.
  • Analytics Vidhya: OpenAI released Codex, a cloud‑native software agent designed to work alongside developers.
  • the-decoder.com: The Decoder's report on OpenAI's Codex launch.
  • SiliconANGLE: OpenAI updates ChatGPT with coding-optimized Codex AI agent
  • Last Week in AI: Last Week in AI #309 - OpenAI keeps non-profit & launches Codex, AlphaEvolve, and more!
  • Maginative: Meet Codex: OpenAI’s New Software Engineering AI Agent
  • TestingCatalog: Discover OpenAI Codex, a cloud-based AI agent for automating coding tasks. Available for ChatGPT Pro, Team and Enterprise users now.
  • TestingCatalog: OpenAI prepares SWE Agent that answers code questions and drafts PR
  • pub.towardsai.net: AI-assisted code generation can help improve efficiency and reduce errors in the development process, but experts warn that it is not a replacement for human programmers.
  • The Tech Basic: OpenAI’s New Codex AI Helps Write Code Faster in ChatGPT
  • Runtime: Article about OpenAI's coding tool.
  • devops.com: OpenAI's Codex transforms software development with cloud-based AI agents that can tackle multiple coding tasks simultaneously, enhancing developer productivity.
  • Ars OpenForum: OpenAI introduces Codex, its first full-fledged AI agent for coding. It replicates your development environment and takes up to 30 minutes per task.
  • www.eweek.com: OpenAI’s Codex agent helps developers write code, fix bugs, and test features—all from ChatGPT. Early adopters include Cisco, Temporal, and Superhuman.
  • www.infoworld.com: OpenAI has announced the release of Codex, an AI coding agent it said was designed to help software engineers write code, fix bugs, and run tests.
  • eWEEK: OpenAI Debuts Codex AI Agent for Developers: ‘Like a Remote Teammate’
  • www.infoq.com: OpenAI Launches Codex Software Engineering Agent Preview
  • Ken Yeung: The New GitHub Copilot Agent Doesn’t Just Help You Code—it Codes for You

@Google DeepMind Blog //
Google DeepMind has unveiled AlphaEvolve, an AI agent powered by Gemini, that is revolutionizing algorithm discovery and scientific optimization. This innovative system combines the creative problem-solving capabilities of large language models (LLMs) with automated evaluators to verify solutions and iteratively improve upon promising ideas. AlphaEvolve represents a significant leap in AI's ability to develop sophisticated algorithms for both scientific challenges and everyday computing problems, expanding upon previous work by evolving entire codebases rather than single functions.

AlphaEvolve has already demonstrated its potential by breaking a 56-year-old mathematical record, discovering a more efficient matrix multiplication algorithm that had eluded human mathematicians. The system leverages an ensemble of state-of-the-art large language models, including Gemini Flash and Gemini Pro, to propose and refine algorithmic solutions as code. These programs are then evaluated using automated metrics, providing an objective assessment of accuracy and quality. This approach makes AlphaEvolve particularly valuable in domains where progress can be clearly and systematically measured, such as math and computer science.

The impact of AlphaEvolve extends beyond theoretical breakthroughs, with algorithms discovered by the system already deployed across Google's computing ecosystem. Notably, AlphaEvolve has enhanced the efficiency of Google's data centers, chip design, and AI training processes, including the training of the large language models underlying AlphaEvolve itself. It has also optimized a matrix multiplication kernel used to train Gemini models and found new solutions to open mathematical problems. By optimizing Google’s massive cluster management system, Borg, AlphaEvolve recovers an average of 0.7% of Google’s worldwide computing resources continuously, which translates to substantial cost savings.
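
Conceptually, AlphaEvolve runs an evolutionary search: Gemini models propose candidate programs, automated evaluators score them, and the strongest candidates seed the next round. The toy sketch below illustrates only that propose-evaluate-select loop; a random numeric perturbation stands in for the LLM proposal step and a fixed objective stands in for DeepMind's evaluators, so it is an illustration of the idea rather than their implementation.

```python
import random

def propose_variants(parent: str, n: int = 4) -> list[str]:
    # Stand-in for the LLM step: AlphaEvolve asks Gemini models to rewrite a
    # candidate program; here we simply perturb a numeric "program" string.
    base = float(parent)
    return [str(base + random.uniform(-0.5, 0.5)) for _ in range(n)]

def evaluate(candidate: str) -> float:
    # Stand-in for the automated evaluator: an objective, machine-checkable
    # score (here, closeness to an unknown optimum).
    return -abs(float(candidate) - 3.14159265)

def evolve(seed: str, generations: int = 40) -> str:
    best = seed
    for _ in range(generations):
        pool = [best] + propose_variants(best)
        best = max(pool, key=evaluate)  # keep the highest-scoring candidate
    return best

if __name__ == "__main__":
    print(evolve("0.0"))  # steadily approaches 3.14159...
```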

Recommended read:
References :
  • Google DeepMind Blog: New AI agent evolves algorithms for math and practical applications in computing by combining the creativity of large language models with automated evaluators
  • venturebeat.com: VentureBeat article about Google DeepMind's AlphaEvolve system.
  • SiliconANGLE: Google DeepMind develops AlphaEvolve AI agent optimized for coding and math
  • MarkTechPost: Google DeepMind introduces AlphaEvolve: A Gemini-powered coding AI agent for algorithm discovery and scientific optimization
  • Maginative: Google's DeepMind Unveils AlphaEvolve, an AI System that Designs and Optimizes Algorithms
  • THE DECODER: AlphaEvolve is Google DeepMind's new AI system that autonomously creates better algorithms
  • www.marktechpost.com: Google DeepMind Introduces AlphaEvolve: A Gemini-Powered Coding AI Agent for Algorithm Discovery and Scientific Optimization
  • the-decoder.com: AlphaEvolve is Google DeepMind's new AI system that autonomously creates better algorithms
  • thetechbasic.com: DeepMind’s AlphaEvolve: A New Era of AI-Driven Problem Solving
  • The Tech Basic: DeepMind’s AlphaEvolve: A New Era of AI-Driven Problem Solving
  • LearnAI: AlphaEvolve: A Gemini-powered coding agent for designing advanced algorithms
  • The Next Web: Google DeepMind’s AI systems have taken big scientific strides in recent years — from predicting the 3D structures of almost every known protein in the universe to forecasting weather more accurately than ever before.  The UK-based lab today unveiled its latest advancement: AlphaEvolve, an AI coding agent that makes large language models (LLMs) like Gemini better at solving complex computing and mathematical problems.  AlphaEvolve is powered by the same models that it’s trying to improve.
  • learn.aisingapore.org: AlphaEvolve: A Gemini-powered coding agent for designing advanced algorithms
  • LearnAI: Google’s AlphaEvolve Is Evolving New Algorithms — And It Could Be a Game Changer
  • Towards Data Science: Google’s AlphaEvolve Is Evolving New Algorithms — And It Could Be a Game Changer
  • deepmind.google: Provides an overview of AlphaEvolve and its capabilities in designing advanced algorithms.
  • gregrobison.medium.com: AlphaEvolve: How AI-Driven Algorithm Discovery Is Rewriting Computing
  • www.unite.ai: Google DeepMind has unveiled AlphaEvolve, an evolutionary coding agent designed to autonomously discover novel algorithms and scientific solutions. Presented in the paper titled “AlphaEvolve: A Coding Agent for Scientific and Algorithmic Discovery,” this research represents a foundational step toward Artificial General Intelligence (AGI) and even Artificial Superintelligence (ASI).
  • learn.aisingapore.org: Google’s AlphaEvolve Is Evolving New Algorithms — And It Could Be a Game Changer
  • Unite.AI: Google DeepMind has unveiled AlphaEvolve, an evolutionary coding agent designed to autonomously discover novel algorithms and scientific solutions.
  • AI News | VentureBeat: Google's AlphaEvolve is the epitome of a best-practice AI agent orchestration. It offers a lesson in production-grade agent engineering. Discover its architecture & essential takeaways for your enterprise AI strategy.
  • AlternativeTo: Google has unveiled AlphaEvolve, an evolving coding agent that uses Gemini large language models and automated evaluators for discovering, evaluating, and optimizing computer algorithms for math and practical applications.
  • Last Week in AI: Last Week in AI #209 - OpenAI non-profit, US diffusion rules, AlphaEvolve
  • TheSequence: The model is pushing the boundaries of algorithmic discovery.
  • pub.towardsai.net: TAI #153: AlphaEvolve & Codex—AI Breakthroughs in Algorithm Discovery & Software Engineering Also, VS Code moves to open-source AI, MiniMax voice, Qwen Parallel Scaling & more.

Alexey Shabanov@TestingCatalog //
Google has officially launched the NotebookLM mobile app for both Android and iOS, extending the reach of its AI-powered research assistant. This release, anticipated before Google I/O 2025, allows users to leverage NotebookLM's capabilities directly from their smartphones and tablets. The app aims to help users understand information more effectively, regardless of their location, marking a step towards broader accessibility to AI tools.

The NotebookLM mobile app provides a range of features, including the ability to create new notebooks and add various content types, such as PDFs, websites, YouTube videos, and text. A key feature highlighted by Google is the availability of "Audio Overviews," which automatically generates audio summaries for offline and background playback. Furthermore, users can interact with AI hosts (in beta) to ask follow-up questions, enhancing the learning and research experience on the go. The app also integrates with the Android and iOS share sheets for quickly adding sources.

The initial release offers a straightforward user interface optimized for both phones and tablets. Navigation within the app includes a bottom bar providing easy access to Sources, Chat Q&A, and Studio. While it currently doesn't fully utilize Material 3 design principles, Google emphasizes this is an early version. Users can now download the NotebookLM app from the Google Play Store and the App Store, fulfilling a top feature request. Google has indicated that additional updates and features are planned for future releases.

Recommended read:
References :
  • TestingCatalog: Google launches NotebookLM mobile app with audio-first features on mobile
  • The Official Google Blog: Understand anything, anywhere with the new NotebookLM app
  • www.laptopmag.com: An exclusive look at Google's NotebookLM app on Android and iOS
  • www.tomsguide.com: NotebookLM just arrived on Android — and it can turn your notes into podcasts
  • www.marktechpost.com: Google AI Releases Standalone NotebookLM Mobile App with Offline Audio and Seamless Source Integration.
  • THE DECODER: Google launches NotebookLM mobile app with audio-first features on mobile
  • 9to5Mac: Google launches NotebookLM mobile app for Android and iOS
  • TechCrunch: Google launches stand-alone NotebookLM app for Android
  • AI News | VentureBeat: Google finally launches NotebookLM mobile app at I/O: hands-on, first impressions
  • MarkTechPost: Google AI Releases Standalone NotebookLM Mobile App with Offline Audio and Seamless Source Integration
  • Dataconomy: Google brings NotebookLM to mobile with new standalone apps
  • The Tech Basic: Google launches NotebookLM apps letting users research on the go
  • the-decoder.com: Google launches NotebookLM mobile app for Android and iOS
  • www.techradar.com: Google's free NotebookLM AI app is out now for Android and iOS – here's why it's a day-one download for me
  • thetechbasic.com: Google Launches NotebookLM Apps Letting Users Research on the Go

Joe DeLaere@NVIDIA Technical Blog //
NVIDIA has unveiled NVLink Fusion, a technology that expands the capabilities of its high-speed NVLink interconnect to custom CPUs and ASICs. This move allows customers to integrate non-NVIDIA CPUs or accelerators with NVIDIA's GPUs within their rack-scale setups, fostering the creation of heterogeneous computing environments tailored for diverse AI workloads. This technology opens up the possibility of designing semi-custom AI infrastructure with NVIDIA's NVLink ecosystem, allowing hyperscalers to leverage the innovations in NVLink, NVIDIA NVLink-C2C, NVIDIA Grace CPU, NVIDIA GPUs, NVIDIA Co-Packaged Optics networking, rack scale architecture, and NVIDIA Mission Control software.

NVLink Fusion enables users to achieve top performance scaling with semi-custom ASICs or CPUs. As hyperscalers are already deploying full NVIDIA rack solutions, this expansion caters to the increasing demand for specialized AI factories, where diverse accelerators work together at rack scale with maximal bandwidth and minimal latency to support the largest number of users in the most power-efficient way. The advantage of using NVLink for CPU-to-GPU communications is that it offers 14x higher bandwidth than PCIe 5.0 (128 GB/s). The technology will be offered in two configurations: one for connecting custom CPUs to NVIDIA GPUs, and one for pairing NVIDIA Grace CPUs with custom accelerators.
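
As a rough sanity check on those figures (assuming the 14x multiple is measured against the 128 GB/s PCIe 5.0 x16 link cited above):

```python
# Back-of-the-envelope check of the bandwidth claim above. Assumes the 14x
# figure is relative to a 128 GB/s PCIe 5.0 x16 link, as stated in the text.
pcie5_x16 = 128            # GB/s
nvlink_advantage = 14
nvlink = pcie5_x16 * nvlink_advantage
print(f"{nvlink} GB/s ~= {nvlink / 1000:.1f} TB/s per GPU")  # 1792 GB/s ~= 1.8 TB/s
```

That works out to roughly 1.8 TB/s per GPU, consistent with the per-GPU NVLink bandwidth NVIDIA quotes for its current-generation hardware.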

NVIDIA CEO Jensen Huang emphasized that AI is becoming a fundamental infrastructure, akin to the internet and electricity. He envisions an AI infrastructure industry worth trillions of dollars, powered by AI factories that produce valuable tokens. NVIDIA's approach involves expanding its ecosystem through partnerships and platforms like CUDA-X, which is used across a range of applications. NVLink Fusion is a crucial part of this vision, enabling the construction of semi-custom AI systems and solidifying NVIDIA's role at the center of AI development.

Recommended read:
References :
  • The Register - Software: Nvidia opens up speedy NVLink interconnect to custom CPUs, ASICs
  • www.techmeme.com: Nvidia unveils NVLink Fusion, letting customers use its NVLink to pair non-Nvidia CPUs or accelerators with Nvidia's products in their own rack-scale setups (Bloomberg)
  • NVIDIA Technical Blog: Integrating Semi-Custom Compute into Rack-Scale Architecture with NVIDIA NVLink Fusion
  • Tom's Hardware: Nvidia announces NVLink Fusion to allow custom CPUs and AI Accelerators to work with its products Nvidia's NVLink Fusion program allows customers to use the company’s key NVLink tech for their own custom rack-scale designs with non-Nvidia CPUs or accelerators in tandem with Nvidia’s products.
  • Maginative: NVIDIA Opens Its NVLink Ecosystem to Rivals in Bid to Further Cement AI Dominance
  • NVIDIA Newsroom: NVIDIA-Powered Supercomputer to Enable Quantum Leap for Taiwan Research
  • AI News | VentureBeat: Foxconn builds AI factory in partnership with Taiwan and Nvidia
  • www.tomshardware.com: Nvidia teams up with Foxconn to build an AI supercomputer in Taiwan
  • The Next Platform: There are many reasons why Nvidia is the hardware juggernaut of the AI revolution, and one of them, without question, is the NVLink memory sharing port that started out on its "Pascal" P100 GPU accelerators way back in 2016.
  • www.nextplatform.com: Nvidia Licenses NVLink Memory Ports To CPU And Accelerator Makers
  • The Register - Software: Nvidia sets up shop in Taiwan with AI supers and a factory full of ambition
  • techvro.com: NVLink Fusion: Nvidia To Sell Hybrid Systems Using AI Chips
  • www.networkworld.com: Nvidia opens NVLink to competitive processors
  • AIwire: Nvidia’s Global Expansion: AI Factories, NVLink Fusion, AI Supercomputers, and More

Ross Kelly@Latest from ITPro //
GitHub has launched a new AI coding agent for Copilot, designed to automate tasks and enhance developer workflows. Unveiled at Microsoft Build 2025, the coding agent is available to Copilot Enterprise and Copilot Pro+ users and is designed to handle "low-to-medium complexity tasks" such as adding features, fixing bugs, refactoring code, and improving documentation. CEO Thomas Dohmke highlighted that the agent is embedded directly within GitHub, activated by assigning a GitHub issue to Copilot.

The coding agent operates within a secure and customizable development environment powered by GitHub Actions. Once a task is assigned, the agent boots a virtual machine, clones the relevant repository, sets up the development environment, analyzes the codebase, and pushes changes to a draft pull request. Developers can monitor the agent's progress through session logs, ensuring transparency throughout the process. Crucially, all pull requests require human approval before CI/CD workflows are executed, adding an extra layer of security.
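
As noted above, assigning a GitHub issue to Copilot is what kicks off an agent run. For illustration only, the sketch below shows how such an assignment could be made programmatically through GitHub's REST issue-assignees endpoint; the owner, repository, issue number, and the "copilot" assignee login are placeholder assumptions, and the officially described flow assigns the issue in the GitHub UI.

```python
import os
import requests

# Hypothetical sketch: trigger the Copilot coding agent by assigning an issue.
# Owner, repo, issue number, and the "copilot" assignee login are placeholders.
OWNER, REPO, ISSUE = "your-org", "your-repo", 123

resp = requests.post(
    f"https://api.github.com/repos/{OWNER}/{REPO}/issues/{ISSUE}/assignees",
    headers={
        "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
        "Accept": "application/vnd.github+json",
    },
    json={"assignees": ["copilot"]},
    timeout=30,
)
resp.raise_for_status()
print("Issue assigned; the agent works in a GitHub Actions VM and opens a draft PR for review.")
```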

In related news, GitHub and Microsoft are joining forces with Anthropic on the Model Context Protocol (MCP) standard. This move aims to create safer AI agent deployments by establishing a universal protocol for AI models to access data from apps and services. MCP allows AI clients to discover servers and call their functions without extra coding. Microsoft and GitHub will add first-party support across Azure and Windows to help developers expose app features as MCP servers, improve security, and add a registry to list trusted MCP servers.

Recommended read:
References :

@zdnet.com //
Microsoft is unveiling new AI agent customization and oversight features at Build 2025, marking a significant step towards making AI agents more trustworthy and secure. This move aligns with the company's broader strategy of equipping businesses and individuals with their own custom-made AI systems. A core component of this initiative is extending Zero Trust principles to the agentic workforce, ensuring that AI agents operate within a secure framework.

Microsoft is introducing Microsoft Entra Agent ID, a feature that extends identity management and access capabilities to AI agents. It aims to enhance trust when AI agents handle user data, addressing a critical concern as AI becomes more integrated into daily operations. The shift toward AI agents is being driven by advances in reasoning and memory that make models more capable and efficient at solving problems in new ways.

Microsoft is also launching tools to streamline AI agent development. One notable announcement is the general availability of the Azure AI Foundry Agent Service, which was first announced last fall. The platform allows developers to build, manage, and scale AI agents that automate business processes. To that end, Microsoft is previewing several new developer tools for building agentic applications in Microsoft Teams. These support secure, peer-to-peer communication via the A2A protocol, agent memory for contextual user experiences, and improved development tools.

Recommended read:
References :
  • The Microsoft Cloud Blog: Microsoft Build 2025: The age of AI agents and building the open agentic web
  • Microsoft Security Blog: Microsoft extends Zero Trust to secure the agentic workforce
  • www.zdnet.com: Microsoft unveils new AI agent customization and oversight features at Build 2025
  • blogs.microsoft.com: Microsoft Build 2025: The age of AI agents and building the open agentic web
  • news.microsoft.com: Microsoft Build 2025: The age of AI agents and building the open agentic web
  • www.microsoft.com: Microsoft extends Zero Trust to secure the agentic workforce
  • Source Asia: Microsoft Build 2025: The age of AI agents and building the open agentic web
  • THE DECODER: Microsoft Build 2025 showcases new AI agent tools and open interfaces for developers
  • Ken Yeung: Microsoft Introduces Entra Agent ID to Bring Zero Trust to AI Agents

@blogs.nvidia.com //
NVIDIA's CEO, Jensen Huang, has presented a bold vision for the future of technology, forecasting that the artificial intelligence infrastructure industry will soon be worth trillions of dollars. Huang emphasized AI's transformative potential across all sectors globally during his Computex 2025 keynote in Taipei. He envisions AI becoming as essential as electricity and the internet, necessitating "AI factories" to produce valuable tokens by applying energy. These factories are not simply data centers but sophisticated environments that will drive innovation and growth.

NVIDIA is actively working to solidify its position as a leader in this burgeoning AI landscape. A key strategy involves expanding its research and development footprint, with plans to establish a new R&D center in Shanghai. This initiative, proposed during a meeting with Shanghai Mayor Gong Zheng, includes leasing additional office space to accommodate current staff and future expansion. The Shanghai center will focus on tailoring AI solutions for Chinese clients and contributing to global R&D efforts in areas such as chip design verification, product optimization, and autonomous driving technologies, with the Shanghai government expressing initial support for the project.

Furthermore, NVIDIA is collaborating with Foxconn and the Taiwan government to construct an AI factory supercomputer, equipped with 10,000 NVIDIA Blackwell GPUs. This AI factory will provide state-of-the-art infrastructure to researchers, startups, and industries, significantly expanding AI computing availability and fueling innovation within Taiwan's technology ecosystem. Huang highlighted the importance of Taiwan in the global technology ecosystem, noting that NVIDIA is helping build AI not only for the world but also for Taiwan, emphasizing the strategic partnerships and investments crucial for realizing the AI-powered future.

Recommended read:
References :
  • NVIDIA Newsroom: NVIDIA CEO Jensen Huang took the stage at a packed Taipei Music Center Monday to kick off COMPUTEX 2025, captivating the audience of more than 4,000 with a vision for a technology revolution that will sweep every country
  • TechNode: NVIDIA reportedly plans to establish research center in Shanghai
  • SiliconANGLE: At Computex, Nvidia debuts AI GPU compute marketplace, NVLink Fusion and the future of humanoid AI
  • AI News | VentureBeat: Foxconn builds AI factory in partnership with Taiwan and Nvidia
  • The Register - Software: Nvidia opens up speedy NVLink interconnect to custom CPUs, ASICs

@cyberalerts.io //
A new malware campaign is exploiting the hype surrounding artificial intelligence to distribute the Noodlophile Stealer, an information-stealing malware. Morphisec researcher Shmuel Uzan discovered that attackers are enticing victims with fake AI video generation tools advertised on social media platforms, particularly Facebook. These platforms masquerade as legitimate AI services for creating videos, logos, images, and even websites, attracting users eager to leverage AI for content creation.

Posts promoting these fake AI tools have garnered significant attention, with some reaching over 62,000 views. Users who click on the advertised links are directed to bogus websites, such as one impersonating CapCut AI, where they are prompted to upload images or videos. Instead of receiving the promised AI-generated content, users are tricked into downloading a malicious ZIP archive named "VideoDreamAI.zip," which contains an executable file designed to initiate the infection chain.

The "Video Dream MachineAI.mp4.exe" file within the archive launches a legitimate binary associated with ByteDance's CapCut video editor, which is then used to execute a .NET-based loader. This loader, in turn, retrieves a Python payload from a remote server, ultimately leading to the deployment of the Noodlophile Stealer. This malware is capable of harvesting browser credentials, cryptocurrency wallet information, and other sensitive data. In some instances, the stealer is bundled with a remote access trojan like XWorm, enabling attackers to gain entrenched access to infected systems.

Recommended read:
References :
  • malware.news: Novel Noodlophile Stealer spread via bogus AI tools, Facebook ads
  • thehackernews.com: Threat actors have been observed leveraging fake artificial intelligence (AI)-powered tools as a lure to entice users into downloading an information stealer malware dubbed Noodlophile.
  • www.bleepingcomputer.com: Fake AI video generators drop new Noodlophile infostealer malware
  • securityaffairs.com: Threat actors use fake AI tools to trick users into installing the information stealer Noodlophile, Morphisec researchers warn.
  • Blog: New ‘Noodlophile’ infostealer disguised as AI video generator
  • Virus Bulletin: Morphisec's Shmuel Uzan reveals how attackers exploit AI hype to spread malware. Victims expecting custom AI videos instead get Noodlophile Stealer, a new infostealer targeting browser credentials, crypto wallets, and sensitive data.
  • www.scworld.com: Fake image-to-video AI sites deliver novel ‘Noodlophile’ infostealer
  • securityonline.info: Security Online details on the fake platforms
  • SOC Prime Blog: SocPrime blog on Noodlophile Stealer detection
  • socprime.com: SocPrime Article on Noodlophile
  • www.cybersecurity-insiders.com: CyberSecurity Insiders on malware

@mobinetai.com //
The Chicago Sun-Times has come under fire after publishing a summer reading list in its print edition that included several books that don't exist. The article, titled "Summer reading list for 2025," featured a mix of real and AI-generated book titles, making it difficult for readers to distinguish between fact and fiction. Titles like "Tidewater Dreams" by Isabel Allende and "The Last Algorithm" by Andy Weir were included in the list, despite these books not being actual works by the authors. The newspaper confirmed that AI was used in creating the reading list, with a freelancer admitting to using AI for background information but failing to fact-check the final product.

The incident has sparked criticism and embarrassment for the Chicago Sun-Times. The writer responsible for the list has taken full responsibility, stating, "On me 100 percent and I'm completely embarrassed." The Sun-Times Guild expressed horror at the situation and called on Chicago Public Media management to prevent similar occurrences in the future. The newspaper has stated, "We understand this is unacceptable for us to distribute," and is updating its policy to require internal editorial oversight of syndicated content.

Further investigation revealed that the "Heat Index" summer guide, containing the AI-generated reading list, was licensed from King Features, a unit of Hearst. Victor Lim, the vice president of marketing and communications at Chicago Public Media, explained that the Sun-Times historically did not review such newspaper inserts, assuming an editorial process was already in place. The incident highlights the dangers of relying on AI without proper human oversight and fact-checking, raising concerns about the increasing use of AI in journalism and the potential for spreading misinformation.

Recommended read:
References :
  • PCMag Middle East ai: Chicago, Philadelphia Newspapers Publish Reading List With Fake, AI-Generated Books
  • mobinetai.com: Chicago Newspaper Printed Hallucinated Article Recommending Books That Don’t Exist
  • B ? R ? A I ? N: Viral AI-Generated Summer Guide Printed by Chicago Sun-Times Was Made by Magazine Giant Hearst
  • www.theguardian.com: Chicago Sun-Times accused of using AI to create reading list of books that don’t exist
  • futurism.com: Chicago Newspaper Caught Publishing a "Summer Reads" Guide Full of AI Slop
  • www.404media.co: Chicago Sun-Times prints AI-generated summer reading list with books that don't exist
  • Simon Willison's Weblog: Chicago Sun-Times Prints AI-Generated Summer Reading List With Books That Don't Exist

@www.marktechpost.com //
NVIDIA has recently unveiled several key initiatives aimed at expanding its reach in the AI landscape, particularly in compute capabilities and physical AI applications. The company announced DGX Cloud Lepton, an AI platform with a compute marketplace that connects developers building agentic and physical AI applications with tens of thousands of GPUs from a network of cloud providers. This platform will offer NVIDIA Blackwell and other NVIDIA architecture GPUs, allowing developers to access GPU compute capacity in specific regions for both on-demand and long-term computing, supporting strategic and sovereign AI operational requirements. NVIDIA emphasizes that DGX Cloud Lepton unifies access to cloud AI services and GPU capacity across its compute ecosystem, integrating with its software stack to accelerate and simplify AI application development and deployment.

NVIDIA is also making significant investments in Taiwan, establishing AI supercomputers and an overseas headquarters near Taipei. In partnership with Foxconn, NVIDIA is working with the Taiwanese government to build an "AI factory." Furthermore, it disclosed an AI supercomputer for Taiwan's National Center for High-Performance Computing (NCHC) that will replace the earlier Taiwania 2 system and run on NVIDIA GPU hardware. This new supercomputer, based on NVIDIA's HGX H200 platform with over 1,700 GPUs, aims to provide researchers with enhanced performance for AI workloads. Academic institutions, government agencies, and small businesses in Taiwan will be able to apply for access to this powerful resource to bolster their projects.

In addition to hardware advancements, NVIDIA has introduced Cosmos-Reason1, a suite of AI models designed to advance physical common sense and embodied reasoning in real-world environments. This suite aims to address the current limitations of AI models in understanding and interacting with the physical world. Moreover, NVIDIA unveiled a new AI Blueprint for Video Search and Summarization (VSS), empowering developers to build AI agents that can analyze video streams for various applications, from manufacturing to smart cities. For instance, Pegatron, an electronics manufacturing firm, reported significant reductions in labor costs and defect rates by utilizing AI agents built with AI Blueprint for VSS. The VSS platform leverages NVIDIA's language models and connects to enterprise data to provide accurate and efficient video analysis.

Recommended read:
References :
  • The Register - Software: Nvidia sets up shop in Taiwan with AI supers and a factory full of ambition
  • insideAI News: NVIDIA Announces DGX Cloud Lepton for GPU Access across Multi-Cloud Platforms
  • eWEEK: NVIDIA’s New AI Video Search and Summarization: How It Can Help With Manufacturing, Training, and More
  • MarkTechPost: NVIDIA Releases Cosmos-Reason1: A Suite of AI Models Advancing Physical Common Sense and Embodied Reasoning in Real-World Environments

@www.searchenginejournal.com //
Google is aggressively expanding its artificial intelligence capabilities across its platforms, integrating the Gemini AI model into Search and Android XR smart glasses. The tech giant unveiled the rollout of "AI Mode" in Search for all U.S. users after initial testing in Labs. This move signifies a major shift in how people interact with the search engine, offering a conversational experience akin to consulting an expert.

Google is feeding its latest AI model, Gemini 2.5, into its search algorithms, enhancing features like "AI Overviews," which are now available in over 200 countries and 40 languages and are used by 1.5 billion monthly users. In addition, Gemini 2.5 Pro introduces enhanced reasoning through Deep Think, while AI Mode with Deep Search delivers deeper and more thorough responses. Google is also testing new AI-powered features, including the ability to conduct searches through live video feeds with Search Live.

Google is also re-entering the smart glasses market with Android XR-powered spectacles featuring a hands-free camera and a voice-powered AI assistant. This project, named Astra, allows users to talk back and forth with Search about what they see in real-time with their cameras. These advancements aim to create more personalized and efficient user experiences, marking a new phase in the AI platform shift and solidifying AI's position in search.

Recommended read:
References :
  • Search Engine Journal: Google Expands AI Features in Search: What You Need to Know
  • WhatIs: Google expands Gemini model, Search as AI rivals encroach
  • www.theguardian.com: Google unveils ‘AI Mode’ in the next phase of its journey to change search

Josh Render@tomsguide.com //
Apple is reportedly undertaking a significant overhaul of Siri, rebuilding it from the ground up with a new AI-centric architecture. This move comes after earlier versions of Siri, which relied on AI, did not perform as desired, struggling to provide helpful and effective responses. Attempts to integrate AI capabilities into the older version only resulted in further complications for Apple, with employees noting that fixing one issue often led to additional problems. Recognizing their delayed start in the AI race compared to other tech companies, Apple is now aiming to create a smarter and more conversational Siri, potentially leveraging a large language model developed by its Zurich AI team.

In a notable shift, Apple is also considering opening its operating systems to allow iPhone users in the European Union to choose third-party AI assistants like ChatGPT or Gemini as their default option, effectively replacing Siri. This potential change is reportedly driven by regulatory pressures from the EU, which are pushing Apple to allow more flexibility in its ecosystem. If implemented, this move would align Apple more closely with competitors like Samsung and Google, who already offer more diverse AI options on their devices. The possibility of integrating external AI assistants could also provide Apple users with access to advanced AI features while the company continues to refine and improve its own Siri.

However, Apple's AI strategy is also facing scrutiny on other fronts. The Trump administration previously raised national security concerns over Apple's potential AI deal with Alibaba, specifically regarding the integration of Alibaba's AI technology into iPhones sold in China. These concerns center around the potential implications for national security, data privacy, and the broader geopolitical landscape, given the Chinese government's regulations on data sharing and content control. While Apple aims to comply with local regulations and compete more effectively in the Chinese market through this partnership, the US government worries that it could inadvertently aid China's AI development and expose user data to potential risks.

Recommended read:
References :
  • thetechbasic.com: Apple Is Rebuilding Siri from Scratch with Smarter AI
  • www.techradar.com: Apple could soon let iPhone owners use alternative voice assistants to Siri, but you can call up Gemini or ChatGPT right now with this simple hack
  • The Tech Basic: Apple Is Rebuilding Siri from Scratch with Smarter AI
  • www.tomsguide.com: Apple could soon allow iPhone users to ditch Siri as the default assistant for ChatGPT or Gemini — if you’re in the EU
  • www.techradar.com: Apple’s ‘AI crisis’ could mean EU users will have the option to swap Siri for another default voice assistant
  • The Tech Portal: Trump administration flags national security concerns over Apple’s AI deal with Alibaba: Report
  • Techloy: Apple might soon let users in Europe replace Siri

@zdnet.com //
Microsoft is intensifying its efforts to enhance the security and trustworthiness of AI agents, announcing significant advancements at Build 2025. These moves are designed to empower businesses and individuals to create custom-made AI systems with improved safeguards. A key component of this initiative is the extension of Zero Trust principles to secure the agentic workforce, ensuring that AI agents operate within a secure and controlled environment.

Windows 11 is set to receive native Model Context Protocol (MCP) support, complete with new MCP Registry and MCP Server functionalities. This enhancement aims to streamline the development process for agentic AI experiences, making it easier for developers to build Windows applications with robust AI capabilities. The MCP, an open standard, facilitates seamless interaction between AI models and data residing outside specific applications, enabling apps to share contextual information that AI tools and agents can utilize effectively. Microsoft is introducing the MCP Registry as a secure and trustworthy source for AI agents to discover accessible MCP servers on Windows devices.
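
For context, MCP is a JSON-RPC 2.0-based protocol: a client first asks a server which tools it exposes, then calls one with arguments. The sketch below shows the rough shape of those two messages; the tool name and arguments are hypothetical, and real exchanges carry additional fields defined by the MCP specification.

```python
import json

# Rough shape of an MCP exchange (JSON-RPC 2.0). The tool name and arguments
# are hypothetical examples, not part of any specific Windows MCP server.
list_tools_request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

call_tool_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "search_files",                      # a tool the app exposes
        "arguments": {"query": "quarterly report"},  # contextual data for the agent
    },
}

print(json.dumps(call_tool_request, indent=2))
```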

In related news, GitHub and Microsoft are collaborating with Anthropic to advance the MCP standard. This partnership will see both companies adding first-party support across Azure and Windows, assisting developers in exposing app features as MCP servers. Further improvements will focus on bolstering security and establishing a registry to list trusted MCP servers. Microsoft Entra Agent ID, an extension of industry-leading identity management and access capabilities, will also be introduced to provide enhanced security for AI agents. These strategic steps underscore Microsoft's commitment to securing the agentic workforce and facilitating the responsible development and deployment of AI technologies.

Recommended read:
References :
  • www.windowscentral.com: Microsoft takes big step towards agentic Windows AI experiences with native Model Context Protocol support
  • www.zdnet.com: Trusting AI agents to deal with your data is hard, and these features seek to make it easier.
  • www.eweek.com: Microsoft’s Big Bet on AI Agents: Model Context Protocol in Windows 11
  • eWEEK: Microsoft’s Big Bet on AI Agents: Model Context Protocol in Windows 11

@siliconangle.com //
Microsoft Corp. has announced a significant expansion of its AI security and governance offerings, introducing new features aimed at securing the emerging "agentic workforce," where AI agents and humans work collaboratively. The announcement, made at the company’s annual Build developer conference, reflects Microsoft's commitment to addressing the growing challenges of securing AI systems from vulnerabilities like prompt injection, data leakage, and identity sprawl, while also ensuring regulatory compliance. This expansion involves integrating Microsoft Entra, Defender, and Purview directly into Azure AI Foundry and Copilot Studio, enabling organizations to secure AI applications and agents throughout their development lifecycle.

Leading the charge is the launch of Entra Agent ID, a new centralized solution for managing the identities of AI agents built in Copilot Studio and Azure AI Foundry. This system automatically assigns each agent a secure and trackable identity within Microsoft Entra, providing security teams with visibility and governance over these nonhuman actors within the enterprise. The integration extends to third-party platforms through partnerships with ServiceNow Inc. and Workday Inc., supporting identity provisioning across human resource and workforce systems. By unifying oversight of AI agents and human users within a single administrative interface, Entra Agent ID lays the groundwork for broader nonhuman identity governance across the enterprise.

In addition, Microsoft is integrating security insights from Microsoft Defender for Cloud directly into Azure AI Foundry, providing developers with AI-specific threat alerts and posture recommendations within their development environment. These alerts cover more than 15 detection types, including jailbreaks, misconfigurations, and sensitive data leakage. This integration aims to facilitate faster response to evolving threats by removing friction between development and security teams. Furthermore, Purview, Microsoft’s integrated data security, compliance, and governance platform, is receiving a new software development kit that allows developers to embed policy enforcement, auditing, and data loss prevention into AI systems, ensuring consistent data protection from development through production.

Recommended read:
References :
  • Techmeme: Microsoft expands Entra, Defender, and Purview, embedding them directly into Azure AI Foundry and Copilot Studio to help organizations secure AI apps and agents (Duncan Riley/SiliconANGLE)
  • SiliconANGLE: Microsoft Corp. today unveiled a major expansion of its artificial intelligence security and governance offerings with the introduction of new capabilities designed to secure the emerging "agentic workforce," a world where AI agents and humans collaborate and work together.
  • www.zdnet.com: Trusting AI agents to deal with your data is hard, and these features seek to make it easier.
  • siliconangle.com: Microsoft expands AI platform security with new identity protection threat alerts and data governance

@www.analyticsvidhya.com //
Google's I/O 2025 event was a showcase of cutting-edge advancements in artificial intelligence, particularly in generative media models and tools. CEO Sundar Pichai highlighted the company's milestones before unveiling a suite of AI-powered innovations, including upgrades to existing models and entirely new creative tools. Among the most anticipated announcements were Veo 3, Imagen 4, and Flow, all designed to fuel creativity and transform the way media is created. These tools are aimed at both seasoned professionals and aspiring storytellers, democratizing access to advanced filmmaking capabilities.

The newly launched Flow is positioned as an AI-powered filmmaking tool intended to bring movie ideas to life. It leverages Google's AI models, including Veo and Imagen, to generate videos from narrative prompts. Users can input text in natural language to describe a scene, and Flow will create the visual elements, allowing for exploration of storytelling ideas without the need for extensive filming or manual storyboard creation. Flow also provides the ability to integrate user-created assets, enabling consistent character and image integration within the video.

Beyond basic scene generation, Flow offers advanced controls for manipulating camera angles, perspectives, and motion. Editing tools allow for easy adjustments to focus on specific details or expand the shot to capture more action. This level of control empowers filmmakers to fine-tune their creations and realize their vision with precision. Flow was introduced as an experimental product at Google I/O on May 20 and is expected to be available to subscribers of Google AI Pro.

Recommended read:
References :
  • Google DeepMind Blog: Fuel your creativity with new generative media models and tools
  • Analytics Vidhya: Google I/O 2025: AI Mode on Google Search, Veo 3, Imagen 4, Flow, Gemini Live, and More
  • TestingCatalog: Google prepares to launch Flow, a new video editing tool, at I/O 2025
  • www.techradar.com: Want to be the next Spielberg? Google’s AI-powered Flow could bring your movie ideas to life

@zdnet.com //
Google is expanding access to its AI-powered research assistant, NotebookLM, with the launch of a standalone mobile app for Android and iOS devices. This marks a significant step for NotebookLM, transitioning it from a web-based beta tool to a more accessible platform for mobile users. The app retains core functionalities like source-grounded summaries and interactive Q&A, while also introducing new audio-first features designed for on-the-go content consumption. This release aligns with Google's broader strategy to integrate AI into its products, offering users a flexible way to absorb and interact with structured knowledge.

The NotebookLM mobile app places a strong emphasis on audio interaction, featuring AI-generated podcast-style summaries that can be played directly from the project list. Users can generate these audio overviews with a quick action button, creating an experience akin to a media player. The app also supports interactive mode during audio sessions, allowing users to ask questions mid-playback and participate in live dialogue. This focus on audio content consumption and interaction differentiates the mobile app and suggests that passive listening and educational use are key components of the intended user experience.

The mobile app mirrors the web-based layout, offering functionalities across Sources, Chat, and Interactive Assets, including Notes, Audio Overviews, and Mind Maps. Users can now add sources directly from their mobile devices by using the "Share" button in any app. The new NotebookLM app aims to be a research assistant that is accessible to students, researchers, and content creators, providing a mobile solution for absorbing structured knowledge.

Recommended read:
References :
  • TestingCatalog: Google launches NotebookLM mobile app with audio-first features on mobile
  • www.tomsguide.com: Google just added NotebookLM to Android — here's how it can level up your note-taking
  • www.zdnet.com: Google's popular AI tool gets its own Android app - how to use NotebookLM on your phone
  • THE DECODER: Google launches NotebookLM mobile app for Android and iOS
  • www.marktechpost.com: Google AI Releases Standalone NotebookLM Mobile App with Offline Audio and Seamless Source Integration

@owaspai.org //
References: OWASP, Bernard Marr
The Open Worldwide Application Security Project (OWASP) is actively shaping the future of AI regulation through its AI Exchange project. This initiative fosters collaboration between the global security community and formal standardization bodies, driving the creation of AI security standards designed to protect individuals and businesses while encouraging innovation. By establishing a formal liaison with international standardization organizations like CEN/CENELEC, OWASP is enabling its vast network of security professionals to directly contribute to the development of these crucial standards, ensuring they are practical, fair, and effective.

OWASP's influence is already evident in the development of key AI security standards, notably impacting the AI Act, a European Commission initiative. Through the contributions of experts like Rob van der Veer, who founded the OWASP AI Exchange, the project has provided significant input to ISO/IEC 27090, the global standard on AI security guidance. The OWASP AI Exchange serves as an open-source platform where experts collaborate to shape these global standards, ensuring a balance between strong security measures and the flexibility needed to support ongoing innovation.

The OWASP AI Exchange provides over 200 pages of practical advice and references on protecting AI and data-centric systems from threats. The resource serves as a practical reference for professionals and actively feeds into international standards, demonstrating consensus on AI security and privacy through collaboration with key institutes and Standards Development Organizations (SDOs). The foundation of OWASP's approach is risk-based thinking: security measures are tailored to specific contexts rather than drawn from a one-size-fits-all checklist, addressing the critical need for clear guidance and effective regulation in the rapidly evolving landscape of AI security.

Recommended read:
References :
  • OWASP: OWASP Enables AI Regulation That Works with OWASP AI Exchange
  • Bernard Marr: Take These Steps Today To Protect Yourself Against AI Cybercrime

Sean Michael@AI News | VentureBeat //
Windsurf, an AI coding startup reportedly on the verge of being acquired by OpenAI for a staggering $3 billion, has launched SWE-1, its first in-house small language model tailored specifically for software engineering. The move signals a shift toward software engineering-native AI models designed to tackle the complete software development workflow. With SWE-1, Windsurf aims to accelerate software engineering as a whole, not just coding.

The SWE-1 family includes models like SWE-1-lite and SWE-1-mini, designed to perform tasks beyond generating code. Unlike general-purpose AI models adapted for coding, SWE-1 is built to address the entire spectrum of software engineering activities, including reviewing, committing, and maintaining code over time. Built to run efficiently on consumer hardware without relying on expensive cloud infrastructure, the models offer developers the freedom to adapt them as needed under a permissive license.

SWE-1's key innovation lies in its "flow awareness," which enables the AI to understand and operate within the complete timeline of development work. According to Windsurf, user feedback indicates that existing coding models perform well when closely guided but tend to miss things over longer stretches of work. The new models aim to support developers across the multiple surfaces, incomplete work states, and long-running tasks that characterize real-world software development.

Recommended read:
References :
  • Shelly Palmer: Windsurf, the AI coding startup that is reportedly in the process of being acquired by OpenAI for $3 billion, just launched SWE-1: its first in-house small language model designed specifically for software engineering.
  • AI News | VentureBeat: Windsurf's new SWE-1 AI models tackle the complete software engineering workflow, potentially reducing development cycles and technical debt.
  • Maginative: Windsurf launches SWE-1, its in-house, vertically integrated model family built specifically for software engineering—not just coding.
  • devops.com: Windsurf Launches SWE-1: AI Models Built for the Entire Software Engineering Process
  • MarkTechPost: Windsurf Launches SWE-1: A Frontier AI Model Family for End-to-End Software Engineering
  • computational-intelligence.blogspot.com: Windsurf Launches SWE-1, Homegrown AI Models for Software Engineering
  • TestingCatalog: Discover Windsurf's new Wave 9 SWE-1 AI model, optimised for real-time, on-device applications. Enjoy low-latency performance on mobile.

@www.bigdatawire.com //
Starburst is enhancing its data platform with new agentic AI capabilities, marking a significant step in the evolution of enterprise AI. These updates include the introduction of Starburst AI Workflows, a suite designed to expedite AI experimentation and deployment, alongside Starburst AI Agent, a pre-built natural language interface for the platform. This move aims to empower data analysts and application-layer AI agents, enabling them to extract faster and more insightful business insights from the data lakehouse. Starburst is also launching Starburst Data Catalog, a modern enterprise-grade metastore solution purpose-built to replace Hive Metastore in Starburst Enterprise.

The new AI capabilities address the growing need for enterprises to leverage AI for better-informed decision-making and increased efficiency. With fragmented data spread across various clouds and formats, many organizations struggle to build effective AI workflows. Starburst's platform updates aim to remove the friction between data and AI, enabling enterprise data teams to rapidly build AI applications and analytics on a single, governed foundation. Starburst says this helps enterprises speed up AI adoption, reduce costs, and realize value faster by providing access to all their data wherever it lives, across cloud, on-premises, or hybrid environments, without requiring it to be moved or migrated before building, training, or tuning AI models.

Justin Borgman, CEO and co-founder of Starburst, emphasized that AI's power is directly tied to the data it can access. The company's aim is to remove the barriers between data and AI by bringing distributed, hybrid data lakehouse capabilities, enabling enterprise data teams to rapidly build AI and analytics on a governed foundation. Kevin Petrie, an analyst at BARC U.S., noted the significance of Starburst's new tools, highlighting their focus on addressing key risks associated with AI development, such as data access, quality, privacy, and incompatible systems.

Recommended read:
References :
  • AiThority: Starburst Unveils New AI Platform Capabilities to Accelerate Enterprise AI and Agents
  • www.bigdatawire.com: Starburst Brings AI Agents Into Data Platform
  • WhatIs: Addition of new AI capabilities shows Starburst's growth
  • aithority.com: Starburst Unveils New AI Platform Capabilities to Accelerate Enterprise AI and Agents

@insidehpc.com //
NVIDIA and Dataiku are collaborating on the NVIDIA AI Data Platform reference design, built to support organizations' generative AI strategies by simplifying unstructured data storage and access. The collaboration aims to democratize analytics, models, and agents within enterprises by enabling more users to harness high-performance NVIDIA infrastructure for transformative innovation. As a validated component of the full-stack reference architecture, any agentic application developed in Dataiku will work on the latest NVIDIA-Certified Systems, including NVIDIA RTX PRO Server and NVIDIA HGX B200 systems.

DDN (DataDirect Networks) also announced its collaboration with NVIDIA on the NVIDIA AI Data Platform reference design. This collaboration aims to simplify how unstructured data is stored, accessed, and activated to support generative AI strategies. The DDN-NVIDIA offering combines DDN Infinia, an AI-native data platform, with NVIDIA NIM and NeMo Retriever microservices, NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs, and NVIDIA Networking. This enables enterprises to deploy Retrieval-Augmented Generation (RAG) pipelines and intelligent AI applications grounded in their own proprietary data—securely, efficiently, and at scale.

Starburst is also adding agentic AI capabilities to its platform, including a pre-built agent for insight exploration as well as tools and technology for building custom agents. Starburst AI Workflows bundles several of these capabilities, including vector-native AI search, AI SQL functions, and AI model access governance. The AI search functions include a built-in vector store that lets users convert data into vector embeddings and then search against them. Starburst stores the vector embeddings in Apache Iceberg, the table format around which it has built its lakehouse.
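
To make the idea of vector-native search concrete, here is a minimal, generic sketch of how embedding-based retrieval works: documents are represented as vectors and ranked by cosine similarity to a query vector. This illustrates the concept only; it uses plain NumPy with toy vectors and is not Starburst's SQL API or its Iceberg-backed vector store.

```python
# Generic illustration of vector search: embed documents as vectors,
# then rank them by cosine similarity to a query vector.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy "embeddings" stand in for vectors a real embedding model would produce.
doc_vectors = {
    "q2_sales_report": np.array([0.90, 0.10, 0.00]),
    "hr_onboarding":   np.array([0.10, 0.80, 0.20]),
    "churn_analysis":  np.array([0.70, 0.20, 0.30]),
}
query_vector = np.array([0.80, 0.15, 0.10])  # e.g. an embedded "revenue trends" query

# Rank documents by similarity to the query, highest first.
ranked = sorted(
    doc_vectors.items(),
    key=lambda kv: cosine_similarity(query_vector, kv[1]),
    reverse=True,
)
for name, vec in ranked:
    print(name, round(cosine_similarity(query_vector, vec), 3))
```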

@quantumcomputingreport.com //
References: AI News | VentureBeat ...
NVIDIA is significantly advancing quantum and AI research through strategic collaborations and cutting-edge technology. The company is partnering with Japan’s National Institute of Advanced Industrial Science and Technology (AIST) to launch ABCI-Q, a new supercomputing system focused on hybrid quantum-classical computing. This research-focused system is designed to support large-scale operations, utilizing the power of 2,020 NVIDIA H100 GPUs interconnected with NVIDIA’s Quantum-2 InfiniBand platform. The ABCI-Q system will be hosted at the newly established Global Research and Development Center for Business by Quantum-AI Technology (G-QuAT).

The ABCI-Q infrastructure integrates CUDA-Q, an open-source platform that orchestrates large-scale quantum-classical computing, enabling researchers to simulate and accelerate quantum applications. This hybrid setup combines GPU-based simulation with physical quantum processors from vendors such as Fujitsu (superconducting qubits), QuEra (neutral atom qubits), and OptQC (photonic qubits). This modular architecture will allow for testing quantum error correction, developing algorithms, and refining co-design strategies, which are all critical for future quantum systems. The system serves as a testbed for evaluating quantum-GPU workflows and advancing practical use cases across multiple hardware modalities.
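
For a sense of what a CUDA-Q workflow looks like in practice, below is a minimal sketch using the open-source cudaq Python package: a small quantum kernel is defined, executed on a GPU-backed simulator target, and sampled from the classical side. The target name and shot count are illustrative defaults, not ABCI-Q-specific settings.

```python
# Minimal CUDA-Q sketch: define a quantum kernel, run it on a
# GPU-accelerated simulator, and read classical measurement counts.
import cudaq

# Select NVIDIA's GPU-backed statevector simulator; omit this line
# to fall back to the default CPU simulator on machines without a GPU.
cudaq.set_target("nvidia")

@cudaq.kernel
def bell():
    qubits = cudaq.qvector(2)      # allocate two qubits
    h(qubits[0])                   # put the first qubit in superposition
    x.ctrl(qubits[0], qubits[1])   # entangle them with a controlled-X
    mz(qubits)                     # measure both qubits

# Classical side: sample the kernel and inspect the measurement counts.
counts = cudaq.sample(bell, shots_count=1000)
print(counts)  # expected to be dominated by '00' and '11' outcomes
```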

NVIDIA is also expanding its presence in Taiwan, powering a new supercomputer at the National Center for High-Performance Computing (NCHC). This supercomputer is projected to deliver eight times the AI performance compared to the center's previous Taiwania 2 system. The new supercomputer will feature NVIDIA HGX H200 systems with over 1,700 GPUs, two NVIDIA GB200 NVL72 rack-scale systems, and an NVIDIA HGX B300 system built on the NVIDIA Blackwell Ultra platform, all interconnected by NVIDIA Quantum InfiniBand networking. This enhanced infrastructure is expected to significantly boost research in AI development, climate science, and quantum computing, fostering technological autonomy and global AI leadership for Taiwan.

Recommended read:
References :
  • : Japan’s National Institute of Advanced Industrial Science and Technology (AIST), in collaboration with NVIDIA, has launched ABCI-Q, a new research-focused supercomputing system designed to support large-scale hybrid quantum-classical computing.
  • AI News | VentureBeat: Nvidia is powering a supercomputer at Taiwan’s National Center for High-Performance Computing that’s set to deliver over eight times more AI performance than before.

@blogs.nvidia.com //
Nvidia is significantly expanding its AI infrastructure initiatives by introducing NVLink Fusion, a technology that allows for the integration of non-Nvidia CPUs and AI accelerators with Nvidia's GPUs. This strategic move aims to provide customers with more flexible and customizable AI system configurations, broadening Nvidia's reach in the rapidly growing data center market. Key partnerships are already in place with companies like Qualcomm, Fujitsu, Marvell, and MediaTek, as well as design software firms Cadence and Synopsys, to foster a robust and open ecosystem. This approach allows Nvidia to remain central to the future of AI infrastructure, even when systems incorporate chips from other vendors.

Nvidia is also solidifying its presence in Taiwan, establishing a new office complex near Taipei that will serve as its overseas headquarters. The company is collaborating with Foxconn to build an "AI factory" in Taiwan, which will utilize 10,000 Nvidia Blackwell GPUs. This facility is intended to bolster Taiwan's AI infrastructure and support local organizations in adopting AI technologies across various sectors. TSMC, Nvidia's primary chip supplier, plans to leverage this supercomputer for research and development, aiming to develop the next generation of AI chips.

Furthermore, Nvidia is working with Taiwan's National Center for High-Performance Computing (NCHC) to develop a new AI supercomputer. This system will feature over 1,700 GPUs, GB200 NVL72 rack-scale systems, and an HGX B300 system based on the Blackwell Ultra platform, all connected via Quantum InfiniBand networking. Expected to launch later this year, the supercomputer promises an eightfold performance increase over its predecessor for AI workloads, providing researchers with enhanced capabilities to advance their projects. Academic institutions, government agencies, and small businesses in Taiwan will be able to apply for access to the supercomputer to accelerate their AI initiatives.

@the-decoder.com //
Google has launched Jules, a coding agent designed to automate tasks such as bug fixing, documentation, and testing. This new tool enters public beta and is available globally, giving developers the chance to have AI file pull requests on their behalf. Jules leverages Google's Gemini 2.5 Pro model and offers a starter tier with five free tasks per day, positioning it as a direct competitor to GitHub Copilot's coding agent and OpenAI's Codex.

Jules differentiates itself by spinning up a disposable Cloud VM, cloning the target repository, and creating a multi-step plan before making changes to any files. The agent can handle tasks like bumping dependencies, refactoring code, adding documentation, writing tests, and addressing open issues. Each change is presented as a standard GitHub pull request for human review. Google emphasizes that Jules "understands your codebase" due to the multimodal Gemini model, which allows it to reason over large file graphs and project history.

The release of Jules in beta signifies a broader shift from code-completion tools to full agentic development. Jules is available to anyone with a Google account and a linked GitHub account, and tasks can be assigned directly from an issue using the assign-to-jules label. This move reflects the increasing trend of AI-assisted programming and automated agents in software development, with both Google and Microsoft vying for dominance in this growing market.

@www.eweek.com //
Microsoft is embracing the Model Context Protocol (MCP) as a core component of Windows 11, aiming to transform the operating system into an "agentic" platform. This integration will enable AI agents to interact seamlessly with applications, files, and services, streamlining tasks for users without requiring manual inputs. Announced at the Build 2025 developer conference, this move will allow AI agents to carry out tasks across apps and services.

MCP functions as a lightweight, open-source protocol that allows AI agents, apps, and services to share information and access tools securely. It standardizes communication, making it easier for different applications and agents to interact, whether they are local tools or online services. Windows 11 will enforce multiple security layers, including proxy-mediated communication and tool-level authorization.
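
As a rough illustration of what exposing an app feature as an MCP server can look like, here is a minimal sketch using the open-source MCP Python SDK (the mcp package). The server name and tool are hypothetical, and the Windows-specific registration and security layers described above are not shown.

```python
# Minimal sketch of an MCP server exposing a single tool, using the
# open-source MCP Python SDK (package: "mcp"). Hypothetical example only.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("notes-demo")  # hypothetical server name

@mcp.tool()
def search_notes(query: str) -> list[str]:
    """Return note titles matching the query (stubbed data for illustration)."""
    notes = ["Build 2025 recap", "MCP security checklist", "Weekly planning"]
    return [title for title in notes if query.lower() in title.lower()]

if __name__ == "__main__":
    # FastMCP serves over stdio by default, the common transport for
    # local MCP servers that agent hosts launch as subprocesses.
    mcp.run()
```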

Microsoft's commitment to AI agents also includes the NLWeb project, designed to transform websites into conversational interfaces. NLWeb enables users to interact directly with website content through natural language, without needing apps or plugins. Furthermore, the NLWeb project turns supported websites into MCP servers, allowing agents to discover and utilize the site's content. GenAIScript has also been updated to enhance the security of Model Context Protocol (MCP) tools, addressing vulnerabilities. Options for tool signature hashing and prompt injection detection via content scanners provide safeguards across tool definitions and outputs.

Recommended read:
References :
  • Ken Yeung: AI Agents Are Coming to Windows—Here’s How Microsoft Is Making It Happen
  • www.eweek.com: Microsoft’s Big Bet on AI Agents: Model Context Protocol in Windows 11
  • www.marktechpost.com: Critical Security Vulnerabilities in the Model Context Protocol (MCP): How Malicious Tools and Deceptive Contexts Exploit AI Agents
  • GenAIScript | Blog: MCP Tool Validation
  • Ken Yeung: Microsoft’s NLWeb Project Turns Websites into Conversational Interfaces for AI Agents

@www.theatlantic.com //
OpenAI has reversed its decision to transition into a fully for-profit entity, opting instead to restructure as a public benefit corporation (PBC). This dramatic pivot was influenced by legal and civic discussions, signaling a significant shift in the company's approach to artificial general intelligence (AGI) development and funding. OpenAI was founded in part as a counter to the perils of prioritizing profit in the development of powerful AI, yet a newly obtained letter from OpenAI lawyers to California Attorney General Rob Bonta reveals the company's concern over anything that might hinder its ability to raise substantial capital.

The decision to remain under the control of its non-profit board comes after facing backlash from various stakeholders, including civic leaders and the offices of the Attorney General of Delaware and California. The shift to a PBC structure is aimed at balancing the interests of shareholders with the company's mission of ensuring that AGI benefits humanity. This move acknowledges the need for greater transparency and accountability in AI development, while also navigating the complex landscape of attracting investment and fostering innovation.

OpenAI's restructured commercial arm, operating as a public benefit corporation, will be legally obligated to consider broader social and environmental goals, while still pursuing profit. This pragmatic evolution reflects OpenAI's recognition that the path to achieving its ambitious goals requires a more nuanced approach, addressing both financial sustainability and societal impact. The decision could have profound implications for the future of AI funding, AGI development, and global social systems, possibly setting the stage for the creation of the most powerful non-profit in human history.

Recommended read:
References :
  • Last Week in AI: OpenAI says non-profit will remain in control after backlash, OpenAI launches Codex, an AI coding agent, in ChatGPT, and more!
  • www.marketingaiinstitute.com: OpenAI just pulled off a dramatic pivot—and it could have profound implications for the future of artificial general intelligence (AGI), AI funding, and even global social systems.
  • Last Week in AI: OpenAI has decided not to transition from a nonprofit to a for-profit entity, instead opting to become a public benefit corporation influenced by legal and civic discussions.