News from the AI & ML world

DeeperML

Sean Michael@AI News | VentureBeat //
Global spending on generative AI is predicted to surge to $644 billion in 2025, according to a new report from analyst firm Gartner. This figure represents a substantial 76.4% year-over-year increase compared to 2024. The report, which highlights the escalating adoption and investment in generative AI, provides insights into where enterprises are allocating their resources and where they might gain the most value. Hardware is expected to dominate spending, claiming a staggering 80% of all generative AI expenditure in 2025, driven primarily by manufacturers producing AI-enabled devices.

The forecast indicates that devices will account for $398.3 billion, servers will reach $180.6 billion, software spending will follow at $37.2 billion, and services will total $27.8 billion. This breakdown underscores the significance of hardware in the generative AI landscape. Gartner's analysis suggests that hardware's dominance will intensify over time, as generative AI becomes embedded functionality within software, shifting attributable spending towards hardware.

Recommended read:
References :
  • AI News | VentureBeat: Gartner forecasts gen AI spending to hit $644B in 2025: What it means for enterprise IT leaders

Microsoft Threat@Microsoft Security Blog //
Microsoft has uncovered 20 previously unknown vulnerabilities in the GRUB2, U-Boot, and Barebox open-source bootloaders using its AI-powered Security Copilot. These bootloaders are critical components, with GRUB2 commonly used in Linux distributions like Ubuntu, and U-Boot and Barebox prevalent in embedded and IoT devices. The identified vulnerabilities include integer and buffer overflows in filesystem parsers, command flaws, and a side-channel in cryptographic comparison, potentially enabling threat actors to gain control and execute arbitrary code.
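
To make the bug class concrete, here is a conceptual sketch (not code from GRUB2, U-Boot, or Barebox): a filesystem parser computes an allocation size in 32-bit arithmetic, the multiplication wraps, and the parser later trusts the attacker-supplied count. Python integers do not overflow, so the wrap is made explicit with a mask; in C the same arithmetic overflows silently.

    # Conceptual illustration of a 32-bit integer-overflow bug in a
    # filesystem parser; the field names and values are hypothetical.
    U32_MAX = 0xFFFFFFFF

    def directory_alloc_size(entry_count: int, entry_size: int) -> int:
        # Buggy pattern: the allocation size wraps in 32-bit arithmetic,
        # while the later copy loop still trusts the attacker-supplied count.
        return (entry_count * entry_size) & U32_MAX

    # A crafted filesystem image picks values whose product exceeds 2**32:
    count, size = 0x01000001, 0x100                 # product = 2**32 + 0x100
    print(hex(directory_alloc_size(count, size)))   # 0x100 -- far too small a buffer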

Water Gamayun, a suspected Russian hacking group, has been linked to the exploitation of CVE-2025-26633 (MSC EvilTwin) to deploy SilentPrism and DarkWisp. The group uses malicious provisioning packages, signed .msi files, and Windows MSC files to deliver information stealers and backdoors. These backdoors, SilentPrism and DarkWisp, enable persistence, system reconnaissance, data exfiltration, and remote command execution. The threat actors transitioned to their own infrastructure for staging and command-and-control purposes after using a GitHub repository to push various kinds of malware families.

Recommended read:
References :
  • The Hacker News: The threat actors behind the zero-day exploitation of a recently-patched security vulnerability in Microsoft Windows have been found to deliver two new backdoors called SilentPrism and DarkWisp. The activity has been attributed to a suspected Russian hacking group called Water Gamayun, which is also known as EncryptHub and LARVA-208. "The threat actor deploys payloads primarily by means of
  • Microsoft Security Blog: Using Microsoft Security Copilot to expedite the discovery process, Microsoft has uncovered several vulnerabilities in multiple open-source bootloaders impacting all operating systems relying on Unified Extensible Firmware Interface (UEFI) Secure Boot. Through a series of prompts, we identified and refined security issues, ultimately uncovering an exploitable integer overflow vulnerability in the GRUB2, U-Boot, and Barebox bootloaders.
  • bsky.app: Microsoft used its AI-powered Security Copilot to discover 20 previously unknown vulnerabilities in the GRUB2, U-Boot, and Barebox open-source bootloaders. https://www.bleepingcomputer.com/news/security/microsoft-uses-ai-to-find-flaws-in-grub2-u-boot-barebox-bootloaders/
  • BleepingComputer: Microsoft uses AI to find flaws in GRUB2, U-Boot, Barebox bootloaders

Michael Nuñez@venturebeat.com //
Runway AI Inc. has launched Gen-4, its latest AI video generation model, addressing the significant challenge of maintaining consistent characters and objects across different scenes. This new model represents a considerable advancement in AI video technology and improves the realism and usability of AI-generated videos. Gen-4 allows users to upload a reference image of an object to be included in a video, along with design instructions, and ensures that the object maintains a consistent look throughout the entire clip.

The Gen-4 model empowers users to place any object or subject in different locations while maintaining consistency, and even allows for modifications such as changing camera angles or lighting conditions. The model combines visual references with text instructions to preserve styles throughout videos. Gen-4 is currently available to paying subscribers and Enterprise customers, with additional features planned for future updates.

Alex Philippidis@genengnews.com //
Isomorphic Labs, a spinout of Google's DeepMind, has successfully secured $600 million in its first external funding round. The investment, led by Thrive Capital with participation from GV and follow-on capital from existing investor Alphabet, aims to accelerate the company's AI-driven drug design programs and advance therapeutic programs into clinical trials. Isomorphic Labs, founded in 2021 by Sir Demis Hassabis, is focused on developing treatments for millions of patients worldwide by applying AI toward reimagining and accelerating drug discovery, with internal programs primarily focused on oncology and immunology.

The funding will be used to further develop Isomorphic Labs' next-generation AI drug design engine and expand its staff. Their drug discovery portfolio includes small molecule programs being developed with Eli Lilly and Novartis. A key technology is AlphaFold 3, developed jointly with Google DeepMind, which predicts molecular structures with unprecedented precision. CEO Sir Demis Hassabis stated the funding will "turbocharge" their AI drug design engine and advance their programs into clinical development, furthering their mission to solve all disease with AI.

Recommended read:
References :
  • www.genengnews.com: DeepMind Spinout Isomorphic Labs Raises $600M Toward AI Drug Design
  • Crunchbase News: Alphabet-Backed Isomorphic Labs Raises $600M For AI Drug Development
  • Maginative: Isomorphic Labs Secures $600M to Accelerate AI-Powered Drug Discovery
  • SiliconANGLE: Alphabet spinout Isomorphic Labs raises $600M for its AI drug design engine

Michal Langmajer@Fello AI //
OpenAI has secured a monumental $40 billion funding round, led by SoftBank, which has propelled the company's valuation to $300 billion. This marks the largest funding round for a private tech company in history. The substantial investment highlights the growing importance of AI and the company's leadership position in the race for enterprise AI dominance.

The funding will support OpenAI's expanded research and development efforts as well as upgrades to computational infrastructure. Specifically, a portion of the capital is earmarked for OpenAI's ambitious Stargate infrastructure project, a joint venture with SoftBank and Oracle to establish a network of AI data centers across the United States. In a strategic shift, OpenAI also plans to release a new "open-weight" language model, allowing developers to run it on their own hardware.

Recommended read:
References :
  • Fello AI: OpenAI Secures Historic $40 Billion Funding Round
  • AI News | VentureBeat: $40B into the furnace: As OpenAI adds a million users an hour, the race for enterprise AI dominance hits a new gear
  • InnovationAus.com: OpenAI closes $64bn funding round to boost AI efforts
  • Maginative: OpenAI Secures Record $40 Billion in Funding, Reaching $300 Billion Valuation
  • www.theguardian.com: OpenAI raises up to $40bn in record-breaking deal with SoftBank
  • The Verge: OpenAI just raised another $40 billion round led by SoftBank
  • SiliconANGLE: OpenAI bags $40B in funding, increasing its post-money valuation to $300B
  • techxplore.com: OpenAI says it raised $40 bn at valuation of $300 bn
  • THE DECODER: OpenAI nears completion of multi-billion dollar funding round
  • Kyle Wiggers: OpenAI raises $40B at $300B post-money valuation
  • THE DECODER: Softbank leads OpenAI's $40 billion funding round

Matthias Bastian@THE DECODER //
Amazon has launched Nova Act, an AI agent toolkit designed to automate workflows by enabling AI agents to perform tasks autonomously within web browsers. The toolkit includes the Nova Act SDK, which helps developers break down complex processes into manageable commands. This allows the agents to handle tasks such as web searching, payment processing, and answering questions based on on-screen content. The release of Nova Act represents Amazon's entry into the growing field of AI agents, which some industry observers consider a potential growth frontier for AI, with implications for automating white-collar jobs.

Developers can use the Nova Act SDK to build AI agents capable of completing step-by-step tasks in a browser, like submitting time-off requests or placing recurring online orders, without relying on APIs. The toolkit also enables developers to add detailed instructions to improve task reliability and integrate API calls. Amazon aims to make its AI models more accessible to developers and tech enthusiasts, positioning Nova Act as a way to explore the capabilities of Amazon Nova and inspire builders to test and implement ideas at scale in Amazon Bedrock.
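
The workflow Amazon describes maps onto a short script: point the agent at a page, then issue small, prescriptive act() calls rather than one open-ended prompt. The sketch below follows the SDK pattern shown in the research preview (package nova_act, class NovaAct); the names may change in later releases, and the URL is a placeholder.

    # Sketch of the Nova Act SDK pattern described above (research preview);
    # package, class, and method names follow the preview docs and may change,
    # and the starting URL is a placeholder.
    from nova_act import NovaAct

    with NovaAct(starting_page="https://intranet.example.com/time-off") as nova:
        nova.act("open the time-off request form")
        nova.act("set the start date to next Monday and the end date to next Friday")
        nova.act("submit the request and confirm the success banner appears")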

Recommended read:
References :
  • Analytics India Magazine: The Nova Act SDK is built to automate workflows by breaking down complex tasks into smaller commands, such as searching, completing checkouts, and answering questions based on on-screen content.
  • THE DECODER: Amazon launches AI agent toolkit with Nova Act SDK
  • Flipboard Tech Desk: Amazon has unveiled Nova Act, a general-purpose AI agent that can take control of a web browser and independently perform some simple actions like making dinner reservations or filling out online forms.
  • GeekWire: ‘Nova Act’ moves Amazon further into the AI agent race
  • TestingCatalog: Discover Amazon's Nova Act, a new AI model for automating web tasks. Released as a research preview, it excels in reliability and developer control. Try it now!
  • WIRED: Amazon's AGI Lab Reveals Its First Work: Advanced AI Agents
  • Quartz: Amazon wants its new AI agent to do stuff on the web for you
  • SiliconANGLE: Amazon.com Inc. today introduced Nova Act, a new artificial intelligence agent that can take control of web browsers and take independent actions.
  • THE DECODER: Nova Act is Amazon's foray into agentic AI that navigates your browser
  • www.it-daily.net: Amazon Nova Act: AI agent for browser control presented
  • Techzine Global: Amazon is making access to its frontier intelligence models easier with the launch of nova.amazon.com.

Michael Nuñez@venturebeat.com //
OpenAI has announced plans to release its first "open-weight" language model since 2019, marking a strategic reversal for the company known for its proprietary AI systems like ChatGPT. Sam Altman, OpenAI’s CEO, revealed the news on X, stating that the model would allow developers to run it on their own hardware, contrasting with OpenAI's cloud-based subscription model. This announcement follows Altman's admission that OpenAI had been "on the wrong side of history" regarding open-source AI, prompted by the release of DeepSeek R1, an open-source model from China.

This move comes as OpenAI faces increasing economic pressure from competitors like DeepSeek and Meta, who are leveraging open-source AI models. These competitors have created an economic reality that OpenAI couldn't ignore, with Meta's Llama models already surpassing one billion downloads. OpenAI reportedly spends $7-8 billion annually on operations, while competitors are utilizing open-source models that operate at a fraction of the cost. The release of an open-weight model signals a potential paradigm shift in the AI industry towards increased collaboration and openness.
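
OpenAI has not yet published the model or its weights, so the following is only a generic sketch of what running an open-weight model on your own hardware typically looks like with the Hugging Face transformers library; the repository id is a placeholder, not an announced release.

    # Generic local-inference sketch for an open-weight model; the model id
    # below is a placeholder, not an announced OpenAI release.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "example-org/open-weight-model"  # hypothetical repository id
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

    prompt = "In one paragraph, explain what an open-weight release lets developers do."
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=120)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))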

Recommended read:
References :
  • Data Science at Home: Is DeepSeek the next big thing in AI? Can OpenAI keep up? And how do we truly understand these massive LLMs?
  • venturebeat.com: OpenAI to release open-source model as AI economics force strategic shift
  • WIRED: Sam Altman Says OpenAI Will Release an ‘Open Weight’ AI Model This Summer
  • Simon Willison's Weblog: We’re planning to release a very capable open language model in the coming months, our first since GPT-2.
  • SiliconANGLE: OpenAI to launch its first ‘open-weights’ model since 2019
  • www.tomsguide.com: OpenAI is planning on launching its first open-weight model in years
  • THE DECODER: OpenAI announces the release of a new open-weight language model. It will have reasoning capabilities and will be available without usage restrictions.

James McKenna@NVIDIA Newsroom //
NVIDIA's Omniverse platform is gaining traction within industrial ecosystems as companies leverage digital twins to train physical AI. The Mega NVIDIA Omniverse Blueprint, now available in preview, empowers industrial enterprises to accelerate the development, testing, and deployment of physical AI. This blueprint provides a reference workflow for combining sensor simulation and synthetic data generation, enabling the simulation of complex human-robot interactions and verification of autonomous systems within industrial digital twins.

At Hannover Messe, leaders from manufacturing, warehousing, and supply chain sectors are showcasing their adoption of the blueprint to simulate robots like Digit from Agility Robotics. They are also demonstrating how industrial AI and digital twins can be used to optimize facility layouts, material flow, and collaboration between humans and robots. NVIDIA ecosystem partners like Delta Electronics, Rockwell Automation, and Siemens are also announcing further integrations with NVIDIA Omniverse and NVIDIA AI technologies at the event, further solidifying Omniverse's role in physical AI development.

Recommended read:
References :
  • NVIDIA Newsroom: Industrial Ecosystem Adopts Mega NVIDIA Omniverse Blueprint to Train Physical AI in Digital Twins
  • NVIDIA Technical Blog: Simulating Robots in Industrial Facility Digital Twins

Alexey Shabanov@TestingCatalog //
Anthropic has unveiled a groundbreaking innovation in AI research with self-coordinating agent networks in their Claude Research Mode. This tool empowers Claude AI to operate as a central authority, capable of constructing and managing multiple helper bots. These bots collaborate to tackle complex challenges, marking a significant shift in how AI research is conducted. By dynamically creating and coordinating AI agents, Anthropic is redefining the possibilities for AI problem-solving in various fields.

The Claude Research Mode operates akin to a human team, where different individuals handle specific tasks. This multi-agent functionality leverages tools like web search, memory, and the ability to create sub-agents. A master agent can delegate tasks to these sub-agents, fostering dynamic and collaborative interactions within a single research flow. These helper bots are designed to enhance problem-solving capabilities by searching the internet using Brave, remembering crucial details, and engaging in careful deliberation before providing answers. This approach promises to transform how AI is applied in science, business, and healthcare.

Recommended read:
References :
  • The Tech Basic: Anthropic Now Redefines AI Research With Self Coordinating Agent Networks
  • TheSequence: The Sequence Radar #521: Anthropic Help US Look Into The Mind of Claude

Ross Kelly@Latest from ITPro //
Sourcetable has launched what it's calling the world's first "self-driving spreadsheet," an AI-powered tool designed to democratize data analysis. The San Francisco-based startup secured $4.3 million in seed funding led by Bee Partners, with participation from investors like Hugging Face CTO Julien Chaumond and GitHub co-founder Tom Preston-Werner. The platform aims to simplify spreadsheet use, making advanced functions accessible to everyone.

The AI-powered spreadsheet allows users to interact with data using natural language, eliminating the need for complex formulas. Users can issue commands through voice or keyboard to analyze data, create financial models, and generate pivot tables. The "autopilot mode" enables the AI to execute multi-step workflows, interpret inconsistent data, and validate outputs in real-time, all while working with existing Excel files. Sourcetable's mission is to empower the 750 million spreadsheet users, many of whom struggle with basic functions, to become proficient data analysts.
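
Sourcetable has not published a developer API in the cited coverage, so the snippet below is only a conceptual pandas sketch of the kind of multi-step workflow its autopilot is described as automating: load an existing Excel file, clean inconsistent values, build a pivot table, and sanity-check the output. The file and column names are hypothetical.

    # Conceptual pandas sketch of the workflow described above; this is not
    # Sourcetable code, and the file and column names are hypothetical.
    import pandas as pd

    df = pd.read_excel("sales.xlsx")                      # existing spreadsheet
    df["region"] = df["region"].str.strip().str.title()   # normalize inconsistent labels
    df = df.dropna(subset=["revenue"])                    # drop rows that cannot be analyzed

    pivot = pd.pivot_table(df, index="region", columns="quarter",
                           values="revenue", aggfunc="sum", fill_value=0)
    assert (pivot >= 0).all().all()                       # simple validation of the output
    print(pivot)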

Recommended read:
References :
  • SiliconANGLE: Sourcetable gets $4.3M in funding to help everyone become a spreadsheet power user
  • Charlie Fink: Sourcetable Launches AI Spreadsheets With $4.3 Million In New Funding
  • www.itpro.com: Sourcetable, a startup behind a ‘self-driving spreadsheet’ tool, wants to replicate the vibe coding trend for data analysts
  • Unite.AI: Sourcetable Raises $4.3M to Launch the World’s First Self-Driving Spreadsheet, Powered by AI

S.Dyema Zandria@The Tech Basic //
Elon Musk has officially merged his AI startup xAI with the social media platform X, formerly known as Twitter, in an all-stock transaction. The deal, announced on Friday, values xAI at $80 billion and X at $33 billion, or $45 billion including debt. This move combines two of Musk's prominent companies, aiming to create a unified AI-social media powerhouse.

Musk stated that the future of xAI and X are now "intertwined," and that this combination will unlock immense potential. The integration will combine data, models, compute, distribution, and talent from both companies. xAI's Grok chatbot is already integrated into X and utilizes the platform's vast user data for training. The combined company will focus on delivering smarter, more meaningful experiences while seeking truth and advancing knowledge.

Recommended read:
References :
  • Pivot to AI: Elon Musk merges xAI and Twitter — the new AOL Time Warner
  • TechInformed: Elon Musk merges social platform X into xAI in $33bn deal
  • www.tomsguide.com: Elon Musk's AI company just engulfed X for $33 billion — here's what that means
  • Antonio Pequeño IV: Elon Musk Says xAI Has Purchased X, Formerly Known As Twitter, For $33 Billion
  • The Rundown AI: Elon's $113B AI-social merger
  • The Tech Basic: Elon Musk Merges xAI and X in $33 Billion Deal to Accelerate AI Integration
  • The Register - Software: Billionaire Elon Musk's xAI is to acquire billionaire Elon Musk's X in a deal that values the former at $80 billion and the latter at $33 billion.…
  • TechCrunch: Elon Musk’s AI startup, xAI, has acquired his social media platform X, formerly known as Twitter, in an all-stock deal, he announced in a post on X Friday. “xAI has acquired X in an all-stock transaction,” Musk said. “The combination values xAI at $80 billion and X at $33 billion ($45B less $12B …
  • www.techrepublic.com: Elon Musk’s xAI Acquires X in $45B Deal to ‘Unlock Immense Potential’
  • www.lemonde.fr: Why Elon Musk is merging his artificial intelligence company xAI with his social media network X
  • THE DECODER: Elon Musk merges xAI and X in all-stock deal

Alexey Shabanov@TestingCatalog //
Google is enhancing its AI ecosystem with new tools and features designed to boost productivity and simplify complex tasks. NotebookLM, Google's AI-powered research assistant, receives major updates including a revamped interface and the "Discover Sources" feature. This feature allows users to search for keywords and import relevant sources into their notebook knowledge base, identifying around 10 results per search and giving users manual control over their knowledge base.

Additionally, Google is integrating AI into travel planning with new tools in Google Maps. Users can now create trip itineraries in AI Overviews by entering travel-related queries into Google, which generates tailored suggestions, flight and hotel results, and an expandable map. Another feature allows Google Maps to analyze trip-related screenshots from platforms like TikTok or Instagram, identifying locations mentioned in the images and compiling them into an itinerary.

Recommended read:
References :
  • eWEEK: These new features include trip itineraries in AI Overviews and screenshot analysis in Google Maps.
  • TestingCatalog: Google's NotebookLM gets major updates, enhancing its AI-powered research capabilities. Discover Sources, revamped interface, and future integrations boost productivity.
  • AI News | VentureBeat: Google’s Gemini 2.5 Pro is the smartest model you’re not using – and 4 reasons it matters for enterprise AI

Kevin Okemwa@windowscentral.com //
Bill Gates, the co-founder of Microsoft, has recently expressed his strong belief in the transformative potential of artificial intelligence (AI) across various sectors. During an interview, Gates stated that AI could even lead to a two-day work week for professionals within the next decade, drastically altering the current work landscape. He highlighted AI's capability to automate tasks and streamline processes, ultimately freeing up individuals to focus on higher-value activities and potentially reducing the required work hours.

Gates also emphasized that humans won't be needed for most things in the future, as AI will be capable of handling repetitive tasks, making things, moving things, and growing food. Furthermore, Gates believes that AI can also significantly improve mental healthcare and education. He suggests AI could help individuals with mild symptoms, improve educational material, and change how creative work is done. Microsoft, which has a big stake in OpenAI, is all in on AI as well and is actively investing in and developing AI solutions.

Recommended read:
References :
  • PCMag Middle East ai: In an interview with Jimmy Fallon, Gates said that humans won’t be needed for “most things” in the coming age of AI.
  • www.laptopmag.com: The Microsoft co-founder is fascinated by AI.
  • www.windowscentral.com: In a recent interview, Microsoft co-founder Bill Gates claimed that AI might present a two-day work opportunity for professionals.
  • www.windowscentral.com: Bill Gates would restart Microsoft as an AI-centric lab after 50 years — "Raising billions of dollars from a few sketch ideas"
  • eWEEK: Bill Gates Predicts AI Will Replace Many Doctors, Teachers Within 10 Years — But These 3 Jobs Are Safe

staff@insideAI News //
Fluidstack announced on March 25, 2025, its collaboration with Borealis Data Center, Dell Technologies, and NVIDIA to deploy and manage exascale GPU clusters across Iceland and Europe. Fluidstack aims to support AI labs, researchers, and enterprises by rapidly deploying high-density GPU supercomputers powered by 100% renewable energy. Borealis Data Center will provide facilities powered by renewable energy in Iceland and the Nordics, leveraging the region's cold climate and geothermal power. Dell PowerEdge XE9680 servers, optimized for AI workloads with NVIDIA HGX H200 and Quantum-2 InfiniBand networking, will be utilized to ensure performance and reliability.

Reports indicate that China's AI data center boom has lost momentum, leaving billions of dollars in idle infrastructure. Triggered by the rise of generative AI applications, China rapidly expanded its AI infrastructure in 2023-2024, constructing hundreds of new data centers with state and private funding. However, many facilities are now underused, returns are falling, and the market for GPU rentals has collapsed. Some data centers became outdated before they were fully operational due to changing market conditions and poor planning.

Recommended read:
References :
  • insideAI News: Fluidstack to Deploy Exascale GPU Clusters in Europe with NVIDIA, Borealis Data Center and Dell
  • insidehpc.com: Fluidstack to Deploy Exascale GPU Clusters in Europe with NVIDIA, Borealis Data Center and Dell
  • www.tomshardware.com: China's AI data center boom goes bust: Rush leaves billions of dollars in idle infrastructure

Ellie Ramirez-Camara@Data Phoenix //
OpenAI's new image generation tool, integrated into ChatGPT 4o, is experiencing immense popularity, leading to temporary limitations on GPU usage. CEO Sam Altman acknowledged the issue, stating that their GPUs are "melting" due to the overwhelming demand. This surge highlights the significant computational resources required for complex AI image generation, pushing the limits of current infrastructure. The new model within ChatGPT is designed to replace DALL·E as the default image generator and aims to provide superior realism and speed compared to its predecessor.

The company is implementing measures to manage the high demand, including rate-limiting image requests. OpenAI delayed the new GPT-4o image generation feature for free users due to high demand. While there's no word on specific limits for paid users, it's expected they are also experiencing slowdowns. The popularity of Ghibli-style images has even flooded social media, prompting discussions about artistic integrity and the physical constraints of supporting such intensive computational tasks at scale.
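
For developers calling any image-generation endpoint during a demand spike like this, the usual client-side response to rate limiting is retry with exponential backoff. The sketch below is generic: generate_image is a placeholder callable, not a specific OpenAI SDK function.

    # Generic retry-with-backoff sketch for a rate-limited image endpoint;
    # generate_image is a placeholder callable, not an OpenAI SDK function.
    import random
    import time

    class RateLimitError(Exception):
        """Raised by the placeholder client when the service returns HTTP 429."""

    def generate_with_backoff(generate_image, prompt, max_retries=5):
        for attempt in range(max_retries):
            try:
                return generate_image(prompt)
            except RateLimitError:
                # Exponential backoff with jitter: ~1s, 2s, 4s, ... plus noise.
                time.sleep((2 ** attempt) + random.random())
        raise RuntimeError("image generation still rate-limited after retries")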

Recommended read:
References :
  • Shelly Palmer: OpenAI's image generation has gone so viral that even if you haven’t tried it, you probably know exactly what you’re missing — hyper-realistic AI images created in seconds, now throttled because the GPUs can’t take the heat.
  • Data Phoenix: OpenAI announced GPT-4o's new image generation capabilities this Tuesday. The GPT-4o is set to replace DALL·E as the default image generation model.

@sciencedaily.com //
Recent advancements in quantum computing research have yielded promising results. Researchers at the University of the Witwatersrand in Johannesburg, along with collaborators from Huzhou University in China, have discovered a method to shield quantum information from environmental disruptions, potentially leading to more reliable quantum technologies. This breakthrough involves manipulating quantum wave functions to preserve quantum information, which could enhance medical imaging, improve AI diagnostics, and strengthen data security by providing ultra-secure communication.

UK startup Phasecraft has announced a new algorithm, THRIFT, that improves the ability of quantum computers to model new materials and chemicals by a factor of 10. By optimizing quantum simulation, THRIFT enables scientists to model new materials and chemicals faster and more accurately, even on today’s slower machines. Furthermore, Oxford researchers have demonstrated a 25-nanosecond controlled-Z gate with 99.8% fidelity, combining high speed and accuracy in a simplified superconducting circuit. This achievement advances fault-tolerant quantum computing by improving raw gate performance without relying heavily on error correction or added hardware.

Recommended read:
References :
  • The Quantum Insider: Oxford Researchers Demonstrate Fast, 99.8% Fidelity Two-Qubit Gate Using Simplified Circuit Design
  • www.sciencedaily.com: Researchers find a way to shield quantum information from 'noise'
  • Bernard Marr: Quantum computing is poised to revolutionize industries from drug development to cybersecurity, with the global market projected to reach $15 billion by 2030.

Cyril Belikoff@The Microsoft Cloud Blog //
Microsoft is highlighting Founderz, an online learning platform that has quickly become a leader in AI skilling. Founderz aims to bridge the growing AI skills gap, a challenge highlighted by the IDC Business Opportunity of AI Study, which found that 45% of business leaders feel their workforce lacks the necessary knowledge and skills to implement AI effectively. Microsoft is addressing this gap through initiatives like the Microsoft AI Skills Fest and by supporting innovative organizations.

Founderz was accepted into the Microsoft for Startups Founders Hub, gaining access to AI services, expert guidance, and technology. This included USD 150,000 in Microsoft Azure credits, allowing them to scale their platform and refine their AI-powered learning model. Co-founders Anna Cejudo and Pau Garcia-Mila envisioned an online business education that mirrored the depth and collaboration of top business schools, but in a scalable and accessible format built for the AI era.

@Latest from Laptop Mag //
Apple is reportedly developing a new AI-powered health app, codenamed "Project Mulberry," aimed at revolutionizing health tracking on iPhones. Expected to launch alongside iOS 19, the app will use AI algorithms to analyze health metrics from Apple devices and third-party sources. This comprehensive approach will enable the app to provide personalized insights and actionable recommendations to improve overall wellness, potentially transforming how users manage their health.

The app's AI coach will look at data from devices such as Apple Watch and potentially AirPods, which may gain future health-tracking features like heart-rate monitoring and temperature sensing. Apple is collaborating with health experts, including doctors, nutritionists, and therapists, to train the AI agents and create educational videos for the service, which may be called "Health+". The goal is to create an AI coach that can give personalized advice on sleep, exercise, mental well-being and other health issues.

Recommended read:
References :
  • Apple Must: Apple's Health+ service will optimise workouts and nutrition and help you protect your health. Years in the making, insiders call it a doctor in your iPhone.
  • jonnyevans: Apple's Health+ service will optimise workouts and nutrition and help you protect your health. Years in the making, insiders call it a doctor in your iPhone.
  • www.laptopmag.com: Apple’s plan to use AI for personalized health coaching might change everything
  • iThinkDifferent: ithinkdiff.com - Apple’s ‘Project Mulberry’ may bring AI-powered health tracking to iOS 19 and watchOS
  • TechCrunch: Apple is developing a new version of its Health app that includes an AI coach that can advise users on how to get healthier, according to Bloomberg’s Mark Gurman.
  • gHacks Technology News: Apple is working on an AI Doctor for iPhone's Health App
  • The Tech Portal: Apple is making a strong AI push into the healthcare industry, with…
  • www.zdnet.com: Apple's AI doctor will be ready to see you next spring
  • eWEEK: Will Your iPhone Offer AI-Powered Health Advice Soon? Apple’s Reportedly on the Case
  • Analytics India Magazine: Apple is Building an AI Agent-Powered Health App: Reports
  • www.zdnet.com: Oura's AI health coach is live for everyone - here's what it can do for you
  • futurism.com: Apple Quietly Working on AI Agent to "Replica" a Human Doctor
  • techstrong.ai: The Apple AI Doctor Is In: Report

Synced@Synced //
DeepMind has announced significant advancements in AI modeling and biomedicine, pushing the boundaries of what's possible with artificial intelligence. The company's research is focused on creating more effective drugs and medicine, as well as understanding and protecting species around the world.

DeepMind's JetFormer, a novel Transformer model, is designed to directly model raw data, eliminating the need for pre-trained components. JetFormer can understand and generate both text and images seamlessly. This model leverages normalizing flows to encode images into a latent representation, enhancing the focus on essential high-level information through progressive Gaussian noise augmentation. JetFormer has demonstrated competitive performance in image generation and web-scale multimodal generation tasks.
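
The summary above only gestures at the training trick, so here is a conceptual NumPy sketch of a progressive Gaussian-noise curriculum (not JetFormer's actual code): heavy noise early in training suppresses fine image detail so the model attends to high-level structure, and the noise level is annealed toward zero as training proceeds.

    # Conceptual sketch of a decaying Gaussian-noise curriculum; an illustration
    # of the idea described above, not JetFormer's training code.
    import numpy as np

    def noise_std(step, total_steps, max_std=0.8):
        # Linear anneal from max_std at step 0 down to 0 at the end of training.
        return max_std * max(0.0, 1.0 - step / total_steps)

    def augment(image, step, total_steps):
        std = noise_std(step, total_steps)
        return image + np.random.normal(scale=std, size=image.shape)

    image = np.random.rand(64, 64, 3)                        # stand-in training image in [0, 1]
    early = augment(image, step=1_000, total_steps=100_000)  # heavily noised
    late = augment(image, step=99_000, total_steps=100_000)  # nearly clean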

Additionally, DeepMind is exploring how studying honeybee immunity could offer insights into protecting various species. The company's AlphaFold continues to revolutionize biology, aiding in the design of more effective drugs. AlphaFold, which uses AI to determine a protein's structure, has been used to answer fundamental questions in biology, earned Demis Hassabis and John Jumper the Nobel Prize in Chemistry, and revolutionized drug discovery. The AlphaFold database contains approximately 250,000,000 protein structures and has been used by almost 2 million people from 190 countries.

Recommended read:
References :
  • Synced: DeepMind’s JetFormer: Unified Multimodal Models Without Modelling Constraints
  • www.theguardian.com: AI may help us cure countless diseases – and usher in a new golden age of medicine | Samuel Hume

Alexey Shabanov@TestingCatalog //
Anthropic is reportedly enhancing Claude AI with multi-agent capabilities, including web search, memory, and sub-agent creation. This upgrade to the Claude Research feature, previously known as Compass, aims to facilitate more dynamic and collaborative research flows. The "create sub-agent" tool would enable a master agent to delegate tasks to sub-agents, allowing users to witness multi-agent interaction within a single research process. These new tools include web_fetch, web_search, create_subagent, memory, think, sleep and complete_task.
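
The tool names above (web_fetch, web_search, create_subagent, memory, think, sleep, complete_task) come from the report; the short sketch below is a hypothetical illustration of the delegation pattern they imply, not Anthropic's implementation.

    # Hypothetical sketch of the master/sub-agent delegation pattern described
    # above; the classes and methods here are illustrative, not Anthropic's API.
    from dataclasses import dataclass, field

    @dataclass
    class SubAgent:
        task: str

        def run(self) -> str:
            # A real sub-agent would call tools such as web_search or web_fetch here.
            return f"findings for: {self.task}"

    @dataclass
    class MasterAgent:
        memory: dict = field(default_factory=dict)

        def create_subagent(self, task: str) -> SubAgent:
            return SubAgent(task)

        def research(self, subtasks: list[str]) -> str:
            for task in subtasks:
                self.memory[task] = self.create_subagent(task).run()  # delegate, then remember
            return " | ".join(self.memory.values())                   # the complete_task step

    print(MasterAgent().research(
        ["survey recent coverage", "summarize vendor claims", "list open questions"]))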

Anthropic is also delving into the "AI biology" of Claude, offering insights into how the model processes information and makes decisions. Researchers have discovered that Claude possesses a degree of conceptual universality across languages and actively plans ahead in creative tasks. However, they also found instances of the model generating incorrect reasoning, highlighting the importance of understanding AI decision-making processes for reliability and safety. Anthropic's approach to AI interpretability allows them to uncover insights into the inner workings of these systems that might not be apparent through simply observing their outputs.

Recommended read:
References :
  • AI News: Anthropic provides insights into the ‘AI biology’ of Claude
  • TestingCatalog: Claude may get multi-agent Research Mode with memory and task delegation
  • The Tech Basic: Anthropic developed an impressive innovation through their work in designing smart computer software for businesses. The product is called Claude Research Mode.

Mike Watts@computational-intelligence.blogspot.com //
Recent developments highlight advancements in quantum computing, artificial intelligence, and cryptography. Classiq Technologies, in collaboration with Sumitomo Corporation and Mizuho-DL Financial Technology, achieved up to 95% compression of quantum circuits for Monte Carlo simulations used in financial risk analysis. This project explored the use of Classiq’s technology to generate more efficient quantum circuits for a novel quantum Monte Carlo simulation algorithm incorporating pseudo-random numbers proposed by Mizuho-DL FT, evaluating the feasibility of implementing quantum algorithms in financial applications.

Oxford researchers demonstrated a fast, 99.8% fidelity two-qubit gate using a simplified circuit design, achieving this using a modified coaxmon circuit architecture. Also, a collaborative team from JPMorganChase, Quantinuum, Argonne National Laboratory, Oak Ridge National Laboratory, and the University of Texas at Austin demonstrated a certified randomness protocol using a 56-qubit Quantinuum System Model H2 trapped-ion quantum computer. This is a major milestone for real-world quantum applications, with the certified randomness validated using over 1.1 exaflops of classical computing power, confirming the quantum system’s ability to generate entropy beyond classical reach.

The 2025 IEEE International Conference on Quantum Artificial Intelligence will be held in Naples, Italy, from November 2-5, 2025, with a paper submission deadline of May 15, 2025. Vanderbilt University will host a series of workshops devoted to Groups in Geometry, Analysis and Logic starting May 28, 2025.

Koray Kavukcuoglu@The Official Google Blog //
Google has unveiled Gemini 2.5 Pro, touted as its "most intelligent model to date," enhancing AI reasoning and workflow capabilities. This multimodal model, available to Gemini Advanced users and experimentally on Google’s AI Studio, outperforms competitors like OpenAI, Anthropic, and DeepSeek on key benchmarks, particularly in coding, math, and science. Gemini 2.5 Pro boasts an impressive 1 million token context window, soon expanding to 2 million, enabling it to handle larger datasets and understand entire code repositories.

Gemini 2.5 Pro excels in advanced reasoning benchmark tests, achieving a state-of-the-art score on datasets designed to capture human knowledge and reasoning. Its enhanced coding performance allows for the creation of visually compelling web apps and agentic code applications, along with code transformation and editing. Google plans to release pricing for Gemini 2.5 models soon, marking a significant step in their goal of developing more capable and context-aware AI agents.
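
A minimal sketch of calling the model from Python with the google-genai SDK is below; the experimental identifier gemini-2.5-pro-exp-03-25 was the name exposed in AI Studio at launch and may have been superseded, so treat both the model id and the exact SDK surface as subject to change.

    # Minimal sketch using the google-genai SDK; the model id below is the
    # experimental name exposed at launch and may have changed since.
    from google import genai

    client = genai.Client(api_key="YOUR_API_KEY")
    response = client.models.generate_content(
        model="gemini-2.5-pro-exp-03-25",
        contents="Outline a test plan for a large code repository in five bullet points.",
    )
    print(response.text)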

@Google DeepMind Blog //
Researchers are making strides in understanding how AI models think. Anthropic has developed an "AI microscope" to peek into the internal processes of its Claude model, revealing how it plans ahead, even when generating poetry. This tool provides a limited view of how the AI processes information and reasons through complex tasks. The microscope suggests that Claude uses a language-independent internal representation, a "universal language of thought", for multilingual reasoning.

The team at Google DeepMind introduced JetFormer, a new Transformer designed to directly model raw data. This model, capable of both understanding and generating text and images seamlessly, maximizes the likelihood of raw data without depending on any pre-trained components. Additionally, a comprehensive benchmark called FACTS Grounding has been introduced to evaluate the factuality of large language models (LLMs). This benchmark measures how accurately LLMs ground their responses in provided source material and avoid hallucinations, aiming to improve trust and reliability in AI-generated information.

Recommended read:
References :
  • Google DeepMind Blog: FACTS Grounding: A new benchmark for evaluating the factuality of large language models
  • THE DECODER: Anthropic's AI microscope reveals how Claude plans ahead when generating poetry

Emilia David@AI News | VentureBeat //
OpenAI is enhancing GPT-4o with improved instruction following and problem-solving capabilities. The company has updated GPT-4o to better handle detailed instructions, especially when processing multi-task prompts, thus improving performance and intuition. This model can be accessed by subscribers through the API as "chatgpt-4o-latest" and in ChatGPT.
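
Accessing the updated model is a standard Chat Completions call with the "chatgpt-4o-latest" identifier mentioned above; the multi-task prompt here is only an example.

    # Standard Chat Completions call using the "chatgpt-4o-latest" identifier
    # mentioned above; the multi-task prompt is only an example.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="chatgpt-4o-latest",
        messages=[
            {"role": "system", "content": "Follow every numbered instruction exactly and in order."},
            {"role": "user", "content": "1) List three risks of long multi-task prompts. 2) Rank them by severity."},
        ],
    )
    print(response.choices[0].message.content)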

OpenAI has announced its support for Anthropic’s Model Context Protocol (MCP), an open-source standard designed to streamline the integration between AI assistants and various data systems. With MCP, AI models can connect directly to systems where data lives, eliminating the need for custom integrations and allowing real-time access to business tools and repositories. OpenAI will integrate MCP support into its Agents SDK immediately, with the ChatGPT desktop app and Responses API following soon. This protocol aims to create a unified framework for AI applications to access and utilize external data sources.

ChatGPT Team users can now add internal databases as references, allowing the platform to respond with improved contextual awareness. By connecting internal knowledge bases, ChatGPT Team could become more invaluable to users who ask the platform strategy questions or for analysis. This allows users to perform semantic searches of their data, link directly to internal sources in responses, and ensure ChatGPT understands internal company lingo.

Recommended read:
References :
  • Shelly Palmer: In a surprising move, OpenAI announced yesterday it will adopt rival Anthropic's MCP across its product line.
  • THE DECODER: OpenAI has updated GPT-4o to better handle detailed instructions, especially when processing multi-task prompts.
  • AI News | VentureBeat: OpenAI adds internal data referencing
  • Analytics Vidhya: OpenAI has announced its support for Anthropic’s Model Context Protocol (MCP), an open-source standard designed to streamline the integration between AI assistants and various data systems.

Tom Krazit@Runtime //
Anthropic is gaining traction in the AI infrastructure space with its Model Context Protocol (MCP), introduced last November as an open standard for secure, two-way connections between data sources and AI-powered tools. This protocol is designed to simplify the process of building AI agents by providing a standard way for applications to retrieve data, allowing agents to take actions based on that data. Microsoft and Cloudflare have already announced support for MCP, with Microsoft highlighting that MCP simplifies agent building and reduces maintenance time.

The MCP protocol works by taking natural language input from a large-language model and providing a standard way for MCP clients to find and retrieve data stored on servers running MCP. This is analogous to the API, which made web-based computing a standard. Previously, developers needed to set up MCP servers locally, which was impractical for most users. This barrier to entry has now been removed.
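
As a concrete illustration, a minimal MCP server can be written with the reference Python SDK's FastMCP helper; the sketch below follows the SDK documentation as of early 2025 (package "mcp"), and the tool itself is a made-up example.

    # Minimal MCP server sketch using the reference Python SDK's FastMCP helper
    # (package "mcp"); names follow the SDK docs as of early 2025 and may change.
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("order-lookup")

    @mcp.tool()
    def get_order_status(order_id: str) -> str:
        """Return an order's status so an MCP client (e.g. an AI agent) can act on it."""
        # A real server would query a database or internal API here.
        return f"Order {order_id}: shipped"

    if __name__ == "__main__":
        mcp.run()  # serves the tool to any connected MCP client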

In other news, Anthropic is facing a legal challenge as music publishers' request for a preliminary injunction in their copyright infringement suit was denied. The publishers alleged that Anthropic's LLM Claude was trained on their song lyrics. However, the judge ruled that the publishers failed to demonstrate specific financial harm and that their list of forbidden lyrics was not final, requiring constant updates to Anthropic's guard rails. The case is ongoing, and the publishers can collect more evidence.

Recommended read:
References :
  • Runtime: Why AI infrastructure companies are lining up behind Anthropic's MCP