News from the AI & ML world

DeeperML - #opensource

Alyssa Mazzina@RunPod Blog //
Deep Cogito, a new San Francisco-based AI company, has emerged from stealth with the release of Cogito v1, a family of open-source large language models (LLMs) ranging from 3B to 70B parameters. These models are trained using Iterated Distillation and Amplification (IDA), a novel technique aimed at achieving artificial superintelligence (ASI). Cogito v1 models are available on Hugging Face, Ollama, and through APIs on Fireworks and Together AI under Llama licensing terms, which permit commercial use for products with up to 700 million monthly users. The company plans to release even larger models, up to 671 billion parameters, in the coming months.

IDA, central to the Cogito v1 release, is described as a scalable and efficient alignment strategy for ASI using iterative self-improvement. It involves amplifying model capabilities through increased computation to derive better solutions and distilling these amplified capabilities back into the model's parameters. Founder Drishan Arora, formerly of Google, states that this creates a positive feedback loop, allowing the model's intelligence to scale more directly with computational resources, instead of being limited by human or larger model supervision. This approach is likened to Google AlphaGo’s self-play strategy, but applied to natural language models.
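The amplify-then-distill loop described above can be sketched in a few lines. This is a toy numeric illustration, not Deep Cogito's actual training code: "amplification" is modeled as spending extra compute (many samples, keep the best) and "distillation" as folding the amplified result back into the model's parameters.

```python
# Toy sketch of Iterated Distillation and Amplification (IDA).
# NOT Deep Cogito's implementation -- a minimal numeric stand-in for the
# loop the article describes: amplify with extra compute, then distill
# the amplified behavior back into the model's parameters.
import random

def amplify(model, n_samples=32):
    """Amplification: spend more compute (many noisy samples), keep the best."""
    candidates = [model["skill"] + random.gauss(0, 1.0) for _ in range(n_samples)]
    return max(candidates)  # the "better solution" found via extra compute

def distill(model, target, lr=0.5):
    """Distillation: move the base policy toward the amplified behavior."""
    model["skill"] += lr * (target - model["skill"])
    return model

def ida(model, iterations=20):
    for _ in range(iterations):
        better = amplify(model)        # amplified capability
        model = distill(model, better) # folded back into the parameters
    return model

random.seed(0)
model = ida({"skill": 0.0})
print(round(model["skill"], 2))
```

Because each amplification step almost surely finds something better than the current policy, the "skill" value ratchets upward each iteration; that positive feedback is the property the article attributes to IDA.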

Cogito v1 models support two distinct modes: Direct Mode for fast, high-quality completions for common tasks, and Reasoning Mode for slower but more thoughtful responses using added compute. The 70B model, trained on RunPod using H200 GPUs, outperforms LLaMA 3.3 70B and even the 109B LLaMA 4 Scout model across major benchmarks. According to Arora, the models are the "strongest open models at their scale," outperforming alternatives from LLaMA, DeepSeek, and Qwen. Deep Cogito claims that the models were developed by a small team in approximately 75 days, highlighting IDA’s potential scalability.
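In practice, callers switch between the two modes at inference time. The sketch below only builds the chat message list; the deep-thinking toggle string follows Deep Cogito's published model card, but treat it as an assumption and verify it against the card for your model version.

```python
# Sketch: selecting Cogito's Direct Mode vs. Reasoning Mode via the
# system prompt. The toggle string is taken from the public model card
# and may change between versions -- verify before relying on it.
DEEP_THINKING = "Enable deep thinking subroutine."

def build_messages(user_prompt, reasoning=False):
    """Direct Mode by default; Reasoning Mode when the deep-thinking
    system prompt is prepended."""
    messages = []
    if reasoning:
        messages.append({"role": "system", "content": DEEP_THINKING})
    messages.append({"role": "user", "content": user_prompt})
    return messages

direct = build_messages("Summarize this article.")
thinking = build_messages("Prove the lemma step by step.", reasoning=True)
```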

Recommended read:
References :
  • RunPod Blog: Built on RunPod: How Cogito Trained High-Performance Open Models on the Path to ASI
  • AI News | VentureBeat: New open source AI company Deep Cogito releases first models and they’re already topping the charts
  • AI News: Deep Cogito open LLMs use IDA to outperform same size models
  • www.artificialintelligence-news.com: Deep Cogito open LLMs use IDA to outperform same size models

Alyssa Mazzina@RunPod Blog //
San Francisco-based Deep Cogito, backed by RunPod, has unveiled Cogito v1, a family of open-source AI models ranging in size from 3B to 70B parameters. These models are designed to outperform leading alternatives, including those from LLaMA, DeepSeek, and Qwen, across standard benchmarks. The company says its models deliver both outstanding performance and efficiency, challenging the conventional secrecy surrounding AI advancements.

Cogito v1 models are trained using Iterated Distillation and Amplification (IDA), a novel alignment strategy. Instead of simply distilling from a larger teacher model, which can limit performance, IDA employs compute-intensive subroutines to generate improved answers, then distills this enhanced reasoning back into the model's parameters. This allows the model to learn how to think more effectively, rather than simply mimicking existing patterns. According to founder Drishan Arora, this method uses reasoning to improve the model’s intuition, leading to better decision-making when solving complex problems.

The Cogito v1 models support both Direct Mode for fast, high-quality completions of common tasks and Reasoning Mode for slower, more thoughtful responses using added compute. The 70B model, trained on RunPod using H200 GPUs, has demonstrated superior performance compared to LLaMA 3.3 70B and even the 109B LLaMA 4 Scout model across major benchmarks. Further development includes future checkpoints and larger Mixture of Experts (MoE) models with up to 671B parameters.

Recommended read:
References :
  • bdtechtalks.com: How DeepSeek models disrupted AI norms and revealed that outstanding performance and efficiency don't require secrecy
  • RunPod Blog: At RunPod, we're proud to power the next generation of AI breakthroughs—and this one is big. San Francisco-based Deep Cogito has just released Cogito v1, a family of open-source models ranging from 3B to 70B parameters. Each model outperforms leading alternatives from LLaMA, DeepSeek, and Qwen
  • bdtechtalks.com: Bdtechtalks discusses the innovations powering DeepSeek's AI breakthrough.
  • www.artificialintelligence-news.com: Artificial Intelligence News reports on DeepSeek's AI breakthrough in teaching machines what humans really want.

Ronen Dar@NVIDIA Technical Blog //
NVIDIA has announced the open-source release of the KAI Scheduler, a Kubernetes-native GPU scheduling solution. Available under the Apache 2.0 license, the KAI Scheduler was originally developed within the Run:ai platform. This initiative aims to foster an active and collaborative community by encouraging contributions, feedback, and innovation in AI infrastructure. The KAI Scheduler will continue to be packaged and delivered as part of the NVIDIA Run:ai platform.

NVIDIA's move to open source the KAI Scheduler addresses challenges in managing AI workloads on GPUs and CPUs, which traditional resource schedulers often fail to meet. The scheduler dynamically manages fluctuating GPU demands, reduces wait times for compute access by combining gang scheduling, GPU sharing, and a hierarchical queuing system, and it helps to connect AI tools and frameworks seamlessly. By maximizing compute utilization through bin-packing, consolidation and spreading workloads across nodes, KAI Scheduler reduces resource fragmentation.
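Bin-packing is the easiest of those techniques to illustrate. The toy first-fit-decreasing packer below is not KAI Scheduler code (real gang scheduling and hierarchical queues are far more involved); it only shows why consolidating jobs onto fewer nodes reduces fragmentation.

```python
# Toy first-fit-decreasing GPU bin-packing -- an illustration of the
# consolidation idea the article attributes to KAI Scheduler, not the
# scheduler's actual algorithm.
def pack(jobs_gpus, node_capacity=8):
    """Place jobs (name -> GPUs needed) onto as few nodes as possible."""
    nodes = []       # remaining free GPUs per node
    placement = {}   # job name -> node index
    for job, need in sorted(jobs_gpus.items(), key=lambda kv: -kv[1]):
        for i, free in enumerate(nodes):
            if free >= need:          # first node with enough free GPUs
                nodes[i] -= need
                placement[job] = i
                break
        else:                         # no node fits: open a new one
            nodes.append(node_capacity - need)
            placement[job] = len(nodes) - 1
    return placement, len(nodes)

jobs = {"train-a": 6, "train-b": 4, "infer-c": 2, "infer-d": 2, "infer-e": 1}
placement, used_nodes = pack(jobs)
print(used_nodes)  # 15 GPUs fit on 2 eight-GPU nodes instead of sprawling
```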

Recommended read:
References :
  • NVIDIA Technical Blog: NVIDIA Open Sources Run:ai Scheduler to Foster Community Collaboration
  • thenewstack.io: NVIDIA Open Sources KAI Scheduler To Help AI Teams Optimize GPU Utilization
  • AI News | VentureBeat: Nvidia open sources Run:ai Scheduler to foster community collaboration
  • NVIDIA opens up GPU utilization tool for underworked AI infrastructure
  • insideAI News: NVIDIA posted a blog announcing the open-source release of the KAI Scheduler, a Kubernetes-native GPU scheduling solution, now available under the Apache 2.0 license.
  • Developer Tech News: NVIDIA open-sourced KAI Scheduler, a Kubernetes solution designed to optimise the scheduling of GPU resources.

Michael Nuñez@AI News | VentureBeat //
OpenAI, the company behind ChatGPT, has announced a significant strategic shift by planning to release its first open-weight AI model since 2019. This move comes amidst mounting economic pressures from competitors like DeepSeek and Meta, whose open-source models are increasingly gaining traction. CEO Sam Altman revealed the plans on X, stating that the new model will have reasoning capabilities and allow developers to run it on their own hardware, departing from OpenAI's cloud-based subscription model.

This decision marks a notable change for OpenAI, which has historically defended closed, proprietary models. The company is now gathering developer feedback to make the new model as useful as possible, with events planned in San Francisco, Europe, and Asia-Pacific. As models improve, startups and developers increasingly want more tunable latency and on-premises deployments with full data control, according to OpenAI.

The shift comes alongside a monumental $40 billion funding round led by SoftBank, which has catapulted OpenAI's valuation to $300 billion. SoftBank will initially invest $10 billion, with the remaining $30 billion contingent on OpenAI transitioning to a for-profit structure by the end of the year. This funding will help OpenAI continue building AI systems that drive scientific discovery, enable personalized education, enhance human creativity, and pave the way toward artificial general intelligence. The release of the open-weight model is expected to help OpenAI compete with the growing number of efficient open-source alternatives and counter the criticisms that have come from remaining a closed model.

Recommended read:
References :
  • Data Science at Home: Is DeepSeek the next big thing in AI? Can OpenAI keep up? And how do we truly understand these massive LLMs?
  • venturebeat.com: OpenAI to release open-source model as AI economics force strategic shift
  • WIRED: Sam Altman Says OpenAI Will Release an ‘Open Weight’ AI Model This Summer
  • Fello AI: OpenAI has closed a $40 billion funding round, boosting its valuation to $300 billion. The deal, led by SoftBank, is one of the largest capital infusions in the tech industry and marks a significant milestone for the company.
  • www.theguardian.com: OpenAI said it had raised $40bn in a funding round that valued the ChatGPT maker at $300bn.
  • SiliconANGLE: OpenAI to launch its first ‘open-weights’ model since 2019
  • techxplore.com: OpenAI says it raised $40 bn at valuation of $300 bn
  • SiliconANGLE: OpenAI bags $40B in funding, increasing its post-money valuation to $300B
  • techxplore.com: OpenAI says it raised $40 bn at valuation of $300 bn
  • www.tomsguide.com: OpenAI is planning on launching its first open-weight model in years
  • THE DECODER: OpenAI plans to release open-weight reasoning LLM without usage restrictions
  • www.it-daily.net: OpenAI raises 40 billion dollars from investors
  • bsky.app: OpenAI has raised $40 billion at a $300 billion valuation. For context, Boeing has a $128 billion market cap, Disney has a $178 billion market cap, and Chevron has a $295 billion market cap. So, OpenAI has been valued at something like Boeing plus Disney, or just some $5 billion more than Chevron.
  • THE DECODER: SoftBank and OpenAI announced a major partnership on Monday that includes billions in annual spending and a new joint venture focused on the Japanese market.
  • The Tech Portal: OpenAI has closed a record-breaking $40 billion private funding round, marking the…
  • www.techrepublic.com: Developers Wanted: OpenAI Seeks Feedback About Open Model That Will Be Revealed ‘In the Coming Months’
  • bdtechtalks.com: Understanding OpenAI’s pivot to releasing open source models
  • techstrong.ai: OpenAI to Raise $40 Billion in Funding, Release Open-Weight Language Model
  • Charlie Fink: OpenAI Raises $40 Billion, Runway AI Video $380 Million, Amazon, Oracle, TikTok Suitors Await Decision
  • Charlie Fink: Runway’s Gen-4 release overshadows OpenAI’s image upgrade as Higgsfield, Udio, Prodia, and Pika debut powerful new AI tools for video, music, and image generation.

Dashveenjit Kaur@AI News //
DeepSeek, a Chinese AI startup, is causing a stir in the AI industry with its new large language model, DeepSeek-V3-0324. Released with little fanfare on the Hugging Face AI repository, the 641-gigabyte model is freely available for commercial use under an MIT license. Early reports indicate it can run directly on consumer-grade hardware, such as Apple’s Mac Studio with the M3 Ultra chip, especially in a 4-bit quantized version that reduces the storage footprint to 352GB. This innovation challenges the previous notion that Silicon Valley held a chokehold on the AI industry.
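A back-of-envelope check on those storage figures: checkpoint size scales linearly with bits per weight, so 4-bit quantization roughly halves an 8-bit checkpoint. The calculation below uses the ~685B parameter count reported for this model; the quoted sizes (641 GB, 352 GB) differ somewhat because real checkpoints mix precisions and include extra tensors.

```python
# Rough storage math for quantized checkpoints. The parameter count is
# the figure reported for DeepSeek-V3-0324; actual on-disk sizes vary
# with mixed precision and auxiliary tensors.
def model_size_gb(params_billion, bits_per_weight):
    """Approximate on-disk size in gigabytes (1 GB = 1e9 bytes)."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

full = model_size_gb(685, 8)   # ~685 GB at 8 bits/weight
quant = model_size_gb(685, 4)  # ~342 GB at 4 bits/weight -- near the 352GB quoted
print(round(full), round(quant))
```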

China's focus on algorithmic efficiency over hardware superiority has allowed companies like DeepSeek to flourish despite restricted access to the latest silicon. DeepSeek's R1 model, launched earlier this year, already rivaled OpenAI's GPT-4 at a fraction of the cost. Now DeepSeek-V3-0324 adds enhanced reasoning capabilities and improved performance. This has sparked a gold rush among Chinese tech startups, rewriting the playbook for AI development and convincing smaller companies they have a shot in the market.

Recommended read:
References :
  • AI News: DeepSeek V3-0324 has become the highest-scoring non-reasoning model on the Artificial Analysis Intelligence Index in a landmark achievement for open-source AI.
  • MarkTechPost: Artificial intelligence (AI) has made significant strides in recent years, yet challenges persist in achieving efficient, cost-effective, and high-performance models.
  • Quinta's weblog: Chinese AI startup DeepSeek has quietly released a new large language model that's already sending ripples through the artificial intelligence industry — not just for its capabilities, but for how it's being deployed.
  • AI News: DeepSeek disruption: Chinese AI innovation narrows global technology divide
  • Composio: DeepSeek V3-0324, a new checkpoint, released quietly by DeepSeek with no marketing or hype
  • SiliconANGLE: DeepSeek today released an improved version of its DeepSeek-V3 large language model under a new open-source license.
  • Sify: DeepSeek’s AI Revolution: Creating an Entire AI Ecosystem
  • Composio: Deepseek v3-0324 vs. Claude 3.7 Sonnet

Ryan Daws@AI News //
DeepSeek, a Chinese AI company, has released DeepSeek V3-0324, an updated AI model that demonstrates impressive performance, now running at 20 tokens per second on a Mac Studio. The model is said to contain 685 billion parameters, and its cost-effectiveness challenges the dominance of American AI models, signaling that China continues to innovate in AI despite chip restrictions. Early testers report improvements over previous versions, and the model is the first open-source release to top the non-reasoning category.

This new model runs on consumer-grade hardware, specifically Apple's Mac Studio with the M3 Ultra chip, diverging from the typical data center requirements for AI. It is freely available for commercial use under the MIT license. According to AI researcher Awni Hannun, the model runs at over 20 tokens per second on a 512GB M3 Ultra. The company has made no formal announcement, just an empty README file and the model weights themselves. This stands in contrast to the carefully orchestrated product launches by Western AI companies.

Recommended read:
References :
  • SiliconANGLE: DeepSeek today released an improved version of its DeepSeek-V3 large language model under a new open-source license.
  • venturebeat.com: DeepSeek-V3 now runs at 20 tokens per second on Mac Studio, and that’s a nightmare for OpenAI
  • AI News: Chinese AI innovation is reshaping the global technology landscape, challenging assumptions about Western dominance in advanced computing. Recent developments from companies like DeepSeek illustrate how quickly China has adapted to and overcome international restrictions through creative approaches to AI development.
  • AI News: DeepSeek V3-0324 tops non-reasoning AI models in open-source first
  • MarkTechPost: DeepSeek AI Unveils DeepSeek-V3-0324: Blazing Fast Performance on Mac Studio, Heating Up the Competition with OpenAI
  • Cloud Security Alliance: Cloud Security Alliance: DeepSeek: Behind the Hype and Headlines
  • Quinta's weblog: DeepSeek-V3 now runs at 20 tokens per second on Mac Studio, and that's a nightmare for OpenAI
  • Composio: Deepseek v3-0324 vs. Claude 3.7 Sonnet

Ryan Daws@AI News //
DeepSeek V3-0324, the latest large language model from Chinese AI startup DeepSeek, is making waves in the artificial intelligence industry. The model, quietly released with an MIT license for commercial use, has quickly become the highest-scoring non-reasoning model on the Artificial Analysis Intelligence Index. This marks a significant milestone for open-source AI, surpassing proprietary counterparts like Google’s Gemini 2.0 Pro, Anthropic’s Claude 3.7 Sonnet, and Meta’s Llama 3.3 70B.

DeepSeek V3-0324's efficiency is particularly notable. Early reports indicate that it can run directly on consumer-grade hardware, specifically Apple’s Mac Studio with an M3 Ultra chip, achieving speeds of over 20 tokens per second. This capability is a major departure from the typical data center requirements associated with state-of-the-art AI. The updated version demonstrates substantial improvements in reasoning and benchmark performance, as well as enhanced Chinese writing proficiency and optimized translation quality.

Recommended read:
References :
  • venturebeat.com: DeepSeek-V3 now runs at 20 tokens per second on Mac Studio, and that’s a nightmare for OpenAI
  • AI News: DeepSeek V3-0324 tops non-reasoning AI models in open-source first
  • Analytics Vidhya: DeepSeek V3-0324: Generated 700 Lines of Code without Breaking
  • Analytics India Magazine: The model outperformed all other non-reasoning models across several benchmarks but trailed behind DeepSeek-R1, OpenAI’s o1, o3-mini, and other reasoning models.
  • Cloud Security Alliance: DeepSeek: Behind the Hype and Headlines
  • techstrong.ai: DeepSeek Ups Ante (Again) in Duel with OpenAI, Anthropic
  • www.techradar.com: Deepseek’s new AI is smarter, faster, cheaper, and a real rival to OpenAI's models
  • Analytics Vidhya: DeepSeek V3-0324 vs Claude 3.7: Which is the Better Coder?
  • MarkTechPost: DeepSeek AI Unveils DeepSeek-V3-0324: Blazing Fast Performance on Mac Studio, Heating Up the Competition with OpenAI
  • www.zdnet.com: It's called V3-0324, but the real question is: Is it foreshadowing the upcoming launch of R2?
  • SiliconANGLE: DeepSeek today released an improved version of its DeepSeek-V3 large language model under a new open-source license.
  • Composio: DeepSeek V3-0324, a new checkpoint, released quietly by DeepSeek with no marketing or hype
  • Composio: Deepseek v3-0324 vs. Claude 3.7 Sonnet

Ryan Daws@AI News //
DeepSeek V3-0324 has emerged as a leading AI model, topping benchmarks for non-reasoning AI in an open-source first: it is the first open-weights model to take the top position among non-reasoning models. Its performance surpasses proprietary counterparts and edges closer to that of proprietary reasoning models, highlighting the growing viability of open-source solutions for latency-sensitive applications. DeepSeek V3-0324 represents a new era for open-source AI, offering a powerful and adaptable tool for developers and enterprises.

DeepSeek-V3 now runs at 20 tokens per second on Apple's Mac Studio, presenting a challenge to OpenAI's cloud-dependent business model. The 685-billion-parameter model, DeepSeek-V3-0324, is freely available for commercial use under the MIT license. This achievement, coupled with its cost efficiency and performance, signals a shift in the AI sector, where open-source frameworks increasingly compete with closed systems. Early testers report significant improvements over previous versions, positioning DeepSeek's new model above Anthropic's Claude 3.5 Sonnet.

Recommended read:
References :
  • Analytics India Magazine: The model outperformed all other non-reasoning models across several benchmarks but trailed behind DeepSeek-R1, OpenAI’s o1, o3-mini, and other reasoning models.
  • venturebeat.com: DeepSeek-V3 now runs at 20 tokens per second on Mac Studio, and that’s a nightmare for OpenAI
  • AI News: DeepSeek V3-0324 tops non-reasoning AI models in open-source first
  • Analytics Vidhya: DeepSeek V3-0324: Generated 700 Lines of Code without Breaking
  • Analytics Vidhya: DeepSeek V3-0324 vs Claude 3.7: Which is the Better Coder?
  • Cloud Security Alliance: Markets reacted dramatically, with Nvidia alone losing nearly $600 billion in value in a single day, part of a broader...
  • GZERO Media: Just a few short months ago, Silicon Valley seemed to have the artificial intelligence industry in a chokehold.
  • MarkTechPost: DeepSeek AI Unveils DeepSeek-V3-0324: Blazing Fast Performance on Mac Studio, Heating Up the Competition with OpenAI
  • SiliconANGLE: DeepSeek today released an improved version of its DeepSeek-V3 large language model under a new open-source license.
  • techstrong.ai: DeepSeek Ups Ante (Again) in Duel with OpenAI, Anthropic
  • www.zdnet.com: DeepSeek V3 model gets a major upgrade
  • www.techradar.com: DeepSeek’s new AI is smarter, faster, cheaper, and a real rival to OpenAI's models
  • Composio: Deepseek v3 0324: Finally, the Sonnet 3.5 at Home
  • AI News: DeepSeek disruption: Chinese AI innovation narrows global technology divide

@tomshardware.com //
AMD has announced Gaia, an open-source project enabling the local execution of Large Language Models (LLMs) on any PC. This initiative aims to bring AI processing closer to users by facilitating the running of LLMs directly on Windows machines. Gaia is designed to run various LLM models and offers performance optimizations for PCs equipped with Ryzen AI processors, including the Ryzen AI Max 395+.

Gaia utilizes the open-source Lemonade SDK from ONNX TurnkeyML for LLM inference, allowing models to adapt for tasks such as summarization and complex reasoning. It functions through a Retrieval-Augmented Generation agent, combining an LLM with a knowledge base for interactive AI experiences and accurate responses. Gaia offers several agents including Simple Prompt Completion, Chaty, Clip, and Joker. The new AI chatbot has two installers: a mainstream installer that works on any Windows PC and a "Hybrid" installer optimized for Ryzen AI PCs, using the XDNA NPU and RDNA iGPU.
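The retrieval-augmented pattern described above can be sketched concisely. This toy uses naive word overlap as the retrieval scorer purely for illustration; Gaia itself retrieves via embeddings through the Lemonade SDK, so treat everything here as a stand-in.

```python
# Minimal retrieval-augmented generation (RAG) sketch: retrieve relevant
# documents from a knowledge base, then build a grounded prompt for the
# LLM. Word-overlap scoring is a toy stand-in for real embeddings.
def retrieve(query, knowledge_base, k=2):
    """Rank documents by naive word overlap with the query."""
    q = set(query.lower().split())
    scored = sorted(knowledge_base,
                    key=lambda doc: len(q & set(doc.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query, knowledge_base):
    """Prepend retrieved context so the LLM answers from the knowledge base."""
    context = "\n".join(retrieve(query, knowledge_base))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

kb = [
    "Ryzen AI processors include an XDNA NPU for inference.",
    "Gaia runs large language models locally on Windows.",
    "The Mac Studio uses Apple silicon.",
]
prompt = build_prompt("What does Gaia run locally", kb)
```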

Recommended read:
References :
  • BigDATAwire: NVIDIA Pushes Boundaries of Apache Spark With RAPIDS and Project Aether
  • www.tomshardware.com: AMD introduces Gaia, an open-source project designed to run large language models locally on any PC. It also boasts optimizations to run quickly on Ryzen AI-equipped PCs.

Dean Takahashi@AI News | VentureBeat //
NVIDIA, Google DeepMind, and Disney Research are collaborating on Newton, an open-source physics engine designed to advance robot learning, enhance simulation accuracy, and facilitate the development of next-generation robotic characters. Newton is built on NVIDIA’s Warp framework and aims to provide a scalable, high-performance simulation environment optimized for AI-driven humanoid robots. MuJoCo-Warp, a collaboration with Google DeepMind, accelerates robotics workloads by over 70x, while Disney plans to integrate Newton into its robotic character platform for expressive, interactive robots.

The engine's creation is intended to bridge the gap between simulation and real-world robotics. NVIDIA will also supercharge humanoid robot development with the Isaac GR00T N1 foundation model for human-like reasoning. Newton is built on NVIDIA Warp, a CUDA-based acceleration library that enables GPU-powered physics simulations. Newton is also optimized for robot learning frameworks, including MuJoCo Playground and NVIDIA Isaac Lab, making it an essential tool for developers working on generalist humanoid robots. This initiative is part of NVIDIA's broader effort to accelerate physical AI progress.
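For a sense of what such an engine computes per step, here is a plain-Python stand-in for the kind of kernel a Warp-based engine would run on the GPU (in Warp, the loop body would be a `@wp.kernel` launched across particles). This is an illustrative semi-implicit Euler integrator, not Newton's actual code.

```python
# Toy CPU stand-in for a GPU physics kernel: semi-implicit Euler
# integration of particle heights under gravity. Illustrative only --
# not code from the Newton engine.
def integrate(positions, velocities, gravity=-9.8, dt=0.01):
    """Advance every particle one timestep (semi-implicit Euler)."""
    for i in range(len(positions)):
        velocities[i] += gravity * dt       # update velocity first...
        positions[i] += velocities[i] * dt  # ...then position from new velocity
    return positions, velocities

pos, vel = [0.0, 1.0], [0.0, 0.0]  # two particles, initially at rest
for _ in range(100):                # simulate one second
    integrate(pos, vel)
```

Updating velocity before position (semi-implicit rather than explicit Euler) is the standard choice in game and robotics physics because it conserves energy far better over long simulations.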

Recommended read:
References :
  • AI News | VentureBeat: Nvidia will supercharge humanoid robot development with Isaac GR00T N1 foundation model for human-like reasoning
  • Maginative: NVIDIA, Google DeepMind, and Disney Research Team Up for Open-Source Physics Engine
  • BigDATAwire: The Rise of Intelligent Machines: Nvidia Accelerates Physical AI Progress
  • LearnAI: From innovation to impact: How AWS and NVIDIA enable real-world generative AI success

Matthias Bastian@THE DECODER //
Mistral AI has launched Mistral Small 3.1, a new open-source AI model with 24 billion parameters, designed for high performance with lower computational needs. This model is optimized for speed and efficiency, and can operate smoothly on a single RTX 4090 or a Mac with 32GB RAM, making it ideal for on-device AI solutions. The release includes both pre-trained and instruction-tuned checkpoints, offering flexibility for developers to fine-tune the model for domain-specific applications.

Mistral Small 3.1 stands out for its remarkable efficiency, outperforming Google's Gemma 3 and OpenAI's GPT-4o Mini in text, vision, and multilingual benchmarks. It features an expanded 128K-token context window, allowing it to handle long-form reasoning and document analysis. The Apache 2.0 license ensures free use and modification, enabling developers to fine-tune the model for specific domains, including legal, healthcare, and customer service applications.
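Rough arithmetic makes the single-GPU claim plausible: weights dominate memory use, and storage scales with bits per weight. The numbers below are estimates, not Mistral's official figures.

```python
# Back-of-envelope VRAM math behind "runs on a single RTX 4090 or a
# 32GB Mac". Estimates only: weights dominate, with KV cache on top.
def weights_gb(params_billion, bits):
    """Approximate weight storage in GB (1e9 params at bits/8 bytes each)."""
    return params_billion * bits / 8

fp16 = weights_gb(24, 16)  # ~48 GB: too big for one consumer GPU
int4 = weights_gb(24, 4)   # ~12 GB: fits a 24GB RTX 4090 with cache headroom
print(fp16, int4)
```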

Recommended read:
References :
  • Analytics Vidhya: Compares Mistral 3.1 vs Gemma 3, analyzing which is the better model.
  • Maginative: Highlights Mistral Small 3.1 Outperforming Gemma 3 and GPT-4o Mini.
  • venturebeat.com: Reports Mistral AI dropping a new open-source model that outperforms GPT-4o Mini with fraction of parameters.
  • TestingCatalog: Presents Mistral Small 3, a 24B-parameter open-source AI model optimized for speed.