News from the AI & ML world

DeeperML - #robotics

@www.marktechpost.com //
Meta AI has announced the release of V-JEPA 2, an open-source world model designed to enhance robots' ability to understand and interact with physical environments. V-JEPA 2 builds upon the Joint Embedding Predictive Architecture (JEPA) and leverages self-supervised learning from over one million hours of video and images. This approach allows the model to learn abstract concepts and predict future states, enabling robots to perform tasks in unfamiliar settings and improving their understanding of motion and appearance. The model can be applied to manufacturing automation, surveillance analytics, in-building logistics, and other advanced robotics use cases.

Meta researchers scaled JEPA pretraining by constructing a 22M-sample dataset (VideoMix22M) from public sources and expanded the encoder capacity to over 1B parameters. They also adopted a progressive resolution strategy and extended pretraining to 252K iterations, reaching 64 frames at 384x384 resolution. V-JEPA 2 avoids the inefficiencies of pixel-level prediction by focusing on predictable scene dynamics while disregarding irrelevant noise. This abstraction makes the system both more efficient and more robust, reportedly requiring only about 16 seconds to plan a robot action.

Meta's V-JEPA 2 represents a step toward achieving "advanced machine intelligence" by enabling robots to interact effectively in environments they have never encountered before. The model achieves state-of-the-art results on motion recognition and action prediction benchmarks and can control robots without additional training. By focusing on the essential and predictable aspects of a scene, V-JEPA 2 aims to provide AI agents with the intuitive physics needed for effective planning and reasoning in the real world, distinguishing itself from generative models that attempt to predict every detail.
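The core idea, predicting in an abstract embedding space rather than in pixels, can be shown with a toy sketch. This is purely illustrative (plain NumPy, random projections standing in for the model's large video transformers): a JEPA-style loss compares predicted and actual embeddings, while a pixel-space loss must also pay for irreducible frame noise.

```python
import numpy as np

rng = np.random.default_rng(0)
D_PIXELS, D_LATENT = 64, 8

# Toy stand-ins for the two JEPA components (illustrative only):
# an "encoder" mapping a frame to a compact embedding, and a
# "predictor" mapping a context embedding to the future embedding.
W_enc = rng.normal(size=(D_PIXELS, D_LATENT)) / np.sqrt(D_PIXELS)
W_pred = rng.normal(size=(D_LATENT, D_LATENT)) / np.sqrt(D_LATENT)

def encode(frame):
    """Project a frame into the abstract embedding space."""
    return np.tanh(frame @ W_enc)

def predict_next(z_context):
    """Predict the *embedding* of the next frame, not its pixels."""
    return np.tanh(z_context @ W_pred)

# A context frame, and a target frame = context plus deterministic
# "motion" plus unpredictable per-pixel noise.
context = rng.normal(size=D_PIXELS)
motion = 0.1 * np.roll(context, 1)
target = context + motion + 0.5 * rng.normal(size=D_PIXELS)

# JEPA-style objective: distance measured in embedding space.
z_pred = predict_next(encode(context))
latent_loss = float(np.mean((z_pred - encode(target)) ** 2))

# Generative-style objective: distance in pixel space. Even a perfect
# motion model is penalized for the noise it cannot predict.
pixel_loss = float(np.mean((context + motion - target) ** 2))

print(f"latent-space loss: {latent_loss:.3f}")
print(f"pixel-space loss:  {pixel_loss:.3f}")
```

The sketch only illustrates the shape of the objective; none of the weights, dimensions, or losses here correspond to Meta's actual model.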

Recommended read:
References :
  • www.computerworld.com: Meta’s recent unveiling of V-JEPA 2 marks a quiet but significant shift in the evolution of AI vision systems, and it’s one enterprise leaders can’t afford to overlook.
  • www.marktechpost.com: Meta AI Releases V-JEPA 2: Open-Source Self-Supervised World Models for Understanding, Prediction, and Planning
  • The Tech Portal: Social media company Meta has now introduced V-JEPA 2, a new open-source…
  • about.fb.com: Our New Model Helps AI Think Before it Acts
  • AI News | VentureBeat: Meta’s new world model lets robots manipulate objects in environments they’ve never encountered before
  • www.infoq.com: Meta Introduces V-JEPA 2, a Video-Based World Model for Physical Reasoning
  • eWEEK: Dubbed a “world model,” Meta’s new V-JEPA 2 AI model uses visual understanding and physical intuition to enhance reasoning in robotics and AI agents.

@futurumgroup.com //
NVIDIA is making significant strides in the fields of robotics and climate modeling, leveraging its AI expertise and advanced platforms. At COMPUTEX 2025, NVIDIA announced the latest enhancements to its Isaac robotics platform, including Isaac GR00T N1.5 and GR00T-Dreams, designed to accelerate the development of humanoid robots. These tools focus on streamlining development through synthetic data generation and accelerated training, addressing the critical need for extensive training data. Robotics leaders such as Boston Dynamics and Foxconn have already adopted Isaac technologies, indicating the platform's growing influence in the industry.

NVIDIA's Isaac GR00T-Dreams allows developers to create task-based motion sequences from a single image input, significantly reducing the reliance on real-world data collection. The company has also released simulation frameworks, including Isaac Sim 5.0 and Isaac Lab 2.2, along with Cosmos Reason and Cosmos Predict 2, to further support high-quality training data generation. Blackwell-based RTX PRO 6000 workstations and servers from partners like Dell, HPE, and Supermicro are being introduced to unify robot development workloads from training to deployment. Olivier Blanchard, Research Director at Futurum, notes that these platform updates reinforce NVIDIA's position in defining the infrastructure for humanoid robotics.

In parallel with its robotics initiatives, NVIDIA has unveiled cBottle, a generative AI model within its Earth-2 platform that simulates global climate at kilometer-scale resolution. The model promises faster, more efficient climate predictions by simulating atmospheric conditions at a detailed 5km resolution. cBottle addresses the limitations of traditional climate models by compressing massive climate simulation datasets, reducing storage requirements by up to 3,000 times. The kilometer-scale resolution allows convection to be simulated explicitly, driving more accurate projections of extreme weather events and opening new avenues for understanding and anticipating complex climate phenomena.

Recommended read:
References :
  • futurumgroup.com: Olivier Blanchard, Research Director at Futurum, shares insights on how NVIDIA’s Isaac GR00T platform and Blackwell systems aim to accelerate humanoid robotics through simulation, synthetic data, and integrated infrastructure.
  • Maginative: NVIDIA’s Earth-2 platform introduces cBottle, a generative AI model simulating global climate at kilometer-scale resolution, promising faster, more efficient climate predictions.

@futurumgroup.com //
NVIDIA is significantly advancing the field of humanoid robotics through its Isaac GR00T platform and Blackwell systems. These tools aim to accelerate the development and deployment of robots in manufacturing and other industries. Key to this advancement is NVIDIA's focus on simulation-first, AI-driven methodologies, leveraging synthetic data and integrated infrastructure to overcome the limitations of traditional robot training. This approach is particularly beneficial for European manufacturers seeking to enhance their production processes through AI.

NVIDIA's commitment to AI-powered robotics is evidenced by its substantial investment in hardware and software. The company is constructing the "world's first" industrial AI cloud in Germany, featuring 10,000 GPUs, DGX B200 systems, and RTX Pro servers. This infrastructure will support CUDA-X libraries, RTX, and Omniverse-accelerated workloads, providing a powerful platform for European manufacturers to develop and deploy AI-driven robots. NVIDIA's Isaac GR00T N1.5, an open foundation model for humanoid robot reasoning and skills, is now available, further empowering developers to create adaptable and instruction-following robots.

European robotics companies are already embracing NVIDIA's technologies. Companies such as Agile Robots, Humanoid, Neura Robotics, Universal Robots, Vorwerk, and Wandelbots are showcasing AI-driven robots powered by NVIDIA's platform. NVIDIA is also releasing new models and tools, including NVIDIA Halos, a safety system that unifies hardware, AI models, software, tools, and services to promote safety across the entire development lifecycle of AI-driven robots. By addressing both the performance and safety aspects of robotics, NVIDIA is positioning itself as a key player in the future of AI-powered automation.

Recommended read:
References :
  • futurumgroup.com: Olivier Blanchard, Research Director at Futurum, shares insights on how NVIDIA’s Isaac GR00T platform and Blackwell systems aim to accelerate humanoid robotics through simulation, synthetic data, and integrated infrastructure.

@techstrong.ai //
Amazon is making a substantial investment in artificial intelligence infrastructure, announcing plans to spend $10 billion in North Carolina. The investment will be used to build a cloud computing and AI campus just east of Charlotte, NC. This project is anticipated to create hundreds of good-paying jobs and provide a significant economic boost to Richmond County, positioning North Carolina as a hub for cutting-edge technology.

This investment underscores Amazon's commitment to driving innovation and advancing the future of cloud computing and AI technologies. The company plans to expand its AI data center infrastructure in North Carolina, following a trend among Big Tech companies who are building out infrastructure to meet escalating AI resource requirements. The new "innovation campus" will house data centers containing servers, storage drives, networking equipment, and other essential technology.

Amazon is also using AI to improve efficiency in its warehouse operations. The company unveiled upgrades centered on "agentic AI" robots, designed to perform a variety of tasks based on natural language instructions, from unloading trailers to retrieving repair parts and lifting heavy objects. The goal is to create systems that can understand and act on commands, transforming robots into multi-talented helpers and ultimately leading to faster deliveries and improved efficiency.

Recommended read:
References :
  • insidehpc.com: Amazon to Invest $10B in North Carolina AI Infrastructure
  • WhatIs: As it races to compete with big tech rivals for artificial intelligence dominance, Amazon's Tar Heel State investment is part of a $100 billion capital expenditure effort slated for 2025.
  • WRAL TechWire: Global technology giant Amazon plans to launch a cloud computing and artificial intelligence innovation campus in Richmond County, state officials say.
  • techstrong.ai: Amazon Plans to Splurge $10 Billion in North Carolina on AI Infrastructure
  • Maginative: Amazon Drops $10B on North Carolina Data Centers to Power Its AI Push
  • www.theguardian.com: Amazon ‘testing humanoid robots to deliver packages’

@techstrong.ai //
Amazon is making a significant push into robotics with the development of humanoid robots designed for package delivery. According to reports, the tech giant is working on the AI software needed to power these robots and is constructing a dedicated "humanoid park" at its San Francisco facility. This indoor testing ground, roughly the size of a coffee shop, will serve as an obstacle course where the robots can practice the entire delivery process, including navigating sidewalks, climbing stairs, and handling packages. The initiative reflects Amazon's continued efforts to enhance efficiency and optimize its logistics operations through advanced automation.

Amazon envisions these humanoid robots eventually riding in its Rivian electric vans and independently completing the last leg of the delivery journey. The company is reportedly testing various robot models, including the Unitree G1, and focusing on developing AI software that will allow them to navigate real-world environments. This move comes as Amazon continues to invest heavily in AI and robotics, including the deployment of over 750,000 robots in its warehouses. The integration of humanoid robots into the delivery process has the potential to reduce physical strain on human workers and address labor shortages, especially during peak seasons.

This initiative is part of a broader trend of leveraging AI and robotics to optimize supply chains and reduce operational costs. While there is no official rollout date for the humanoid delivery robots, Amazon's investment in this technology signals its commitment to exploring innovative solutions for package delivery. Furthermore, it coincides with Amazon investing $10 billion in North Carolina to build new data centers as part of a massive AI infrastructure expansion.

Recommended read:
References :
  • techstrong.ai: Amazon.com Inc. is working on software for humanoid robots that could eventually take the jobs of delivery workers, and it is putting the final touches on construction of an indoor humanoid park the size of a coffee shop in San Francisco to test them, according to a report in The Information.
  • www.theguardian.com: Tech firm is building ‘humanoid park’ in US to try out robots, which could ‘spring out’ of its vans
  • WhatIs: Amazon to launch $10B data center upgrade in North Carolina
  • www.wral.com: Global technology giant Amazon plans to launch a cloud computing and artificial intelligence innovation campus in Richmond County, state officials say.
  • futurism.com: Amazon Testing Humanoid Robots to Ride in Vans, Hand-Deliver Packages
  • shellypalmer.com: Amazon is Testing Humanoid Delivery Robots
  • www.eweek.com: Amazon Tests Humanoid Delivery Bots, Says ‘Robots Will Soon Be Nimbler’
  • Maginative: Amazon Drops $10B on North Carolina Data Centers to Power Its AI Push

@siliconangle.com //
Hugging Face, primarily known as a platform for machine learning and AI development, is making a significant push into the robotics field with the introduction of two open-source robot designs: HopeJR and Reachy Mini. HopeJR is a full-sized humanoid robot boasting 66 degrees of freedom, while Reachy Mini is a desktop unit. The company aims to democratize robotics by offering accessible and open-source platforms for development and experimentation. These robots are intended to serve as tools for AI developers, similar to a Raspberry Pi, facilitating the testing and integration of AI applications in robotic systems. Hugging Face anticipates shipping initial units by the end of the year.

HopeJR, co-designed with French robotics company The Robot Studio, is capable of walking and manipulating objects. According to Hugging Face Principal Research Scientist Rémi Cadène, it can perform 66 movements, including walking. Priced at around $3,000, HopeJR is positioned to compete with offerings like Unitree's G1. CEO Clem Delangue emphasized the importance of the robots being open source, stating that this lets anyone assemble, rebuild, and understand how they work, and keeps them affordable, ensuring that robotics isn’t dominated by a few large corporations with black-box systems. This approach lowers the barrier to entry for researchers and developers, fostering innovation and collaboration in the robotics community.

Reachy Mini is a desktop robot designed for AI application testing. Resembling a "Wall-E-esque statue bust" according to reports, Reachy Mini features a retractable neck that lets it follow the user with its head and interact through audio. Priced between $250 and $300, Reachy Mini is intended for testing AI applications before deploying them to production. Hugging Face's expansion into robotics includes the acquisition of Pollen Robotics, a company specializing in humanoid robot technology, and the release of AI models specifically designed for robots, as well as the SO-101, a 3D-printable robotic arm.
