News from the AI & ML world

DeeperML

Zach Winn @ news.mit.edu //
MIT spinout Themis AI is tackling a critical issue in artificial intelligence: AI "hallucinations," instances where AI systems confidently provide incorrect or fabricated information. These inaccuracies can have serious consequences, particularly in high-stakes applications such as drug development, autonomous driving, and information synthesis. Themis AI has developed a tool called Capsa that quantifies model uncertainty and enables AI models to recognize their own limitations. Capsa works by modifying AI models to identify patterns in their data processing that indicate ambiguity, incompleteness, or bias. This allows the AI to "admit when it doesn't know," improving the reliability and transparency of AI systems.
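
The article does not detail how Capsa computes uncertainty, so the sketch below is only a rough illustration of what "quantifying uncertainty" can look like in practice, using a common stand-in technique (ensemble disagreement) rather than Themis AI's actual method. The function name, ensemble size, and abstention threshold are all hypothetical.

```python
# Illustrative sketch only: estimate uncertainty via disagreement among an ensemble of
# identically configured models trained with different random seeds (NOT Capsa's method).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# Train several copies of the same model; their disagreement signals uncertainty.
ensemble = [
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=seed).fit(X, y)
    for seed in range(5)
]

def predict_with_uncertainty(x):
    """Return the mean class probabilities and the ensemble's disagreement (max std dev)."""
    probs = np.stack([m.predict_proba(x.reshape(1, -1))[0] for m in ensemble])
    return probs.mean(axis=0), probs.std(axis=0).max()

mean_probs, uncertainty = predict_with_uncertainty(X[0])
if uncertainty > 0.2:  # hypothetical threshold for "admitting it doesn't know"
    print("Model abstains: prediction is too uncertain.")
else:
    print(f"Predicted class {mean_probs.argmax()} (p={mean_probs.max():.2f})")
```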

The core idea behind Themis AI's Capsa platform is to wrap existing AI models, identify uncertainties and potential failure modes, and then enhance the models' capabilities. Founded in 2021 by MIT Professor Daniela Rus, Alexander Amini, and Elaheh Ahmadi, Themis AI aims to enable safer and more trustworthy AI deployments across industries. Capsa can be integrated with any machine-learning model to detect and correct unreliable outputs in seconds. The platform has already proven useful in diverse sectors: helping telecom companies with network planning, assisting oil and gas firms in analyzing seismic imagery, and contributing to the development of more reliable chatbots.
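
As a loose sketch of that "wrap any model" pattern (not the actual Capsa API, whose interface the article does not describe), one can imagine a thin wrapper that intercepts a trained classifier's predictions and attaches a reliability flag based on predictive entropy. The class name and threshold below are hypothetical.

```python
# Minimal sketch of a model wrapper that flags unreliable outputs (hypothetical, not Capsa).
import numpy as np

class UncertaintyWrapper:
    def __init__(self, model, max_entropy=0.5):
        self.model = model              # any classifier exposing predict_proba()
        self.max_entropy = max_entropy  # hypothetical cut-off for "reliable"

    def predict(self, X):
        probs = self.model.predict_proba(X)
        # Shannon entropy of each predictive distribution; high entropy = uncertain.
        entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)
        labels = probs.argmax(axis=1)
        reliable = entropy < self.max_entropy
        return labels, reliable

# Usage: labels, ok = UncertaintyWrapper(trained_model).predict(X_new)
# Downstream code can route samples where ok is False to a human or a fallback model.
```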

Themis AI’s work builds upon years of research at MIT into model uncertainty. Professor Rus's lab, with funding from Toyota, studied the reliability of AI for autonomous driving, a safety-critical application where accurate model understanding is paramount. The team also developed an algorithm capable of detecting and mitigating racial and gender bias in facial recognition systems. Amini emphasizes that Themis AI's software adds a crucial layer of self-awareness that has been missing in AI systems. The goal is to enable AI to forecast its own failures before they occur, ensuring that these systems are used responsibly and effectively in critical decision-making processes.
Image credit: news.mit.edu (https://news.mit.edu/sites/default/files/images/202506/MIT-ThemisAI-01-Press.jpg)



Classification:
  • HashTags: #AI #Transparency #ThemisAI
  • Company: Themis AI (MIT spinout)
  • Target: AI models
  • Product: Capsa
  • Feature: Uncertainty Quantification
  • Type: AI
  • Severity: Informative