News from the AI & ML world

DeeperML - #aigovernance

Ken Yeung@Ken Yeung //
Ai2 has unveiled OLMoTrace, an open-source tool designed to bring unprecedented transparency to the inner workings of AI language models. This innovative application allows developers and researchers to trace the outputs of AI models, specifically those within the OLMo family, directly back to the original training data from which they were derived. OLMoTrace is intended to offer a crucial fact-checking mechanism and enable deeper scrutiny of AI-generated content, addressing growing concerns around governance, regulation, and the potential for misinformation. It marks a significant step toward understanding how these complex systems arrive at their conclusions and toward making AI outputs more trustworthy.

The core functionality of OLMoTrace lies in its ability to identify and highlight unique text sequences within a model's response that match verbatim passages in its training data. The tool pinpoints the source documents and presents them alongside the AI-generated output, enabling users to see the precise origin of the information used. This direct traceability differs from methods like retrieval-augmented generation (RAG), which supplements a model's knowledge with external sources. Instead, OLMoTrace reveals how the model learned specific information during its training phase, providing tangible evidence of AI decision-making.
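The matching idea described above can be illustrated with a toy sketch. This is not Ai2's implementation (OLMoTrace indexes trillions of tokens with efficient data structures rather than scanning documents linearly); the function name, the whitespace tokenization, and the `min_len` threshold are all illustrative assumptions.

```python
def find_verbatim_spans(response, corpus, min_len=6):
    """Toy illustration of verbatim-span tracing: return (span, doc_id)
    pairs where a run of at least `min_len` tokens from the model's
    response appears word-for-word in a corpus document."""
    tokens = response.split()
    matches = []
    for doc_id, doc in enumerate(corpus):
        doc_text = " ".join(doc.split())  # normalize whitespace
        # Slide a fixed-length token window over the response and test
        # each window for an exact substring match in the document.
        for start in range(len(tokens) - min_len + 1):
            span = " ".join(tokens[start : start + min_len])
            if span in doc_text:
                matches.append((span, doc_id))
    return matches


corpus = ["the quick brown fox jumps over the lazy dog today"]
response = "indeed the quick brown fox jumps over the lazy dog"
for span, doc_id in find_verbatim_spans(response, corpus):
    print(f"doc {doc_id}: {span!r}")
```

A real system would replace the linear scan with a pre-built index (such as a suffix array over the training corpus) so that lookups stay fast at trillion-token scale, and would merge overlapping windows into maximal spans before displaying them.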

OLMoTrace has a variety of applications, from identifying the root causes of incorrect information within a model's knowledge to verifying whether responses are based on memorization, creative combination, or general knowledge. The tool draws from a vast training dataset of approximately 4.6 trillion tokens, allowing for comprehensive tracing across a wide range of responses. It is currently available on Ai2's flagship OLMo 2 32B model and other models within the OLMo family, with the open-source code freely accessible on GitHub for anyone to use.



References :
  • the-decoder.com: Everyone can now trace language model outputs back to their training data with OLMoTrace
  • Ken Yeung: Ai2’s OLMoTrace Tool Reveals the Origins of AI Model Training Data
  • AI News | VentureBeat: What’s inside the LLM? Ai2 OLMoTrace will ‘trace’ the source
Classification:
  • HashTags: #OLMoTrace #AIgovernance #DataTraceability
  • Company: AI2
  • Target: AI Developers, Governance Regulators
  • Product: OLMoTrace
  • Feature: AI Model Data Tracing
  • Type: AI
  • Severity: Informative
Nathan Labenz@The Cognitive Revolution //
DeepMind's Allan Dafoe, Director of Frontier Safety and Governance, is actively involved in shaping the future of AI governance. Dafoe is addressing the challenges of evaluating AI capabilities, understanding structural risks, and navigating the complexities of governing AI technologies. His work focuses on ensuring AI's responsible development and deployment, especially as AI transforms sectors like education, healthcare, and sustainability, while mitigating potential risks through necessary safety measures.

Google is also prepping its Gemini AI model to take actions within apps, potentially revolutionizing how users interact with their devices. This development, which involves a new API in Android 16 called "app functions," aims to give Gemini agent-like abilities to perform tasks inside applications. For example, users might be able to order food from a local restaurant using Gemini without directly opening the restaurant's app. This capability could make AI assistants significantly more useful.



@www.politico.com //
The AI Action Summit in Paris has drawn criticism for its narrow focus on AI's economic benefits, neglecting the potential for abuse and impacts on fundamental rights and ecological limits. Critics argue that the summit's agenda paints a simplistic picture of AI governance, failing to adequately address critical issues such as discrimination and sustainability. This focus is seen as a significant oversight given the leadership role European countries are claiming in AI governance through initiatives like the EU AI Act.

The summit's speaker selection has also been criticized, with industry representatives outnumbering civil society leaders. This imbalance raises concerns that the summit is captured by industry interests, undermining its ability to serve as a transformative venue for global policy discussions. While civil society organizations organized side events to address these shortcomings, the summit's exclusive nature and industry-centric focus limit its potential to foster inclusive and comprehensive AI governance.



References :
  • Deeplinks: Why the so-called AI Action Summit falls short
  • encodeai.org: Encode Statement on Global AI Action Summit in Paris
Classification:
  • HashTags: #AIGovernance #AISafety #TechEthics
  • Target: Global Community
  • Product: AI Action Summit
  • Feature: AI Governance
  • Type: AI
  • Severity: Minor