News from the AI & ML world

DeeperML

Meta is actively developing AI safety systems to mitigate potential misuse of its AI models. The company is defining the types of AI systems it deems too risky to release publicly, including systems that could aid in cyber, chemical, or biological attacks. Meta will flag such systems and may halt their development altogether if the risks are judged too high.

To determine risk level, Meta will rely on input from internal and external researchers, reviewed by senior decision-makers, rather than solely on empirical tests. If a system is deemed high-risk, access will be limited and it won't be released until mitigations reduce the risk to moderate levels. In cases of critical-risk AI, which could lead to catastrophic outcomes, Meta will implement more stringent measures. Anthropic is also addressing AI safety through its Constitutional Classifiers, designed to guard against jailbreaks and monitor content for harmful outputs. Other leading tech groups, including Microsoft, are investing in similar safety systems.
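The tiering described above amounts to a simple decision rule: assess a system's risk level, then gate release on that assessment. Below is a minimal, hypothetical Python sketch of that logic; the tier names, class, and function are invented for illustration and do not represent Meta's actual framework or code.

```python
# Hypothetical sketch of the risk-tiering logic summarized above.
# Tier names and actions mirror the article's description (high-risk:
# limit access and withhold release pending mitigations; critical-risk:
# apply the strictest measures, up to halting development). All names
# here are illustrative, not Meta's implementation.
from enum import Enum


class RiskTier(Enum):
    MODERATE = "moderate"
    HIGH = "high"
    CRITICAL = "critical"


def release_decision(tier: RiskTier) -> str:
    """Map an assessed risk tier to a release action."""
    if tier is RiskTier.CRITICAL:
        # Potential for catastrophic outcomes: strictest response.
        return "halt development; restrict access to the system"
    if tier is RiskTier.HIGH:
        # Limit access; do not release until mitigations bring the
        # assessed risk down to moderate.
        return "limit access; withhold release pending mitigations"
    return "eligible for release under standard safeguards"


if __name__ == "__main__":
    for tier in RiskTier:
        print(f"{tier.value}: {release_decision(tier)}")
```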



References:
  • www.techmeme.com: Meta describes what kinds of AI systems it may deem too risky to release, including ones that could aid in cyberattacks, and how such systems will be flagged
  • techcrunch.com: Meta describes what kinds of AI systems it may deem too risky to release, including ones that could aid in cyberattacks, and how such systems will be flagged