News from the AI & ML world
@Google DeepMind Blog
Google DeepMind is intensifying its focus on AI governance and security as it ventures further into artificial general intelligence (AGI). Its safety approach splits potential threats into four categories, and one proposed solution is a "monitor" AI that oversees more capable models. This proactive approach includes prioritizing technical safety, conducting thorough risk assessments, and fostering collaboration within the broader AI community to develop AGI responsibly.
Meanwhile, concerns are rising within the AI community that DeepMind's reported clampdown on sharing research will stifle AI innovation. Anita Schjøll Abildgaard, CEO of Iris.ai, a Norwegian startup building an AI-powered engine for science and one of Europe's leading companies in the space, warns that the drawbacks of these restrictions will far outweigh the benefits and that they will hinder technological advances.
References:
- Google DeepMind Blog: We’re exploring the frontiers of AGI, prioritizing technical safety, proactive risk assessment, and collaboration with the AI community.
- The Next Web: Google DeepMind’s reported clampdown on sharing research will stifle AI innovation, warns the CEO of Iris.ai, one of Europe’s leading startups in the space.
- www.techrepublic.com: DeepMind’s approach to AGI safety and security splits threats into four categories. One solution could be a “monitor” AI.