
Ex-OpenAI staff call for “right to warn” about AI risks without retaliation

Open letter argues for AI whistleblower provisions due to lack of government oversight.

[Illustration of businesspeople with a red blank speech bubble standing in line. Credit: Getty Images]


On Tuesday, a group of former OpenAI and Google DeepMind employees published an open letter calling for AI companies to commit to principles allowing employees to raise concerns about AI risks without fear of retaliation. The letter, titled "A Right to Warn about Advanced Artificial Intelligence," has so far been signed by 13 individuals, including some who chose to remain anonymous due to concerns about potential repercussions.

The signatories argue that while AI has the potential to deliver benefits to humanity, it also poses serious risks, ranging from the "further entrenchment of existing inequalities" to "manipulation and misinformation" to "the loss of control of autonomous AI systems potentially resulting in human extinction."

They also assert that AI companies possess substantial non-public information about their systems' capabilities, limitations, and risk levels, but currently have only weak obligations to share this information with governments and none with civil society.




Published: 2024-06-04T21:52:38