AI Whistleblowers Call for Increased Safety Oversight and Employee Protections

In an open letter published Tuesday, a group of current and former employees from leading artificial intelligence companies, including Anthropic, Google DeepMind, and OpenAI, expressed concerns about the lack of safety oversight within the AI industry. The letter, titled "A Right to Warn about Advanced Artificial Intelligence," calls for increased protections for whistleblowers and greater transparency from AI companies regarding the potential risks associated with their technologies.

The signatories, who include AI researchers such as Jacob Hilton, Daniel Kokotajlo, and William Saunders, acknowledge the potential benefits of AI but also highlight the serious risks posed by these technologies. These risks include the exacerbation of existing inequalities, manipulation and misinformation, and the potential loss of control over autonomous AI systems, which could have catastrophic consequences.

The letter argues that AI companies possess substantial non-public information about the capabilities, limitations, and risks of their systems, but currently have limited obligations to share this information with governments and none with civil society. The signatories express skepticism that these companies can be relied upon to share such information voluntarily, given their strong financial incentives to avoid effective oversight.

The employees emphasize the crucial role that current and former employees play in holding AI companies accountable to the public, especially in the absence of effective government oversight. However, they claim that broad confidentiality agreements and the fear of retaliation prevent them from voicing their concerns.

To address these issues, the letter calls upon advanced AI companies to commit to four principles:

  1. Not entering into or enforcing agreements that prohibit criticism of the company for risk-related concerns, and not retaliating against employees for such criticism.
  2. Facilitating an anonymous process for employees to raise risk-related concerns to the company's board, regulators, and appropriate independent organizations.
  3. Supporting a culture of open criticism and allowing employees to raise risk-related concerns about the company's technologies to various stakeholders.
  4. Not retaliating against employees who publicly share risk-related confidential information after other processes have failed.

In response to the letter, an OpenAI spokesperson defended the company's practices, stating that it offers avenues for employees to report issues and that it prioritizes safety when releasing new technologies. However, recent reports have raised concerns about OpenAI's aggressive tactics to prevent employees from speaking freely about their work, such as requiring departing employees to sign restrictive non-disparagement and non-disclosure agreements.

Google DeepMind and Anthropic have not yet responded publicly to the letter.

Chris McKay is the founder and chief editor of Maginative. His thought leadership in AI literacy and strategic AI adoption has been recognized by top academic institutions, media, and global brands.
