OpenAI Invites Domain Experts to Join New Red Teaming Network for AI Safety

Image Credit: Maginative

OpenAI has put out an open call for experts from diverse fields to join a new "Red Teaming Network" focused on rigorously evaluating and stress testing OpenAI's AI models. The goal is to identify potential risks and improve the safety of systems like ChatGPT and DALL-E before release.

Red teaming—a practice often used to identify vulnerabilities by simulating adversarial attacks—has long been a part of OpenAI’s iterative deployment process for AI systems. While the company has previously engaged with external experts for similar evaluations, the new initiative seeks to establish more continuous and iterative input from a trusted community of experts throughout all stages of development.

"Assessing AI systems requires an understanding of a wide variety of domains, diverse perspectives and lived experiences," said OpenAI in their announcement. The company emphasized seeking geographic diversity along with expertise across fields like psychology, law, education, healthcare, and more.

One of the most striking aspects of the open call is its emphasis on diversity, not just in expertise but also in geographic representation. Fields of interest extend well beyond traditional computer science and AI research, encompassing domains such as biology, law, and linguistics. This multidisciplinary approach aims to capture a fuller view of the risks and opportunities associated with AI technologies.

Members of the network will sign NDAs and be compensated for red teaming projects commissioned by OpenAI. While involvement will be confidential, OpenAI has historically published insights from past red team collaborations in documents like the GPT-4 System Card.

This initiative aligns with OpenAI's stated mission of developing artificial general intelligence that is broadly beneficial to everyone. Along with red teaming, OpenAI pointed to other collaborative opportunities for experts to help shape safer AI, like contributing evaluations to its open-source evaluations repository.

Participants will range from individual subject-matter experts to research institutions and civil society organizations. OpenAI said it will selectively tap network members for projects based on fit, rather than involving every expert in testing each new model. The time commitment could be as little as 5-10 hours per year. OpenAI will be selecting members on a rolling basis until December 1, 2023, after which it plans to re-evaluate the program.

As AI capabilities rapidly advance, robust testing by diverse experts provides a check on potential harms. The network offers the broader community a unique opportunity to help shape the development of safer AI, and those who are interested and eligible are encouraged to apply.
