OpenAI Believes the Public Should Have a Say in Steering Powerful AI Models

The research lab is forming a “Collective Alignment” team to implement systems that incorporate public input into AI models.


As AI models grow more capable, OpenAI says it is imperative that the public help set appropriate boundaries on their behavior and participate in governing these increasingly powerful systems.

It's this conviction that prompted the company to launch the Democratic Inputs to AI grant program last summer. More than 1,000 applicants competed for grants worth $100,000 each, and OpenAI ultimately funded 10 teams with members across 12 countries. The goal: to design democratic processes for collecting public input on AI.

The grant recipients brought a range of innovative ideas to the table, from video deliberation interfaces and crowdsourced AI audits to mathematical models ensuring fair representation. Notably, AI itself played a crucial role in many projects, assisting in tasks like chat interface customization and data synthesis.

Below is a summary of the 10 projects:

  1. Case Law for AI Policy: Aiming to create a comprehensive case repository for AI interaction scenarios, this project facilitates democratic engagement across experts and the general public, shaping AI behaviors in complex situations.
  2. Collective Dialogues for Democratic Policy Development: This initiative seeks to develop policies reflecting informed public will, using collective dialogues to bridge demographic divides.
  3. Deliberation at Scale: Focused on small group conversations via AI-facilitated video calls, this project aims to capture the essence of group dialogues, enhancing participant connection and understanding.
  4. Democratic Fine-Tuning: By eliciting values from chat dialogues, this approach creates a moral graph of values for fine-tuning AI models, ensuring alignment across cultural and ideological spectrums.
  5. Energize AI: Aligned - a Platform for Alignment: Developing live, large-scale participation guidelines, this project aims for transparent, democratic AI model alignment.
  6. Generative Social Choice: This project distills numerous opinions into a representative summary, using social choice theory to guarantee fair representation.
  7. Inclusive.AI: Targeting underserved populations, this initiative focuses on democratic decision-making in AI, leveraging decentralized governance mechanisms.
  8. Making AI Transparent and Accountable by Rappler: Facilitating understanding and discussion on complex topics, this project integrates offline and online deliberation methods.
  9. Ubuntu-AI: A Platform for Equitable and Inclusive Model Training: Aiming to return value to creators, this project focuses on inclusive knowledge of African creative work in AI training.
  10. vTaiwan and Chatham House: Bridging the Recursive Public: Using the vTaiwan methodology, this initiative fosters participatory processes for AI governance.

The program yielded critical insights about collecting public input. Public opinion on AI is dynamic and changes frequently, necessitating regular rounds of input collection. Bridging the digital divide remains a challenge: results often skew toward participants who are more optimistic about AI. Reaching agreement within polarized groups and balancing consensus against diverse representation also proved difficult.

Still, OpenAI feels bolstered by the promise of participatory steering. They have announced that they are forming a “Collective Alignment” team to implement systems that incorporate public input into AI models. This team will work closely with external advisors and continue to integrate grant prototypes into AI steering processes.
