OpenAI Grant Program: OpenAI is continuing its focus on increasing human control over AI models in anticipation of superintelligent systems, the company announced Friday. According to the OpenAI website, the company announced the Democratic Inputs to AI grant program in May and then awarded $100,000 grants to 10 teams, selected from nearly 1,000 applicants, to design, build, and test ideas that use democratic methods to decide the rules that govern AI systems.
This article delves into the grant recipients’ innovations, outlines the key learnings, and calls for researchers and engineers to join OpenAI’s Collective Alignment Team.
How do the OpenAI Grant recipients innovate democratic technology?
OpenAI said in a blog post, “There were far more than 10 qualified teams, but a joint committee of OpenAI employees and external experts in democratic governance selected the final 10 teams to span a set of diverse backgrounds and approaches.” Every team included experts from different fields, including law, journalism, peacebuilding, machine learning, and social science research.
| Grant Recipient | Team Members | Objective |
| --- | --- | --- |
| Case Law for AI Policy | Quan Ze (Jim) Chen, Kevin Feng, Inyoung Cheong, Amy X. Zhang, King Xia | Creating a robust case repository around AI interaction to make case-law-inspired judgments and democratically engage experts, laypeople, and key stakeholders. |
| Collective Dialogues for Democratic Policy Development | Andrew Konya, Lisa Schirch, Colin Irwin | Developing policies that reflect collective dialogue to efficiently scale democratic deliberation and find areas of consensus. |
| Deliberation at Scale: Socially democratic inputs to AI | Jorim Theuns, Evelien Nieuwenburg, Pepijn Verburg, Lei Nelissen, Brett Hennig, Rich Rippin, Ran Haase, Aldo de Moor, CeesJan Mol, Naomi Esther, Rolf Kleef, Bram Delisse | Enabling democratic deliberation in small group conversations conducted via AI-facilitated video calls. |
| Democratic Fine-Tuning | Joe Edelman, Oliver Klingefjord, Ivan Vendrov | Creating a moral graph of values that can be used to fine-tune models. |
| Energize AI: Aligned, a Platform for Alignment | Ethan Shaotran, Ido Pesok, Sam Jones | Developing guidelines for aligning AI models with live, large-scale participation and a ‘community notes’ algorithm. |
| Generative Social Choice | Sara Fish, Paul Gölz, Ariel Procaccia, Gili Rusak, Itai Shapira, Manuel Wüthrich | Distilling large numbers of free-text opinions into a concise slate that guarantees fair representation using mathematical arguments from social choice theory. |
| Inclusive.AI: Engaging Underserved Populations in Democratic Decision-Making on AI | Yang Wang, Yun Huang, Tanusree Sharma, Dawn Song, Sunny Liu, Jeff Hancock | Facilitating decision-making processes related to AI using a platform with decentralized governance mechanisms (e.g., a DAO) that empower underserved groups. |
| Making AI Transparent and Accountable by Rappler | Gemma B. Mendoza, Gilian Uy, Don Kevin Hapal, Ogoy San Juan, Maria Ressa | Starting discussion and understanding of participants’ views on complex, polarizing topics via linked offline and online processes. |
| Ubuntu-AI: A Platform for Equitable and Inclusive Model Training | Ron Eglash, Joshua Mounsey, Micheal Nayebare, Rehema Baguma, Ussen Kimanuka | Returning value to those who help create it while facilitating LLM development and ensuring more inclusive knowledge of African creative work. |
| Taiwan and Chatham House: Bridging the Recursive Public | Alex Krasodomski-Jones, Carl Miller, Flynn Devine, Jia-Wei (Peter) Cui, Shu Yang Lin | Using an adapted vTaiwan methodology to create a recursive, connected participatory process for AI. |
Key Learnings From The Grant Program
The Grant Program covered novel video deliberation interfaces, platforms for crowdsourced audits of AI models, mathematical formulations of representation guarantees, and approaches to map beliefs to dimensions that can be used to fine-tune model behaviour. A few important learnings cited by OpenAI are as follows:
Public opinion can change frequently
The case studies by teams such as Democratic Fine-Tuning, Case Law for AI Policy, and Inclusive.AI were based on participant opinions and feedback. Interestingly, they found that public opinion can shift frequently, sometimes even over the course of a single session.
Bridging across the digital divide is still difficult
Reaching relevant participants across digital and cultural divides might require additional investments in better outreach and better tooling. Some teams found that participants recruited online leaned more optimistic toward AI, skewing the sample toward one side. Given the difficulty of reaching and recruiting participants, the Ubuntu-AI team chose to directly incentivize participation by developing a platform that allows African creatives to receive compensation for contributing to machine learning about their designs and backgrounds.
Finding agreement within polarized groups
The Collective Dialogues, Energize.AI, and Recursive Public teams’ processes were designed to find policy proposals that would be strongly supported across polarized groups. However, most teams found that each session contained a small group of people who felt strongly that restricting AI assistants from answering certain questions was wrong, no matter what. Hence, finding a compromise can be hard when a small group has strong opinions on a particular issue.
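To make the idea of "support across polarized groups" concrete, here is a minimal, hypothetical sketch in the spirit of a ‘community notes’-style bridging approach (not any grant team’s actual algorithm): proposals are ranked by their minimum approval rate across opposing groups, so a proposal only one side likes scores low even if its total support is high.

```python
# Illustrative sketch only: rank policy proposals by "bridging" support,
# i.e. the minimum approval rate across two opposing groups, instead of
# by overall popularity. All proposal names and numbers are invented.

def bridging_score(approval_a, approval_b):
    """Approval rates are fractions in [0, 1]; taking the minimum means
    a proposal must be supported by BOTH groups to rank highly."""
    return min(approval_a, approval_b)

def rank_proposals(proposals):
    """proposals: dict mapping proposal -> (approval in group A, approval in group B).
    Returns proposals sorted from most to least bridging."""
    return sorted(proposals, key=lambda p: bridging_score(*proposals[p]), reverse=True)

proposals = {
    "AI may refuse clearly illegal requests": (0.90, 0.85),
    "AI should never refuse any question":    (0.80, 0.10),
    "AI answers must cite sources":           (0.70, 0.75),
}
print(rank_proposals(proposals))
```

Note how the second proposal ranks last despite high support in one group: bridging metrics deliberately discount one-sided popularity, which is exactly why a small group with strong contrary opinions makes consensus hard to certify.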
Reaching consensus vs. representing diversity
The project highlighted the complications of making decisions or delivering a single outcome. Different teams, like Generative Social Choice and Inclusive.AI, devised methods that highlight a few key positions, showcasing the range of opinions to ensure everyone has an equal say. Hence, OpenAI concluded that there might be tension between trying to reach a consensus and adequately representing the diversity of various opinions. It’s not just about siding with the majority but also giving a platform to different viewpoints.
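The tension between consensus and diversity can be illustrated with a toy sketch from computational social choice (a simplified illustration, not the Generative Social Choice team’s actual method): instead of picking the k most popular statements, which may all represent the same majority bloc, greedily pick statements that cover participants not yet represented by the slate.

```python
# Toy coverage-based slate selection (illustrative only; invented data).
# Greedily choose statements endorsed by the most still-unrepresented
# participants, so the slate spans distinct cohorts rather than
# repeating the majority view k times.

def select_slate(endorsements, k):
    """endorsements: dict mapping statement -> set of participant ids.
    Returns up to k statements chosen to cover distinct participants."""
    covered = set()
    slate = []
    for _ in range(k):
        # Statement endorsed by the most participants not yet covered.
        best = max(endorsements, key=lambda s: len(endorsements[s] - covered))
        if not endorsements[best] - covered:
            break  # every endorsing participant is already represented
        slate.append(best)
        covered |= endorsements[best]
    return slate

endorsements = {
    "strict content rules": {1, 2, 3, 4},
    "minimal restrictions": {5, 6},
    "case-by-case review":  {1, 2, 7},
}
print(select_slate(endorsements, 2))
```

With k = 2, a pure popularity ranking would return the two statements sharing participants 1 and 2; the coverage rule instead adds the minority position, trading some majority agreement for representation of a different cohort.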
During the program, teams received hands-on support and guidance to facilitate collaboration. The project also encouraged teams to describe and document their processes in a structured way for a better implementation plan. Also, OpenAI facilitated a special Demo Day in September 2023 for the teams to showcase their concepts to one another.
As part of the implementation plan, OpenAI’s goal is to create a system that collects public input and uses it to steer powerful AI models. A “collective alignment” team of researchers and engineers is being formed to ensure that this research continues to advance. The team’s responsibilities include:
- Implementing a system for collecting and encoding public input on model behavior into OpenAI’s systems.
- Continuing to work with external advisors and grant teams, including running pilots, to incorporate the grant prototypes into steering its models.
Also, if you are an engineer with a diverse technical background and exceptional research skills, OpenAI is waiting for you. Become part of OpenAI’s mission to ensure that artificial general intelligence benefits all of humanity.