OpenAI reveals lessons learned from $1M grant program

    2024.01.17

    OpenAI highlighted five significant lessons learned from its $1 million grant program, which invited the public to help decide how artificial intelligence (AI) should behave in order to better align its models with humanity's values.

    In May 2023, the company said it was preparing to award 10 grants worth $100,000 each toward experiments in setting up a “proof-of-concept” democratic process for determining rules for AI systems to follow.

    In a blog post on Jan. 16, the AI company outlined how the 10 teams that received the grants innovated on democratic technology, key learnings from the grant program, and OpenAI’s implementation plans for the new democratic technology.

    According to the post, the teams captured participants’ views in multiple ways and found that public views changed often, potentially affecting how often input processes should occur. OpenAI learned that a collective process should efficiently capture fundamental values and be sensitive enough to detect meaningful changes in views over time.

    In addition, some teams found that bridging the digital divide remains difficult, which can skew results: recruiting participants from across that divide was hampered by platform limitations and by gaps in understanding of local languages and contexts.

    Related: Governance key to enjoying rapidly developing benefits of AI: WEF panel

    Some teams also found it difficult to reach agreement within polarized groups, especially when a small subgroup held strong opinions on a particular issue. In each session run by the Collective Dialogues team, a small group strongly opposed restricting AI assistants from answering certain questions, putting it at odds with the majority voting outcomes.

    Balancing consensus with the representation of diverse opinions in a single outcome is difficult, according to OpenAI. The Inclusive.AI team explored voting mechanisms and found that methods reflecting the strength of people's feelings and ensuring equal participation are perceived as more democratic and fair.

    Regarding fears about the future of AI in governance, the post notes that some participants were nervous about using AI in policy writing and wanted transparency about where it is applied. After deliberation, however, teams observed increased optimism about the public's ability to guide AI.

    OpenAI says it wants to implement ideas from the public participants and will form a new Collective Alignment team of researchers and engineers. The team's task is to build a system for collecting public input on model behavior and encoding it into OpenAI's products and services.

    Magazine: Scam AI ‘kidnappings’, $20K robot chef, Ackman’s AI plagiarism war: AI Eye