Key takeaways:
- Dissent is brewing at OpenAI. In an open letter, 11 former and current employees lambast the company's inadequate attention to AI safety as the startup hurtles toward AGI.
- Turing Award winners Geoffrey Hinton and Yoshua Bengio endorsed the letter, as did Stuart Russell, a renowned computer scientist known for his contributions to AI.
- The signatories want the freedom to air their concerns, without fear of retribution, to AI companies, their boards, regulators and relevant independent organizations.
Eleven former or current employees of OpenAI penned an open letter this week warning that “frontier AI companies” cannot be trusted to keep society safe from the harms of AI.
“AI companies possess substantial non-public information about the capabilities and limitations of their systems, the adequacy of their protective measures, and the risk levels of different kinds of harm,” wrote the OpenAI signatories, plus two researchers from Google DeepMind.
“However, they currently have only weak obligations to share some of this information with governments, and none with civil society. We do not think they can all be relied upon to share it voluntarily.”
The letter was endorsed by Turing Award winners Geoffrey Hinton and Yoshua Bengio, as well as AI luminary Stuart Russell. The OpenAI signatories who revealed their names were scientists heavily involved in AI safety work: Jacob Hilton, Daniel Kokotajlo, William Saunders, Carroll Wainwright and Daniel Ziegler. Ramana Kumar and Neel Nanda signed from Google DeepMind. The rest stayed anonymous.
The letter’s demands
The 13 signatories want advanced AI companies to commit to the following:
- AI companies will not make employees sign contracts that ban them from publicly criticizing the companies’ AI safety practices, nor punish them financially for doing so.
- AI companies will create a truly anonymous channel for whistleblowers to report such concerns to their boards, regulators and relevant third-party organizations.
- AI companies will commit to open communication and let employees raise alarms about their technologies to the public, their boards, regulators or third-party organizations.
- Absent other means to do so, employees should have the freedom to report their concerns about AI risks to the public without fear of punishment from AI companies.
The open letter comes a few weeks after the two leaders of OpenAI’s safety team, former Chief Scientist Ilya Sutskever and Jan Leike, resigned. The latter accused OpenAI of not prioritizing AI safety as the company went down the path of commercialization. After other exits, their team was reportedly disbanded.
In response, OpenAI said it is committed to ensuring its AI systems have a safe impact. The startup uses a Preparedness Framework to assess the risk posed by its AI models and gauge whether its protections are adequate. Recently, it created a new safety and security committee at the board level that includes CEO Sam Altman.
In a post on X, signatory Hilton wrote that while the Preparedness Framework is “well-drafted and thorough,” OpenAI is “under great commercial pressure, and teams implementing this framework may have little recourse if they find that they are given insufficient time to adequately complete their work.”