Press "Enter" to skip to content

Another OpenAI Executive Exits

Key takeaways:

  • In less than a week, the two leaders of OpenAI’s safety team have resigned: Chief Scientist Ilya Sutskever and Jan Leike.
  • Leike said OpenAI has not prioritized the safety of its AI systems, which he believes are becoming capable enough that they could soon pose a threat to humanity.
  • OpenAI CEO Sam Altman and President Greg Brockman said they weigh both the opportunities and risks of superintelligence (AGI).

A second high-profile resignation has hit OpenAI: Jan Leike, who co-led the company’s AI safety team, has left.

In a post on X (Twitter), he said that OpenAI was not paying enough attention to making the superintelligent AI systems it is developing safe for society.

“I have been disagreeing with OpenAI leadership about the company’s core priorities for quite some time, until we finally reached a breaking point,” Leike said on X.

“Building smarter-than-human machines is an inherently dangerous endeavor. OpenAI is shouldering an enormous responsibility on behalf of all humanity,” he said. “But over the past years, safety culture and processes have taken a backseat to shiny products.”

“Stepping away from this job has been one of the hardest things I have ever done, because we urgently need to figure out how to steer and control AI systems much smarter than us,” Leike added.

Leike joins OpenAI co-founder and Chief Scientist Ilya Sutskever, who resigned earlier in the week. Sutskever and Leike led the so-called ‘Superalignment’ team, which was tasked with developing a safe, superintelligent AI that could counter AI systems that might harm humanity. The rationale was that it would take another superintelligence to fight superintelligence.

The implications of these exits, especially their timing, arguably will not be felt by enterprise clients anytime soon, since many of them access OpenAI’s models through Microsoft Azure. Microsoft has the right to use OpenAI’s models short of AGI, or artificial general intelligence; it is this AGI (superintelligence) that is at the heart of the two resignations. The longer-term implications for business, however, remain to be seen.

“Sailing against the wind”

Leike believes OpenAI should use “much more of our bandwidth” on “security, monitoring, preparedness, safety, adversarial robustness, (super)alignment, confidentiality, societal impact, and related topics.”

“I am concerned we aren’t on a trajectory to get there,” he said.

Leike said his ‘Superalignment’ team has been “sailing against the wind” over the last few months.

“Sometimes we were struggling for compute and it was getting harder and harder to get this crucial research done,” Leike wrote. When OpenAI announced the creation of the team last year, it pledged to dedicate 20% of the compute it had secured to the team.

“We are long overdue in getting incredibly serious about the implications of AGI,” he added. “Only then can we ensure AGI benefits all of humanity.”

OpenAI sees the glass half full

OpenAI CEO Sam Altman responded by saying that Leike was “right, we have a lot more to do; we are committed to doing it.”

OpenAI President Greg Brockman responded in more detail. In a post on X that he and Altman signed, Brockman wrote that the company has raised awareness of the risks and opportunities of AGI “so that the world can better prepare for it.”

Brockman and Altman argued that they have “repeatedly demonstrated the incredible possibilities from scaling deep learning and analyzed their implications; called for international governance of AGI before such calls were popular; and helped pioneer the science of assessing AI systems for catastrophic risks.”

The two envision a world where AI systems are integrated more deeply into existing systems and act on behalf of humans, rather than serving merely as tools that generate content in response to a prompt.

“We think such systems will be incredibly beneficial and helpful to people, and it’ll be possible to deliver them safely, but it’s going to take an enormous amount of foundational work,” they wrote.

“We believe both in delivering on the tremendous upside and working to mitigate the serious risks; we take our role here very seriously and carefully weigh feedback on our actions.”

Is AI smarter than a cat yet?

Meta’s acerbic Yann LeCun, officially its chief AI scientist, poured a bucket of cold water on the tit-for-tat.

“It seems to me that before ‘urgently figuring out how to control AI systems much smarter than us’ we need to have the beginning of a hint of a design for a system smarter than a house cat,” the Turing award winner said on X. “Such a sense of urgency reveals an extremely distorted view of reality.”

He compared it to the development of aircraft.

“It’s as if someone had said in 1925 ‘we urgently need to figure out how to control aircrafts that can transport hundreds of passengers at near the speed of sound over the oceans’,” LeCun wrote. “It would have been difficult to make long-haul passenger jets safe before the turbojet was invented and before any aircraft had crossed the Atlantic non-stop. Yet, we can now fly halfway around the world on twin-engine jets in complete safety. It didn’t require some sort of magical recipe for safety. It took decades of careful engineering and iterative refinements.”

“The process will be similar for intelligent systems,” said LeCun, a staunch pro-superintelligence advocate who has debated fellow Turing awardees about their fear that AGI poses an existential threat.

LeCun also believes superintelligence will not be here soon. “It will take years for them to get as smart as cats, and more years to get as smart as humans, let alone smarter (don’t confuse the superhuman knowledge accumulation and retrieval abilities of current LLMs with actual intelligence). It will take years for them to be deployed and fine-tuned for efficiency and safety as they are made smarter and smarter.”

The problem at OpenAI, LeCun said in a follow-up tweet, is that when a group of people with a “distorted” view of reality who believe in an “impending Great Evil” get together, “they often fall victim to a spiral of purity that makes them hold more and more extreme beliefs.” This group becomes “toxic” to the company, and its members become “marginalized and eventually leave.”
