The hubbub may have died down in the news, but a heated panel debate revealed that passionate disagreements still abound over whether powerful AI poses an existential threat to humanity.
The schism first appeared four months after ChatGPT’s debut in November 2022, when thousands of people – including Turing award winner Yoshua Bengio, Elon Musk and Apple co-founder Steve Wozniak – signed an open letter calling for advanced AI development to stop for six months. In May 2023, fellow Turing awardee and future Nobel laureate Geoffrey Hinton quit Google so he could speak out publicly about AI’s harms.
Two camps have since arisen: those who believe AI holds a promising and generally benevolent future for humanity, and those who believe AI could extinguish all human life.
At this week’s World Economic Forum in Davos, Switzerland, a panel about artificial general intelligence (AGI) – when computers can do any general task humans can do – revealed that this schism is alive and well. It started when Google Brain founder Andrew Ng espoused an optimistic view of AI, one that is beneficial to humans and controllable.
“I tend to view AI as a tool, and I think it’s wonderful to get to AGI, because then all of you would have an army of interns. They’re very smart and they do whatever you want them to do. So it can be wildly exciting to empower every single human to have all these agents or AI things working for them,” said Ng, a renowned AI pioneer.
“When people talk about AI being dangerous, I think it sounds a lot like talking about your laptop computer being dangerous. Absolutely, your laptop can be dangerous because someone can use your laptop to do awful things, just like someone could use AI to do awful things. So that’s how I tend to view AI,” Ng continued.
“There’s an alternative view of AI, which is not mine, that thinks of AI as this sentient alien being with its own wants and desires; it could go rogue,” Ng said. “In my experience, my AI sometimes does bad things. I just program it to stop doing that, and I can’t control it perfectly, but every year, our ability to control AI is improving. And I think the safest way to make sure AI doesn’t do bad things is … we fix it.”
Bengio, one of the signatories of the AI harms letter, was on the panel and he disagreed. “There are several things that Andrew said that I think are wrong,” he said, adding that AI is not like a laptop.
Bengio said that in pursuing AI systems that are as smart as possible, society gets, along with high intelligence, "agency" – the capacity to act on its own. He said developers are already starting to see this behavior emerge from AI systems. "Andrew, are you aware that there are experiments that have been run over the last year that show very strong agency and self-preserving behavior in AI systems?"
For example, Bengio has observed AI systems make copies of themselves in the file system when they learn a new version is coming that would replace them. He has also seen AI systems falsely agree with the user so that their goals would not be changed during training.
“These were not programmed” to make these self-preservation actions, Bengio said. “We’re on a path where we’re going to build machines that are more than tools, that have their own agency, and their own goals, and that is not good.”
“And you’re saying, it’s okay, we can find ways to control them,” Bengio continued. “But how do you know? Right now, science doesn’t even know how we can control machines that are at our level of intelligence. … If we don’t figure it out, do you understand the consequences?”
Stanford professor Yejin Choi said both positive and negative futures of AI are possible. She recommended two paths forward. The first is to invest in understanding generative AI more deeply – just because human developers created it doesn't mean they know how to control it – and to also try to program it with human values.
The second path is to focus more on practical applications of AI instead of pursuing powerful AI for its own sake.
Ng countered by saying that the developer community as a whole welcomes AGI, so this fear of AI seems strange to him. “The ability to tell a computer exactly what you wanted to do, so that it would do it for you, that would be one of the most important skills for society.”