You’d be hard-pressed to find anyone more influential in the field of AI than Geoffrey Hinton. His work on neural networks laid the foundation for modern AI systems, demonstrating how computers could learn from data in new ways. He’s also worried about AI. Speaking on BBC Radio 4, Hinton was asked how likely he thinks an AI apocalypse is, and he offered a chilling response: “10% to 20%.”
We’re the three-year-olds in this relationship
Hinton was awarded both the Nobel Prize in Physics and the Turing Award (a Nobel-like prize for computer science) for his work on AI. He’s one of the three “Godfathers of AI,” two of whom have expressed fears that AI may pose an existential threat to humanity.
The big problem, Hinton says, is that AIs are on a path to becoming smarter than any of us in some ways.
“You see, we’ve never had to deal with things more intelligent than ourselves before. And how many examples do you know of a more intelligent thing being controlled by a less intelligent thing? There are very few examples. There’s a mother and baby. Evolution put a lot of work into allowing the baby to control the mother, but that’s about the only example I know of.”
“I like to think of it as: imagine yourself and a three-year-old. We’ll be the three-year-olds,” he said.
It’s not the first time Hinton has expressed such fears. He previously put the risk of an AI apocalypse in the next 30 years at 10%, and his estimate has since gone up. The reason: AI is progressing much faster than he thought it would, and we don’t have nearly enough regulation in place.
Granted, AIs aren’t smarter than us now, but there’s a good chance they will be within a couple of decades. In the meantime, companies are taking the reins without putting enough guardrails in place for safe AI use, Hinton says.
“My worry is that the invisible hand is not going to keep us safe. So just leaving it to the profit motive of large companies is not going to be sufficient to make sure they develop it safely,” he said. “The only thing that can force those big companies to do more research on safety is government regulation.”
Even without an apocalypse, our lives are about to change drastically
Even Hinton’s more optimistic scenarios aren’t exactly encouraging. The main issue boils down, once again, to intelligence: AI is simply replacing human intelligence and making humans “not cutting edge anymore.”
What we’re about to see is a change on the order of the Industrial Revolution. And, like the Industrial Revolution, AI isn’t about to make everyone better off.
“My worry is that even though it will cause huge increases in productivity, which should be good for society, it may end up being very bad for society if all the benefit goes to the rich and a lot of people lose their jobs and become poorer,” he added.
This kind of inequality also paves the way for extremism and fascism, Hinton warned.
He said he has some regrets about the technology. He’d still do it all again, but he worries deeply about how AI will turn out.
“There’s two kinds of regret. There is the kind where you feel guilty because you do something you know you shouldn’t have done, and then there’s regret where you do something you would do again in the same circumstances but it may in the end not turn out well.
“That second regret I have. In the same circumstances, I would do the same again but I am worried that the overall consequence of this is that systems more intelligent than us eventually take control.”