
Why Elon Musk fears artificial intelligence


Here’s the thing: the risk from AI isn’t just an eccentric worry of Elon Musk’s.
Elon Musk is usually far from a technological pessimist. From electric cars to Mars colonies, he’s made his name by insisting that the future can get here faster.
But when it comes to artificial intelligence, he sounds very different. Speaking at MIT in 2014, he called AI humanity’s “biggest existential threat” and compared it to “summoning the demon.”
He reiterated those fears Friday in an interview with Recode’s Kara Swisher, though with somewhat less apocalyptic rhetoric. “As AI gets probably much smarter than humans, the relative intelligence ratio is probably similar to that between a person and a cat, maybe bigger,” Musk told Swisher. “I do think we need to be very careful about the advancement of AI.”
To many people — even many machine learning researchers — an AI that surpasses humans by as much as we surpass cats sounds like a distant dream. We’re still struggling to solve even simple-seeming problems with machine learning. Self-driving cars have an extremely hard time under unusual conditions because many things that come instinctively to humans — anticipating the movements of a biker, identifying a plastic bag flapping in the wind on the road — are very difficult to teach a computer. Greater-than-human capabilities seem a long way away.
Musk is hardly alone in sounding the alarm, though. AI scientists at Oxford and at UC Berkeley, luminaries like Stephen Hawking, and many of the researchers publishing groundbreaking results agree with Musk that AI could be very dangerous. They are concerned that we’re eagerly working toward deploying powerful AI systems, and that we might do so under conditions that are ripe for dangerous mistakes.
If we take these concerns seriously, what should we be doing? People concerned with AI risk vary enormously in the details of their approaches, but agree on one thing: We should be doing more research.
Musk wants the US government to spend a year or two understanding the problem before it considers how to solve it, an idea he expanded on in the interview with Swisher.
From Musk’s perspective, here’s what is going on: researchers, especially at Alphabet’s DeepMind, the AI research organization that developed AlphaGo and AlphaZero, are eagerly working toward complex and powerful AI systems. And because some people aren’t convinced that AI is dangerous, the organizations working on it aren’t being held to high enough standards of accountability and caution.
Max Tegmark, a physics professor at MIT, expressed many of the same sentiments in a conversation last year with journalist Maureen Dowd for Vanity Fair: “When we got fire and messed up with it, we invented the fire extinguisher. When we got cars and messed up, we invented the seat belt, airbag, and traffic light. But with nuclear weapons and A.I., we don’t want to learn from our mistakes. We want to plan ahead.”
In fact, if AI is powerful enough, we may have no choice but to plan ahead. Oxford philosopher Nick Bostrom argued in his 2014 book Superintelligence that a badly designed AI system would be impossible to correct once deployed: “once unfriendly superintelligence exists, it would prevent us from replacing it or changing its preferences. Our fate would be sealed.”
In that respect, deploying AI is like launching a rocket: everything has to be done exactly right before we hit “go,” because we can’t count on our ability to make even tiny corrections later. Bostrom argues in Superintelligence that AI systems could rapidly develop unexpected capabilities. For example, a system as good as a human at inventing new machine-learning algorithms could automate its own research and quickly become much better than any human at that work.
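To see why that feedback loop alarms people, it helps to make the compounding explicit. The short Python sketch below is a toy model only: the starting capability, the 10 percent per-cycle gain, and the “10x human” threshold are arbitrary assumptions chosen for illustration, not figures from Bostrom or Musk. It just shows that once improvement scales with current capability, the climb from below human level to far beyond it takes surprisingly few cycles.

    # Toy model of compounding self-improvement (illustrative only).
    # Assumption: each cycle's gain is proportional to current capability,
    # so a more capable system improves itself faster. Numbers are arbitrary.

    human_level = 1.0
    capability = 0.5   # starts at half of human level
    gain = 0.10        # each cycle adds 10% of current capability

    cycle = 0
    while capability < 10 * human_level:
        cycle += 1
        capability *= 1 + gain   # compounding growth

    print(f"crossed 10x human level at cycle {cycle}")
    # With these numbers the system passes human level at cycle 8 and
    # reaches 10x human level at cycle 32. The constants don't matter;
    # the exponential shape of the curve does.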
That has many people in the AI field thinking that the stakes could be enormous. In a conversation with Musk and Dowd for Vanity Fair, Y Combinator’s Sam Altman said, “In the next few decades we are either going to head toward self-destruction or toward human descendants eventually colonizing the universe.”
“Right,” Musk concurred.
In context, then, Musk’s AI concerns are not an out-of-character streak of technological pessimism. They stem from optimism — a belief in the exceptional transformative potential of AI. It’s precisely the people who expect AI to make the biggest splash who’ve concluded that working to get ahead of it should be one of our urgent priorities.