Elon Musk is scared of AI. Really, really scared.

At this point he’s approaching “Kyle Reese in Terminator” levels of paranoia.

Here are a few of the statements he’s made about AI in the last year or so:

“AI is a fundamental existential risk for human civilisation.”

“I keep sounding the alarm bell, but until people see robots going down the street killing people, they don’t know how to react.” Yes, he really said this – here’s a clip.

“Mark my words. AI is much more dangerous than nukes.”

And here’s how Kyle Reese talks about the Terminator in The Terminator:

“Listen, and understand. That terminator is out there. It can’t be bargained with. It can’t be reasoned with. It doesn’t feel pity, or remorse, or fear. And it absolutely will not stop, ever, until you are dead.” And a clip, if you want to watch.

The difference here is that one of these men is a fictional character in a 1984 sci-fi film, and the other is a world-leading businessman and inventor.

So why does Elon Musk have such a fear of AI, and should we be worried?

Musk fears AI for the same reason people fear most things: a lack of control.

If you think people fear change, try combining change with a lack of control over that change.

This is why, to go slightly off topic, old-world money tycoons fear cryptos. They are used to being in control of the financial system, and the fact they can’t control Bitcoin terrifies them.

Musk’s fear of AI is more fundamental than this though. He fears that once AI surpasses us, we will be completely at its mercy. And that even if it was designed for good, the people designing it lack the vision to see how this seemingly noble aim could turn out badly.

“Sometimes what will happen is scientists will get so engrossed in their work that they don’t really realize the ramifications of what they’re doing,” Musk said at the World Government Summit in Dubai last February.

That statement eerily sums up what Miles Dyson, a secondary character in Terminator 2, did.

Dyson was a brilliant scientist working on a new form of AI – a neural-net processor.

What he didn’t realise is that, once completed, this neural-net processor, or “learning computer”, would outthink the human race and decide to try to wipe us out.

In the Terminator films, there are literally “robots going down the street killing people”, just as Musk prophesies.

In fact, even the way Musk talks about the AI wars beginning is taken straight from Terminator.

As The Guardian wrote of his SXSW speech this month:

“Musk said he was now kept awake at night by the threat posed by unregulated artificial intelligence, which he has previously warned could lead humanity into a third world war”

This is literally what happens in Terminator. Here’s how the start of the war is explained in Terminator 2:

Terminator: The man most directly responsible is Miles Bennett Dyson. In a few months, he will create a revolutionary type of microprocessor.

Sarah Connor: Go on. Then what?

Terminator: In three years, Cyberdyne will become the largest supplier of military computer systems. All stealth bombers are upgraded with Cyberdyne computers, becoming fully unmanned. Afterwards, they fly with a perfect operational record. The Skynet Funding Bill is passed. The system goes online on August 4th, 1997. Human decisions are removed from strategic defense. Skynet begins to learn at a geometric rate. It becomes self-aware 2:14 AM, Eastern time, August 29th. In a panic, they try to pull the plug.

Sarah: Skynet fights back.

Terminator: Yes. It launches its missiles against their targets in Russia.

John Connor: Why attack Russia? Aren’t they our friends now?

Terminator: Because Skynet knows that the Russian counterattack will eliminate its enemies over here.

Now, Musk doesn’t say that AI is inherently bad, just that we don’t know what we’re getting ourselves into. As is explained in that same Guardian article:

“If the utility function of artificial intelligence is to maximise happiness of humans, a super-intelligent AI might decide that the best way to do that is to capture all humans and inject their brains with dopamine and serotonin,” Musk said.

What he proposes is that AI should be directed at maximising “the freedom of action of humanity”, not its happiness.
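To see why the choice of objective matters, here is a deliberately silly sketch in Python. It is entirely my own invention – toy policies, made-up numbers, nothing from Musk or OpenAI – but it shows how a naive “happiness only” utility function and one that also values freedom of action can pick very different winners from the same set of options.

```python
# Toy illustration only: the policies and scores are invented,
# not a real AI objective or anything Musk has proposed.
policies = {
    # name: (average happiness, freedom of action), both on a made-up 0-1 scale
    "business as usual": (0.6, 0.9),
    "cure diseases": (0.8, 0.9),
    "dope everyone into bliss": (1.0, 0.0),  # maximal "happiness", zero freedom
}

def happiness_only(happiness, freedom):
    return happiness

def freedom_weighted(happiness, freedom):
    # One crude way to value "the freedom of action of humanity":
    # a policy that removes all human agency scores zero, however "happy" it makes us.
    return happiness * freedom

for utility in (happiness_only, freedom_weighted):
    best = max(policies, key=lambda name: utility(*policies[name]))
    print(f"{utility.__name__}: {best}")

# happiness_only: dope everyone into bliss
# freedom_weighted: cure diseases
```

The point isn’t the numbers; it’s that a carelessly specified objective can rank the dystopian option first, which is exactly the trap Musk is describing.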

And how should we make sure this happens? Regulation.

Is regulation the answer – or is Elon Musk just trying to dominate the AI landscape?

What Musk wants is proactive regulation of AI.

“AI is a rare case when I think we need to be proactive in regulation instead of reactive. Because I think by the time we are reactive in regulation it’s too late,” he said at the National Governors Association in July 2017.

And here’s more of him calling for regulation in August:

Elon Musk Tweet from August 2017 calling for regulation on artificial intelligence

If we were to put our sceptic’s hat on, we could paraphrase this as Musk wanting to control the market for AI.

After all, self-driving cars rely heavily on AI, and if Tesla could control the AI landscape, it would have a huge advantage over other transport companies.

Adding to this theory is the fact that Elon Musk is the co-founder and chairman of OpenAI.

OpenAI is “a non-profit research company working to build safe artificial intelligence and ensure that AI’s benefits are as widely and evenly distributed as possible”. And it’s one of the world’s leading AI companies.

So, Musk is the chairman of a leading AI company that’s set up to ensure AI is kept safe. And he’s constantly pushing for worldwide AI regulation… And I wonder who he would want helping to draft (and obviously benefit from) said regulation? I’d wager on OpenAI.

Perhaps this is why some other notable tech giants have come out against Musk’s AI doom mongering in recent months.

Taken from a CNBC article in September:

“This is a case where Elon and I disagree,” says Microsoft co-founder Bill Gates, speaking with the WSJ Magazine and Microsoft CEO Satya Nadella.

“The so-called control problem that Elon is worried about isn’t something that people should feel is imminent,” says Gates, according to a transcript of the interview published by the WSJ Magazine Monday. “We shouldn’t panic about it.”

The article goes on to say:

Gates is not the only tech thought leader to push back against Musk for his rhetoric.

Facebook founder and CEO Mark Zuckerberg is optimistic about a future where AI makes human life better. Zuckerberg calls Musk’s warnings of AI “pretty irresponsible.”

Similarly, John Giannandrea, the senior vice president of engineering at Google in charge of the tech giant’s AI efforts, has also disapproved of the kind of fear mongering Musk has been doing of late.

“I just object to the hype and the sort of sound bites that some people have been making,” says Giannandrea, speaking at TechCrunch Disrupt SF last week. “I am definitely not worried about the AI apocalypse.”

So, is this all just a big power play? Does Musk just want a monopoly on AI, which the late Stephen Hawking said could be “the biggest event in the history of our civilisation”?

Unfortunately, as fun as the “Musk wants a monopoly on AI” narrative is, it’s probably not accurate.

As if to get out in front of any accusations, he decided to step down from his position at OpenAI last month, literally to “avoid a conflict of interest”.

He will, however, remain a donor to OpenAI. So make of that what you will.

Perhaps he really does just want to help humanity, and he genuinely does see AI as the biggest threat we are yet to face. Perhaps AI is the great filter event.

After all, Musk has stated that the reason he wants to establish colonies on Mars is to act as safe havens in case robots take over.

Perhaps he actually is so scared of AI that he is setting sail to Mars to escape its clutches.

Perhaps. But the thing with all of this debate is that a solution was already proposed the better part of a century ago.

Isaac Asimov’s three laws of robotics

Isaac Asimov is one of the greatest sci-fi writers of all time. If you haven’t read his Foundation series, you really should. The way he thinks is very compelling.

As well as writing the Foundation series, he also wrote a number of short stories. In fact, he wrote more than 500 books in total. One of those stories, “Runaround”, really stands out, though, for in it he proposed “the three laws of robotics”.

And these three laws have been used to inform ethical debates about robots and AI ever since.

These are:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Unfortunately these laws don’t really work in real life, as the military is one of the main developers of robots, autonomous drones and the like, and the military likes to kill people.

And rule one also runs into problems in situations where there is no choice but to harm a human – for example, an unavoidable car crash where the AI has to decide between harming the occupant or a pedestrian.

Still, they are a good, succinct answer to the AI overlord problem. And the fact that Asimov thought them up way back in 1942 is quite incredible. It’s just a shame that if AI really does get as smart as Musk fears, it would think of a way to bypass them.
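For what it’s worth, the precedence built into the laws is simple enough to sketch in code. Here’s a toy version in Python – my own illustration, with invented fields and scenarios, not anything from Asimov or any real robotics system – which also shows where the unavoidable-crash problem bites: when every available action harms a human, the First Law simply refuses to choose.

```python
# A toy encoding of Asimov's three laws as a strict priority filter.
# The Action fields and the crash scenario are invented for illustration;
# no real autonomous system is built this way.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Action:
    name: str
    harms_human: bool     # First Law concern
    obeys_order: bool     # Second Law concern
    preserves_self: bool  # Third Law concern

def choose(actions: List[Action]) -> Optional[Action]:
    # First Law dominates: throw out anything that injures a human.
    safe = [a for a in actions if not a.harms_human]
    if not safe:
        return None  # every option harms a human - the laws give no answer
    # Second Law beats Third Law: prefer obedience, then self-preservation.
    return max(safe, key=lambda a: (a.obeys_order, a.preserves_self))

# The unavoidable car crash from above: both options harm someone.
crash = [
    Action("swerve into pedestrian", harms_human=True, obeys_order=False, preserves_self=True),
    Action("stay course, hit wall", harms_human=True, obeys_order=False, preserves_self=False),
]
print(choose(crash))  # None
```

Succinct, but brittle: the moment reality serves up a situation the laws didn’t anticipate, the neat hierarchy returns no answer at all.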

What are your thoughts on the threat of AI? Let me know in the comments below.

Until next time,

Harry Hamburg
Editor, Exponential Investor

PS If you are at all nervy about a Terminator future, it’s probably best not to watch this video of dog-like robots opening a door from Boston Dynamics. It will give you nightmares.
