To protect us from the risks of advanced artificial intelligence, we need to act now

By Paul Salmon, Peter Hancock, and Tony Carden

Published 28 January 2019

Artificial intelligence can play chess, drive a car and diagnose medical issues. Examples include Google DeepMind’s AlphaGo, Tesla’s self-driving vehicles, and IBM’s Watson.

This type of artificial intelligence is referred to as Artificial Narrow Intelligence (ANI) – non-human systems that can perform a specific task. We encounter this type on a daily basis, and its use is growing rapidly.

But while many impressive capabilities have been demonstrated, we’re also beginning to see problems. The worst case so far involved a self-driving test car that struck and killed a pedestrian in March 2018. The incident is still under investigation.

The next generation of AI
With the next generation of AI the stakes will almost certainly be much higher.

Artificial General Intelligence (AGI) will have advanced computational powers and human level intelligence. AGI systems will be able to learn, solve problems, adapt and self-improve. They will even do tasks beyond those they were designed for.

Importantly, their rate of improvement could be exponential as they become far more advanced than their human creators. The introduction of AGI could quickly bring about Artificial Super Intelligence (ASI).

While fully functioning AGI systems do not yet exist, estimates of their arrival range from 2029 to the end of this century.

What appears almost certain is that they will arrive eventually. And when they do, there is a great and natural concern that we won’t be able to control them.

The risks associated with AGI
There is no doubt that AGI systems could transform humanity. Some of the more powerful applications include curing disease, solving complex global challenges such as climate change and food security, and initiating a worldwide technology boom.

But a failure to implement appropriate controls could lead to catastrophic consequences.

Despite what we see in Hollywood movies, existential threats are not likely to involve killer robots. The problem will not be one of malevolence, but rather one of intelligence, writes MIT professor Max Tegmark in his 2017 book Life 3.0: Being Human in the Age of Artificial Intelligence.