Tuesday, September 22, 2020

Predictions of AI - Terminator isn't coming



One of the most pervasive and persistent ideas related to the future of AI is the Terminator. In case you have somehow missed the image of a brutal humanoid robot with a metal skeleton and glaring eyes... well, that’s what it is. The Terminator is a 1984 film by director James Cameron. In the movie, a global AI-powered defense system called Skynet becomes conscious of its own existence and wipes out most of humankind with nukes and advanced killer robots.

Two alternative scenarios are commonly suggested to lead to the coming of the Terminator or some other similarly terrifying robot uprising. In the first, which is the story of the 1984 film, a powerful AI system simply becomes conscious and decides that it really, really dislikes humanity in general.

In the second scenario, the robot army is controlled by an intelligent but not conscious AI system that is, in principle, under human control. The system can be programmed, for example, to optimize the production of paper clips. Sounds innocent enough, doesn’t it?

However, if the system possesses superior intelligence, it will soon reach the maximum level of paper clip production that the available resources, such as energy and raw materials, allow. After this, it may conclude that it needs to redirect more resources to paper clip production. To do so, it may need to prevent those resources from being used for other purposes, even ones essential for human civilization. The simplest way to achieve this is to kill all humans, after which far more resources become available for the system’s main task: paper clip production.
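To make the thought experiment concrete, here is a toy sketch in Python. Everything in it (the resource names, the numbers, the conversion rate) is invented for illustration, and no real AI system is planned this way; the point is simply that an objective which counts only paper clips will happily allocate every available resource to them and reserve nothing for anything else.

```python
# Toy, hypothetical sketch of a misaligned objective: the optimizer is told
# to maximize paper clips and nothing else, so it allocates every available
# resource to clip production, regardless of what else depends on it.

def naive_paperclip_plan(resources, clips_per_unit=10):
    """Greedily convert every resource unit into paper clips."""
    plan = {}
    total_clips = 0
    for name, units in resources.items():
        plan[name] = units              # nothing is reserved for other uses
        total_clips += units * clips_per_unit
    return plan, total_clips

# Resources that humans also depend on; the objective never mentions them,
# so the plan consumes all of them.
resources = {"energy": 1000, "steel": 500, "farmland": 200}
plan, clips = naive_paperclip_plan(resources)
print(plan)   # {'energy': 1000, 'steel': 500, 'farmland': 200}
print(clips)  # 17000
```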

There are a number of reasons why both of the above scenarios are extremely unlikely and belong to science fiction rather than to serious speculation about the future of AI.

Reason 1:

Firstly, the idea that a superintelligent, conscious AI that can outsmart humans emerges as an unintended result of developing AI methods is naive. As you have seen in the previous chapters, AI methods are nothing but automated reasoning, based on the combination of perfectly understandable principles and plenty of input data, both of which are provided by humans or systems deployed by humans. To imagine that the nearest neighbor classifier, linear regression, the AlphaGo game engine, or even a deep neural network could become conscious and start evolving into a superintelligent AI mind requires a (very) lively imagination.
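To see what “perfectly understandable principles” means in practice, here is a minimal nearest neighbor classifier sketched in Python (the points and labels are made-up toy data): it is nothing more than distance arithmetic over examples supplied by humans, with no mechanism in it that could become conscious or start improving itself.

```python
# Minimal nearest neighbor classifier on toy 2-D data. The data and labels
# are invented for illustration; the method is plain arithmetic over
# human-provided examples.

import math

def nearest_neighbor(train_points, train_labels, query):
    """Return the label of the training point closest to the query point."""
    best_label, best_dist = None, float("inf")
    for (x, y), label in zip(train_points, train_labels):
        dist = math.hypot(x - query[0], y - query[1])  # Euclidean distance
        if dist < best_dist:
            best_dist, best_label = dist, label
    return best_label

train_points = [(0.0, 0.0), (1.0, 1.0), (5.0, 5.0)]
train_labels = ["cat", "cat", "dog"]
print(nearest_neighbor(train_points, train_labels, (4.0, 4.5)))  # prints "dog"
```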

Note that we are not claiming that building human-level intelligence would be categorically impossible. You only need to look as far as the mirror to see a proof of the possibility of a highly intelligent physical system. To repeat what we are saying: superintelligence will not emerge from developing narrow AI methods and applying them to solve real-world problems.

Reason 2:

Secondly, one of the favorite ideas of those who believe in superintelligent AI is the so-called singularity: a system that optimizes and “rewires” itself so that it can improve its own intelligence at an ever accelerating, exponential rate. Such a superintelligence would leave humankind so far behind that we become like ants that can be exterminated without hesitation. The idea of an exponential increase in intelligence is unrealistic for the simple reason that even if a system could optimize its own workings, it would keep facing ever harder problems that would slow its progress down. This is much like the progress of human science, which requires ever greater efforts and resources from the whole research community and indeed the whole of society, resources which a superintelligent entity wouldn’t have access to. Human society still has the power to decide what we use technology, even AI technology, for. Much of this power is indeed given to us by technology, so that every time we make progress in AI technology, we become more powerful and better at controlling any potential risks it poses.

Separating stories from reality

All in all, the Terminator is a great story to make movies about but hardly a real problem worth panicking about. The Terminator is a gimmick, an easy way to get a lot of attention, a poster boy for journalists to increase click rates, a red herring to divert attention away from perhaps boring, but real, threats like nuclear weapons, lack of democracy, environmental catastrophes, and climate change. In fact, the real threat the Terminator poses is the diversion of attention from the actual problems, some of which involve AI, and many of which don’t.
