Wednesday, August 26, 2020

What is, and what isn't AI?

The popularity of AI in the media is in part due to the fact that people have started using the term when they refer to things that used to be called by other names. You can see almost anything, from statistics and business analytics to manually encoded if-then rules, being called AI. Why is this so? Why is the public perception of AI so nebulous? Let’s look at a few reasons.

Reason 1: no officially agreed definition

Even AI researchers have no exact definition of AI. Rather, the field is constantly being redefined: some topics are reclassified as non-AI, and new topics emerge.

There's an old (geeky) joke that AI is defined as “cool things that computers can't do.” The irony is that under this definition, AI can never make any progress: as soon as we find a way to do something cool with a computer, it stops being an AI problem. However, there is an element of truth in this definition. Fifty years ago, for instance, automatic methods for search and planning were considered to belong to the domain of AI. Nowadays such methods are taught to every computer science student. Similarly, certain methods for processing uncertain information are becoming so well understood that they are likely to be moved from AI to statistics or probability very soon.
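
To see just how routine such methods have become, here is a minimal sketch of breadth-first search, a basic search algorithm of the kind that was once considered AI and is now standard introductory material. The graph and goal below are invented purely for illustration:

    from collections import deque

    def breadth_first_search(graph, start, goal):
        """Return a shortest path from start to goal, or None if unreachable."""
        frontier = deque([[start]])   # paths waiting to be extended
        visited = {start}
        while frontier:
            path = frontier.popleft()
            node = path[-1]
            if node == goal:
                return path
            for neighbour in graph.get(node, []):
                if neighbour not in visited:
                    visited.add(neighbour)
                    frontier.append(path + [neighbour])
        return None

    # A toy graph, invented for this example.
    graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["E"]}
    print(breadth_first_search(graph, "A", "E"))   # ['A', 'B', 'D', 'E']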

Reason 2: the legacy of science fiction

The confusion about the meaning of AI is made worse by the visions of AI presented in various literary and cinematic works of science fiction. Science fiction stories often feature friendly humanoid servants that provide overly detailed factoids or witty dialogue, but sometimes follow in the footsteps of Pinocchio and start to wonder whether they can become human. Another class of humanoid beings in sci-fi espouse sinister motives and turn against their masters, in the vein of old tales of sorcerers' apprentices, going back to the Golem of Prague and beyond.

Often the robothood of such creatures is only a thin veneer on top of a very human-like agent, which is understandable: most fiction, even science fiction, needs to be relatable to human readers, who would otherwise be alienated by an intelligence that is too different and strange. Most science fiction is thus best read as a metaphor for the current human condition, with robots serving as stand-ins for repressed sections of society, or perhaps for our search for the meaning of life.

Reason 3: what seems easy is actually hard…

Another source of difficulty in understanding AI is that it is hard to know which tasks are easy and which ones are hard. Look around and pick up an object in your hand, then think about what you did: you used your eyes to scan your surroundings, figured out where some suitable objects for picking up were, chose one of them, planned a trajectory for your hand to reach that one, moved your hand by contracting various muscles in sequence, and managed to squeeze the object with just the right amount of force to keep it between your fingers.

It can be hard to appreciate how complicated all this is, but sometimes it becomes visible when something goes wrong: the object you pick is much heavier or lighter than you expected, or someone else opens a door just as you are reaching for the handle, and then you can find yourself seriously out of balance. Usually these kinds of tasks feel effortless, but that feeling belies millions of years of evolution and several years of childhood practice.

While grasping objects is easy for you, it is extremely hard for a robot, and it remains an area of active study. Recent examples include Google's robotic grasping project and a cauliflower-picking robot.

…and what seems hard is actually easy

By contrast, the tasks of playing chess and solving mathematical exercises can seem to be very difficult, requiring years of practice to master and involving our “higher faculties” and concentrated conscious thought. No wonder that some initial AI research concentrated on these kinds of tasks, and it may have seemed at the time that they encapsulate the essence of intelligence.

It has since turned out that playing chess is very well suited to computers, which can follow fairly simple rules and compute many alternative move sequences at a rate of billions of computations a second. Computers beat the reigning human world champion, Garry Kasparov, in the famous Deep Blue vs Kasparov matches in 1997. Who could have imagined that the harder problem would turn out to be grabbing the pieces and moving them on the board without knocking it over?
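
To make the idea of computing many alternative move sequences concrete, below is a minimal sketch of minimax search, the basic principle behind classical game-playing programs. The tiny game tree and its scores are invented for illustration; a real chess engine adds alpha-beta pruning, handcrafted evaluation functions, and enormous optimization on top of this core idea:

    def minimax(state, depth, maximizing, children, evaluate):
        """Score a game state by exploring all move sequences to a fixed depth."""
        moves = children(state)
        if depth == 0 or not moves:
            return evaluate(state)          # leaf: fall back to a heuristic score
        scores = [minimax(m, depth - 1, not maximizing, children, evaluate)
                  for m in moves]
        return max(scores) if maximizing else min(scores)

    # A toy game tree, invented for this example: each state lists its successors.
    tree = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
    values = {"a1": 3, "a2": 5, "b1": -4, "b2": 9}

    best = minimax("root", depth=2, maximizing=True,
                   children=lambda s: tree.get(s, []),
                   evaluate=lambda s: values.get(s, 0))
    print(best)   # 3: the maximizer picks "a", anticipating the minimizer's reply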

Similarly, while in-depth mastery of mathematics requires (what seems like) human intuition and ingenuity, many (but not all) exercises of a typical high-school or college course can be solved by applying a calculator and a simple set of rules.
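
As an illustration, a typical textbook exercise such as solving x^2 - 5x + 6 = 0 yields to purely mechanical rules. The sketch below uses the sympy library (assuming it is installed) to apply them:

    from sympy import symbols, Eq, solve, diff, sin

    x = symbols("x")

    # A typical high-school exercise: solve a quadratic equation.
    print(solve(Eq(x**2 - 5*x + 6, 0), x))   # [2, 3]

    # Differentiation is equally rule-based: apply the product and chain rules.
    print(diff(x**2 * sin(x), x))            # x**2*cos(x) + 2*x*sin(x)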

So what would be a more useful definition?

An attempt at a definition more useful than the “what computers can't do yet” joke would be to list properties that are characteristic of AI, in this case autonomy and adaptivity.

Autonomy

The ability to perform tasks in complex environments without constant guidance by a user.

Adaptivity

The ability to improve performance by learning from experience.
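
As a minimal sketch of adaptivity, consider a toy spam scorer that improves simply by counting how often each word appears in messages labelled spam versus not spam. The class name and example messages below are invented for illustration; real spam filters rely on far more robust statistical methods:

    from collections import Counter

    class ToySpamFilter:
        """A toy filter that adapts by counting words in labelled messages."""

        def __init__(self):
            self.spam_words = Counter()   # word counts seen in spam
            self.ham_words = Counter()    # word counts seen in non-spam

        def learn(self, message, is_spam):
            # Experience: update the counts from one labelled example.
            words = message.lower().split()
            (self.spam_words if is_spam else self.ham_words).update(words)

        def score(self, message):
            # Positive score: the message looks more like spam than not.
            return sum(self.spam_words[w] - self.ham_words[w]
                       for w in message.lower().split())

    f = ToySpamFilter()
    f.learn("win a free prize now", is_spam=True)
    f.learn("meeting notes attached", is_spam=False)
    print(f.score("free prize inside"))   # 2: flagged as spammy after learning

The more labelled messages the filter sees, the better its scores become; that improvement from experience, rather than the specific counting trick, is what adaptivity refers to.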

When defining and talking about AI, we have to be cautious, as many of the words we use can be quite misleading. Common examples are learning, understanding, and intelligence.

You may well say, for example, that a system is intelligent, perhaps because it delivers accurate navigation instructions or detects signs of melanoma in photographs of skin lesions. When we hear something like this, the word "intelligent" easily suggests that the system is capable of performing any task an intelligent person is able to perform: going to the grocery store and cooking dinner, washing and folding laundry, and so on.

Likewise, when we say that a computer vision system understands images because it is able to segment an image into distinct objects such as other cars, pedestrians, buildings, the road, and so on, the word "understand" easily suggests that the system also understands that even if a person is wearing a t-shirt with a photo of a road printed on it, it is not okay to drive on that road (and over the person).

In both of the above cases, we'd be wrong.

It is important to realize that intelligence is not a single dimension like temperature. You can compare today's temperature to yesterday's, or the temperature in Delhi to that in London, and tell which one is higher and which is lower. We even have a tendency to think that it is possible to rank people with respect to their intelligence – that's what the intelligence quotient (IQ) is supposed to do. However, in the context of AI, it is obvious that different AI systems cannot be compared on a single axis or dimension in terms of their intelligence. Is a chess-playing algorithm more intelligent than a spam filter, or is a music recommendation system more intelligent than a self-driving car? These questions make no sense. This is because artificial intelligence is narrow: being able to solve one problem tells us nothing about the ability to solve another, different problem.

The classification into AI vs non-AI is not a clear yes–no dichotomy: while some methods are clearly AI and others are clearly not, there are also methods that involve a pinch of AI, like a pinch of salt. Thus it would sometimes be more appropriate to talk about the "AIness" (as in happiness or awesomeness) of a method rather than arguing whether something is AI or not.
