Monday, September 21, 2020

About predicting the future - how AI will transform our lives


While some forecasts will probably get at least something right, others will likely be useful only as demonstrations of how hard it is to predict, and many don’t make much sense. What we would like to achieve is for you to be able to look at these and other forecasts and critically evaluate them.

On hedgehogs and foxes

The political scientist Philip E. Tetlock, author of Superforecasting: The Art and Science of Prediction, classifies people into two categories: those who have one big idea (“hedgehogs”), and those who have many small ideas (“foxes”). Between 1984 and 2003, Tetlock carried out an experiment to study the factors that could help us identify which predictions are likely to be accurate and which are not. One of the significant findings was that foxes tend to be clearly better at prediction than hedgehogs, especially when it comes to long-term forecasting.

The messages that can be expressed in 280 characters are probably more often big and simple hedgehog ideas than balanced fox-style analyses. Our advice is to pay attention to carefully justified and balanced information sources, and to be suspicious of people who keep explaining everything using a single argument.

Predicting the future is hard, but at least we can consider the past and present of AI, and by understanding them, hopefully be better prepared for the future, whatever it turns out to be like.

AI winters

The history of AI, just like that of many other fields of science, has witnessed the coming and going of various trends. In the philosophy of science, the term used for such a trend is a paradigm. Typically, a particular paradigm is adopted by most of the research community and optimistic predictions about progress in the near future are made. For example, in the 1960s neural networks were widely believed to be capable of solving all AI problems by imitating the learning mechanisms of nature, the human brain in particular. The next big thing was expert systems based on logic and human-coded rules, which became the dominant paradigm in the 1980s.

The cycle of hype

At the beginning of each wave, a number of early success stories tend to make everyone happy and optimistic. The success stories, even if they may be in restricted domains and in some ways incomplete, become the focus of public attention. Many researchers rush into AI – or at least start calling their research AI – in order to access the increased research funding. Companies also initiate and expand their efforts in AI for fear of missing out (FOMO).

So far, each time an all-encompassing, general solution to AI has been said to be within reach, progress has ended up running into insurmountable problems, which at the time were thought to be minor hiccups. In the case of neural networks in the 1960s, the hiccups were related to handling nonlinearities and to solving the machine learning problems associated with the increasing number of parameters required by neural network architectures. In the case of expert systems in the 1980s, the hiccups were associated with handling uncertainty and common sense. As the true nature of the remaining problems dawned on researchers after years of struggle and unfulfilled promises, pessimism about the paradigm accumulated and an AI winter followed: interest in the field faltered and research efforts were directed elsewhere.
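
To make the nonlinearity hiccup concrete, here is a minimal, hypothetical sketch (in Python, not from the original discussion) of the classic XOR example: a single linear threshold unit cannot represent XOR, but a tiny two-layer network with a nonlinear step activation and hand-picked weights can.

import numpy as np

# Hypothetical, minimal illustration only: the XOR function, the classic
# example of the 1960s "nonlinearity" problem. No single linear threshold
# unit (perceptron) can separate these points, but a two-layer network can.

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y_xor = np.array([0, 1, 1, 0])

def step(z):
    # Nonlinear threshold activation
    return (z > 0).astype(int)

def two_layer_xor(x):
    # Hidden units compute OR and AND of the two inputs (hand-picked weights).
    hidden = step(x @ np.array([[1, 1], [1, 1]]).T + np.array([-0.5, -1.5]))
    # Output unit computes "OR and not AND", which is exactly XOR.
    return step(hidden @ np.array([1, -1]) - 0.5)

print(two_layer_xor(X))  # -> [0 1 1 0], matching y_xor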

Modern AI

Currently, roughly since the turn of the millennium, AI has been on the rise again. Modern AI methods tend to focus on breaking a problem into a number of smaller, isolated and well-defined problems and solving them one at a time. Modern AI is bypassing grand questions about the meaning of intelligence, the mind, and consciousness, and focusing on building practically useful solutions to real-world problems. Good news for all of us who can benefit from such solutions!

Another characteristic of modern AI methods, closely related to working in the complex and “messy” real world, is the ability to handle uncertainty, which we demonstrated by studying the uses of probability in AI. Finally, the current upward trend of AI has been greatly boosted by the comeback of neural networks and deep learning techniques capable of processing images and other real-world data better than anything we have seen before.
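
As a reminder of what “handling uncertainty with probability” looks like in practice, here is a minimal, hypothetical sketch of Bayes’ rule applied to a made-up spam-filtering scenario; the numbers and variable names are invented purely for illustration.

# Hypothetical example: updating a belief with Bayes' rule.
# All probabilities below are made up for illustration.

prior_spam = 0.2            # P(spam): fraction of all mail that is spam
p_word_given_spam = 0.6     # P(word "free" appears | spam)
p_word_given_ham = 0.05     # P(word "free" appears | not spam)

# Total probability that a message contains the word "free"
p_word = p_word_given_spam * prior_spam + p_word_given_ham * (1 - prior_spam)

# Bayes' rule: posterior probability of spam given the evidence
posterior_spam = p_word_given_spam * prior_spam / p_word

print(f"P(spam | 'free') = {posterior_spam:.2f}")  # prints 0.75

The point is not the specific numbers but the mechanism: evidence shifts a prior belief into a posterior belief in a principled way, which is what lets modern AI systems act sensibly even when their information is incomplete.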


