Artificial Intelligence – End of Enlightenment or More Thinking for Us?

Reflections on the evolution of Artificial Intelligence: how machines are surpassing human capabilities.

Whatever your context, you might be wondering what Artificial Intelligence is and how it might influence your work, business or society. Hefty subjects.

The public discourse on the socio-economic impacts of AI is thriving. This summer kicked off with Henry Kissinger’s reflections on the topic. In the June 2018 issue of The Atlantic, Mr. Kissinger, an American statesman and former United States Secretary of State, writes eloquently on AI. In particular, he expresses his concern that we have come unprepared and that the age of enlightenment is nearing its end.

One could see the same fears and warnings repeated and further elaborated over the summer. One publication worried about machines rendering thinking extraneous; another wondered whether and how AI will end up killing democracy.

Before you read any further, let me share my perspectives. A large portion of the public discussion is wildly extrapolating the near future based on today’s capabilities. Those extrapolations easily miss the point as many of the authors end up humanizing a set of algorithms. This might be a good narrative instrument but not a sound basis for judgment.

In similar fashion, many address AI as a single thing that is out there somewhere. It’s not. To make matters worse, your other sources of information are likely the marketing departments of organizations offering AI-based products or services.

In fact, it was this public discussion that prompted me to write this article. When you notice you have started to attribute human characteristics to a machine, you are probably not on the road towards enlightenment.

For you to benefit from the possibilities offered by a technology, it is a good idea to make an effort to understand it first. I will try to help you with the first part: understanding what AI is. Drawing further conclusions I will mostly leave to you.

Let us start with definitions (the boring bit?)

As an informal definition, one might say AI is intelligence exhibited by machines or machines mimicking “cognitive” functions associated with us humans. This might cover topics such as playing strategic games, natural language processing, driving a vehicle and so forth.

The perhaps surprising issue with this definition is that it is implicitly time-bound. Whatever passed as AI in 1990 or 2000 does not fit the bill today, as it has already become commonplace. We do not refer to OCR (Optical Character Recognition) as AI. Not even if you use deep learning and TensorFlow for it.

Once we become familiar with a new technology, we do not consider it as requiring human cognition. Those working in the field have even coined the term AI effect: “AI is whatever hasn’t been done yet”.

If you look up a more scientific definition, your textbook (or Wikipedia) might refer to the study of intelligent agents. Perhaps rational agents, if your textbook is a bit newer. Such an agent would be any entity, let us say a device, able to perceive its environment and take actions that maximize its success towards some goal. Such agents might be able to learn, hence Machine Learning (c. 1959). They might be able to use prior knowledge, hence Knowledge Representation and Reasoning. Or a bunch of other stuff – we’ll come to that a bit later.

After a bit of pondering you might notice that this more scientific definition is in fact rather broad. And it should be. You do not want to define a research area based on one particular method or a single approach. It makes more sense to start from the objective of the research – to create agents with rational behavior.

Anyway, a simple reflex agent – let’s say an automatic faucet – will match this definition. This might not be leading edge but it used to be.
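
To make the definition concrete, here is a minimal sketch of such a reflex agent in Python (the sensor interface is invented for illustration):

```python
# A simple reflex agent: perceive the environment, map the percept
# directly to an action. No learning, no planning, no memory.
def faucet_agent(hands_detected: bool) -> str:
    """The automatic faucet: percept in, action out."""
    return "open_valve" if hands_detected else "close_valve"

# The agent loop: perceive, act, repeat.
for percept in [False, True, True, False]:  # readings from a motion sensor
    print(faucet_agent(percept))
```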

Doesn’t that make the whole term Artificial Intelligence sound a bit vague?

Our informal definition says that almost nothing will meet the criteria, while the more formal description seems to admit almost everything as AI.

Of course, one could use the leading edge of today and provide a list of approaches that you feel pass as AI. Regrettably, this might not increase our understanding much (it is just a list). Furthermore, that list will again be time-bound. Today’s leading edge might be obsolete in a few years.

AI is not a single thing

AI is not a single thing. You could take that literally: AI is not a single super-intelligent mainframe somewhere out there.

There is also a further perspective here. You may not want to consider AI as a single field of research or one specific approach. In fact, the AI field draws from many broad fields of research, perhaps most prominently from mathematics and computer science. Many approaches come from further areas such as psychology, linguistics and economics, or for example the neurosciences.

Yes, biological nervous systems have provided inspiration for artificial neural networks. There are, however, clear and distinct differences in structural and functional properties if you compare such networks to primate brains.

Continuing to list research areas might not increase our understanding either. A better idea might be to look at the different goals one might use AI for: the capabilities that researchers and engineers have had in mind.

Reasoning and Planning

Looking at objectives for AI, building machines able to perform Reasoning and Problem-solving might come as the first item. Arguably, both would be beneficial things to have. They also become easier when you are able to use prior knowledge, so approaches for Knowledge Representation should be included. Planning would also be a helpful capability if your machine should be able to perform sets of activities. We did not even get to Learning yet, but we will already need to draw from a broad range of disciplines such as game theory, probability theory or decision theory. Perhaps you will need a Bayesian network or an evolutionary algorithm to meet your objectives. There is a broad range of approaches here, many of which are already tried and tested.
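
To make the planning part concrete, here is a minimal sketch: classical planning cast as a search for a sequence of actions that reaches a goal state. The toy domain is invented for illustration.

```python
from collections import deque

def plan(start, goal, successors):
    """Breadth-first search for a shortest action sequence from start
    to goal; `successors(state)` yields (action, next_state) pairs."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, path = frontier.popleft()
        if state == goal:
            return path
        for action, nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, path + [action]))
    return None  # no sequence of actions reaches the goal

# Toy domain: move along a five-cell corridor from cell 0 to cell 4.
def moves(s):
    steps = []
    if s < 4: steps.append(("right", s + 1))
    if s > 0: steps.append(("left", s - 1))
    return steps

print(plan(0, 4, moves))  # ['right', 'right', 'right', 'right']
```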

Learning

Learning from experience might also be highly beneficial for our intelligent agent. As it happens, Machine Learning is the one domain where there has been very rapid development in the past years. In other words, that is the source of all the fuss. In the public discussion and in academia, we all talk about Deep Learning. That is, machine learning with a cascade of many layers of non-linear processing units, usually in the form of an artificial neural network. The word “deep” here just means that besides the input and output layers there is more than one processing layer.

You can think of such a network as a highly expressive function that you can fit to data representing complex non-linear phenomena. The training part is really a kind of an optimization problem – you want to minimize the error between the outputs the network is producing and the desired results.
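
As a minimal sketch of that idea in plain NumPy (the toy data is my own): fit a small one-hidden-layer network to noisy samples of a non-linear function by gradient descent on the squared error.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X) + 0.1 * rng.normal(size=X.shape)  # the phenomenon to learn

# Parameters of the network: one hidden layer of 16 tanh units.
W1 = rng.normal(scale=0.5, size=(1, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=(16, 1)); b2 = np.zeros(1)

lr = 0.05
for step in range(2000):
    # Forward pass: the network is just a nested non-linear function.
    h = np.tanh(X @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - y                       # the error we want to minimize
    loss = (err ** 2).mean()

    # Backward pass: gradients of the mean squared error.
    n = len(X)
    gpred = 2 * err / n
    gW2 = h.T @ gpred; gb2 = gpred.sum(0)
    gh = gpred @ W2.T * (1 - h ** 2)     # tanh derivative
    gW1 = X.T @ gh;    gb1 = gh.sum(0)

    # Gradient descent step: nudge the parameters to reduce the error.
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

print(f"final mean squared error: {loss:.4f}")
```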

If you have a bit of relevant knowledge, you will be able to select network architectures and structures suitable for your learning task. Further, if you have a reasonable and representative amount of training data (correct input and output pairs), your network might learn to produce correct results even for inputs it has not seen before.

One might characterize this capability as “recognition” to make a distinction from “reasoning”. The network is able to learn to recognize whatever items or concepts you trained it on. We will come back to this a bit later.

Natural Language Understanding and Perception

It turns out that learning is useful in delivering many further capabilities, for example, Natural Language Understanding or Perception.

Interestingly, many machine vision problems might be easier than converting human speech to text. A structurally simpler feedforward network (information travels in one direction through the network, i.e. forwards) might be able to learn to classify images. Leading-edge speech recognition tools are usually based on so-called recurrent networks (the network also feeds information back into itself).

To understand words or sentences, it is helpful to have this recurrence. You can think of it as the algorithm having a kind of short-term memory, allowing it to identify the correct words with ever-higher accuracy as more inputs (syllables, words) become available. In fact, machines are already getting better than we are at transcribing speech to text.
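
The core of that recurrence fits in a few lines. A minimal, untrained sketch (toy dimensions of my own choosing) of a vanilla recurrent cell:

```python
import numpy as np

# The hidden state h acts as the network's short-term memory,
# carried forward from one input step to the next.
rng = np.random.default_rng(1)
n_in, n_hidden = 8, 32
W_in = rng.normal(scale=0.1, size=(n_in, n_hidden))
W_rec = rng.normal(scale=0.1, size=(n_hidden, n_hidden))
b = np.zeros(n_hidden)

def run(sequence):
    """Feed a sequence of input vectors through the cell one step at a
    time; each output depends on everything seen so far."""
    h = np.zeros(n_hidden)
    outputs = []
    for x in sequence:
        h = np.tanh(x @ W_in + h @ W_rec + b)  # h is fed back into itself
        outputs.append(h)
    return outputs

# E.g. ten time steps of 8-dimensional features (think: audio frames).
states = run(rng.normal(size=(10, n_in)))
print(len(states), states[-1].shape)  # 10 (32,)
```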

Motion and Manipulation

Motion and Manipulation might also be beneficial capabilities for our intelligent agent. These are also some of the key objectives in the field of robotics. It might be about tactile intelligence or probabilistic roadmaps, or perhaps robotic mapping.

Traditionally, the robotics field has had less to do with machine learning. Approaches drawing from control theory and other fields have simply been more efficient. You would use such approaches to create a robot that does not fall even when walking on uneven terrain. In other words, that robot dog video you saw on YouTube is likely less about machine learning and more about pre-programmed algorithms kicking in when needed.

There is a domain called policy learning though. In this context, we are not referring to political leaders or elected officials. Rather we want our agent to learn an optimal policy that allows it to maximize its performance in a complex dynamic environment.

Machine learning approaches such as reinforcement learning and, in particular, Q-learning might be applicable in this context. The stroke of wit in deep Q-learning is to use a deep neural network to approximate the so-called Q-function. The purpose of the Q-function is to evaluate how valuable a particular action might be, given the perceived state of the environment. If you have a good approximation of that Q-function, you only need to be able to generate all the possible next actions.
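
To ground the idea, here is a minimal sketch of tabular Q-learning, the classic form that the deep variant approximates with a neural network. The corridor world is a toy of my own invention.

```python
import numpy as np

# Toy problem: walk right along a five-cell corridor; reward 1 at the end.
n_states, n_actions = 5, 2             # actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))    # the Q-table: value of each (s, a)
alpha, gamma, epsilon = 0.5, 0.9, 0.1
rng = np.random.default_rng(2)

def step(s, a):
    """Environment dynamics: return (next_state, reward, done)."""
    s2 = max(0, min(n_states - 1, s + (1 if a == 1 else -1)))
    return s2, float(s2 == n_states - 1), s2 == n_states - 1

for episode in range(200):
    s, done = 0, False
    while not done:
        # Epsilon-greedy with random tie-breaking: explore a little,
        # otherwise exploit the best-known action.
        if rng.random() < epsilon:
            a = rng.integers(n_actions)
        else:
            a = rng.choice(np.flatnonzero(Q[s] == Q[s].max()))
        s2, r, done = step(s, a)
        # The Q-learning update: nudge Q(s, a) towards the reward plus
        # the discounted value of the best action in the next state.
        target = r if done else r + gamma * Q[s2].max()
        Q[s, a] += alpha * (target - Q[s, a])
        s = s2

print(Q.argmax(axis=1))  # learned policy per state (1 = "go right";
                         # the terminal state is never acted from)
```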

To build a more sophisticated robot that is able to learn, we might use some form of policy learning as a higher layer governing the actions of the robot. This would help the robot learn ever better actions for different scenarios. Then we might use other techniques to execute those actions, such as “take one step forwards without falling down”. All this will be rather complex and will require a great number of different components interacting in concert.

Incidentally, try talking about robotic process automation to a proper robotics researcher: they will look down their nose at you if there is no physical aspect to your robot. The key challenges are usually on the physical side – how to perceive or how to manipulate the real world. Admittedly, automation of more complex business processes might also draw from many fields of research and utilize sophisticated machine learning models.

Social Intelligence and Creativity

One should still mention a few further objectives that are rather visible today, such as creating machines with Social Intelligence and Creativity.

For several decades already, we have had algorithms that can come up with solid proofs for mathematical theorems. These proofs might be solid but usually not so elegant – and perhaps not very creative, more trial and error.

More prominent today are the various recommendation engines that use machine learning as a means to suggest new products and services for us to buy. Conversational AI platforms aim to provide tools for enabling simple conversations, such as understanding human sentiment or classifying behavior. There is even a field called computational design – tools here can be useful in creating complex graphical user interfaces.

And of course, we’ve seen approaches such as generative adversarial networks used for creating realistic but fake celebrity faces. Perhaps more importantly, similar approaches have also proven very effective in fooling other machine learning models such as those used to classify images.

Think for example about AI tools for processing insurance claims or financial transactions. If your AI application is susceptible to adversarial attacks and the attacker’s benefit/risk ratio is good, you might want to consider carefully how to detect such attacks. It looks like that is a hard problem.
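
The core adversarial trick is remarkably simple. A minimal sketch of the fast gradient sign method (Goodfellow et al., 2014), where `loss_gradient` is a hypothetical stand-in for the gradient of your model’s loss with respect to its input:

```python
import numpy as np

def fgsm(x: np.ndarray, loss_gradient, epsilon: float = 0.01) -> np.ndarray:
    """Return an adversarially perturbed copy of input x: nudge every
    feature a small step in the direction that increases the loss."""
    return x + epsilon * np.sign(loss_gradient(x))

# Toy usage with a made-up gradient function:
x = np.zeros(4)
print(fgsm(x, lambda v: np.array([1.0, -2.0, 0.5, -0.1])))
# [ 0.01 -0.01  0.01 -0.01]
```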

First Conclusions

The whole point here is that there is a plethora of different approaches that can be used to build AI systems. Those approaches need to be painstakingly combined by teams of researchers and engineers if you hope to create new and sophisticated applications for real-life environments. It will not happen by accident.

At the same time, it is important to understand that today there exists a wealth of rather generic capabilities that might apply directly to your problem. If an existing approach matches well with your objective, it might be relatively straightforward to apply in your organization’s context. The question is whether you are familiar with the possibilities available today – a challenge in a fast-growing field.

But the machines are beating us at Go, at DOTA, …

The argument from Mr. Kissinger and from many other writers is that algorithms are already beating humans in many areas such as playing the game of Go or perhaps in multiplayer video games (Mr. Kissinger was referring to Go, not DOTA).

There appears to be some form of emergent behavior here, and there seems to be nothing keeping those machines from learning ever faster. Ergo, they will soon outpace and outwit us in all human ventures, and our role in the world will diminish to relative irrelevance.

Indeed the recent accomplishments have been spectacular. Because of such feats, it is easy to start humanizing the technology. This might however not improve the quality of your extrapolation of near-future capabilities.

If you ask the leading researchers in the field, we are in fact rather far from machines outwitting us in our generic day-to-day endeavors.

To understand this a bit further, let us consider the environments in which our intelligent agents will be acting. The complexity of those environments will greatly influence how sophisticated the agent needs to be.

Complex or simple environments?

Russell and Norvig (Artificial Intelligence: A Modern Approach, 2009) propose a model for classifying such environments. For example, is the environment deterministic – can the agent fully predict the next state of the environment based on the action it chooses?

Is it static – i.e. does the environment stay unchanged while the agent is deliberating on its next action? Can the agent fully observe the environment? How many agents are there in the environment? Are there hidden rules? Does the agent need to remember its past actions? Is the environment discrete (easier) or continuous (harder)?

You guessed it: while the game of Go is a marvelous game, with extreme complexity emerging from simple rules, as an environment for an AI system to grasp, it ticks almost all the boxes as simple.

Admittedly, Go is a multi-agent environment (the AI player and its opponent) but there are no random things taking place between the turns, it is discrete, there is nothing hidden and the environment is fully observable.

Finally, there is no need to remember the history of moves, as the current state of the game gives you all the necessary information to figure out the best next move. Or it might, if you are a grandmaster or some variant of AlphaGo.
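
One way to see the contrast with harder settings is to jot the checklist down as data. A quick sketch in Python, with property names that are my paraphrase of the Russell and Norvig classification:

```python
from dataclasses import dataclass

@dataclass
class Environment:
    deterministic: bool      # next state fully predictable from the action?
    static: bool             # unchanged while the agent deliberates?
    fully_observable: bool   # can the agent see the whole state?
    discrete: bool           # finite moves/states rather than continuous?
    multi_agent: bool        # other agents acting in it?

# Go ticks almost every box as simple.
go = Environment(deterministic=True, static=True, fully_observable=True,
                 discrete=True, multi_agent=True)

# Driving, discussed below, ticks almost every box as hard.
driving = Environment(deterministic=False, static=False,
                      fully_observable=False, discrete=False,
                      multi_agent=True)
```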

Simplifying greatly, you might consider AlphaGo as a kind of a brute-force approach. Instead of using brute-force to calculate more moves forward than is humanly possible, you use brute-force to practice and learn more than a human possibly can. In other words, in this relatively simple environment, a machine learning algorithm will be able to evaluate the value of possible next moves much better than a human can.

Thinking fast or slow?

Daniel Kahneman’s model of System 1 and System 2 thinking provides further interesting perspectives for us (Thinking, Fast and Slow, 2011). To jog your memory: Kahneman won a Nobel in 2002 for his work in behavioral economics.

In Kahneman’s work, System 1 refers to fast, instinctive and emotional thinking whereas System 2 is the slower, more deliberative and logical thinking. One can, of course, argue whether this is a valid model to represent all human cognition but Kahneman’s approach appears to have its merits. It also provides a simple framework for us to consider the machine learning capabilities available today.

Our contemporary machine learning systems focus on recognition rather than reasoning. That is to say, they focus on System 1 thinking. For example, a machine learning model might correctly recognize the value of a possible next move in a game. It will do this based on what it has learned in the past. However, it will not do reasoning, or System 2 thinking. All the work went into the training part – when the model is applied, it is only doing straightforward recognition.

In other words, our agent would be at a loss if the environment does not provide good opportunities for learning or if there is no pre-existing data set to help identify the right policies.

System 2 thinking is not a strange new creature in the field of AI. Just think of the expert systems that were the focus of much research decades ago. In fact, some hoped that such systems would pave the way for high degrees of automation in the office as well. It turned out that considerable amounts of human engineering are required to deploy such systems.

Extrapolating near-future capabilities

It is perhaps now easier to see how extrapolation errors might take place. The fact that we are able to create a recognition-based AI system that beats humans in a narrow and bounded environment does not imply that we are able to combine it with other types of thinking and deploy such systems into more complex environments while still matching human capabilities.

Of course, we as humankind are pushing the envelope. We are hoping to see autonomous vehicles acting, in the very near future, in much more complex environments than the game of Go. This is a huge engineering and research challenge tackled by whole industries. Because of the high value and consequently high investments, we are close to automating this single task in a relatively complex real-life environment.

We are also looking forward to combining capabilities such as learning and planning, or recognition and reasoning. In other words, building systems and models that are able to combine a meaningful set of System 1 and System 2 thinking.

This might, however, be even more difficult than building that autonomous car. We will take baby steps.

What should I do today?

For now, we do not have an AI system capable of general, human-level thinking. We are not even near. Arguably, we are closer than when the AI field was conceived, but I would still wager that there is a very long road ahead – a road with many grand challenges we do not even know about yet.

What we do have are machines and algorithms that surpass human capabilities in specific environments and tasks. This by itself is no news, but during the past few years both the scope of those environments and the range of those tasks have grown rapidly. What we also have is a tremendous opportunity to use these technologies to improve our work and leisure.

Even though leading-edge technology might be difficult to develop, once it is out there it might be surprisingly easy to apply. You might literally be weeks away from solving a problem you thought was infeasible to automate, or perhaps intractable for humans to manage.

The key here is to identify what to automate and what to leave for people. Remember when your boss first introduced the 80/20 rule? You will do better if you can use those algorithms and machines for the bulk of the work and combine that with human oversight for the more interesting cases. It is very difficult to get machine learning models to perform perfectly, but relatively easy to get the bulk of the work right.
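
As a minimal sketch of that split (the `model` interface and the threshold are hypothetical stand-ins for your own setup):

```python
def escalate_to_human(case):
    """Placeholder: put the case in a review queue for a person."""
    print(f"needs human review: {case}")

def triage(case, model, threshold=0.95):
    """Let the model decide the confident cases and route the rest to
    people; `model.predict(case)` is assumed to return (label, confidence)."""
    label, confidence = model.predict(case)
    if confidence >= threshold:
        return label                    # the machine handles the bulk
    return escalate_to_human(case)      # people get the interesting cases
```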

Machines and algorithms can handle highly repetitive tasks with higher speed and quality. They can also do large-scale data processing that is simply infeasible for us. The right person with the right skills will win hands down on most tasks in between.

Need to combine creativity with experience? There is ambiguity and no precedents? That stuff is for us.

The work where machines win was not fun for us to begin with. The real question is how you combine human efforts with machine capabilities.

More thinking for us – not the end of enlightenment!

Contrary to the worries of many, I believe that going forward it becomes more important to think with your own head, not the other way around. Furthermore, this will apply to all types of work and all levels of society – not just to a select few who are able to build and work with the leading edge.

There are many reasons for this, many of them outlined in the above discussion. Just think about it.

It is cheap to make copies of software. Compute just gets cheaper and more efficient. We can apply software to an increasingly large number of the most boring and tedious tasks we have. We are creating significant increases in productivity without adding considerable burden on the environment.

Moreover, we will still need human oversight, and more of it, as our technology still cannot do many of the things that are very basic for us.

It is not the end of enlightenment. Rather we will have more time to focus on things that we enjoy and that really matter for us.

Symbio will be attending European Business AI & Robotics in Finland on 23–24 October 2018.

Pekka Vainiomäki
Vice President – Strategic Engagements & Global Collaboration, Symbio Europe
