next up previous contents index
Next: First Experiments Up: The Goal: Evolution of Previous: Multicellular Programming and Swarm-Programming   Contents   Index


A Taxonomy for Artificial and Computational Intelligence

In the last sections and chapters, we have met ANN, DAI, EC, EDI, EP, ES, GA, GP, MAS, MP, OOOP and SP. These are all abbreviations for problem-solving approaches that try to create intelligence as defined in the introduction to chapter 4. All these methods are commonly summarized under two catchwords that sound similar but usually describe different parts of computer science: Artificial Intelligence and Computational Intelligence.

Artificial Intelligence (AI) is the oldest and best known research field with the goal of creating intelligent systems. Some people use AI as the generic term for all approaches with that goal and define it, for example, like John McCarthy in [McCarthy, 2001]:

Q. What is artificial intelligence?

A. It is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable.

Q. Yes, but what is intelligence?

A. Intelligence is the computational part of the ability to achieve goals in the world. Varying kinds and degrees of intelligence occur in people, many animals and some machines.
This definition of intelligence raises the question of how we can determine whether some unit achieves a goal. Does it have to represent this goal internally in a specific way, or is it sufficient that the unit's actions make sense to an observer who interprets them as steps towards achieving a goal? If the former: how do we know that humans or other animals achieve goals? How do they internally represent goals? We have no answer to this question, so we cannot define intelligence via explicitly represented goals. The ability to achieve goals in the world is then, in effect, only the ability to do things that make sense to a human observer, given either the goals he has preset for a machine or the goals he can imagine for a person or an animal.

Stuart J. Russell and Peter Norvig also use this definition of intelligence in [Russell and Norvig, 1994] and call it rationality. For them, a system is rational if it does the right thing, where the right thing is usually the one that best (or at least well) helps the system achieve its goals. Russell and Norvig call this the "rational agent approach".

There is a slight difference between these definitions of intelligence and the one presented in the introduction of chapter 4 and used in this publication. If a system counts as intelligent or "rational" whenever it does things that help it achieve its goals, then every correctly working machine and nearly all living beings are intelligent: the machine follows the goal for which it was produced, and the creature or plant follows the goal to survive and reproduce (a species that did not follow this goal would die out). But we would usually not call every such system intelligent, so this definition seems too wide. We call systems intelligent when they can perform tasks that seem very complex to us, not when they can perform just any task. Whether a performed task is complex enough to call a specific system intelligent, however, is a very subjective and dynamic judgment. Therefore, my definition does not describe intelligence in the manner of McCarthy, Russell and Norvig with an added precondition that the system must be able to perform complex tasks. Instead, it is formulated by explicitly referring to the judgment of an observer. Using this definition, evolutionary algorithms can create intelligence. But even though they seem to be included in the definition of AI used by McCarthy, Russell and Norvig, they are usually not seen as a part of artificial intelligence. In [Russell and Norvig, 1994], one of the best known books about AI, you find nearly nothing about EC, a few pages about Fuzzy Logic and only 35 pages (of 859) about ANNs. This is probably one of the reasons why the important fields of EC, Fuzzy Logic and ANN have joined forces under the name "computational intelligence".

Many AI researchers use more restricted definitions of artificial intelligence (which, in my opinion, better describe the common orientation of AI). Richard E. Bellman defines AI in [Bellman, 1978] as

...the automation of activities that we associate with human thinking, activities such as decision making, problem solving, learning ...
Richard Stottler [Stottler, 1999] defines AI as follows:

Artificial intelligence is the mimicking of human thought and cognitive processes to solve complex problems.
Patrick Henry Winston's definition of AI in [Winston, 1992] is:

Artificial Intelligence is ...the study of the computations that make it possible to perceive, reason, and act. From the perspective of this definition, Artificial Intelligence differs from most of psychology because of the greater emphasis on computation, ...
Many online encyclopedias similarly restrict the definition to human intelligence. The Webopedia [Webopedia, 2002] defines AI as:

The branch of computer science concerned with making computers behave like humans.
Whatis.com [Whatis.com, 2001] defines AI as follows:

Artificial intelligence is the simulation of human intelligence processes by machines, especially computer systems.
The AI depot [AI Depot, 2001] uses the following definition:

Artificial Intelligence is a branch of Science which deals with helping machines find solutions to complex problems in a more human-like fashion. This usually involves borrowing characteristics from human intelligence, and applying them as algorithms in a computer friendly way.
As we can see in these definitions, most of AI is very human-oriented. In [Russell and Norvig, 1994], Russell and Norvig distinguish between AI approaches centred around humans and approaches centred around rationality. As an example of the rationality approach they quote Winston's definition of AI. But as we have seen, Winston continues by distinguishing AI from psychology by saying that AI concentrates more on what computers can do: computation. In saying this, he acknowledges that AI is based mainly on psychological theories. This is also apparent in the weighting of subjects in his book (as in most AI books). But psychology is the study of human intelligence on the basis of behavioural experiments. According to the UNESCO definition, it belongs to the social sciences, not the natural sciences. Nearly all the other sciences inspiring artificial intelligence research are social sciences as well. Aaron Sloman writes about these influences in [Sloman, 1998]:

If we construe AI in this way (as studying how information is acquired, processed, stored, used, etc. in intelligent animals and machines) then it obviously overlaps with several older disciplines, including, for instance, psychology, neuroscience, philosophy, logic, and linguistics.
This listing of influences includes the natural science neurophysiology, but that discipline only overlaps with AI insofar as we regard the research area of artificial neural networks as a part of AI. ANNs are the most important example of the so-called connectionist approach to AI, which contrasts with classical computationalism. [Internet Encyclopedia of Philosophy, 2001] describes the latter approach (which clearly is the central approach of AI) as follows:

According to classical computationalism, computer intelligence involves central processing units operating on symbolic representations. That is, information in the form of symbols is processed serially (one datum after another) through a central processing unit. Daniel Dennett, a key proponent of classical computationalism, holds to a top-down progressive decomposition of mental activity.
This approach, whose influences are described in [Sloman, 1998] as follows, is often called classical AI.

It should be clear from all this that insofar as AI includes the study of perception, learning, reasoning, remembering, motivation, emotions, self-awareness, communication, etc. it overlaps with many other disciplines, especially psychology, philosophy and linguistics. But it also overlaps with computer science and software engineering ...
Most AI researchers consider ANN a part of AI, because most of us believe that human intelligence is located in the brain, and modelling the brain with an artificial neural network seems to be just another approach to modelling human intelligence. But neural cells are an important part of the intelligent functioning and behaviour of many different multicellular creatures, not only of humans. Current ANNs more closely resemble, for example, the nervous system of the sea-snail Hermissenda than that of humans. Yet these systems still do a good job of solving complex problems. So under the human-centred definitions of AI, ANN is not really a part of AI, because it does not simulate human intelligence. Moreover, in most publications about AI it plays a minor role, even though it is an important approach to creating intelligent systems. This has led many ANN researchers to see themselves more as a part of CI than of AI, because artificial intelligence is still often identified with classical AI. This is fostered by the fact that classical AI is the only area approaching the creation of intelligence that has no agreed-upon name of its own but usually simply calls itself AI.

Computational Intelligence (CI) is even more difficult to characterize than AI, as it is less a coherent research field than a collection of all the approaches to creating intelligent systems other than traditional AI. As the chairs of the ICSC congress "Computational Intelligence: Methods and Applications" put it on the introductory page of [Kuncheva and Porter, 2001]:

Defining "Computational Intelligence" is not straightforward. Several expressions compete to name the same interdisciplinary area. It is difficult, if not impossible, to accommodate in a formal definition disparate areas with their own established individualities such as fuzzy sets, neural networks, evolutionary computation, machine learning, Bayesian reasoning, etc.
The three main research areas united under the term "computational intelligence" are EC, ANN and Fuzzy Logic5. The last is a generalization of traditional (Boolean) logic which makes it possible to represent and handle uncertainty and vagueness mathematically. Its underlying principles, introduced in the 1960s by Lotfi A. Zadeh, can also be applied to set theory and other areas and have been successfully used for many complex control tasks in industry6.
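The generalization can be illustrated with a minimal sketch: where Boolean logic restricts truth values to {0, 1}, fuzzy logic allows any degree in [0, 1] and commonly combines degrees with the minimum and maximum operators introduced by Zadeh. The "warm" membership function and its threshold values below are invented purely for illustration.

```python
# Minimal sketch of fuzzy logic using Zadeh's min/max operators.
# The membership function for "warm" is an invented example.

def fuzzy_and(a, b):
    return min(a, b)      # conjunction: minimum of the truth degrees

def fuzzy_or(a, b):
    return max(a, b)      # disjunction: maximum of the truth degrees

def fuzzy_not(a):
    return 1.0 - a        # negation: complement of the truth degree

def warm(temp_celsius):
    """Degree to which a temperature counts as 'warm' (illustrative ramp)."""
    if temp_celsius <= 10:
        return 0.0
    if temp_celsius >= 30:
        return 1.0
    return (temp_celsius - 10) / 20.0

# 20 degrees is 'warm' to degree 0.5. Note that, unlike in Boolean logic,
# the conjunction of a statement and its negation need not be 0:
print(fuzzy_and(warm(20), fuzzy_not(warm(20))))  # 0.5
```

Restricted to the values 0 and 1, the three operators reduce exactly to the Boolean AND, OR and NOT, which is the sense in which fuzzy logic generalizes traditional logic.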

Another way to distinguish CI from AI is to stress the fact that CI uses subsymbolic knowledge processing, whereas classical AI uses symbolic approaches. The CI project at the University of Dortmund [Collaborative Research Center Computational Intelligence, 2002], for example, remarks:

In contrast to the traditional field of Artificial Intelligence (AI) CI makes use of subsymbolic, i.e. numerical, knowledge-representation and -processing. The probably most well known techniques of Computational Intelligence are Fuzzy Logic, Artificial Neural Networks, and Evolutionary Algorithms.
To understand this difference, you have to understand what is meant by symbolic representation. The article [Dictionary of Philosophy of Mind, 2001] provides a good introduction to the difference between the symbolic approach of AI and the distributed representations used in CI. In a nutshell: in a symbolic representation, the knowledge can be decomposed into symbols (e.g. a concept in a semantic net or a proposition in a logic representation), each of which has a particular meaning. In distributed or subsymbolic representations, by contrast, a meaning or a specific part of the knowledge cannot be clearly located; the knowledge is represented in the whole state of the system. The system produces its own meanings, which cannot be understood by humans.
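The contrast can be made concrete with a toy example (all facts and numbers below are invented for illustration): in a symbolic representation each stored item is a human-readable proposition, while in a distributed representation the same kind of knowledge is spread over numeric parameters, none of which carries a meaning on its own.

```python
# Toy contrast between symbolic and subsymbolic knowledge representation.
# The stored facts and weight values are invented examples.

# Symbolic: each entry is a discrete proposition with a meaning of its own,
# like an edge in a semantic net.
semantic_net = {
    ("bird", "can"): "fly",
    ("penguin", "is_a"): "bird",
    ("penguin", "can"): "swim",
}

# Subsymbolic: the 'knowledge' of a single artificial neuron is spread over
# its weights; no individual number means 'bird' or 'fly' by itself.
weights = [0.83, -1.27, 0.05, 2.41]   # illustrative values

def activation(inputs):
    """Weighted sum with a hard threshold: one artificial neuron."""
    s = sum(w * x for w, x in zip(weights, inputs))
    return 1 if s > 0 else 0

# The symbolic fact can be read off directly ...
print(semantic_net[("penguin", "can")])   # swim
# ... while the neuron's behaviour emerges only from all weights together.
print(activation([1, 0, 1, 0]))           # 1
```

Deleting one entry of the dictionary removes exactly one identifiable fact; perturbing one weight changes the neuron's behaviour in a way that cannot be pinned to any single piece of knowledge, which is the sense in which the representation is distributed.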

In this sense, human society, its sciences (e.g. psychology) and AI are symbolic, while nature, its sciences (e.g. neuroscience) and CI are subsymbolic. The knowledge representations of AI are closer to human understanding, but the representations in CI are closer to how nature works. I personally think that the natural approach has the more promising future, even though it is easier for us to build symbolic systems, which are therefore still often the best choice. The reason for this belief is the following line of thought:

I believe that neither humans nor any other animals think in symbols. Language is pure symbols, but we do not think in language. When we think of a concept that can be described by a symbol (a word of the language), we do not think of the symbol itself but of all the associations we have when we hear this specific word. These associations can often also be described by symbols, but they are not symbols themselves. They are made of many recollections of past internal states of the system "human being", produced by sensory inputs combined with the previous states of the system. Put more simply, we think in the combined recollections of many images, sounds, smells and other past sensory experiences. Symbols (which represent such a combination) are only a tool for communication between individual creatures. The approximate meaning of the symbols is common knowledge among the creatures, which helps them understand each other. A system of symbols best meets the requirements of the communicating creatures if they can develop it themselves. Likewise, a knowledge representation best meets the requirements of a creature if the creature can produce its own meanings. Every creature has to adapt its view of the world to its own sensory system and its own needs. It makes little sense to force human-made meanings and symbols upon a totally dissimilar creature which has to live in an entirely different environment from that of humans. Every artificial system is such a totally dissimilar "creature" in an entirely different environment, which means that the most promising approach to making it behave intelligently in its environment should be to let it develop its own view of the world in its own representation. Of course, we cannot give it the choice between all imaginable representations. But we can give it as much freedom as possible by providing it with a powerful, flexible and low-level representation without imposed human meanings. Given a good choice of the function set, GP seems to solve this problem best.
The only problem is that the more freedom we give to the creature, the longer it takes to breed it. But with faster computers, distributed computation and better GP methods, this problem steadily loses importance.
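The idea of breeding behaviour from a low-level representation without imposed meanings can be sketched with a minimal genetic-programming loop. The function set, target function and all parameters below are invented for illustration: candidate programs are random expression trees over a tiny function set, and elitist selection plus subtree mutation breed trees that approximate a target.

```python
import random

random.seed(0)  # make the illustrative run reproducible

# A deliberately low-level representation: trees over a tiny function set.
FUNCS = {"+": lambda a, b: a + b, "*": lambda a, b: a * b}
TERMS = ["x", 1.0]

def random_tree(depth=3):
    """Grow a random expression tree as nested tuples."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMS)
    op = random.choice(list(FUNCS))
    return (op, random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    """Interpret a tree for a given input value x."""
    if tree == "x":
        return x
    if isinstance(tree, tuple):
        op, left, right = tree
        return FUNCS[op](evaluate(left, x), evaluate(right, x))
    return tree  # numeric constant

def fitness(tree):
    """Squared error against the invented target x*x + 1 (lower is better)."""
    return sum((evaluate(tree, x) - (x * x + 1.0)) ** 2 for x in range(-5, 6))

def mutate(tree, depth=2):
    """Replace a randomly chosen subtree with a freshly grown one."""
    if not isinstance(tree, tuple) or random.random() < 0.3:
        return random_tree(depth)
    op, left, right = tree
    if random.random() < 0.5:
        return (op, mutate(left, depth), right)
    return (op, left, mutate(right, depth))

def evolve(pop_size=60, generations=40):
    """Elitist truncation selection plus mutation; returns the best tree."""
    pop = [random_tree() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 3]   # keep the best third
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(pop_size - len(survivors))]
    return min(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```

Crossover, a larger function set and the ADFs discussed earlier would make this a more serious GP system; the sketch only shows how the representation leaves it to evolution which subtrees end up carrying which "meaning".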

Figure 4.1: Research fields trying to create intelligent systems.
(Image: it-taxonomy.eps)

Figure 4.1 shows the described relations between Artificial Intelligence, Computational Intelligence, Evolutionary Computation, Artificial Neural Networks, Fuzzy Logic and the distributed versions of EC and AI: Evolution of Distributed Intelligence and Distributed Artificial Intelligence. It shows the different underlying philosophies of AI and (most of) CI and it shows the main influences inspiring the various approaches.

EC and ANN are inspired by the natural sciences, while AI is inspired by the social sciences. Fuzzy Logic is a little difficult to position: it is seen as a part of CI even though its inspirations come partly from human thinking and it is very closely related to symbolic approaches7. But clearly its main influences are mathematics and logic. Mathematics is important for almost every area of computer science; hence it is presented as the background of the whole taxonomy. Philosophy is, in my opinion, the basis of and an inspiration for every science. Therefore, it is not listed with AI, even though AI researchers like to mention it as an influence.

Taxonomies are always problematic, because they serve to divide different approaches, and the best science is the one that does not know any frontiers. It should be clear that a taxonomy cannot completely define a research field; there are always relations, combinations and variations that cannot be displayed. On the other hand, a taxonomy also helps in getting an overview of a large field and, as such, can make you look beyond your own frontiers. Likewise, it can make you understand relations between different approaches, which can help in combining them where this seems promising.


 
© 2002 Peter Schmutter (http://www.schmutter.de)