"The . . . question, 'Can machines think?' I believe to be too meaningless to deserve discussion. Nevertheless I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted."
The above paragraph is taken from Alan Turing's celebrated and oft-quoted paper "Computing Machinery and Intelligence," published in 1950.
As Ivars Peterson describes in his recent column in MAA Online (February 26), the IBM chess-playing computer Deep Blue just gave World Chess Champion Garry Kasparov a good run for his money. Should we then credit Deep Blue with intelligence?
Certainly, we generally assume that the ability to play a good game of chess requires intelligence when it comes to people. Why not for chess-playing machines as well?
Well, here is one argument that shows that things are not so simple. One aspect of human intelligence, and a regular component of tests designed to measure intelligence in children and in adults seeking employment, is skill at arithmetic. And yet a ten-dollar calculator can outperform almost any human being when it comes to arithmetic. Does it follow that a calculator is intelligent? Most people would answer, "No."
How about the ability to solve algebraic equations? Again, when the solution is found by a person, this task is generally regarded as requiring intelligence, but there are a number of computer algebra systems available that can solve algebra problems far more quickly, and with far less chance of error, than most people can.
In both cases, arithmetic and algebra, the computer arrives at the answer in roughly the same way as a person, by following the appropriate mathematical rules. And yet we regard human proficiency at arithmetic and algebra as requiring intelligence but an even greater machine proficiency at the same tasks as being 'merely mechanical rule following'.
The distinction between the ways mind and machine operate is clearer when it comes to playing chess. Considerable effort has been put into the development of chess-playing computer systems such as Deep Blue. However, they achieve their success not by adopting clever strategies, but by essentially brute force methods. Taking advantage of the immense computational power and speed of today's high performance computers, they examine huge numbers of possible plays, choosing the one that offers the greatest chance of success. There is no 'intelligence' involved. (See Peterson's article.) In contrast, a good human chess player might consider as many as a hundred possible moves (generally far fewer), and follow the likely ensuing play for at most a dozen of those. The initial choice of moves to be considered in detail is one of the things that marks the good human chess player.
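The brute-force search just described can be sketched in miniature. The following is a toy minimax search over a hand-built game tree; it is nothing like Deep Blue's actual program (which adds specialized hardware, pruning, and an elaborate position-evaluation function), and the tree and all names here are invented purely for illustration.

```python
def minimax(node, maximizing):
    """Exhaustively search a game tree given as nested lists.

    Interior nodes are lists of child positions; leaves are numeric
    evaluations of a final position from the first player's viewpoint.
    The machine examines every line of play -- brute force, with no
    human-style preselection of 'promising' moves.
    """
    if not isinstance(node, list):      # leaf: a terminal evaluation
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# A tiny two-ply game: the first player picks a branch, then the
# opponent picks the leaf within it that is worst for the first player.
tree = [[3, 12], [2, 4], [14, 1]]
print(minimax(tree, maximizing=True))   # the first player can guarantee 3
```

A human master, by contrast, would never enumerate the whole tree; the point of the sketch is only that the machine's exhaustive method reaches a good move without anything resembling insight.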
Moving away from chess, is it even theoretically possible for a computer system--that is to say, a computer program running on a conventional electronic digital computer--to be intelligent? If a computer program performs a task that requires intelligence when performed by a person, should we describe that program as behaving in an 'intelligent fashion'? On the basis of the above observations about arithmetic, algebra, and chess, the answer would seem to be "No." But are we being guided by human pride rather than rationality--by a desire to be unique in our intelligence? After all, a jet airplane does not stay in the air by converting food to energy and flapping its wings the way a bird does, but we still say that the plane 'flies'. It achieves the same end--flying--in a completely different way, a way more suited to its structure and design.
One difficulty in deciding whether or not to ascribe 'intelligence' to a computer system is that we must first be clear just what we mean by intelligence. Unfortunately, though most of us might feel that we know intelligence when we see it in other people, science has yet to come up with an acceptable definition.
It was precisely the difficulty of defining intelligence in humans that led Alan Turing himself to propose a definition of 'intelligence' that could reasonably be applied to computers. Writing in the same article quoted earlier, Turing formulated a simple test for machine intelligence, known nowadays as the Turing Test.
The Turing Test asks you to imagine you are sitting at a computer terminal through which you can carry out a conversation (typing at a keyboard and reading responses on a screen) with two partners, one a computer, the other a person. Your two partners are named A and B, but you do not know which one is the computer and which one the person. You cannot see either the person or the computer; your only communication with them is through the terminal. If you address a question to A, then A will always answer, and likewise B will always answer a question directed to B. Your task is to try to decide, on the basis of a conversation with A and B, which one is the computer and which one the person. If you are not able to identify the computer reliably, then, says Turing, it is entirely reasonable to say that the computer is 'intelligent'--it passes the Turing test for intelligence. I should point out that, although the hidden person might be expected to answer truthfully, the computer is under no obligation to tell the truth, so in particular, questions such as "Tell me, A, are you a computer?" are unlikely to resolve the issue for you.
It was only five years after the appearance of Turing's 1950 paper--in which he singled out programming a computer to play chess and to understand natural language as the two most obvious challenges to attempt first--that work began on both challenges.
In 1955, Allen Newell wrote a paper analyzing the problems facing anyone trying to program a computer to play chess, and by 1956, a group at Los Alamos National Laboratory had programmed a computer to play a poor but legal game of chess. At about the same time, Anthony Oettinger began work on automated language translation by programming a Russian-English computer dictionary.
However, neither of these two projects offered anything that might be termed 'intelligent behavior' on the part of the computer; they were simply 'automation' processes whereby straightforward tasks were implemented on a computer. The first genuine attempt to create machine intelligence was made by Allen Newell, Clifford Shaw, and Herbert Simon of the RAND Corporation, who in 1956 produced a computer program called The Logic Theorist. The aim of this program was to prove theorems in mathematical logic, in particular the theorems in the early part of Whitehead and Russell's Principia Mathematica. In fact, The Logic Theorist proved 38 of the first 52 theorems in Principia Mathematica.
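One way to see why propositional logic is so tractable a target for a computer is that any formula can be verified mechanically by exhausting its truth table. The sketch below does exactly that; it is not how The Logic Theorist worked (that program searched for proofs using human-style heuristics rather than brute enumeration), and the function names and example are my own illustration.

```python
from itertools import product

def is_tautology(formula, variables):
    """Check a propositional formula by exhaustive truth-table search.

    `formula` is a function from truth values to a truth value; it is a
    tautology if it comes out true under every possible assignment.
    With n variables there are only 2**n cases -- a finite, rigidly
    defined task, exactly the kind a machine handles effortlessly.
    """
    return all(formula(*values)
               for values in product([True, False], repeat=variables))

implies = lambda a, b: (not a) or b   # material implication

# An early Principia-style theorem: (p -> not p) -> not p
print(is_tautology(lambda p: implies(implies(p, not p), not p), 1))  # True

# A non-theorem for contrast: p -> q is falsified by p=True, q=False
print(is_tautology(lambda p, q: implies(p, q), 2))                   # False
```

The contrast with natural language, taken up below, is stark: there is no analogous finite checklist for deciding whether an utterance is meaningful or apt.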
It has to be admitted that propositional logic is ideally suited to being performed on a computer, having an extremely simple, rigidly defined language and a small set of fixed, well-defined axioms and rules. Turing's choice of human-machine conversation as an intelligence test for computers was a far more challenging task. Natural language communication is now known to be one of the hardest tasks that faces anyone trying to build 'intelligent machines'. Forget all of those smooth-talking computers and robots on television and in the movies, such as KITT, the talking automobile in the TV series Knight Rider, and HAL, the eventually malevolent on-board mission-control computer in Stanley Kubrick's space-travel movie 2001: A Space Odyssey. In the real world, no computer system has come close to passing the Turing test, and, I claim, there is good reason to assume that none ever will.
What that good reason is will be the topic of future columns in this series. My claim--which many readers will regard as heretical coming from a mathematician--is closely bound up with what I see as significant limitations on what can be achieved by the methods of mathematics in the domain of human reasoning and communication. In this regard, I am at least in good mathematical company. Let me end with the following words of Blaise Pascal.
"The difference between the mathematical mind and the perceptive mind: the reason that mathematicians are not perceptive is that they do not see what is before them, and that, accustomed to the exact and plain principles of mathematics, and not reasoning till they have well inspected and arranged their principles, they are lost in matters of perception where the principles do not allow for such arrangement. . . . These principles are so fine and so numerous that a very delicate and very clear sense is needed to perceive them, and to judge rightly and justly when they are perceived, without for the most part being able to demonstrate them in order as in mathematics; because the principles are not known to us in the same way, and because it would be an endless matter to undertake it. We must see the matter at once, at one glance, and not by a process of reasoning, at least to a certain degree. . . . Mathematicians wish to treat matters of perception mathematically, and make themselves ridiculous . . . the mind . . . does it tacitly, naturally, and without technical rules."
Devlin's Angle is updated on the first of each month. The above article is adapted from his forthcoming book Goodbye Descartes: The Quest for a Science of Reasoning and Communication, to be published in the fall by Wiley.