Devlin's Angle

January 1997

Why 2001 Won't be 2001

This month, January 12, to be precise, sees the birthday of HAL, the mission-control computer on the Jupiter-bound spaceship Discovery in Arthur C. Clarke's celebrated science fiction novel 2001: A Space Odyssey.

According to the book, HAL was commissioned at Urbana, Illinois, on January 12, 1997. In Stanley Kubrick's 1968 movie version, the date of HAL's birth was inexplicably changed to January 12, 1992. In any event, whether HAL is just about to be born or preparing to celebrate its fifth birthday, with the year 2001 practically upon us, it's natural to ask how correct Clarke and Kubrick's vision of the future has turned out to be.

Thirty years ago, when the film was made, director Kubrick endowed HAL with capabilities computer scientists thought would be achieved by the end of the century. With a name that, despite Clarke's claim to the contrary, some observers suggested was a simple derivation of IBM (just go back one letter of the alphabet), HAL was, many believed, science fiction shortly to become fact.

In the movie, a team of five new millennium space explorers set off on a long journey of discovery to Jupiter. To conserve energy, three of the team members spend most of the time in a state of hibernation, their life-support systems being monitored and maintained by the on-board computer HAL. Though HAL controls the entire spaceship, it is supposed to be under the ultimate control of the ship's commander, Dave, with whom it communicates in a soothingly soft, but emotionless male voice (actually that of actor Douglas Rain). But once the vessel is well away from Earth, HAL shows that it has developed what can only be called a "mind of its own." Having figured out that the best way to achieve the mission for which it has been programmed is to dispose of its human baggage (expensive to maintain and sometimes irrational in their actions), HAL kills off the hibernating crew members, and then sets about trying to eliminate its two conscious passengers. It manages to maneuver one crew member outside the spacecraft and sends him spinning into outer space with no chance of return. Commander Dave is able to save himself only by entering the heart of the computer and manually removing its memory cells. Man triumphs over machine--but only just.

It's a good story. (There's a lot more to it than just described.) But how realistic is the behavior of HAL? We don't yet have computers capable of genuinely independent thought, nor do we have computers we can converse with using ordinary language. True, there have been admirable advances in systems that can perform useful control functions requiring decision making, and there are working systems that recognize and produce speech. But they are all highly restricted in their scope. You get some idea of what is and is not possible when you consider that it has taken AT&T over thirty years of intensive research and development to produce a system that can recognize the three words 'yes', 'no', and 'collect' with an acceptable level of reliability for a range of accents and tones. Despite the oft-repeated claims that "the real thing" is just around the corner, the plain fact is that we are not even close to building computers that can reproduce human capabilities in thinking and using language. And according to an increasing number of experts, we never will.

In contrast to the present view, at the time 2001 was made, there was no shortage of expert opinion claiming that the days of HAL ("HALcyon days," perhaps?) were indeed just a few years off. The first such prediction was made by the mathematician and computer pioneer Alan Turing. In his celebrated article Computing Machinery and Intelligence, written in 1950, Turing claimed, "I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted."

Though the last part of Turing's claim seems to have come true, that shift is a popular response to years of hype rather than a reflection of the far less glamorous reality. There is now plenty of evidence, from psychology, sociology, and linguistics, to indicate that the original, ambitious goals of machine intelligence are not achievable, at least not when those machines are electronic computers, no matter how big or fast they get. So how did the belief in intelligent machines ever arise?

From the time the first modern computers were built in the late 1940s, it was obvious that they could do some things that had previously required an "intelligent mind." For example, by 1956, a group at Los Alamos National Laboratory had programmed a computer to play a poor but legal game of chess. That same year, Allen Newell, Clifford Shaw, and Herbert Simon of the RAND Corporation produced a computer program called The Logic Theorist, which could--and did--prove some simple theorems in mathematics.

The success of The Logic Theorist immediately attracted a number of other mathematicians and computer scientists to the possibility of machine intelligence. The mathematician John McCarthy organized what he called a "two month ten-man study of artificial intelligence" at Dartmouth College in New Hampshire, thereby coining the phrase "artificial intelligence," or AI for short. Among the participants at the Dartmouth program were Newell and Simon, Marvin Minsky, and McCarthy himself. The following year, Newell and Simon produced the General Problem Solver, a computer program that could solve the kinds of logic puzzles you find in newspaper puzzle columns and in the puzzle magazines sold at airports and railway stations. The AI bandwagon was on the road and gathering speed.

As is often the case, the mathematics on which the new developments were based had been developed many years earlier. Attempts to write down mathematical rules of human thought go back to the ancient Greeks, notably Aristotle and Zeno of Citium. But the really big breakthrough came in 1854, when an English mathematician called George Boole published a book called An Investigation of the Laws of Thought. In this book, Boole showed how to apply ordinary algebra to human thought processes, writing down algebraic equations in which the unknowns denoted not numbers but human thoughts. For Boole, solving an equation was equivalent to deducing a conclusion from a number of given premises. With some minor modifications, Boole's nineteenth-century algebra of thought lies beneath the electronic computer and is the driving force behind AI.
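To get a feel for what Boole's idea amounts to in modern dress, here is a minimal sketch, written in present-day Python rather than Boole's own notation (the little helper called follows is invented purely for this illustration): propositions become variables taking only the values 0 and 1, and a conclusion follows from some premises precisely when every assignment of 0s and 1s that makes all the premises true also makes the conclusion true.

    # A toy rendering of Boole's "algebra of thought" in propositional dress.
    # Propositions are variables over {0, 1}; a conclusion follows from the
    # premises if every 0/1 assignment satisfying all premises satisfies it.
    from itertools import product

    def follows(premises, conclusion, num_vars):
        """Check entailment by running through all 0/1 assignments."""
        for values in product([0, 1], repeat=num_vars):
            if all(p(*values) for p in premises) and not conclusion(*values):
                return False
        return True

    # Example: from "if it rains, the ground is wet" and "it rains",
    # conclude "the ground is wet" (r = it rains, w = the ground is wet).
    premises = [lambda r, w: (not r) or w,   # r implies w
                lambda r, w: r]              # r
    conclusion = lambda r, w: w

    print(follows(premises, conclusion, num_vars=2))   # prints True

This is, of course, only the modern propositional reading of Boole's algebra, but it captures his central insight: deduction reduced to calculation.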

Another direct descendant of Boole's work was the dramatic revolution in linguistics set in motion by MIT linguist Noam Chomsky in the early 1950s. Chomsky showed how to use techniques of mathematics to describe and analyze the grammatical structure of ordinary languages such as English, virtually overnight transforming linguistics from a branch of anthropology into a mathematical science. At the same time that researchers were starting to seriously entertain the possibility of machines that think, Chomsky opened up (it seemed) the possibility of machines that could understand and speak our everyday language.
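To give a flavor of the kind of mathematical description involved, here is a toy illustration in Python (not one of Chomsky's own grammars; the tiny rule set and the helper called generate are invented for this sketch): a generative grammar is just a collection of rewrite rules, and a sentence is produced by starting from a sentence symbol and replacing symbols according to the rules until only words remain.

    # A toy generative grammar: a handful of rewrite rules that
    # produce simple English sentences of the form "the N V the N".
    import random

    rules = {
        "S":  [["NP", "VP"]],
        "NP": [["the", "N"]],
        "VP": [["V", "NP"]],
        "N":  [["astronaut"], ["computer"]],
        "V":  [["watches"], ["controls"]],
    }

    def generate(symbol="S"):
        """Expand a symbol by applying rewrite rules until only words remain."""
        if symbol not in rules:               # a terminal word
            return [symbol]
        expansion = random.choice(rules[symbol])
        return [word for part in expansion for word in generate(part)]

    print(" ".join(generate()))   # e.g. "the computer controls the astronaut"

Read in the other direction, rules of this kind are what give a machine at least a fighting chance of analyzing a sentence it is handed, rather than merely producing one.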

The race was on to turn the theories into practice. Unfortunately (some would say fortunately), after some initial successes, progress slowed to a crawl. The result was hardly a failure in scientific terms. For one thing, we do have some useful systems, and they are getting better all the time. The most significant outcome, however, has been an increased understanding of the human mind: how unlike a machine it is and how unmechanical human language use is.

One reason why computers cannot act intelligently is that logic alone does not produce intelligent behavior. As neuroscientist Antonio Damasio pointed out in his 1994 book Descartes' Error, you need emotions as well. That's right, emotions. While Damasio acknowledges that allowing the emotions to interfere with our reasoning can lead to irrational behavior, he presents evidence to show that a complete absence of emotion can likewise lead to irrational behavior. His evidence comes from case studies of patients in whom brain damage--whether from physical accident, stroke, or disease--has impaired their emotions but has left intact their ability to perform 'logical reasoning,' as verified using standard tests of logical reasoning skill. Take away the emotions and the result is a person who, while able to conduct an intelligent conversation and score highly on standard IQ tests, is not at all rational in his or her behavior. Such people often act in ways highly detrimental to their own well-being. So much for Western science's idea of a 'coolly rational person' who reasons in a manner unaffected by emotions. As Damasio's evidence indicates, truly emotionless thought leads to behavior that by anyone else's standards is quite clearly irrational.

And as linguist Steven Pinker explained in his 1994 book The Language Instinct, language too is perhaps best explained in biological terms. Our facility for language, says Pinker, should be thought of as an organ, along with the heart, the pancreas, the liver, and so forth. Some organs process blood, others process food. The language organ processes language. Think of language use as an instinctive, organic process, not a learned, computational one, says Pinker.

So, while no one would deny that work in AI and computational linguistics has led to some very useful computer systems, the really fundamental lessons that were learned were not about computers but about ourselves. The research was successful in terms not of engineering but of understanding what it is to be human. Though Kubrick got it dead wrong in terms of what computers would be able to do by 1997, he was right on the mark in terms of what we ultimately discover as a result of our science. 2001 shows the entire evolution of mankind, starting from the very beginnings of our ancestors Homo erectus and taking us through the age of enlightenment into the present era of science, technology, and space exploration, and on into the then-anticipated future of routine interplanetary travel. Looking ahead to the start of the new millennium, Kubrick had no doubt where it was all leading. In the much discussed--and much misunderstood--surrealistic ending to the movie, Kubrick's sole surviving interplanetary traveler reaches the end of mankind's quest for scientific knowledge, only to be confronted with the greatest mystery of all: Himself. In acquiring knowledge and understanding, in developing our technology, and in setting out on our exploration of our world and the universe, said Kubrick, scientists were simply starting on a far more challenging journey into a second unknown: the exploration of ourselves.

The approaching new millennium sees mankind about to pursue that new journey of discovery. Far from taking away our humanity, as many feared, attempts to get computers to think and to handle language have instead led to a greater understanding of who and what we are. As a human being, I like that. For today's scientist, inner space is the final frontier, a frontier made accessible in part by attempts to build a real-world HAL. As a mathematician, I like that, too. Happy birthday, HAL.


The above celebration of the birth of HAL, the computer in the book and film 2001, is abridged from the book Goodbye Descartes: The End of Logic and the Search for a New Cosmology of Mind, by Keith Devlin, published by John Wiley and Sons in late January, 1997, price $27.95.


Devlin's Angle is updated at the beginning of each month.


Keith Devlin (devlin@stmarys-ca.edu) is the editor of FOCUS, the news magazine of the MAA. He is the Dean of Science at Saint Mary's College of California, and the author of Mathematics: The Science of Patterns, published by W. H. Freeman in 1994.
