LATER WORK IN AI

Work in AI in the 1990s made great strides in several areas. In the field of strategy games, the checkers-playing program Chinook became the first computer program to win a human world championship. In the mid-1990s, using a combination of preprogrammed knowledge and learning techniques, the backgammon-playing program TD-Gammon revised the views of expert players with respect to positional judgment. And computer programs for playing chess, using special-purpose parallel-processing hardware, finally reached the grand-master level: in May 1997 a combination of computer hardware and software developed by an IBM team and named Deep Blue beat the reigning world chess champion, Garry Kasparov, in a six-game match.

In the field of planning and scheduling, AI techniques, sometimes in conjunction with more traditional operations-research methods, have proven successful in many areas of application. These include space applications (for example, scheduling the Hubble Space Telescope and some Space Shuttle mission components), semiconductor manufacturing, heavy manufacturing, and military logistics. As an example of the last-named area, the Dynamic Analysis and Replanning Tool (DART) was used for logistics planning and analysis in the Persian Gulf.

Other AI successes have included autonomous system monitoring and diagnosis for spacecraft such as those in NASA's Voyager and Deep Space programs. Also, although there is still work to be done in the area of continuous, speaker-independent speech recognition, such systems are now sold commercially. In addition, an impressive demonstration of computer-vision techniques was provided by "No Hands Across America": the already-mentioned successful journey of the modified minivan NAVLAB from Pittsburgh to San Diego.

More broadly, some researchers in AI have taken the concept of an intelligent agent to heart and have worked to understand how collections of such agents should behave. For example, so-called Cooperative Distributed Problem Solving systems consist of several computational agents designed to act together to solve problems; systems of this sort have been developed for applications such as distributed electricity management and failure diagnosis. By contrast, in multiagent systems each agent operates in a "self-interested" manner. Such systems have been developed, for example, to do comparison shopping on the World Wide Web and to participate rationally in online auctions on behalf of their human developers. Another emerging application area for intelligent agents is entertainment, such as the creation of interesting characters in games, interactive books, and computer-animated movies and television.

Learning techniques, including decision trees, neural networks, and reinforcement learning, were successfully deployed in areas such as handwriting recognition, credit-card-fraud detection, and the game-playing and vehicle-navigation applications already mentioned. In the 1990s the new application area of "data mining", that is, learning from historical data, became extremely important as large companies began to see a use for the voluminous historical data on customers and suppliers that is now being kept electronically.

In the area of reasoning with uncertainty, the technique of "belief nets" provides a principled but efficient way to build computer systems that must make decisions about uncertain actions in the face of uncertain observations, and to learn what actions to take.
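To make the belief-net idea concrete, here is a minimal sketch in Python of the simplest possible net: a single uncertain cause influencing a single uncertain observation. The variable names and probabilities are invented for illustration; real belief nets encode many variables and use general-purpose inference algorithms, but the underlying principle, applying Bayes' rule to combine a prior with conditional evidence, is the same.

    # A minimal belief-net sketch (hypothetical numbers): two binary
    # variables, Burglary -> Alarm. Given an uncertain observation (the
    # alarm sounded), we infer the probability of the uncertain cause.

    P_BURGLARY = 0.001                      # prior P(Burglary = true)
    P_ALARM_GIVEN = {True: 0.95,            # P(Alarm | Burglary = true)
                     False: 0.01}           # P(Alarm | Burglary = false)

    def posterior_burglary_given_alarm():
        """Return P(Burglary = true | Alarm = true) by enumeration."""
        joint_true = P_BURGLARY * P_ALARM_GIVEN[True]            # P(B, A)
        joint_false = (1 - P_BURGLARY) * P_ALARM_GIVEN[False]    # P(~B, A)
        return joint_true / (joint_true + joint_false)           # normalize

    print(posterior_burglary_given_alarm())  # ~0.087: alarm alone is weak evidence

Note how the numbers are declared separately from the inference procedure: updating the model means changing the tables, not the code that reasons over them.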
THE FUTURE OF AI RESEARCH

One impediment to building even more useful expert systems has been, from the start, the problem of input, in particular the feeding of raw data into an AI system. To this end, much effort has been devoted to speech recognition, character recognition, machine vision, and natural-language processing. A second problem is in obtaining knowledge. It has proved arduous to extract knowledge from an expert and then code it for use by the machine, so a great deal of effort is also being devoted to learning and knowledge acquisition.

One of the most useful ideas that has emerged from AI research, however, is that facts and rules (declarative knowledge) can be represented separately from decision-making algorithms (procedural knowledge). This realization has had a profound effect both on the way that scientists approach problems and on the engineering techniques used to produce AI systems. By adopting a particular procedural element, called an inference engine, development of an AI system is reduced to obtaining and codifying sufficient rules and facts from the problem domain. This codification process is called knowledge engineering. Reducing system development to knowledge engineering has opened the door to non-AI practitioners, and business and industry have been recruiting AI scientists to build expert systems.
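The following toy Python sketch illustrates this separation. The rules and facts, invented here for a made-up diagnostic domain, are the declarative knowledge; the forward-chaining loop is the procedural inference engine. Swapping in a new problem domain means rewriting only the rules and facts, never the engine.

    # Declarative knowledge: (premises, conclusion) pairs and observed facts.
    # Both are hypothetical and exist only to illustrate the separation.
    RULES = [
        ({"has_fever", "has_rash"}, "suspect_measles"),
        ({"suspect_measles"}, "recommend_isolation"),
    ]
    FACTS = {"has_fever", "has_rash"}

    def forward_chain(rules, facts):
        """Procedural knowledge: fire any rule whose premises all hold,
        repeating until no new conclusions can be derived."""
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            for premises, conclusion in rules:
                if premises <= derived and conclusion not in derived:
                    derived.add(conclusion)
                    changed = True
        return derived

    print(forward_chain(RULES, FACTS))
    # e.g. {'has_fever', 'has_rash', 'suspect_measles', 'recommend_isolation'}

Production inference engines add conflict resolution, backward chaining, and explanation facilities, but the division of labor is the same: the engine is generic, and knowledge engineering supplies the content.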
A large number of problems in the AI field have been associated with robotics (see automata, theory of). There are, first of all, the mechanical problems of getting a machine to make very precise or delicate movements. Beyond those are the much more difficult problems of programming sequences of movements that will enable a robot to interact effectively with a natural environment, rather than with some carefully designed laboratory setting. Much work in this area involves problem solving and planning.

A radical approach to such problems has been to abandon the aim of developing "reasoning" AI systems and to produce instead robots that function "reflexively." A leading figure in this field has been Rodney Brooks of the Massachusetts Institute of Technology. Brooks and like-minded researchers felt that preceding efforts in robotics were doomed to failure because the systems produced could not function in the real world. Rather than trying to construct integrated networks that operate under centralized control and maintain a logically consistent model of the world, they pursue a behavior-based approach named subsumption architecture. Subsumption architecture employs a design technique called "layering", a form of parallel processing in which each layer is a separate behavior-producing network that functions on its own, with no central control. In these layers no true separation exists between data and computation: both are distributed over the same networks, and connections between sensors and actuators are kept short. The resulting robots might be called "mindless," but in fact they have demonstrated remarkable abilities to learn and to adapt to real-life circumstances.

The apparent successes of this new approach have not convinced many supporters of integrated-systems development that it is a valid route toward producing true AI. The arguments that have arisen between practitioners of the two methodologies are in fact profound ones, with implications for the nature of intelligence in general, whether natural or artificial.
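The layering idea can be suggested in a few lines of Python. In this sketch, with hypothetical sensors and behaviors, each layer maps sensor readings directly to an action with no shared world model, and a higher-priority layer subsumes, that is, overrides, the layers beneath it whenever it has an opinion. Real subsumption robots run the layers as concurrent networks rather than as a sequential loop; the priority scheme is what this sketch aims to convey.

    # A minimal subsumption-style sketch (hypothetical sensors/behaviors).
    # Each layer is an independent mapping from sensors to an action;
    # there is no central controller and no shared model of the world.

    def avoid_obstacles(sensors):       # higher-priority reflex layer
        if sensors["obstacle_near"]:
            return "turn_away"
        return None                     # no opinion; defer to lower layers

    def wander(sensors):                # lowest-priority default behavior
        return "move_forward"

    LAYERS = [wander, avoid_obstacles]  # ordered lowest to highest priority

    def act(sensors):
        """The highest layer with an opinion subsumes those below it."""
        for behavior in reversed(LAYERS):
            action = behavior(sensors)
            if action is not None:
                return action

    print(act({"obstacle_near": True}))   # 'turn_away'
    print(act({"obstacle_near": False}))  # 'move_forward'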