In the first flush of enthusiasm at the invention of computers it was believed that we now finally had the tools with which to crack the problem of the mind, and that within years we would see a new race of intelligent machines. We are older and wiser now. We look at animals, we look at humans, and we want to be able to build machines that do what they do. We want machines to be able to learn in the way that they learn, to speak, to reason and eventually to have consciousness. AI is engineering but, at this stage, is it also science? Is it, for example, modelling in cognitive science? We would like to think that it is both engineering and science, but the contributions that it has made to cognitive science so far are perhaps weaker than the contributions that biology has made to the engineering.

Looking back at the history of AI, we can see that perhaps it began at the wrong end of the spectrum. Feats of abstract reasoning, such as playing chess, impress us; the ability to walk, in contrast, doesn't impress anyone. Everyone ignored the animal and went straight to the human, and to the adult human at that, not even the child. And this is what "AI" has come to mean - artificial adult human intelligence. But it is not as if we can ignore the animal and child skills and just carry on with human-level AI. That is, the animal and child levels may be the key to making really convincing, well-rounded forms of intelligence, rather than the intelligence of chess-playing machines like Deep Blue, which are too easy to dismiss as "mindless". It took 3 billion years of evolution to produce apes, and then only another 2 million years or so for language and all the things that we are impressed by to appear. The hard work lay in the animal substrate; the impressive high-level abilities came comparatively late and fast, and that's certainly what the history of AI has served to bear out. The basic philosophy is that we need much more understanding of the animal substrates of human behaviour before we can fulfil the dreams of AI in replicating convincing, well-rounded intelligence.
Professor Kevin Warwick of the University of Reading has recently predicted that the new approach will lead to human-level AI in our lifetimes. I am not so confident. The reason is that there is no obvious way of getting from here to there - to human-level intelligence from the rather useless robots and brittle software programs that we have nowadays. What we are trying to do in the next generation is essentially to find out what the right questions to ask are.
The argument I am developing is that there may be limits to AI, not because the hypothesis of "strong AI" is false, but for more mundane reasons. We cannot make millions of these things and give them the living space in which to develop their own primitive societies, language and cultures. We can't because the planet is already full. That's the main argument, and the reason for the title of this talk.

What category of problems could animal-like machines address? The kind of problems we are going to see this approach tackle will be problems that are somewhat noise- and error-resistant and that do not demand abstract reasoning. You must experience the dynamics of your own body in infancy and thrash about until the changing internal numbers and weights start to converge on the correct behaviour. A major part of the appeal is the unique, fragile and unrepeatable nature of the software beings you interact with.

What will hold things up? There are many things that could hold up progress, but hardware is the one that is staring us in the face at the moment. The major theoretical issue to be solved is probably representation: what language is, and how we classify the world.

In the future, we may never be entirely alone, and if the controls are in the hands of our loved ones rather than the state, that may not be such a bad thing. And not just drunks, but children, the old and infirm and the blind will all be empowered. And our towns and countryside would look so much sparser and more peaceful.

I've been trying to give an idea of how artificial animals could be useful, but the reason that I'm interested in them is the hope that artificial animals will provide the route to artificial humans. In coming decades, we shouldn't expect that the human race will become extinct and be replaced by robots.