MAJOR THEMES AND TOPICS OF STUDY
The Nature of Representation
Because the topic of mental representation is central in cognitive science, debates about its exact nature continue. Early debates concerned issues such as whether visual information is stored as holistic images or as propositional networks. Another kind of distinction, this one imported from the field of artificial intelligence, is that between declarative knowledge and procedural knowledge. For example, early in training on a complex task, people can often describe the rules they are using in fairly explicit terms. With extensive practice, however, "proceduralization" may occur: the ability to state the rules and to reason explicitly about them diminishes even as performance improves.
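The declarative/procedural distinction can be illustrated with a toy sketch (not any particular cognitive model; the rules and function names here are invented for illustration): declarative knowledge is data the system can inspect and report, whereas proceduralized knowledge is compiled into a routine that behaves the same way but no longer exposes the rules.

```python
# Hypothetical illustration: declarative rules are explicit data that can be
# scanned, stated, and reasoned about; proceduralized knowledge is "compiled"
# into a routine whose rules are no longer available for verbal report.

declarative_rules = [
    ("if the light is red", "stop"),
    ("if the light is green", "go"),
]

def interpret(state):
    # Novice mode: scan the explicitly stated rules and match one.
    for condition, action in declarative_rules:
        if state in condition:
            return action

def proceduralized(state):
    # Expert mode: the same mapping is baked in; the original rules
    # can no longer be inspected or stated.
    return "stop" if state == "red" else "go"

print(interpret("red"))        # "stop", found by scanning the stated rules
print(proceduralized("red"))   # "stop", same behavior, rules not inspectable
```

Both functions behave identically; the difference lies in whether the knowledge that drives the behavior is explicitly represented.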
Different kinds of representational structures have been proposed, such as "schemas" (networks of related information, as in the contractual structure of a business agreement), "scripts" (expected sequences of activities, such as the events that take place in a restaurant or a dentist's office), and "naive physics models" (systems of belief about phenomena in the world, such as people's folk models of how evaporation causes cooling). Researchers have found that these kinds of representations are useful in predicting how people will respond to a given situation. For example, people tend to remember well those elements of a story that fit their schema for a situation, and they often distort inconsistent details so as to make them match their long-term-memory representation better. There are also larger knowledge structures, such as "semantic networks" that embody knowledge about interconnected categories, for example, taxonomies of knowledge about animals and plants. Some aspects of these taxonomic category structures appear to be cross-cultural.
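A semantic network of the taxonomic kind described above can be sketched as a graph of "is-a" links with property inheritance; the concepts and properties below are invented for illustration, and this is a minimal sketch rather than a model of any published theory.

```python
# Hypothetical sketch of a semantic network: categories are linked by
# "is-a" relations, and a query for a property climbs the taxonomy
# until a stored value is found (property inheritance).

is_a = {"canary": "bird", "bird": "animal", "animal": "organism",
        "oak": "plant", "plant": "organism"}

properties = {
    "canary": {"color": "yellow"},
    "bird": {"can_fly": True},
    "animal": {"breathes": True},
}

def lookup(concept, prop):
    # Walk upward through "is-a" links until the property is found
    # or the root of the taxonomy is reached.
    while concept is not None:
        if prop in properties.get(concept, {}):
            return properties[concept][prop]
        concept = is_a.get(concept)
    return None

print(lookup("canary", "can_fly"))   # inherited from "bird" -> True
print(lookup("canary", "breathes"))  # inherited from "animal" -> True
```

The point of the structure is that knowledge stored once at a superordinate category (birds fly) is available for every subordinate concept without being restated.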
Development and Learning
Study of cognitive development is also an important area of cognitive science. Beginning with the work of Jean Piaget and Lev Vygotsky, researchers have examined the ways in which children acquire understanding of the physical world and of conceptual structures. Researchers have found that in many cases there is considerable similarity between children's early performance and the performance of adults who are novices in a given domain. For example, when grouping physics problems, adults who know very little science show a classification pattern similar to that found for young children, grouping the problems according to their surface characteristics rather than according to underlying physical principles.
Such results have led many researchers to take a "child as universal novice" approach to cognitive development. According to this view, the striking differences between children's and adults' thinking are best explained in terms of gains in knowledge, rather than in terms of changes in the fundamental logic of the child's thought processes.
One area in which many other researchers have challenged this idea of a general-purpose learning system, however, is language. They argue that ordinary learning processes, at which adults outperform children, cannot account for children's well-documented facility in learning languages. This facility suggests that human infants possess an innate language-learning capacity, one that diminishes over time. More indirect evidence that language may have a special status in human cognition comes from neurophysiological studies suggesting that certain parts of the brain's left hemisphere are dedicated to language.
Because still other researchers emphasize the commonalities between language learning and other kinds of learning, the degree to which language should be considered separately from other aspects of cognition remains an active question.
Computational Models of Mind
The dominant computational model for most of cognitive science's history has been symbolic processing, in which structural descriptions expressed in symbols are the medium of representation. In the classical approach, the assumption was that human thinking could be described using the same kind of abstract language as is used to describe symbolic computer programs. In particular, such a language includes symbols that correspond to psychologically relevant concepts and rules for combining and reasoning about symbolic descriptions. More recently, with the rise of neural networks (abstract mathematical models of the human nervous system; see automata, theory of), genetic algorithms, and connectionist systems, new kinds of computational models have been applied to various aspects of human cognition.
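The symbolic style can be made concrete with a minimal forward-chaining production system; the facts and rules below are invented examples, and this sketch stands in for the general approach rather than any specific classical system. Knowledge is a set of symbolic facts, and rules fire to derive new facts until nothing more follows.

```python
# Minimal sketch of the symbolic approach: facts are symbolic structures
# (predicate, subject), and rules of the form "if premise(x) then
# conclusion(x)" are applied repeatedly until no new facts emerge.

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            for pred, subj in list(facts):
                if pred == premise and (conclusion, subj) not in facts:
                    facts.add((conclusion, subj))  # derive a new symbolic fact
                    changed = True
    return facts

facts = {("bird", "tweety")}
rules = [("bird", "has_wings"), ("has_wings", "can_fly")]

print(forward_chain(facts, rules))
# derives ("has_wings", "tweety") and then ("can_fly", "tweety")
```

The key symbolic commitments are visible here: discrete tokens that stand for concepts, and explicit rules that operate over the structure of those tokens.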
Connectionist models of pattern recognition have been able to combine top-down knowledge (constraints from the set of known patterns to the elements that make up the patterns) with bottom-up knowledge (constraints from the perceptual features to the global pattern to be recognized). Such models have shown humanlike behavior in such areas as word recognition and word pronunciation, including making psychologically plausible errors.
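The combination of bottom-up and top-down constraints can be sketched in the spirit of such word-recognition models, though the toy network below (its vocabulary, evidence values, and function names) is invented for illustration and is far simpler than any published connectionist model.

```python
# Hypothetical sketch: bottom-up evidence from letter units activates
# word units, and top-down feedback from the winning word resolves an
# ambiguous letter in favor of the letter that known word expects.

words = {"work": ["w", "o", "r", "k"], "word": ["w", "o", "r", "d"]}

# Bottom-up evidence per position; the final letter is ambiguous
# between "k" and "d", with slightly more support for "k".
letter_evidence = [{"w": 1.0}, {"o": 1.0}, {"r": 1.0}, {"k": 0.6, "d": 0.4}]

def recognize(words, letter_evidence):
    # Bottom-up: each word's activation is the summed evidence
    # for its letters at each position.
    activation = {
        w: sum(ev.get(ltr, 0.0) for ltr, ev in zip(letters, letter_evidence))
        for w, letters in words.items()
    }
    best = max(activation, key=activation.get)
    # Top-down: the most active word reinforces its own letters,
    # disambiguating the noisy final position.
    resolved = list(words[best])
    return best, resolved

print(recognize(words, letter_evidence))  # -> ('work', ['w', 'o', 'r', 'k'])
```

Even this toy version shows the signature property: the interpretation of an ambiguous element is settled jointly by the perceptual input and by knowledge of the patterns it could belong to.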
These models have had considerable success in some areas, notably perception and pattern recognition, and have also been applied to topics such as language learning, analogical mapping, and categorization. This research has led some workers to challenge the idea of rules operating over symbolic representations. They maintain that human knowledge is expressed in terms of networks of positive and negative connections between large numbers of subconceptual elements. Debate continues on the merits of connectionist modeling. Cognitive scientists such as David Rumelhart and James McClelland argue that symbolic representations are often unnecessary, while others such as Jerry Fodor and Zenon Pylyshyn argue that the generativity and systematicity of mental processing could not be achieved without a true representational level. Currently many researchers favor a hybrid approach that combines a neural net (for modeling low-level, perceptually driven phenomena such as pattern recognition) with a symbolic level (for modeling higher-order knowledge and reasoning).
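The division of labor in such hybrid architectures can be sketched as follows; the feature sets, weights, and rules are hand-invented for illustration (the weights are fixed rather than learned), so this is a schematic of the idea, not an implementation of any actual hybrid system.

```python
# Hedged sketch of a hybrid architecture: a toy "neural" layer maps
# perceptual features to a category by weighted summation, and a
# symbolic layer then reasons over that category with explicit rules.

def neural_classify(features):
    # Perceptually driven layer: weighted sum per category, winner takes all.
    # Weights are hand-set here for illustration, not learned.
    weights = {
        "cat": {"fur": 1.0, "whiskers": 1.0, "barks": -1.0},
        "dog": {"fur": 1.0, "whiskers": 0.2, "barks": 1.0},
    }
    scores = {
        label: sum(w.get(f, 0.0) for f in features)
        for label, w in weights.items()
    }
    return max(scores, key=scores.get)

def symbolic_reason(category):
    # Higher-order layer: explicit rules over the recognized category.
    rules = {"cat": "mammal", "dog": "mammal"}
    return rules.get(category)

label = neural_classify({"fur", "whiskers"})
print(label, symbolic_reason(label))  # -> cat mammal
```

The appeal of the hybrid view is visible even at this scale: graded, noise-tolerant recognition at the bottom feeds crisp, rule-governed inference at the top.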