Try a simple experiment. Stand beside a window, take a look at the scene outside and then sit down and quickly write a half-page report of what you have seen. It is a safe bet that you will mention people walking, cars, buses, streets and buildings, or (depending on the chosen window) grass, trees, hills, rivers and birds flying. It is extremely unlikely that the report will mention blue sport utility vehicles, belted kingfishers, or instances of strutting, ambling or swaggering — even if your automotive, avian or human-locomotion expertise allows you to identify things and events at that level of detail. At the other extreme, it will not have crossed your mind to write expressions such as “intentionally cause their body to move horizontally” or “self-propelled wheeled vehicle”, which some philosophers might use. Interestingly, your report will contain only terms that are the first to be learned by a child, are usually expressed by a single word in most languages, are remembered best and are preferentially used when we 'talk to ourselves'. They are, in the jargon of the cognitive sciences, 'basic concepts'.

Ideally, in a theory of concepts and in lexical semantics, the term 'basic' ought to cover the topmost level of abstraction (something like the undifferentiated essential furniture of the world). Or, at the other extreme — in a tradition that extends back to the philosopher David Hume — it ought to pick out our most direct access to unadorned sense impressions (a green splotch here and now in front of me). Unfortunately, 'basic concepts' sit comfortably at an intermediate level — they are neither too general nor too specific. Lunch is a basic concept, but so are bread, spoon and banana, all possible parts of that lunch. The desire to decompose such concepts into real basics has proved almost irresistible. A famous example is the (alleged) decomposition of 'kill' as 'to cause to become not alive'. The claim is that, in our mental lexicon, 'kill' is just shorthand for this composite expression. But consider situations in which the cause is separated from its outcome by a long and/or anomalous chain of intervening events, for example, the suicide of a government advisor after his involvement in a media row had been made public the previous week. It is clear that we will consider this an instance of 'causing to become not alive', but not an instance of 'killing'. It seems, therefore, that if any such decomposition is to hold water, we need an additional component X, so that kill = CAUSE + BECOME NOT ALIVE + X. That would be acceptable if X were both general (in the same league as CAUSE) and sufficient. But it turns out that X needs to be as specific as 'kill' itself. So there is no gain in understanding — and therefore no explanatory use — in any such decomposition.
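Spelled out in a schematic predicate-logic gloss (an illustrative notation, not one that the decompositional literature itself is committed to), the claim and its needed repair can be written as

\[
  \mathrm{kill}(x, y) \;\Leftrightarrow\;
  \mathrm{CAUSE}\bigl(x,\ \mathrm{BECOME}(\neg\,\mathrm{ALIVE}(y))\bigr)
  \;\wedge\; X(x, y).
\]

Without the extra conjunct X, the right-hand side is true of whoever set the advisor's fatal chain of events in motion while the left-hand side is false, so the biconditional fails from right to left; and any X strong enough to exclude such cases seems to end up as specific as 'kill' itself.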

It has been widely assumed in the philosophy of mind, psychology and lexical semantics that basic concepts must be homogeneous under some interesting description or other. For example, they may have a characteristic role to play in concept acquisition, perception, memory or thought. However, there is no serious evidence for any such claim and, when combined with a thorough account of concept possession, the assumption appears to lead to implications that are utterly implausible. For instance, it suggests that similarities among things imply similarities among concepts of those things. A rope may be similar to a snake in shape, but our concept of a rope is not at all similar to our concept of a snake. A leading thinker in this domain, Jerry Fodor, concludes that basic concepts have nothing interesting in common except their basicness — basic concepts are boring. But, he hastens to add, the fact that they are isn't!

Fodor champions a different approach altogether, treating basic concepts as 'atoms' that cannot be decomposed and that express exactly the property they express (for example, the property of killing) — no less and no more. There are two core components of this atomistic approach: first, a causal link between our mind and the property being exemplified or evoked in discourse; and second, some efficient way of presenting a good example of that very property. Better still would be a prototypical example: no one would introduce a child to the concept 'bird' by displaying a penguin, or to the concept 'tree' by displaying a bonsai. There is every reason to suppose that the human mind is natively equipped with the capacity to lock onto the salient property after an encounter, as long as it is supplied with the correct mode of presentation and the appropriate situation. The capacity to generalize instantly and competently from a good example, while retaining the corresponding verbal label, remains awesome and, at present, mainly mysterious.

It is tempting to make direct connections between the meaning of a concept and the most obvious inferences that the person who possesses it is disposed to make — for example, the inference from 'bird' to 'animal' and from 'water' to 'wet'. This move goes under the name of inferential role semantics. The counter to it is that the content of a concept cannot ultimately consist of any kind of readiness to do something, not even a 'disposition' to draw inferences. It is rather a mental particular that applies to things, notably to the standard instances of the category for which the concept stands. And even failure to identify marginal exemplars (is a whale a fish? Is mercury a metal?) does not count as failure to possess the concept fully.

As much of our mental life consists of applying concepts to things, it may come as a disappointment that the most plausible attempts at an explanation (decomposition, inferential power, strategies of verification) have not worked. Halfway between an avowal of impotence and an incitement to do better, the theory of concepts seems to be a signal place where, in spite of a voluminous literature, cognitive science “went wrong” — at least until now.

FURTHER READING

Fodor, J. A. Concepts: Where Cognitive Science Went Wrong (Oxford Univ. Press, Oxford, 1998).

Margolis, E. & Laurence, S. (eds) Concepts: Core Readings (MIT Press, Cambridge, Massachusetts, 1999).

Murphy, G. L. The Big Book of Concepts (MIT Press, Cambridge, Massachusetts, 2002).