Humans regularly intervene in others’ conflicts as third parties. This has been studied using the third-party punishment game: A third party can pay a cost to punish another player (the “dictator”) who treated someone else poorly. Because the game is anonymous and one-shot, punishers are thought to have no strategic reasons to intervene. Nonetheless, third parties often punish dictators who treat others poorly. This result is central to a controversy over human social evolution: Did third-party punishment evolve to maintain group norms or to deter others from acting against one’s interests? This paper provides a critical test. We manipulate the ingroup/outgroup composition of the players while simultaneously measuring the inferences punishers make about how the dictator would treat them personally. The group norm predictions were falsified: Outgroup defectors were punished most harshly, not ingroup defectors (as predicted by ingroup fairness norms) and not outgroup members generally (as predicted by norms of parochialism). The deterrence predictions were validated: Punishers punished most when they inferred that they personally would be treated worst by dictators, especially when better treatment would be expected given ingroup/outgroup composition.
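The payoff structure of the third-party punishment game described above can be sketched in a few lines. All parameter values here (endowment size, cost and impact of punishment) are illustrative assumptions, not the stakes used in the actual study:

```python
def third_party_punishment(keep_fraction, punish_units,
                           endowment=10, cost_per_unit=1, impact_per_unit=3):
    """Return (dictator, recipient, third_party) payoffs for one round.

    keep_fraction: share of the endowment the dictator keeps for themselves.
    punish_units: units of costly punishment chosen by the third party.
    """
    dictator = endowment * keep_fraction
    recipient = endowment * (1 - keep_fraction)
    third_party = endowment  # the third party starts with their own endowment

    # Punishment is costly to the punisher and reduces the dictator's payoff.
    dictator -= punish_units * impact_per_unit
    third_party -= punish_units * cost_per_unit
    return dictator, recipient, third_party

# A fully selfish dictator keeps everything; the third party spends 2 units.
print(third_party_punishment(1.0, 2))  # → (4.0, 0.0, 8)
```

The puzzle the abstract addresses is visible in the last line: the third party ends with 8 rather than 10, paying a real cost to punish mistreatment of a stranger in a one-shot, anonymous interaction.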
Are humans too generous and too punitive? Many researchers have concluded that classic theories of social evolution (e.g., direct reciprocity, reputation) are not sufficient to explain human cooperation; instead, group selection theories are needed. We think such a move is premature. The leap to these models has been made by moving directly from thinking about selection pressures to predicting patterns of behavior, while ignoring the intervening layer of evolved psychology that must mediate this connection. In real-world environments, information processing is a non-trivial problem, and details of the ecology can dramatically constrain potential solutions, often enabling particular heuristics to be efficient and effective. We argue that making the intervening layer of psychology explicit resolves decades-old mysteries in the evolution of cooperation and punishment.
Humans everywhere cooperate in groups to achieve benefits not attainable by individuals. Individual effort is often not automatically tied to a proportionate share of group benefits. This decoupling allows for free-riding, a strategy that (absent countermeasures) outcompetes cooperation. Empirically and formally, punishment potentially solves the evolutionary puzzle of group cooperation. Nevertheless, standard analyses appear to show that punishment alone is insufficient, because second-order free riders (those who cooperate but do not punish) can be shown to outcompete punishers. Consequently, many have concluded that other processes, such as cultural or genetic group selection, are required. Here, we present a series of agent-based simulations showing that group cooperation sustained by punishment easily evolves by individual selection when more biologically plausible assumptions about the social ecology and psychology of ancestral humans are introduced into standard models. We relax three unrealistic assumptions of past models. First, past models assume all punishers must punish every act of free riding in their group. We instead allow punishment to be probabilistic, meaning punishers can evolve to punish only some free riders some of the time. This drastically lowers the cost of punishment as group size increases. Second, most models unrealistically do not allow punishment to recruit labor; punishment merely reduces the punished agent’s fitness. We instead realistically allow punished free riders to cooperate in the future to avoid punishment. Third, past models usually restrict agents to interact in a single group their entire lives. We instead introduce realistic social ecologies in which agents participate in multiple, partially overlapping groups. Because of this, punitive tendencies are more frequently expressed and therefore more exposed to natural selection. These three moves toward greater model realism reveal that punishment and cooperation easily evolve by direct selection, even in sizeable groups.
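The three relaxed assumptions can be illustrated with a minimal agent-based sketch. This is not the paper's actual model: group size, payoff multipliers, punishment costs, and punishment probabilities below are all hypothetical, and the sketch runs repeated rounds rather than a full evolutionary dynamic.

```python
import random

random.seed(0)

# Illustrative parameters (assumptions, not the paper's calibrated values).
BENEFIT, COST = 4.0, 1.0           # public-goods multiplier and contribution cost
PUNISH_COST, PUNISH_HARM = 0.5, 2.0

class Agent:
    def __init__(self, cooperates, punish_prob):
        self.cooperates = cooperates    # current strategy in the public-goods game
        self.punish_prob = punish_prob  # assumption 1: punishment is probabilistic
        self.payoff = 0.0

def play_group(group):
    contributors = [a for a in group if a.cooperates]
    share = BENEFIT * len(contributors) / len(group)
    for a in group:
        a.payoff += share - (COST if a.cooperates else 0.0)
    # Each free rider faces only a *chance* of being punished (assumption 1),
    # so not every punisher pays the cost for every act of free riding.
    for a in group:
        if not a.cooperates:
            for p in group:
                if p is not a and random.random() < p.punish_prob:
                    a.payoff -= PUNISH_HARM
                    p.payoff -= PUNISH_COST
                    a.cooperates = True  # assumption 2: punishment recruits labor
                    break

# Assumption 3: agents play in many, changing groups rather than one fixed group.
agents = [Agent(cooperates=random.random() < 0.5,
                punish_prob=random.random() * 0.5) for _ in range(30)]
for _ in range(20):
    random.shuffle(agents)
    for i in range(0, len(agents), 5):
        play_group(agents[i:i + 5])

print(sum(a.cooperates for a in agents), "of", len(agents), "agents now cooperate")
```

Even in this toy version, the key qualitative effect is visible: because punished free riders switch to cooperation, cooperation spreads across rounds while each individual punisher pays the punishment cost only occasionally.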
Keep your word. Love your neighbor. Help those in need. Only marry the child of your father’s sister. Stone those who have had sex out of wedlock. Kill outgroup members. Human moral communities have developed a variety of moral rules, injunctions, prescriptions, and suggestions. How are we to understand the origins and nature of human morality, both at the level of universal building blocks and the level of cultural variation and elaboration? As this volume illustrates, understanding human moral psychology is a truly interdisciplinary endeavor, drawing important contributions from psychologists, neuroscientists, philosophers, biologists, legal scholars, and many others. Our goal in this chapter is to illustrate the utility of taking an adaptationist approach from evolutionary biology to understand universal aspects of moral psychology. We first describe what it means to take an adaptationist approach. We next give several examples of how an adaptationist approach has informed the study of certain aspects of moral psychology. We then briefly conclude with what we see as the value of this approach to the study of moral psychology broadly.
Humans are often generous, even towards strangers encountered by chance and even in the absence of any explicit information suggesting they will meet again. Because game-theoretic analyses typically conclude that a psychology designed for direct reciprocity should defect in such situations, many have concluded that alternative explanations for human generosity—explanations beyond direct reciprocity—are necessary. However, human cooperation evolved within a material and informational ecology: Simply adding consideration of one minimal ecological relationship to the analysis of reciprocity brings theory and observation closer together, indicating that ecology-free analyses of cooperation can be fragile. Using simulations, we show that the autocorrelation of an individual's location over time means that even a chance encounter with an individual predicts an increased probability of a future encounter with that same individual. We discuss how a psychology designed for such an ecology may be expected to often cooperate even in apparently one-shot situations.
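The autocorrelation effect described above can be demonstrated with a minimal simulation. This is a sketch under assumed parameters (a small toroidal grid, simple random walkers, arbitrary trial counts), not the paper's actual model:

```python
import random

random.seed(1)

# Illustrative assumptions: grid size, walk length, and number of trials.
SIZE, STEPS, TRIALS = 10, 50, 2000

def walk(start):
    """One agent's random walk on a toroidal grid; returns the visited path."""
    x, y = start
    path = []
    for _ in range(STEPS):
        dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        x, y = (x + dx) % SIZE, (y + dy) % SIZE
        path.append((x, y))
    return path

met_again = met = missed_then_met = missed = 0
for _ in range(TRIALS):
    a = walk((0, 0))
    b = walk((random.randrange(SIZE), random.randrange(SIZE)))
    half = STEPS // 2
    # Did the two agents occupy the same cell at the same time step?
    first = any(pa == pb for pa, pb in zip(a[:half], b[:half]))
    second = any(pa == pb for pa, pb in zip(a[half:], b[half:]))
    if first:
        met += 1
        met_again += second
    else:
        missed += 1
        missed_then_met += second

p_given_meet = met_again / met
p_given_no_meet = missed_then_met / missed
print(f"P(meet later | met earlier) = {p_given_meet:.2f}, "
      f"P(meet later | no earlier meeting) = {p_given_no_meet:.2f}")
```

Because each agent moves only one step at a time, location is autocorrelated: a pair that met by chance in the first half of the walk is still nearby afterwards, so the conditional probability of a later encounter is elevated. In this sense an apparently one-shot encounter carries statistical information about future interaction.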
Why did punishment and the use of reputation evolve in humans? According to one family of theories, they evolved to support the maintenance of cooperative group norms; according to another, they evolved to enhance personal gains from cooperation. Current behavioral data are consistent with both hypotheses (and both selection pressures could have shaped human cooperative psychology). However, these hypotheses lead to sharply divergent behavioral predictions in circumstances that have not yet been tested. Here we report results testing these rival predictions. In every test where social exchange theory and group norm maintenance theory made different predictions, subject behavior violated the predictions of group norm maintenance theory and matched those of social exchange theory. Subjects do not direct punishment toward those with reputations for norm violation per se; instead, they use reputation self-beneficially, as a cue to lower the risk that they personally will experience losses from defection. More tellingly, subjects direct their cooperative efforts preferentially towards defectors they have punished and away from those they have not punished; they avoid expending punitive effort on reforming defectors who only pose a risk to others. These results are not consistent with the hypothesis that the psychology of punishment evolved to uphold group norms. The circumstances in which punishment is deployed and withheld (its circuit logic) support the hypothesis that it is generated by psychological mechanisms that evolved to benefit the punisher, by allowing the punisher to bargain for better treatment.