The health and vitality of relationships, groups, and society at large are strongly challenged by social dilemmas, or conflicts between short-term self-interest and long-term collective interest. Pollution, depletion of natural resources, and intergroup conflict can all be characterized as urgent social dilemmas. Social dilemmas are challenging because acting in one’s immediate self-interest is tempting to everyone involved, even though everybody benefits from acting in the longer-term collective interest. For example, relationships are healthier if partners do not neglect one another’s preferences, organizations are more productive if employees spontaneously share their expertise, and nations fare better to the extent that they show respect for one another’s values, norms, and traditions. Similarly, in the long run everyone would benefit from a cleaner environment, yet how many are prepared to voluntarily reduce their carbon footprint by saving more energy or driving or flying less frequently?
One real-world social dilemma occurred during the winter of 1978/1979 in the Netherlands. Due to unusually heavy snow, a small village in the north of the Netherlands was completely cut off from the rest of the country, so that there was no electricity for light, heating, television, and so on. However, one of the 150 inhabitants owned a generator that could provide sufficient electricity for everyone in this small community if and only if they exercised substantial restraint in their energy use. For example, each household was to use only one light and no heated water, keep the heating limited to about 18 degrees Celsius (64 degrees Fahrenheit), and keep the curtains closed. As it turned out, the generator broke down because most people were in fact using heated water, living comfortably at 21 degrees Celsius (70 degrees Fahrenheit), watching television, and burning several lights simultaneously. After being without electricity for a while, the citizens were able to repair the generator, and this time they appointed inspectors to check whether people were using more electricity than agreed upon. But even then, the generator eventually broke down due to overuse of energy. And again, all inhabitants suffered from the cold and the lack of light, and of course could not watch television.
The remainder of this page offers a formal definition and classification of social dilemmas, covers several theories of cooperation in social dilemmas, and describes three broad strategies for eliciting cooperation in social dilemmas. The majority of this material was taken from Wikipedia’s excellent summary of social dilemmas.
Defining social dilemmas
Social dilemmas are formally defined by two outcome-relevant properties: (1) each person has an individually rational strategy that yields the best outcome (or pay-off) in all circumstances (the non-cooperative choice, also known as the dominating strategy); and (2) if all individuals pursue this strategy, it results in a deficient collective outcome: everyone would be better off cooperating (the deficient equilibrium). Researchers frequently use the experimental games method to study social dilemmas in the laboratory. An experimental game is a situation in which participants choose between cooperative and non-cooperative alternatives, yielding consequences for themselves and others. These games are generally depicted with an outcome matrix representing outcomes that participants value, such as money or lottery tickets.
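Purely as an illustration, these two properties can be seen in a simple two-person outcome matrix (the pay-off values are hypothetical), with C denoting the cooperative choice, D the non-cooperative choice, and each cell listing the pay-offs to the row player and the column player:

\[
\begin{array}{c|cc}
 & \text{C} & \text{D} \\
\hline
\text{C} & 3,\ 3 & 0,\ 5 \\
\text{D} & 5,\ 0 & 1,\ 1
\end{array}
\]

Whatever the column player does, the row player earns more by choosing D (5 versus 3, and 1 versus 0), so D is the dominating strategy for both players; yet mutual D yields only 1 each, which is deficient relative to the 3 each obtained under mutual cooperation.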
Types of social dilemmas
The literature on social dilemmas has historically revolved around three metaphorical stories: the Prisoner’s Dilemma, the Public Goods Dilemma, and the Tragedy of the Commons (see Commons Dilemma). Each of these stories has been modeled as an experimental game.
The Prisoner’s Dilemma Game was developed by scientists in the 1950s. The cover story for the game involves two prisoners who are separately given the choice between testifying against the other (non-cooperation with one’s partner) or keeping silent (cooperation with one’s partner). The outcomes are such that each of them is better off testifying against the other, but if they both pursue this strategy they are both worse off than if both had remained silent.
The Public Goods Game has the same properties as the Prisoner’s Dilemma Game but captures a public good, a resource from which all may benefit regardless of whether or not they contributed to it. For instance, people can enjoy city parks regardless of whether they contributed to their upkeep through local taxes. Most public goods are collectively shared and non-excludable: once these goods are provided, nobody can be excluded from using them. As a result, there is a temptation to enjoy the good without making a contribution. Those who do so are called free-riders (as in taking a free ride on public transportation), and while it is rational to free-ride, if everyone does so the public good is not provided and all are worse off. Although public goods are often defined in terms of non-excludability, in real life some people can be excluded from using some public goods; for example, known hooligans are sometimes penalized with a ban on entering the football stadium of their favorite club. Researchers mostly study two public goods games in the laboratory. Participants receive a monetary endowment to play these games and decide how much to invest in a private fund versus a group fund. Outcomes are such that it is individually rational to invest in the private fund, yet all would be better off investing in the group fund because this yields a bonus. In the continuous game, the more people invest in the group fund, the larger their share of the bonus. In the step-level game, people get a share of the bonus only if the total group investment exceeds a critical (step) level.
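The following sketch shows how the pay-off rules of these two laboratory games might be written out. The endowment, multiplier, threshold, and bonus values are hypothetical, and the assumption that contributions are lost when the step level is not reached is only one common variant:

```python
# Illustrative pay-off rules for the continuous and step-level public goods games.
# All parameter values (endowment, multiplier, threshold, bonus) are hypothetical.

def continuous_payoffs(contributions, endowment=10.0, multiplier=1.6):
    """Each player keeps what they did not contribute; the group fund is
    multiplied (the 'bonus') and shared equally among all players."""
    n = len(contributions)
    group_share = multiplier * sum(contributions) / n
    return [endowment - c + group_share for c in contributions]

def step_level_payoffs(contributions, endowment=10.0, threshold=30.0, bonus=15.0):
    """Players receive an equal bonus only if total contributions reach the
    critical (step) level; here, contributions are lost if the level is not met."""
    provided = sum(contributions) >= threshold
    return [endowment - c + (bonus if provided else 0.0) for c in contributions]

# Four players, two of whom free-ride in the continuous game:
print(continuous_payoffs([10, 10, 0, 0]))    # [8.0, 8.0, 18.0, 18.0] - free-riders do best
print(continuous_payoffs([10, 10, 10, 10]))  # [16.0, 16.0, 16.0, 16.0] - all gain if all contribute
print(step_level_payoffs([10, 10, 5, 0]))    # [0.0, 0.0, 5.0, 10.0] - step level missed, bonus lost
```

With these illustrative numbers, investing in the private fund is individually rational because each unit contributed returns only 1.6/4 = 0.4 to the contributor, yet universal contribution leaves everyone better off than universal free-riding (16 versus 10 each).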
The Commons Dilemma Game is inspired by the metaphor of the Tragedy of the Commons. This story is about a group of herdsmen who have open access to a common parcel of land on which their cattle graze. It is in each herdsman’s interest to keep as many cattle as possible on the commons, even if the commons is damaged as a result: the herdsman receives all the benefits from the additional cattle, while the damage to the commons is shared by the entire group. Yet if all herdsmen make this individually rational decision, the commons is destroyed and all will suffer.
Compare this with the use of renewable resources like water or fish: when water is used at a higher rate than reservoirs are replenished, or fish are caught faster than they can reproduce, we face a tragedy of the commons. The experimental commons game involves a common resource pool (filled with money or points) from which individuals harvest; the challenge is to harvest without depleting the pool. It is individually rational to harvest as much as possible, but the resource collapses if people harvest more than the replenishment rate of the pool.
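To make the replenishment logic concrete, here is a minimal simulation sketch. The pool size, regrowth rate, and harvest levels are hypothetical, and the proportional regrowth rule is just one simple way such a game could be implemented:

```python
# Illustrative commons dilemma: a shared pool regrows by a fixed proportion each
# round and collapses when total harvesting persistently exceeds that regrowth.
# All parameter values are hypothetical.

def simulate_commons(harvest_per_person, n_players=4, pool=100.0,
                     regrowth_rate=0.10, max_pool=100.0, rounds=20):
    for round_number in range(1, rounds + 1):
        requested = harvest_per_person * n_players
        taken = min(requested, pool)                      # cannot take more than remains
        pool = min((pool - taken) * (1 + regrowth_rate), max_pool)
        if pool <= 0:
            return f"pool collapsed in round {round_number}"
    return f"pool sustained at {pool:.1f} after {rounds} rounds"

print(simulate_commons(harvest_per_person=2.0))  # restrained harvesting: the resource survives
print(simulate_commons(harvest_per_person=5.0))  # overharvesting: the resource collapses
```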
There are numerous types of social dilemmas, and numerous ways in which they can be presented to people. For example, a take-some dilemma is one in which cooperation involves exercising restraint in taking from a shared resource; a give-some dilemma is one in which cooperation involves contributing to a shared resource or good. Such situations can also be presented as “leave-some” or “keep-some” dilemmas. Also, the outcomes for self and others can be presented in various ways, and people may themselves interpret the same social dilemma differently, for example as a business transaction or as an interpersonal exchange. Typically, the way in which a social dilemma is presented to people makes a big difference in what people expect others to do, and in what they do themselves.
Theories of social dilemmas
Game Theory
Social dilemmas have attracted a great deal of interest in the social and behavioural sciences. Economists, biologists, psychologists, sociologists, and political scientists alike study when people act selfishly and when they cooperate in social dilemmas. The most influential theoretical approach is game theory (i.e., rational choice theory, expected utility theory). Game theory assumes that individuals are rational actors motivated to maximize their utilities, where utility is often narrowly defined in terms of people’s material self-interest. Game theory thus predicts a non-cooperative outcome in a social dilemma. Although this is a useful starting premise, there are many circumstances in which people deviate from individual rationality, demonstrating the limitations of economic game theory.
Evolutionary theories
Biological and evolutionary approaches provide useful complementary insights into decision-making in social dilemmas. According to selfish gene theory, individuals may pursue a seemingly irrational strategy of cooperation if it benefits the survival of their genes. The concept of inclusive fitness holds that cooperating with family members can pay because of shared genetic interests: it might be profitable for parents to help their offspring because doing so facilitates the survival of their genes.
Reciprocity theories provide a different account of the evolution of cooperation. In repeated social dilemma games between the same individuals, cooperation might emerge because people can punish a partner for failing to cooperate, which encourages reciprocal cooperation. Reciprocity can explain why people cooperate in dyads, but what about larger groups? Evolutionary theories of indirect reciprocity and costly signaling may be useful for explaining large-scale cooperation. When people can selectively choose partners in social dilemmas, it pays to develop a cooperative reputation: by cooperating, people signal to others that they are kind and generous, which might make them attractive group members.
Psychological theories
Psychological models offer additional insights into social dilemmas by questioning the game theory assumption that individuals pursue only their narrow self-interest. Interdependence theory suggests that people transform a given outcome matrix into an effective matrix that is more consistent with their interpersonal preferences. A prisoner’s dilemma played with close kin, for example, may be transformed into an effective matrix in which it is rational to cooperate. Attribution models offer further support for such transformations: whether individuals approach a social dilemma selfishly or cooperatively might depend on whether they believe people are naturally greedy or cooperative. Similarly, goal expectation theory assumes that people cooperate under two conditions: they must (1) have a cooperative goal and (2) expect others to cooperate. Another psychological model, the appropriateness model, questions the assumption that individuals rationally calculate their outcomes before reaching a decision. Instead, many people base their decisions on what those around them do and use simple heuristics, such as an equality rule, to decide whether or not to cooperate.
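One way to make the idea of a transformation concrete is as a re-weighting of pay-offs. The sketch below assumes, purely for illustration, that the effective pay-off equals one’s own given pay-off plus a weight w on the partner’s pay-off; with a sufficiently large weight, non-cooperation stops being the best reply in the effective matrix. The pay-off numbers and the additive weighting rule are illustrative assumptions, not interdependence theory’s only formalization:

```python
# Illustrative sketch of an interdependence-theory style transformation: the given
# matrix is re-weighted into an effective matrix by adding weight w on the
# partner's outcome. Pay-offs and the weighting rule are hypothetical.

GIVEN = {  # (my choice, partner's choice) -> (my pay-off, partner's pay-off)
    ("C", "C"): (3, 3), ("C", "D"): (0, 5),
    ("D", "C"): (5, 0), ("D", "D"): (1, 1),
}

def effective(own, other, w):
    """Effective pay-off = own given pay-off + w * partner's given pay-off."""
    return own + w * other

def best_reply(partner_choice, w):
    return max(("C", "D"),
               key=lambda my: effective(*GIVEN[(my, partner_choice)], w))

for w in (0.0, 0.8):  # w = 0: pure self-interest; w = 0.8: strong concern for the partner
    print(f"w={w}: best reply to C is {best_reply('C', w)}, to D is {best_reply('D', w)}")
# With w = 0, D is the best reply to everything (the given dilemma); with w = 0.8,
# C is the best reply to either choice, so cooperation is rational in the effective matrix.
```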
What promotes cooperation?
Studying the conditions under which people cooperate might lead to recommendations to solve social dilemmas in society. The literature distinguishes between three broad classes of solutions–motivational, strategic, and structural–which vary in whether they see actors as motivated purely by self-interest and in whether they change the rules of the social dilemma game.
Motivational solutions
Motivational solutions often assume that people are not exclusively self-interested but may also have other-regarding preferences. There is a considerable literature on social value orientation which reveals that people differ in the probability with which they use self-regarding or other-regarding decision rules. Theoretically, the concept of social value orientation extends the “rational self-interest” postulate by assuming that individuals systematically differ in their interpersonal preferences, with some seeking to enhance joint outcomes and equality in outcomes (prosocial orientation), and others seeking to enhance their own outcomes in absolute terms (individualistic orientation) or comparative terms (competitive orientation). People with a prosocial orientation weigh the moral implications of their decisions more heavily and see cooperation as the most preferable choice in a social dilemma. Under conditions of scarcity, such as a water shortage, prosocials harvest less from a common resource. Similarly, prosocials are more concerned about the environmental consequences of, for example, taking the car rather than public transport, and are more likely to donate money to help the poor and the ill. Research on the development of social value orientations suggests an influence of factors such as family history (prosocials tend to have more siblings, especially sisters), age (older people are more prosocial), culture (there are more individualists in Western cultures), gender (more women are prosocial), and even field of study (economics students are less prosocial).
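A compact way to express these three orientations is as different rules for evaluating how outcomes are allocated to self and other. The evaluation functions and the outcome values below are a textbook-style simplification, not a description of any particular measurement instrument:

```python
# Illustrative decision rules for the three social value orientations.
# The evaluation functions and allocation values are simplified and hypothetical.

def prosocial(own, other):
    """Enhance joint outcomes while also valuing equality in outcomes."""
    return (own + other) - abs(own - other)

def individualistic(own, other):
    """Enhance own outcomes in absolute terms, ignoring the other's outcomes."""
    return own

def competitive(own, other):
    """Enhance own outcomes relative to the other's outcomes."""
    return own - other

# Two hypothetical allocations of points (own, other): equal versus self-favouring.
options = {"equal split": (480, 480), "self-favouring split": (540, 280)}
for name, rule in (("prosocial", prosocial),
                   ("individualistic", individualistic),
                   ("competitive", competitive)):
    choice = max(options, key=lambda option: rule(*options[option]))
    print(f"{name} orientation prefers the {choice}")
```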
Many people also have group-regarding preferences (social identity). People’s group membership is a powerful predictor of their social dilemma behaviour. When people identify strongly with a group, they contribute more to public goods and harvest less from common resources. The effects of group identification are even more pronounced when there is intergroup competition. When social dilemmas involve two or more groups of players, there is much less cooperation than when individuals play; yet intergroup competition also facilitates intragroup cooperation, especially among men. When a resource is depleting rapidly, people are more willing to compensate for selfish decisions by ingroup members than by outgroup members. Furthermore, the free-rider problem is much less pronounced when there is intergroup competition. However, intergroup competition can be a double-edged sword: encouraging competition between groups might serve the temporary needs of ingroup members, but the social costs of intergroup conflict can be severe for both groups. It is not entirely clear why people cooperate more as part of a group. One possibility is that people become genuinely more altruistic. Other possibilities are that people are more concerned about their ingroup reputation, or are more likely to expect returns from ingroup than from outgroup members. This needs further investigation.
Another factor that might affect the weight individuals assign to group outcomes is the possibility of communication. A robust finding in the social dilemma literature is that cooperation increases when people are given a chance to talk to each other. It has been quite a challenge to explain this effect. One motivational reason is that communication reinforces a sense of group identity. Another is that communication offers an opportunity for moral suasion, exposing people to arguments for doing what is morally right. But there may be strategic considerations as well. First, communication gives group members a chance to make promises and explicit commitments about what they will do, although it is not clear how many people actually stick to their promises to cooperate. Second, communication allows people to gather information about what others do. However, in social dilemmas this information might produce ambiguous effects: if I know that most people cooperate, I may be tempted to act selfishly.
Strategic solutions
One strategic solution to social dilemmas is the use of reciprocity. One of the most convincing cases for the benefits of reciprocity is made in The Evolution of Cooperation (Axelrod, 1984), which provides compelling evidence for the effectiveness of the Tit-For-Tat strategy in eliciting and maintaining patterns of cooperative interaction. Tit-For-Tat is a strictly reciprocal strategy that begins with a cooperative choice and subsequently imitates the other person’s previous choice. Axelrod’s well-known computer tournament studies revealed that, relative to the alternative strategies submitted by various experts, Tit-For-Tat elicited greater levels of cooperation from the social environment and yielded better outcomes than any other entry.
Why was Tit-For-Tat, the exemplar of strict reciprocity, the winner of this tournament? Axelrod argued that Tit-For-Tat effectively elicits patterns of mutual cooperation, thereby enhancing the long-term outcomes for both the actor (who follows Tit-For-Tat) and the dyad, because of four features. Tit-For-Tat is nice because it never initiates noncooperation; it is forgiving because it responds only to the other person’s most recent choice rather than holding earlier noncooperation against them; it is retaliatory because it immediately matches the other person’s noncooperative behavior; and it is clear because interdependent others soon learn the contingencies between their own actions and the subsequent matching by Tit-For-Tat. Empirical research using actual participants has revealed that Tit-For-Tat is among the strategies most effective in eliciting and maintaining cooperation between individuals. TFT-like tactics are also common in real-world social dilemmas: marriage contracts, rental agreements, and international trade policies, for example, all rely on arrangements that resemble TFT.
However, TFT has one very important limitation: it cannot escape from cycles of noncooperative interaction. That is, in social environments in which Tit-For-Tat’s cooperative behavior (e.g., its first choice) is not always reciprocated, Tit-For-Tat is almost bound to become trapped in a cycle of noncooperative interaction in which individuals continue to respond noncooperatively to one another’s noncooperative behavior. The only way out of this pattern of negative reciprocity, or echo effect, is for one of the two persons to initiate cooperation. Because Tit-For-Tat does not initiate cooperation, it does not actively contribute to breaking out of the pattern of negative reciprocity; if anything, Tit-For-Tat sustains the echo effect.
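The echo effect is easy to see in a small simulation. The sketch below pits two Tit-For-Tat players against each other and forces a single accidental defection; the single “noise” round is an illustrative assumption. From the slip onward, the two players alternate exploiting each other, and mutual cooperation never returns because neither strategy ever re-initiates cooperation:

```python
# Illustrative iterated Prisoner's Dilemma interaction showing Tit-For-Tat and
# the echo effect. The single forced 'noise' defection is hypothetical.

def tit_for_tat(other_history):
    """Cooperate on the first move, then copy the partner's previous move."""
    return "C" if not other_history else other_history[-1]

def play(rounds=8, noise_round=3):
    hist_a, hist_b = [], []
    for r in range(rounds):
        a = tit_for_tat(hist_b)
        b = tit_for_tat(hist_a)
        if r == noise_round:       # player A defects once by accident
            a = "D"
        hist_a.append(a)
        hist_b.append(b)
    return "".join(hist_a), "".join(hist_b)

a_moves, b_moves = play()
print("A:", a_moves)  # CCCDCDCD  -> after the slip, defections keep echoing back and forth
print("B:", b_moves)  # CCCCDCDC
```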
Even when partners might not meet again, it can be strategically wise to cooperate. When people can selectively choose whom to interact with, it might pay to be seen as a cooperator. Research shows that people who cooperate create better opportunities for themselves than those who do not. While noncooperators and norm violators may elicit tendencies toward punishment, even by uninvolved observers, cooperators may be rewarded: they are often selectively preferred as partners in dyads and groups, and they are more likely to be preferred as group leaders. Such effects are more likely to emerge when people’s actions are visible to others. Public acts of altruism and cooperation, such as charitable giving, philanthropy, and bystander intervention, are probably manifestations of reputation-based cooperation. It is also interesting to note that the chances of some of these acts, such as being helped by bystanders, are greater in small villages than in large cities.
Structural solutions
Structural solutions change the rules of the game, either by modifying the social dilemma or by removing the dilemma altogether. Not surprisingly, many studies have shown that cooperation rates go up as the relative payoff for cooperating increases. Furthermore, experimental studies show that cooperation is more likely if individuals have the ability to punish defectors. Yet implementing reward and punishment systems can be problematic for several reasons. First, there are significant costs associated with creating and administering sanction systems: providing selective rewards and punishments requires support institutions to monitor the activities of both cooperators and non-cooperators, which can be quite expensive to maintain. Second, these systems are themselves public goods, because one can enjoy the benefits of a sanctioning system without contributing to its existence. The police, the army, and the judicial system will fail to operate unless people are willing to pay taxes to support them. This raises the question of whether many people want to contribute to such institutions. Experimental research suggests that low-trust individuals in particular are willing to invest money in punishment systems, and a considerable portion of people are quite willing to punish non-cooperators even if they do not personally profit from it. Some researchers even suggest that altruistic punishment is an evolved mechanism for human cooperation. A third limitation is that punishment and reward systems might undermine people’s voluntary cooperative intentions. Some people get a “warm glow” from cooperation, and the provision of selective incentives might crowd out their cooperative intentions. Similarly, the presence of a negative sanctioning system might undermine voluntary cooperation: research has found that punishment systems decrease the trust that people have in others. Thus, sanctioning is a delicate strategy.
Boundary structural solutions modify the structure of the social dilemma itself, and such strategies are often very effective. An often-studied solution is the establishment of a leader or authority to manage a social dilemma. Experimental studies on commons dilemmas show that overharvesting groups are more willing to appoint a leader to look after the common resource. There is a preference for a democratically elected, prototypical leader with limited power, especially when people’s group ties are strong; when ties are weak, groups prefer a stronger leader with a coercive power base. The question remains whether authorities can be trusted to govern social dilemmas, and field research shows that legitimacy and fair procedures are extremely important for citizens’ willingness to accept authorities.
Another structural solution is reducing group size. Cooperation generally declines as group size increases. In larger groups, people often feel less responsible for the common good and believe, rightly or wrongly, that their contribution does not matter. Reducing the scale, for example by dividing a large-scale dilemma into smaller, more manageable parts, might be an effective way to raise cooperation.
Another proposed boundary solution is to remove the social aspect of the dilemma through privatization. People are often better at managing a private resource than a resource shared with many others. However, it is not easy to privatize mobile resources such as fish, water, and clean air. Privatization also raises concerns about social justice, as not everyone may be able to get an equal share. Finally, privatization might erode people’s intrinsic motivation to cooperate.
A number of field studies have explored people’s support for real-world structural solutions. For example, field research on conservation behaviour has shown that selective incentives in the form of monetary rewards are effective in decreasing domestic water and electricity use. Several studies have also examined people’s support for carpool lanes, the privatization of a national railway system, and improvements in public transit. These studies suggest that not all structural solutions are supported by citizens (e.g., the first carpool lane in Europe or the privatization of the British railway system), and that support for such solutions is often guided by self-interested concerns, especially among those with a proself value orientation.
Conclusions
Social dilemmas are ubiquitous and essential to social life. Theoretically, social dilemmas are important because they capture a strong conflict of motives, and because what people do matters to other people in important ways. In many respects, social dilemmas address questions as basic as human nature itself, including human motivation and adaptability, as well as the evolution of cooperation. Societally, social dilemmas are important because they often create or lead to social issues, problems, or even disasters, as illustrated by the Tragedy of the Commons or the winter in Groningen, that could have been prevented or that can still be resolved. Beyond our dealings with the environment, interpersonal issues in marriage, friendship, or organizations, and negotiation processes among individuals, groups, and nations can easily escalate into divorce, interpersonal struggle, enduring conflict, or warfare.
As social dilemmas in society become more pressing, there is an increasing need for effective policies. It is encouraging that much social dilemma research is now applied to areas such as organizational welfare, public health, and local and global environmental change. The emphasis is shifting from pure laboratory research towards research testing combinations of motivational, strategic, and structural solutions. It is also noteworthy that the study of social dilemmas is one of the most interdisciplinary research fields, with anthropologists, biologists, economists, mathematicians, neuroscientists, political scientists, and psychologists, among others, developing unifying theoretical frameworks (such as evolutionary theory) to study social dilemmas. For instance, there is a burgeoning neuroeconomics literature studying the brain correlates of decision-making in social dilemmas with neuroscience methods. Finally, social dilemma researchers are increasingly using more dynamic experimental designs to see, for instance, what happens when people can voluntarily or involuntarily enter or exit a social dilemma, or play different social dilemmas at the same time in different groups. The field has always been important, but this now seems to be recognized more strongly than ever before, both by theorists and by those who seek to use this knowledge to address urgent social issues. We look forward to a bright future for social dilemma research.
References
Axelrod, R. (1984). The evolution of cooperation. New York: Basic Books.
Batson, C. D. (1994). Why act for the public good? Four answers. Personality and Social Psychology Bulletin, 20, 603-610.
Dawes, R. M., & Messick, D. M. (2000). Social dilemmas. International Journal of Psychology, 35(2), 111-116.
Fehr, E., & Gächter, S. (2002). Altruistic punishment in humans. Nature, 415, 137-140.
Frank, R. H. (1988). Passions within reason: The strategic role of the emotions. New York: Norton.
Gintis, H., Bowles, S., Boyd, R., & Fehr, E. (Eds.). (2005). Moral sentiments and material interests: The foundations of cooperation in economic life. Cambridge, MA: MIT Press.
Hardin, G. (1968). The tragedy of the commons. Science, 162, 1243-1248.
Henrich, J., Boyd, R., Bowles, S., Gintis, H., Fehr, E., Camerer, C., McElreath, R., Gurven, M., Hill, K., Barr, A., Ensminger, J., Tracer, D., Marlow, F., Patton, J., Alvard, M., Gil-White, F., & Henrich, N. (2005). ‘Economic man’ in cross-cultural perspective: Ethnography and experiments from 15 small-scale societies. Behavioral and Brain Sciences, 28, 795-855.
Kollock, P. (1998). Social dilemmas: Anatomy of cooperation. Annual Review of Sociology, 24, 183-214.
Komorita, S. S., & Parks, C. D. (1994). Social dilemmas. Westview Press.
Messick, D. M., & Brewer, M. B. (1983). Solving social dilemmas: A review. In L. Wheeler & P. Shaver (Eds.), Review of personality and social psychology (Vol. 4, pp. 11 – 44). Beverly Hills, CA: Sage.
Nowak, M. A., & Sigmund, K. (2005). Evolution of indirect reciprocity. Nature, 437, 1291-1298.
Ostrom, E. (1990). Governing the commons: The evolution of institutions for collective action. Cambridge: Cambridge University Press.
Ridley, M. (1997). The origins of virtue. London: Penguin.
Schneider, S. K., & Northcraft, G. B. (1999). Three social dilemmas of workforce diversity in organizations: A social identity perspective. Human Relations, 52, 1445-1468.
Van Dijk, E., & Wilke, H. A. M. (2000). Decision-induced focusing in social dilemmas: Give-some, keep-some, take-some, and leave-some dilemmas. Journal of Personality and Social Psychology, 78, 92-104.
Van Lange, P.A. M., Otten, W., De Bruin, E. M. N., & Joireman, J. A. (1997). Development of prosocial, individualistic, and competitive orientations: Theory and preliminary evidence. Journal of Personality and Social Psychology, 73, 733-746.
Van Vugt, M., & De Cremer, D. (1999). Leadership in social dilemmas: The effects of group identification on collective actions to provide public goods. Journal of Personality and Social Psychology, 76, 587-599.
Weber, M., Kopelman, S., & Messick, D. M. (2004). A conceptual review of social dilemmas: Applying a logic of appropriateness. Personality and Social Psychology Review, 8, 281-307.
Yamagishi, T. (1986). The structural goal/expectation theory of cooperation in social dilemmas. In E. Lawler (Ed.), Advances in group processes (Vol. 3, pp. 51–87). Greenwich, CT: JAI Press.