Prisoner's dilemma

This basic model has therefore become the ‘E. coli of social psychology’, and has been extensively applied in theoretical biology, economics, and sociology during the past thirty years.
—Christina Fang et al. (2002)[1]

The prisoner's dilemma is a classic problem in game theory. It has the paradoxical result that, in certain scenarios, members of a group acting rationally in their own self-interest will consciously steer towards an outcome that is worse for everyone.[2][3]

Sentence reductions (rewards) for each combination of choices:

                    B: don't confess          B: confess
A: don't confess    A: 3 years, B: 3 years    A: 0, B: 5 years
A: confess          A: 5 years, B: 0          A: 1 year, B: 1 year

The game is usually phrased in terms of two suspects, both of whom have been arrested and are separately offered a reward (a reduction in sentence) if they confess.[4]

  • If only one of them confesses, this provides evidence of a major crime. In this case, the confessor is rewarded with a 5-year reduction of sentence, while the other suspect receives no reward.
  • If both confess, they both have their sentences reduced by 1 year.
  • If neither confesses, their sentences are reduced by 3 years each, as only a minor crime can be proven.

It is obvious that the best overall outcome for the group would be for both prisoners to cooperate and stay silent: a 3-year reduction for each prisoner. However, in the "default" setting of the prisoner's dilemma, we assume that the prisoners are not given the chance to work out such a strategy and that they are interested in their own well-being first.

Prisoner A will now analyze his options:

  • If Prisoner B chooses "don't confess", Prisoner A's best choice is "confess": A gets the maximum reward of a 5-year reduction.
  • If Prisoner B chooses "confess", Prisoner A's best choice is "confess", too: a 1-year reduction is better than nothing.

(The case for Prisoner B is symmetric.)

Using this reasoning, both prisoners will choose "confess", even though it is not the best result.

The strategy "confess" is a strictly dominant strategy: the choice of Prisoner B does not change the way Prisoner A will act, and vice versa. The "confess/confess" scenario is also the only Nash equilibriumWikipedia with this payout matrix. More generally,[4] let R be the payout for mutual cooperation (i.e. not confessing); T be the reward for unilateral defection (i.e. confessing); P be the payout for mutual defection; and S be the reward (conventionally set to zero) for unilateral cooperation.

If T > R > P > S, then the Nash equilibrium of the game is mutual defection, whereas 2R > T + S makes mutual cooperation the globally best outcome.
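As a rough illustration, the Python sketch below (the numbers are the sentence reductions from the table above, and the variable and function names are just illustrative) checks that "confess" is each prisoner's best response to either choice by the other, and that the payouts satisfy T > R > P > S and 2R > T + S.

    # Payouts (years of sentence reduction) from the table above.
    T, R, P, S = 5, 3, 1, 0  # temptation, reward, punishment, sucker's payout

    # payoff[(a, b)] = (payout to A, payout to B) for choices in {"silent", "confess"}
    payoff = {
        ("silent", "silent"):   (R, R),
        ("silent", "confess"):  (S, T),
        ("confess", "silent"):  (T, S),
        ("confess", "confess"): (P, P),
    }

    def best_response_for_A(b_choice):
        """Return A's choice that maximises A's payout, given B's choice."""
        return max(["silent", "confess"], key=lambda a: payoff[(a, b_choice)][0])

    for b in ("silent", "confess"):
        print(f"If B plays {b!r}, A's best response is {best_response_for_A(b)!r}")

    # Dilemma conditions: defection dominates, but mutual cooperation is best overall.
    assert T > R > P > S
    assert 2 * R > T + S

Running it prints "confess" as A's best response in both cases, which is exactly the dominance argument made above.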

Iterated prisoner's dilemma[edit]

[Figure: the relationships of iterated prisoner's dilemma strategies]

The iterated prisoner's dilemma arises when the basic game is played multiple times (sometimes infinitely many times). Here, co-operation is sometimes a Nash equilibrium.[note 1] This requires that each player pay attention to what the other player did on previous "rounds" and punish or reward the other player as appropriate.

In 1979, Robert Axelrod put out a call for experts in game theory and computational science to send in algorithms for playing an iterated Prisoner's Dilemma.[5] He proposed to have all submitted algorithms compete in a tournament to see which one was the best. A total of fourteen algorithms were submitted, ranging from immensely complicated and computationally intensive to extremely simple. The results were published in the Journal of Conflict Resolution, and as it turned out, the simplest and smallest algorithm won the tournament. It was developed by Anatol Rapoport of the University of Toronto and was called "tit for tat". The "tit for tat" strategy is to cooperate on the first round and, on every subsequent round, to do whatever the opponent did on the previous round. While algorithms have since been developed that can best the "tit for tat" strategy, it remains the most computationally efficient.[6] Because of this, it has been proposed as the strategy that humans employ in social interactions.[citation needed]
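As a rough sketch (not Axelrod's tournament code; the function names and the ten-round length are just assumptions for illustration), the following Python snippet pits "tit for tat" against an unconditional defector and against itself, using the payouts from the table above.

    # Payout to the first-named player for (my_move, their_move); C = cooperate, D = defect.
    PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

    def tit_for_tat(my_history, their_history):
        """Cooperate on the first round, then copy the opponent's previous move."""
        return "C" if not their_history else their_history[-1]

    def always_defect(my_history, their_history):
        return "D"

    def play(strategy_a, strategy_b, rounds=10):
        """Play the iterated game and return each player's total payout."""
        hist_a, hist_b = [], []
        score_a = score_b = 0
        for _ in range(rounds):
            move_a = strategy_a(hist_a, hist_b)
            move_b = strategy_b(hist_b, hist_a)
            score_a += PAYOFF[(move_a, move_b)]
            score_b += PAYOFF[(move_b, move_a)]
            hist_a.append(move_a)
            hist_b.append(move_b)
        return score_a, score_b

    print(play(tit_for_tat, always_defect))  # tit for tat is exploited only on the first round
    print(play(tit_for_tat, tit_for_tat))    # mutual cooperation on every round

Against the defector, tit for tat loses only the opening round and then settles into mutual defection; against itself, it cooperates throughout, which is the behaviour that made it so successful in the tournament.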

An additional strategy that is often followed and debated is the "Grim Trigger": cooperate until the first defection, and from then on defect every turn. Grim Trigger tends to work only when there is information exchange.[7]
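A minimal sketch of the Grim Trigger rule (an illustration, not code from the cited paper) is below; it uses the same "C"/"D" move convention as the tit-for-tat sketch above and can be dropped into that play() harness.

    def grim_trigger(my_history, their_history):
        """Cooperate ("C") until the opponent defects ("D") once; then defect forever."""
        return "D" if "D" in their_history else "C"

    # Once a single defection appears in the opponent's history, it never cooperates again.
    print(grim_trigger([], []))                             # "C" on the first round
    print(grim_trigger(["C", "C"], ["C", "D"]))             # "D" after the opponent's defection
    print(grim_trigger(["C", "C", "D"], ["C", "D", "C"]))   # still "D", forever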

Relation to international affairs: nuclear detente[edit]

Payouts in the nuclear version of the game:

                B: defect       B: cooperate
A: defect       A: +1, B: +1    A: +5, B: +0
A: cooperate    A: +0, B: +5    A: +3, B: +3

The prisoner's dilemma can be used to explain the awkward situation of exact nuclear parity. So long as a first strike is possible, and that first strike would eliminate the chance of retaliation, the players are in a "prisoner's dilemma" and, as noted above, are incentivized to defect.

Much of Cold War policy, then, was a struggle to prevent a prisoner's dilemma situation. For example, the retention of American nuclear warheads in untraceable submarines, and the retention of Russian arms in untraceable rail cars, removed the incentive for a first strike and kept the parties locked in a situation where cooperation remained the best strategy.[note 2]

Nuclear "escalation" could destabilize such parity. For example, the development of MIRVWikipedia (Multiple Independently targetable Re-entry Vehicle) warheads briefly placed the United States ahead of the Soviet Union, but when the latter also employed MIRVs, some US hawks worried that the Soviet's ability to cram more warheads onto a missile than the US (Soviet missiles could carry greater payloads) would put the Reds ahead of the US. Similarly, American development of an effective nuclear shield (i.e. an SDI which actually worked) would have "won" the game and potentially allowed for a US nuclear first strike without the opportunity of retaliation and was greatly feared by Russia, leading the Kremlin to contemplate SDI as a reason for a first-strike were its completion imminent. Fortunately, the Soviets quickly realized that Ronnie's orbital ray guns were pies in the sky, rather than a workable defense against ICBMs, and were happy to let Washington waste its money on this white elephant.

The "game" as studied by political scientists is displayed to the right, with the traditional values.

How people could play such a "game" when the survival of the human race, and of most complex life on Earth, was at stake is a whole 'nother interesting question. The answer is that it's not really a "game", even though it's called "game theory": the study of these matters is of dire importance to any opposing powers before they commit to sacrificing the lives of their people.

In popular culture[edit]

The prisoner's dilemma has been featured in various media in several forms, the most common being a game called "Split or Steal" that has been used in game shows such as Golden Balls[9] and Brain Games, and has also been adapted into a popular video game available on Steam.[10]

In the game "Split or Steal," two players are selected and a "pot" is set up, that pot being an even amount of money that both players have a chance at winning. They're given two options, to "split" the pot or to "steal" it, and neither player will be aware of the other's choice until the results of the game are determined. The potential outcomes are as follows:

  • If Player A decides to "split" while Player B decides to "steal," then Player B will receive the whole pot of money while Player A receives nothing.
  • The same goes in reverse: if Player A "steals" while Player B "splits," Player A will receive the whole pot while Player B receives nothing.
  • If both players choose to "steal," then neither player will receive any money.
  • If both players choose to "split," then each player will receive half the pot.

As a whole, it would be to both players' advantage to split the pot evenly, so that both could walk away with an even cut of the earnings. If the pot were $500,000, then both players would receive $250,000 and be considerably better off in the end. However, to each individual player, the advantage would paradoxically be to steal the whole pot and earn twice the winnings. Even if the other player were to choose stealing, the first player could take solace in knowing that the double-crosser that chose to backstab them loses everything for their decision. Ultimately, it becomes a matter of whether each player is willing to trust the other, and whether or not they're willing to abuse that trust.
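For concreteness, here is a small Python sketch of the Split or Steal payouts, using the hypothetical $500,000 pot from the example above. Note that, unlike the classic dilemma, "steal" only weakly dominates "split": both players stealing pays exactly the same (nothing) as being the lone splitter.

    POT = 500_000  # hypothetical pot from the example above

    def payouts(a_choice, b_choice):
        """Return (Player A's winnings, Player B's winnings)."""
        if a_choice == "split" and b_choice == "split":
            return POT // 2, POT // 2   # each walks away with half
        if a_choice == "steal" and b_choice == "split":
            return POT, 0               # the stealer takes everything
        if a_choice == "split" and b_choice == "steal":
            return 0, POT
        return 0, 0                     # both steal: nobody gets anything

    for a in ("split", "steal"):
        for b in ("split", "steal"):
            print(f"A {a}s, B {b}s -> {payouts(a, b)}")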

Notes[edit]

  1. If the game is played a finite number of times, and both players know how many times it will be played, then cooperation is not a Nash equilibrium. On the last round, cooperating does not incentivize future cooperation (since it's the last round), so both players should defect. By backwards induction, cooperating on the second-to-last round does not incentivize cooperation in the last round, so both players should defect, etc.
  2. The first public analysis of this frightening balance of power was provided by strategist Herman Kahn in his 1961 book On Thermonuclear War.[8]

References[edit]

  1. "On Adaptive Emergence of Trust Behavior in the Game of Stag Hunt." Group Decision and Negotiation 11.6 (2002): 449-467.
  2. Anatol Rapoport "Escape from paradox." Scientific American 217.1 (1967): 50-59.
  3. Open access journal articleShirli Kopelman "Tit for tat and beyond: The legendary work of Anatol Rapoport." Negotiation and Conflict Management Research 13.1 (2020): 60-84.
  4. 4.0 4.1 Open access journal articleWilliam H. Press, and Freeman J. Dyson "Iterated Prisoner’s Dilemma contains strategies that dominate any evolutionary opponent." Proceedings of the National Academy of Sciences 109.26 (2012): 10409-10413.
  5. Open access journal articleAmnon Rapoport, et al. (2015) "Is tit-for-tat the answer? On the conclusions drawn from Axelrod's tournaments." PloS one 10.7 (2015): e0134128.
  6. Douglas Hofstadter (1985). Metamagical Themas. ISBN 0-465-04566-9. 
  7. Open access journal articleJulian Romero, and Yaroslav Rosokha "Mixed Strategies in the Indefinitely Repeated Prisoner's Dilemma." SSRN 3290732 (2019)
  8. Herman Kahn "On Thermonuclear War", Princeton University Press (1961) ISBN 9781412815598
  9. Jonny Thomson, "Golden Balls: How one man broke a UK game show using game theory", Big Think (2022): 6-14
  10. Brian Feldman, "Why ‘Split Or Steal’ Is So Compelling", New York Magazine (2020): 3-16