1. Enlarge the shadow of the future
Mutual cooperation can be stable if the future is sufficiently important relative to the present. This is because the players can each use an implicit threat of retaliation against the other's defection, if the interaction will last long enough to make the threat effective. Seeing how this works in a numerical example will allow the formulation of the alternative methods that can enlarge the shadow of the future.
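The numerical example mentioned above can be sketched as follows, using the standard Prisoner's Dilemma payoffs Axelrod works with elsewhere (T=5, R=3, P=1, S=0) and a discount weight w on future moves; the specific w values below are illustrative choices, not figures from this passage.

```python
# Standard Prisoner's Dilemma payoffs (assumed, per Axelrod's usual example):
# T = temptation, R = reward, P = punishment, S = sucker's payoff.
T, R, P, S = 5, 3, 1, 0

def v_cooperate(w):
    # TIT FOR TAT against TIT FOR TAT: the reward R every round,
    # with each future round discounted by w.
    return R / (1 - w)

def v_defect(w):
    # ALL D against TIT FOR TAT: the temptation T once,
    # then mutual punishment P forever after.
    return T + w * P / (1 - w)

# A short shadow of the future favors defection; a long one favors cooperation.
print(v_defect(0.3) > v_cooperate(0.3))   # True: the future matters too little
print(v_cooperate(0.9) > v_defect(0.9))   # True: cooperation now pays
```

With these payoffs the two values cross at w = (T-R)/(T-P) = 0.5: below that weight the threat of retaliation cannot deter defection, above it mutual cooperation is the better course.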
Robert Marshall Axelrod (born May 27, 1943) is an American political scientist and Professor of Political Science and Public Policy at the University of Michigan, best known for his interdisciplinary work on the evolution of cooperation.
Thus cooperation can emerge even in a world of unconditional defection. The development cannot take place if it is tried only by scattered individuals who have no chance to interact with each other. But cooperation can emerge from small clusters of discriminating individuals, as long as these individuals have even a small proportion of their interactions with each other. Moreover, if nice strategies (those which are never the first to defect) come to be adopted by virtually everyone, then those individuals can afford to be generous in dealing with any others. By doing so well with each other, a population of nice rules can protect themselves against clusters of individuals using any other strategy just as well as they can protect themselves against single individuals. But for a nice strategy to be stable in the collective sense, it must be provocable. So mutual cooperation can emerge in a world of egoists without central control by starting with a cluster of individuals who rely on reciprocity.
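The cluster argument above can be made concrete with a small calculation. Assuming the standard payoffs (T=5, R=3, P=1, S=0), a discount weight w=0.9, and a hypothetical parameter p for the fraction of a cluster member's interactions that are with other cluster members:

```python
# Hedged sketch of the cluster-invasion argument; payoffs and w follow
# Axelrod's usual example, and p is an assumed clustering fraction.
T, R, P, S = 5, 3, 1, 0
w = 0.9

v_tft_vs_tft = R / (1 - w)            # mutual cooperation forever
v_tft_vs_alld = S + w * P / (1 - w)   # exploited once, then mutual defection
v_native = P / (1 - w)                # natives mostly meet other defectors

def cluster_score(p):
    # A cluster member's expected score: fraction p of interactions are
    # with fellow cooperators, the rest with the defecting natives.
    return p * v_tft_vs_tft + (1 - p) * v_tft_vs_alld

print(cluster_score(0.06) > v_native)   # True: ~6% clustering already suffices
print(cluster_score(0.03) > v_native)   # False: too little clustering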
In this essay, we have developed and illustrated an approach for predicting the membership of alliances among firms developing and sponsoring products requiring technical standardization. We started with two simple and plausible assumptions: that a firm prefers (1) to join a large standard-setting alliance in order to increase the probability of successfully sponsoring a compatibility standard, and (2) to avoid allying with rivals in order to benefit individually from compatibility standards that emerge from the alliance's efforts. We then defined the concept of utility as an approximation to profit maximization in terms of size and rivalry, and discussed the influences on incentives to ally in order to develop and sponsor standards. We showed that the Nash equilibria are the local minima of an energy function with this type of utility function.
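The claim that Nash equilibria coincide with local minima of an energy function can be illustrated with a toy potential game; the firms, rivalry values, and two-alliance setup below are hypothetical stand-ins, not the essay's actual model. Each firm's utility is the number of co-members minus its rivalry with each of them, and the energy is the negated sum of within-alliance pair weights:

```python
from itertools import product

# Hypothetical symmetric rivalry between three firms (illustrative values).
RIVALRY = {(0, 1): 0.2, (0, 2): 0.9, (1, 2): 0.1}

def r(i, j):
    return RIVALRY.get((min(i, j), max(i, j)), 0.0)

def utility(i, config):
    # Firm i values each co-member at 1 (size) minus the pairwise rivalry.
    return sum(1.0 - r(i, j) for j in range(len(config))
               if j != i and config[j] == config[i])

def energy(config):
    # Negated sum of pair weights within alliances: an exact potential,
    # since a single firm's move changes it by exactly minus that firm's
    # utility change.
    n = len(config)
    return -sum(1.0 - r(i, j) for i in range(n) for j in range(i + 1, n)
                if config[i] == config[j])

def is_nash(config):
    for i in range(len(config)):
        dev = list(config); dev[i] = 1 - dev[i]
        if utility(i, dev) > utility(i, config):
            return False
    return True

def is_local_min(config):
    for i in range(len(config)):
        dev = list(config); dev[i] = 1 - dev[i]
        if energy(dev) < energy(config):
            return False
    return True

# In this potential game, Nash equilibria under unilateral switches are
# exactly the local minima of the energy over all configurations.
assert all(is_nash(c) == is_local_min(c) for c in product((0, 1), repeat=3))
```

The equivalence holds here because the energy is an exact potential: any unilateral switch lowers the energy precisely when it raises the switching firm's utility.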
The advice in chapter 6 to players of the Prisoner's Dilemma might serve as good advice to national leaders as well: don't be envious, don't be the first to defect, reciprocate both cooperation and defection, and don't be too clever. Likewise, the techniques discussed in chapter 7 for promoting cooperation in the Prisoner's Dilemma might also be useful in promoting cooperation in international politics.
The core of the problem of how to achieve rewards from cooperation is that trial and error in learning is slow and painful. The conditions may all be favorable for long-run developments, but we may not have the time to wait for blind processes to move us slowly toward mutually rewarding strategies based upon reciprocity. Perhaps if we understand the process better, we can use our foresight to speed up the evolution of cooperation.