surprised that a Ctrl-F for "Tit-for-Tat" came up empty
Good catch. The reason is that the fame of tit-for-tat is arguably somewhat overblown. It is not a subgame perfect Nash equilibrium (SPNE), and it does not win as universally as is typically reported; other strategies beat it. It is not that I deliberately chose to exclude it; I simply did not feel it was needed for the point I was making.
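To illustrate one underappreciated limitation, here is a minimal sketch (using the standard prisoner's dilemma payoffs T=5, R=3, P=1, S=0, which are illustrative assumptions, not numbers from the post): in any single head-to-head match, tit-for-tat can never outscore its opponent, because it only ever defects one round after being defected on.

```python
# Minimal iterated prisoner's dilemma with standard payoffs (T=5, R=3, P=1, S=0).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_history, their_history):
    # Cooperate first, then copy the opponent's previous move.
    return their_history[-1] if their_history else "C"

def always_defect(my_history, their_history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a = strategy_a(hist_a, hist_b)
        b = strategy_b(hist_b, hist_a)
        pa, pb = PAYOFF[(a, b)]
        score_a += pa
        score_b += pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

tft_score, alld_score = play(tit_for_tat, always_defect)
# Tit-for-tat loses the first round (0 vs 5) and only ties afterwards,
# so it can never finish a single match with more points than its opponent.
print(tft_score, alld_score)  # 9 14
```

Axelrod's tournaments ranked strategies by total payoff summed across many opponents, which is how tit-for-tat could place first overall while never beating any individual opponent head to head.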
How important is SPNE as a solution concept? I get why it’s formally appealing, but I also think it’s kind of odd to say that the only rational play in the ultimatum game is (offer minimum, accept anything).
Hi Daniel, that is a great question.
An SPNE means that the strategy is credible in every situation that might arise in the game. In other words, the actions prescribed by the strategy remain optimal even after any possible history. Equilibria that rely on non-credible threats are much less compelling.
For example, imagine a father who threatens to disinherit his children if they do not do their homework. If the children believe the threat, they will do their homework and the punishment is never carried out. But if they did fail to do their homework, the cost to the father of actually disinheriting them might be so high that he would prefer not to follow through. The threat would therefore not be credible, and an equilibrium relying on it would be fragile.
Tit-for-tat has a similar issue: in general it is not subgame perfect (except in edge cases). The problem is not with conditional reciprocity itself. Strategies that say “cooperate with cooperators and punish defectors for a finite number of periods (greater than one)” can easily be SPNE for reasonable parameters. So the intuition behind tit-for-tat is sound; the issue is that the literal one-period tit-for-tat rule is a very simplistic implementation of that intuition.
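To make the "reasonable parameters" point concrete, here is a minimal one-shot deviation check, using grim trigger (permanent punishment) as the simplest benchmark and standard prisoner's dilemma payoffs (T=5, R=3, P=1, assumed for illustration): cooperation is an equilibrium exactly when the discount factor δ is at least (T−R)/(T−P).

```python
# One-shot deviation check for grim trigger in the repeated prisoner's dilemma.
# Payoffs (illustrative assumptions): temptation T, reward R, punishment P.
T, R, P = 5, 3, 1

def cooperate_value(delta):
    # Discounted payoff from cooperating forever: R + delta*R + delta^2*R + ...
    return R / (1 - delta)

def deviate_value(delta):
    # Defect once (gain T), then suffer mutual defection forever.
    return T + delta * P / (1 - delta)

threshold = (T - R) / (T - P)  # cooperation is sustainable iff delta >= threshold
print(threshold)  # 0.5

# Above the threshold, cooperating beats deviating; below it, it does not.
assert cooperate_value(0.6) > deviate_value(0.6)   # 7.5 > 6.5
assert cooperate_value(0.4) < deviate_value(0.4)   # 5.0 < 5.67
```

Finite-punishment variants work the same way, just with a higher δ threshold. Tit-for-tat, by contrast, fails this kind of after-every-history check: its alternating retaliation following a single defection is generally not optimal to carry out.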
Tit-for-tat has nevertheless been given disproportionate prominence, largely because of its (somewhat exaggerated) success in Axelrod’s computer tournaments and because it is so strikingly simple and intuitive. But the effectiveness of conditional reciprocity does not depend on the specific tit-for-tat strategy.
I am thrilled to see this article; game theory as developed in behavioral economics is immensely valuable for social philosophy!
Thank you very much for this. I wonder if you have any thoughts on antisocial punishment: how it fits into the cooperation framework, and whether it is a real phenomenon at all or possibly one of the artifacts of one-off games?
I think there is one explanation, though not necessarily the only one.
As I mentioned in the post, there is a multiplicity of ways to cooperate. A way to cooperate in a group takes the form of social norms. These norms specify rights and duties. They are the rules of the social games in the group. People are meant to be as "nice" as specified by these norms, but they are not expected to be nicer than that. For instance, I have a duty to help somebody in distress in the street, but there are limits to what is expected from me. I do not have a duty to house that person for several years in my home.
People can be more prosocial than what is requested from them. They can go above and beyond the standard expected of them. Doing so brings praise and reputation. However, it only works if it is seen as genuine, i.e. done for the sake of being nice, not to gain reputation. People who are overly nice face the risk of being perceived as try-hards pretending to be nice to raise their social profile. From that perspective, antisocial punishment is the punishment of those who are perceived as trying to gain social goodwill by artificially exaggerating how nice they really are. Accusations of being a "do-gooder", a "virtue signaller", and so on are a reflection of this suspicion.
Very interesting, thank you
Thanks a lot, very interesting piece. Your explanation of how repeated interactions and the “shadow of the future” make cooperation rational is spot on. I’ve written on my blog about sustainable behaviour, and I keep coming back to the same idea: public reputation is one of the strongest tools we have to overcome both the Prisoner’s Dilemma and the Tragedy of the Commons.
When cooperative behaviour is visible and remembered, incentives shift. People, and even more so organisations, act sustainably not just out of altruism, but because reputation makes cooperation the smart strategy. Your article captures that logic beautifully.
Thanks Maria, glad you liked it!
Have you read Vernon Smith? It sounds like you probably have, but if not you need to check out his work on ecological rationality!
No, I do not know him! I will certainly look up his writings.
I have written a blog post about environmental psychology in Dutch: https://www.coach-psycholoog-denhaag.nl/psychologie-duurzaamheid/
where I also mention the importance of reputation and social norms.
Vernon Smith’s writing on ecological rationality is very interesting. It is somewhat orthogonal to the question of cooperation. His views were shaped by the observation that very simple markets can work remarkably well, in a way consistent with the invisible hand idea, even though they violate many of the assumptions underlying the economic theory of perfect competition.
Aha, thanks, I see. Looking up Vernon Smith, I see that he is not in line with cooperative game theory but more on the moral-emotional side.
I guess both strategies exist: rational and moral-emotional.
For the topic of organizations and sustainability I think a rational approach (game theory) is far more interesting. But on the individual level, too, I often see rationality under the surface of emotion and morals.
I will dive into reputation; that seems to be an intriguing subject.
I am happy to see your series of posts starting from "Morality without Skyhooks". Amen to what you have said so far: morality is intersubjective rather than objective and absolute; it arises from implicit social contracts; its construction is motivated by the benefits of cooperation and reciprocity; and game theory and evolutionary psychology are the right tools for analyzing and critiquing this cooperation.
I fondly hope that you will continue on to point out the further implications of this worldview: That moral codes exist at all levels, from marriages, social cliques, and workplaces, up through societies and the species as a whole. That moral codes without a means of enforcement are of little value. That, as Bentham pointed out, individual rights are social constructs, just as moral rules are.
And I also hope that you eventually get around to pointing out how these philosophical questions will soon take on a new urgency as we find ourselves needing to amend our social contracts to take into account AI (moral?) agents.
I understand the usefulness of game theory but am not a big fan overall. The real world is much more complicated than the situations game theory typically models.
Suppose there is a big war that involves most of the world. The Allies end up winning and set up a United Nations to establish International Law so that such a war will not happen again. This should create a stable situation according to game theory. But the situation is not stable. There is always someone who comes along and tries to take over another country, say Ukraine, in violation of International Law. Is this stopped by the International Community so that stability is maintained? Nope. And once one country gets away with violating International Law, it opens the door for other powerful countries to prey on the weak and violate the law for their own purposes.
Or suppose there are laws against, and a general societal revulsion toward, child sex trafficking. Nobody would ever try this, correct? Then some guy comes along and traffics young girls for rich and powerful people, and gets away with it for a very long time. Rich and powerful co-conspirators get away with it forever. Game theory does not show this as a possibility, at least not in any game theory situations that I have ever seen.
What really happens is that people (and social animals) are torn between instincts to cooperate with each other and instincts to get the most for themselves in any way possible. Societies are set up to enforce the cooperative instincts, and most people have an inherent moral code at least somewhat consistent with society's laws and norms. But there are always those who look to get what they can for themselves outside of society's laws and norms. If they can get away with some amount of lawbreaking, it only encourages more lawbreaking. A prisoner's dilemma dude who gets away with screwing over his buddy is unlikely to ever get the message regarding cooperation.
> The Allies end up winning and set up a United Nations to establish International Law so that such a war will not happen again. This should create a stable situation according to game theory.
No, it shouldn't because enforcement suffers from the free-rider problem.
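A minimal sketch of why enforcement suffers from free-riding (the numbers are illustrative assumptions): enforcement is a public good, and in a linear public goods game where each unit contributed returns less than one unit to the contributor, contributing nothing is a dominant strategy, whatever the others do.

```python
# Linear public goods game: n players, endowment e; total contributions are
# multiplied by m and shared equally. Numbers are illustrative assumptions.
n, e, m = 4, 10.0, 1.6  # marginal per-capita return m/n = 0.4 < 1

def payoff(my_contribution, others_contributions):
    pot = my_contribution + sum(others_contributions)
    return e - my_contribution + (m / n) * pot

# Whatever the others contribute, contributing less always pays more,
# so zero contribution (free-riding) is the dominant strategy:
for others in ([0, 0, 0], [10, 10, 10], [5, 0, 10]):
    assert payoff(0, others) > payoff(10, others)

# Yet everyone contributing beats everyone free-riding -- the dilemma:
print(payoff(10, [10, 10, 10]))  # 16.0 when all cooperate
print(payoff(0, [0, 0, 0]))     # 10.0 when all free-ride
```

The UN case maps onto this: every country prefers that someone else bear the cost of punishing a violator, so in a one-shot reading, nobody enforces.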
I'm doubtful the free-rider problem is really the problem. In a totally cooperative system there would be no work to do, so there would be no free-riders. The reality is that, when it comes to the UN, nobody is really set up to "do the work" or become a "free-rider" when a country violates International Law. A case could be made that the United States was doing most of the work, spending the most on defense, when it came to keeping Russia in check, while European nations were free-riding. But then again, the US, as well as all the free-riders, did a lousy job enforcing International Law and keeping Russia out of Ukraine.
I remember one game theory demonstration where a bunch of the tokens were all cooperative, except that every now and then a token would go rogue and steal from all the pacifist tokens, enriching itself relative to the others. The solution to this, according to game theory, is to have law-enforcement tokens always ready to hammer down on any rogue token. But of course these law-enforcement tokens have to be paid for, and they could go rogue themselves. This seems like a reasonable representation of reality, especially these days when the US, normally the most effective law-enforcement entity, has gone rogue and become the most prolific violator of International Law.
> I'm doubtful the free-rider problem is really the problem. In a totally cooperative system there would be no work to do so there would be no free-riders.
Except that practically no system is totally cooperative. In fact, your apparent expectation that one would be suggests you don't understand game theory at all.
Maybe I don't.
Thanks for the post. Axelrod and his The Evolution of Cooperation are hardly mentioned in your post (though the book is included in the references). That book is a capstone in the literature on why cooperation emerges. I also miss a mention of institutions as a mechanism that allows cooperative equilibria to be reached, as described by Greif, for example. But there will be a second post; I am looking forward to reading it.
This is great stuff, as usual. But I think you are somewhat too hard on the experimental literature on social preferences in one-shot games, since the question of the extent to which we truly care about others and about fairness/unfairness (even if this caring is just an accidental byproduct of our evolved tendency to maximize, or at least manage, our reputations for being good guys) is both important and far from obvious... In fact, even the strongest believers in social preferences and in the value of the one-shot experimental literature might not disagree at all about the evolutionary roots of these preferences. (Curious if you disagree or I'm misunderstanding, thanks!)
Hi Daniel, thanks for your kind words!
I agree that many, perhaps most, behavioural economists working on social preferences would readily accept, if asked, that these preferences must ultimately come from evolution. My general view is not really that they deny an evolutionary origin. It is rather that this often remains in the background and does not shape the methodology very much.
My criticism in the post is actually a bit orthogonal to the question of ultimate origins. I think economists were drawn, by a kind of path of least resistance, towards one-shot games because these settings delivered clear equilibrium benchmarks that could be studied, tested, and turned into papers. That was a productive move in many ways. But it also came with a cost.
One way to describe that cost is to say that economists ended up estimating parameters in one-shot models that may really be reduced-form expressions of deeper behavioural dispositions adapted to repeated interaction. Over time, however, the profession partly lost sight of that background and started treating those estimated parameters as if they were the real thing, as if Alice had a stable “fairness parameter” in her head that should travel neatly from game A to game B.
My concern is that behaviour in one-shot games is not especially ecological. It is the temporary response of minds designed to navigate a world of repeated interaction, reputation, and conditional cooperation. So what we observe in the lab may be informative, but it is also a distorted reflection of the underlying reality. In that sense, my worry is not that the one-shot literature is worthless, but that it risks mistaking the shadow for the object casting it.
So I do not disagree that questions about genuine concern for others and fairness are important. My concern is that one-shot games are often a poor window onto the structure that generates those concerns in the first place.
Thanks Lionel - I mostly agree though think that one shot type interactions occur relatively often (tipping at restaurants on the road... anytime we interact with a stranger) - so even if our behavior in these situations is affected by confusion about repeated interaction, this type of behavior is important. I also think even repeated game experiments don't map well to real world repeated interactions - in the real world we play 'quasi-repeated games', never really repeating the same game twice... Anyway again great stuff, thanks!
Hi Daniel, Truly one-shot interactions do exist, but I think they were probably largely absent from the environments in which our social preferences were shaped.
Consider hunter-gatherer groups. People would interact repeatedly with most members of their group, and often also with nearby groups. In some rarer cases, they might encounter genuine strangers. But such encounters were not simple one-shot anonymous interactions in the modern experimental sense. They were often dangerous, but they could also open the door to longer-term relationships, exchanges, alliances, or incorporation into wider social networks. So the situation “A meets B and both know they will never interact again” was likely much less common than it is in modern urban life.
These kinds of anonymous one-shot interactions are, to a large extent, a feature of large-scale modern societies, especially of recent centuries. Our psychology was not designed specifically for them. Indeed, anonymity itself may partly be a temporary feature of modern life: today, many interactions can be recorded, shared, and attached to reputations after the fact. Even not tipping a waiter is no longer fully detached from reputational risk in the way it might once have seemed.
That said, I agree with you that our repeated-game experiments and models are not especially realistic either. Real-life repeated interactions are messy. We rarely play exactly the same game twice. We have imperfect information, we communicate, we signal, we interpret, and we try to shape how others see us. So on top of the strategic structure of repeated interaction, there is also a whole layer of strategic communication and reputation management.
Here again, it is a bit of a lamppost problem: our formal tools push us towards simplified settings. But I still think game theory, understood broadly, helps a great deal. Once we combine repeated games, signalling games, cheap talk, and the available empirical evidence, we can make reasonable sense of real social interaction, in something closer to the spirit of Schelling than of highly abstract one-shot models.
Thanks, again agree - though of course even if 1 shot interactions were absent in evolutionary environments, they are important to understand if they are common in modern societies - or even if 'very likely 1 shot interactions' are common. But it's a good point that anonymity is becoming more uncommon.
I would add that the evolution of our field kind of vindicates (or perhaps informed) my judgement. What has been the progress of the literature on social preferences over the last 20 years? After the wave of experimental results in one-shot games, we had a wave of theories in the 1990s and early 2000s providing models with general mechanisms to make sense of the experimental data. Since then, there has not been any key theoretical innovation. I think this reflects the limitations of the setting we boxed ourselves into.
I think we need to revisit this literature with a broader perspective on what fairness is and how our sense of fairness works. Binmore provides, I believe, the right framework. The challenge is that his model is less straightforward for making predictions and designing experiments. But I think the field has stagnated enough for people to be willing to make bigger and riskier leaps in the search space for new scientific findings on moral preferences.