If you take away this kind of appeal to consequences, there's basically nothing left of metaethics.
It is indeed striking how much this argument, implicitly or explicitly, drives conclusions in that field.
I disagree. Metaethics isn’t just metaphysics: it’s also got a ton of insights about language, epistemology, psychology, and disagreement.
Just off the top of my normative-ethicist head, I could point to the recent literature on meta-linguistic negotiation, on quasi-realism, on motivational internalism, on deontic modals, on the Knobe effect, on evolutionary debunking…
But I’ll admit that “buck passing” is a boring topic.
yeah you are correct about all this.
Even taking into account that what is or isn't included under the umbrella of metaethics is somewhat up for debate, I can't really defend a literal reading of what I said. Nor would I want to, because I don't believe it's literally correct to say there's basically nothing except this argument going on in metaethics. Sure does feel like that sometimes though!
I know what you mean about how it "sure does feel like that sometimes." I often hear metaethicists complain that the moral realism/anti-realism debate sucks up all the oxygen in the subfield!
Hi Lionel. I've had Binmore's Natural Justice on my reading list for a long time; I'm gonna push it to the top of the pile now. In any case, I think something like the contractualist / game theory approach must be right.
But one question/thought. A key part of morality is its phenomenology. There is a strong emotional element, and indeed you write that "We have moral intuitions **and emotions** that help us play the game of morals well with others in order to go through the game of life seamlessly" (my emphasis). Why? As you know, there are other social arrangements (e.g. institutions) that are also social contracts but don't have the same phenomenology as morality. Explaining this seems to me to be a key part of the story.
I sketched my own answer in this short comment on Pinsof.
https://www.everythingisbullshit.blog/p/utilitarianism-is-bullshit/comment/164507278
Hi Thom, your example (rejection of an unequal 20–80 split) hits on something important. What Binmore adds to the picture of emotions as a simple calculation of what an offer implies is the space in which this calculation takes place. Jumping ahead a bit in my series of posts: fairness norms (which work as equilibria) are articulated with weights that reflect the relative importance of each person in the bargaining. For instance, in a given culture, elders may be given more weight when splitting costs and gains than in another culture. One way to interpret your example is that if somebody offers a split that violates my understanding of my weight, there is more at stake than just that split: it concerns our joint understanding of my relative standing in your eyes (very much what you call status) and how this understanding might play out in future interactions. Hence, I might need to be willing to react disproportionately to small slights, as these may conceal a shadow negotiation about our relative status and how we would split gains and losses in future interactions.
Social emotions like anger at the feeling of unfairness can be seen as bringing into the present the shadow cost of current deviations in future interactions. By doing so, they help me engage in the kind of retaliatory behaviour required for the equilibrium to be sustained (i.e. I might need to feel very angry at attempts to push me around for people not to have incentives to do so).
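To make the role of those bargaining weights concrete, here is a minimal numerical sketch of an asymmetric Nash bargaining solution over a unit pie. This is my own illustration rather than Binmore's formalism; the function name and all numbers are hypothetical.

```python
# Minimal sketch: an asymmetric Nash bargaining solution over a unit "pie".
# The exponents (weights) stand in for each person's relative standing in
# the bargaining; all numbers are hypothetical illustrations.

def asymmetric_nash_split(alpha: float, beta: float, steps: int = 10_000) -> float:
    """Return player 1's share of a unit pie that maximizes the weighted
    Nash product u1**alpha * u2**beta, with disagreement payoffs of 0."""
    best_share, best_value = 0.0, -1.0
    for i in range(1, steps):
        share = i / steps                            # player 1's share
        value = (share ** alpha) * ((1 - share) ** beta)
        if value > best_value:
            best_share, best_value = share, value
    return best_share

print(asymmetric_nash_split(1.0, 1.0))  # ~0.50: equal weights, equal split
print(asymmetric_nash_split(1.0, 4.0))  # ~0.20: player 2 carries four times the weight
```

The maximizer has the closed form alpha / (alpha + beta), so a 20–80 offer implicitly asserts that your weight is four times mine, which is why a "small" split can carry a large message about relative standing.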
"Social emotions like anger at the feeling of unfairness can be seen as bringing into the present the shadow cost of current deviations in future interactions"
Yes, I like that, makes sense.
This exchange with @thomscottphillips got me thinking...
Binmore’s point is that our social emotions evolved for repeated interactions in the game-theoretic sense. Since those instincts are built into our biology, they must be very old, having evolved in small-scale societies or perhaps even earlier. Large-scale societies flip this script: the larger the society, the higher the chance of one-off encounters. Norms for dealing with strangers evolved through a much faster cultural evolution, and these norms vary widely: WEIRD cultures lean on impersonal fairness rules, China on relationship building, and Japan on role-based scripts. Seen together, it looks like a two-layer model: innate social emotions at the base for managing repeated interactions, and culturally evolved norms on top for handling one-off interactions.
That is exactly Binmore's view, and mine.
As a philosophy amateur, I have found it simultaneously surprising and dismaying that (1) basically all of the moral realism philosophy I've encountered is essentially "given [axiom I say is good/bad], [good/bad things] are good/bad," and (2) many thinkers I've read bafflingly go along with axioms that fail the toddler's acid test. I've unfortunately been able to obliterate the moral axioms of many I've read or spoken to using "and what if I simply don't give a damn?"
Very interesting. I had never heard of Binmore, but many of my views come surprisingly close to his.
I’m sympathetic to your overall project, but it seems to be starting midstream. You see morality as a set of Schelling-point conventions that have naturally emerged to foster positive-sum interactions in society.
But the unstated premise, built into the game theory framework, is that each individual is trying to maximize his own well-being.
Don’t we have to start by justifying this premise (that your own well-being ought to be your primary goal), and by giving an account of what “well-being” even means and where the phenomenon comes from?
Hi Rick, this naturalistic approach aims to explain morality and what "ought" means. Thus, we can't start from any "ought", as that would make the investigation circular. But I agree that we should have a good understanding of human goals and of the nature and importance of well-being within them. I have, in fact, covered the topic of human happiness in depth over several posts last year. See, for instance: https://www.optimallyirrational.com/p/the-truth-about-happiness and https://www.optimallyirrational.com/p/happiness-and-the-pursuit-of-a-good
I see many points of agreement here, especially that morality must ultimately have a biological explanation, which clearly points to happiness as an organizing principle. Still, I think certain foundational questions remain. It's perfectly coherent to view your own happiness as essentially irrelevant to the question of what *ultimate value* you should organize your life around, as per utilitarianism or Kantian deontology (despite that somewhat ambiguous quote of Kant's you give).
I agree with you that, nevertheless, these other theories are simply wrong. But I think it will take more to refute them than just saying that evolution *makes* us pursue our own happiness, because, strictly speaking, it doesn't: there are plenty of people who are in fact altruists. And I also agree with your comment that where we have to start is a naturalistic explanation of core moral concepts, especially the concept of "value," which is even more fundamental than "happiness."
I elaborate here: https://ricksint.substack.com/p/moral-values-in-living-color
“Santa Claus fallacy” - thx for an interesting piece on “natural morality” and thx for the concept of the Santa Claus fallacy - it was good 👍
A great post, I'm looking forward to the following ones. I'm a big fan of Binmore, but I find *Game Theory and the Social Contract* somewhat badly written. I eventually extracted from it his bargaining justification of Rawls' approach, which I even teach to math students, but I was never patient enough to go through the whole thing, so I will benefit enormously from this series of posts, thanks!
Hi Marcos, Binmore wrote a somewhat simpler version of his thesis in Natural Justice (2005). But in any case, I'll cover the most important ground here.
Nice essay. I particularly enjoyed the subtle jab at Nietzsche.
You seem to equate fairness with morality, but fairness is but a small slice of morality—one sixth, according to Haidt.
I look forward to the future essays as I’m unfamiliar with Binmore, but I do hope you at some point acknowledge what morality actually is: an internal human sense that promotes actions which increase the likelihood of propagation of the genes that create that very sense. Game theory is correct, but only as applied over evolutionary timescales.
The way you write here implies that you think babies learn the rules of game theory and then develop morality. It seems to me to be evolved and intrinsic. Morality, like sympathy, as I’ve written about recently, is a genius evolutionary chess move: it offers up selfish psychological incentives to promote altruism, and that altruism ultimately serves the individual and their genes better than non-altruism would.
Hi Adam, the perspective is clearly evolutionary. I think the story I’ll describe will likely be of interest to you; it’s a bit more complex than the ingenious evolutionary move you describe. But the account I’m developing will be fully compatible with the key evolutionary principles you’d expect to see.
Looking forward to it. Thanks!
A very fascinating investigation, thank you ‼️👏
Unfortunately I am not a very articulate writer, but I can say that this was incredible and I am excited for the following posts
🙏
I am not sure I buy the arguments that cooperation is rational (I have come across Binmore but not read these particular books). Under certain conditions, cooperative equilibria can be stable under evolutionary or social learning dynamics, but in a rational repeated-game framework, don't the folk theorems undermine cooperation as a unique solution? So my take on the literature was always that cooperation is a bounded-rationality or evolutionary thing. If so, isn't this naturalistic "grounding" just a case of the naturalistic fallacy?
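To make the folk-theorem worry concrete, here is a minimal sketch of the textbook grim-trigger condition in an infinitely repeated prisoner's dilemma. The payoff numbers are hypothetical, and this illustrates the standard result rather than Binmore's own treatment.

```python
# Minimal sketch: when grim trigger sustains cooperation in a repeated
# prisoner's dilemma with the usual payoff ordering T > R > P > S.
T, R, P, S = 5.0, 3.0, 1.0, 0.0  # temptation, reward, punishment, sucker (hypothetical)

def cooperation_sustainable(delta: float) -> bool:
    """Defection is deterred iff the one-shot gain (T - R) is outweighed by
    the discounted future loss of cooperation: T - R <= delta/(1-delta) * (R - P)."""
    return (T - R) <= delta / (1 - delta) * (R - P)

critical_delta = (T - R) / (T - P)   # threshold obtained by solving the inequality
print(critical_delta)                # 0.5 with these payoffs
print(cooperation_sustainable(0.4))  # False: too impatient, cooperation unravels
print(cooperation_sustainable(0.9))  # True: cooperation is *an* equilibrium, not *the* one
```

Above the threshold, the folk theorems admit a whole continuum of equilibria besides mutual cooperation, which is exactly why something further, such as evolutionary dynamics or fairness norms acting as selection devices, is needed to pick among them. As I understand it, that selection role is precisely what Binmore assigns to fairness norms, so the two readings may be less opposed than they look.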
Many people believe that it is more beneficial to them to exploit others than to be fair, as long as they can get away with it, and especially when they can make their exploitation sound fair and reasonable, thus convincing their victims that it is not exploitation but fairness. They may even believe it themselves: that their actual stealing is not stealing but fairness. Consequently, the view that fairness is ultimately in one's self-interest begs the question of what is objectively ‘fair’ (not just what you subjectively believe is fair), and of how profiting from concealed unfairness would still be contrary to self-interest.
Skimming just two of your essays now, it seems you like the, um, “sky hook” that is “widely held.”
When you invoke a “widely held intuition” (as above) without evidence, you’re invoking consensus as if consensus were self-evident.
Moral objectivism isn’t an instinct like blinking; it’s a learned posture shaped by culture and institutions. In WEIRD (Western, Educated, Industrialized, Rich, Democratic) cultures, people lean more toward individual-rights morality; in honor cultures, toward relational duties; in collectivist cultures, toward role-based ethics. “Widely held” simply isn’t an empirical claim you can toss off without data.
"Widely held" here is just an empirical claim. If anything is missing, it’s a reference to supporting evidence. One relevant source is Stanford’s work on moral externalisation (e.g. Kyle Stanford, ‘The difference between ice cream and Nazis: Moral externalization and the evolution of human cooperation’, Behavioral and Brain Sciences, 2018).
I’m not using the "widely held intuition" as a skyhook: I am not saying that because many people have this intuition, the view must be true. The fact that many people feel morality to be objective is simply an interesting observation that makes it worthwhile to ask whether morality really is objective, and whether our reasons for thinking so are good enough.
“A secular alternative is the idea that there are moral truths independently of religious beliefs.”
Independent, not independently.
I have been following discussions of morality, fairness, and what "ought" to be done for a long time. Typically missing is any mention of empathy. By this I mean affective empathy rather than cognitive empathy. It just "feels right" for a mother to nurse her newborn, a passer-by to help out someone hurt in an accident, or a person to donate to a worthwhile charity. All of these "morally correct" actions are stimulated mostly by affective empathy.
The problem is we do not all share the same empathetic instincts. Most people (I believe) think torturing other people for fun is morally wrong, based on empathetic instincts. However, some people have no inherent inhibition against torturing others. Society decides what is morally wrong (like torturing people) based on some kind of group consensus. But some things, like killing snakes or welcoming immigrants, are morally ambiguous since there is no group consensus.
I don't mean to downplay the role of "fairness" and positive-sum interactions; I just want to point out that individual feelings of empathy play a huge role in determining morality. And given the diversity of empathetic feelings among humans, there will never be consensus on every moral subject.
Is it an appeal to consequences or an appeal to evidence? The statement ‘If there were no objective X, then this or that would happen’ is basically pointing out that things would be different if the objective thing weren't in place, so the current state of things serves as evidence.