This notion of moral norms is strikingly similar to what Bicchieri calls social norms. How, then, are social norms different from moral norms? The paradigm example in the social norms literature is Female Genital Mutilation (FGM). For societies which practice FGM, it does not seem to be a moral norm, though there can be an element of moral sentiment in cases where such norms are strongly internalised.
Moral norms are indeed social norms. Some social norms seem less moral because they are about personal behaviour more than about how to treat others. But social norms come with a duty to respect them, and violations typically come with moral condemnation because there is a breach of the rules that form the social contract. Being seen as “rude” after doing something like eating with your hands at dinner is already a form of negative moral judgment.
Yes, Bicchieri herself makes a strong distinction between social and moral norms, with social norms being conditional and moral norms unconditional. This distinction is the subject of both conceptual and empirical critique by my friend Dawn Wang. I am not sure whether the paper explaining that has been published yet, but long story short, the two should not be thought of as that different.
You fail to answer the Ring of Gyges objection successfully.
Per Binmore, only self-enforcing social contracts count as social contracts. Gyges is immune to social punishment of any kind (that's the entire point of the ring) and displays no internal guilt or regret in the scenario either; therefore we cannot say that any social contract has proved externally OR self-enforcing on him. Indeed, the point of the argument is that social contracts are NOT 'self'-enforcing at all, that people obey them only out of fear of external enforcement.
Likewise, your version of 'ought' fails for the same reason. People routinely violate what the rest of society regards as moral constraints and often prosper nonetheless on any cost-benefit calculation. Your formulation still leaves you effectively endorsing gangs, organized crime, drug lords, and human trafficking as viable societies and moral equilibria.
I point that out not simply because they are abhorrent (you're quite correct that unpalatable outcomes are not necessarily incorrect conclusions), but because they present you with a fork you've been ignoring: where do you draw the line between dissidents within a society, who may be legitimately punished by that society for 'moral' violations, and what amounts to two 'morally' distinct societies with conflicting 'moral' imperatives? When the law-abiding society says "you must cooperate with police" and the gang society says "snitches get stitches", your argument doesn't answer what the person in the prisoner's dilemma genuinely 'ought' to do except as a function of conditional risk/reward. (Assuming the gang is willing to kill defectors, that would presumably lead you to the counterintuitive position that it is morally wrong for a gang member to testify against his fellow gang members.)
Appeal to 'consensus' is a mirage. There isn't one. People move in and out of multiple societies and sub-cultures with different and conflicting norms all the time. The smallest unit of 'society' reduces to the individual, and if there is no external objective morality to which the individual must defer, then the individual is effectively God, entitled to define his own morality and impose it on others as far as his own will and personal power make practical. 'Practical' may be a limited subset, but you still end up with the Übermensch under your theory: the conclusion that social conventions are only truly binding on those who willingly accept them, and therefore anyone who simply rejects them is not subject to moral censure under them.
To return to a previous comment you made: if 'morality' is restricted to within each society because it relies on that society's particular consensus at that particular time, such that we cannot meaningfully condemn a Holocaust occurring within another society as 'wrong' or 'evil' in any sense beyond 'we wouldn't allow that here and now' (any more than whether to condemn a player as 'wrong' for holding the ball in his hands varies between football and soccer), then you don't have a coherent morality. It really IS 'anything goes' so long as anyone else can be persuaded, tricked, or even forced to agree to it. You push back by noting that such 'arbitrary' arrangements aren't stable equilibria (not necessarily true), but under your appeal to consensus they don't technically NEED to be long-term sustainable; you're smuggling in 'it's Good to continue playing the game' as a value even while denying a theory of the good. Without that hidden assumption, even a cult that has everyone take out massive loans, do drugs and orgies and whatever else until the money runs out, then commit mass suicide to avoid any downsides to their hedonistic ways, would constitute a valid society with its own internal moral code and a reasonable risk/reward calculus under which such behavior is 'morally right' within your framework. You can't coherently draw your lines in terms of relevant population, place, or time, so it really does reduce to 'all things are permissible, but not all things are beneficial' at the individual level and 'it's only really wrong if you get caught or feel guilty' at the society level.
Your constructivism seems an insubstantial illusion, a mere rebranding of 'strategy' as 'morality', that breaks down whether applied to prominent philosophical hypotheticals or the real world as it is. I'm still not seeing anything that successfully differentiates this supposed 'morality' from 'How to Grow Your Small Business' or 'The Unofficial Player Guide to World of Warcraft'. 'Real Life is the Ultimate Game' runs into the problem that if there are no ultimate rules for life, then you're left with a meta-game over who invents those rules and how; that meta-game has no rules, and all imposed rules are essentially dependent on punitive force and/or bribes. It's ultimately 'might makes right', even if that 'might' is often communal rather than individual.
In the Ring of Gyges scenario, a contractarian account predicts defection. But that is not a refutation of the view. It is the view’s core claim made explicit: there is no mysterious categorical force that binds an agent independently of incentives and enforcement. What keeps rules in place is the combination of external sanctions and internalised mechanisms that evolved in an environment where impunity was rarely certain.
What “ought” means in this framework. My “ought” is not meant to be unconditional. It is a conditional one: if you are participating in a practice and claiming the standing of a compliant participant, then you ought to follow the practice’s rules. The objection you are pressing is that this is not a categorical ought that binds even under perfect impunity. I agree. That is exactly where my view parts ways with moral realism. If one wants an ought that binds Gyges even when nothing can touch him, one is asking for something over and above the social mechanisms that make norms work.
You mention gangs, organised crime, and conflicting “societies”. Pointing out that some people violate moral constraints and sometimes prosper does not undermine the account. Norms are sustained by enforcement, and enforcement is imperfect. There will always be subgroups that try to carve out local equilibria with their own sanctions and rewards, including criminal organisations. Note that describing those equilibria is not endorsing them. It is describing the strategic logic of how they persist. And the fact that people move between overlapping communities with conflicting norms is not a bug in the social-contract picture. The world is a patchwork of coalitions and institutions whose rules compete, and individuals often face genuine conflicts of loyalty and risk. That being said, a society must have a broadly respected overarching social contract to be functional. Failed states are those in which different groups do not abide by shared rules of cooperation.
Your point about the Holocaust raises the question of comparing moral systems across societies. This is a whole question in itself, which I will address in the next post.
Your point about “might makes right” is an important one because bargaining, and bargaining power, play an important role in this contractarian view. But it will not be a simple version of “bullies get what they can and the weak suffer what they must” in every interaction. Once again, the Humean constructivist view I present aims at explaining morality as it works in reality. So we should not expect it to conclude that moral rules should look very different from what they are. Binmore’s conclusion is not naïve about power. Like Marx, he would agree that the moral rules of a society have typically served to justify the social order. In the end, he argues that there will be a strong egalitarian aspect in stable social contracts, but that this is a result of his analysis of the role of social bargaining in social contracts. I will develop that in future posts.
Yes, one is asking for something over and above merely being imperfectly working norms, that's what people generally understand 'Morality' to BE.
"My ought is not meant to be unconditional. It is a conditional one. If you are participating in a practice and claiming the standing of a compliant participant, then you ought to follow the practice's rules." ... "Pointing out that some people violate moral constraints and sometimes prosper does not undermine the account. Norms are sustained by enforcement, and enforcement is imperfect."
Respectfully, these two statements necessarily contradict each other. The probability of detection and enforcement is unavoidably a major factor in your 'conditional' ought, which means that whether one 'ought' to follow the practice's rules is determined only by the projected outcome and one's own priorities and risk tolerance. The persistence of successful criminals aptly demonstrates that, even in real life, falsely claiming legitimacy while secretly cheating is often the optimal strategy to maximize personal gain, while encouraging the societal maintenance of the alleged morals you're secretly violating merely serves as a means of reducing your own competition while avoiding punishment. As I said, it reduces down to 'it's only wrong if you get caught or feel guilty'. Game theory abounds with examples of games of imperfect or asymmetric information where misleading the other players into adopting a cooperative approach from which you then defect is the best strategy. Conditional oughts make morality dependent more on success than on anything else: the rebel who wins becomes a noble patriot whereas the rebel who loses becomes a vile traitor, with the distinction emerging only after the fact rather than being determinable before choosing whether to rebel. That makes morality 'arbitrary' not merely in the sense of being a pure act of will, but even in the sense of being purely random, subject to chance even more than to choice.
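The expected-value logic being described here can be sketched in a few lines of Python. All payoff numbers are invented purely for illustration; the point is only that under a "conditional ought", whether cheating pays depends entirely on the detection probability:

```python
def expected_payoff(cheat: bool, p_detect: float,
                    gain: float = 10.0, penalty: float = 50.0,
                    honest: float = 6.0) -> float:
    """Expected payoff under a purely incentive-based 'ought'.

    Hypothetical numbers: honest compliance pays `honest`;
    cheating pays `gain` minus the `penalty` weighted by the
    probability of being detected and punished.
    """
    if not cheat:
        return honest
    return gain - p_detect * penalty

# Near-perfect impunity (the Gyges case): cheating dominates.
assert expected_payoff(True, p_detect=0.01) > expected_payoff(False, 0.01)

# Strong enforcement: compliance dominates.
assert expected_payoff(True, p_detect=0.5) < expected_payoff(False, 0.5)
```

On this toy model, the "ought" flips sign as `p_detect` varies, which is exactly the "only wrong if you get caught" reduction being alleged.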
"That being said, a society must have a broadly respected overarching social contract to be functional."
Respectfully, unless you've smuggled in some unstated conception of 'The Good Society', it does NOT. A tyrant ruling successfully over an enslaved nation may well consider his social order 'functional' on the grounds that it does indeed serve his intended aims. 'Functional' is not an objective term in isolation; much like 'optimization', it can only be used in reference to some specific intended outcome. Without a theory of the good or of value, you don't present any justified societal outcome against which functionality may be measured and assessed. Even appeal to consensus necessarily works this way if you rely on unbounded majority rule: a society with a 60/40 split on some controversial topic would have the 60 be 'right' and the 40 'wrong' only until the 40 kill 30 of the 60; then the 40 outnumber the remaining 30, and therefore the 40 are 'right' and the surviving 30 are 'wrong'. This isn't merely abhorrent, it's incoherent. Even without genociding your way to a majority, the same problem occurs across multiple levels of scale: the conservative household in the liberal city in the conservative state in the conservative country in the liberal multinational alliance in the conservative world... Which is the correct 'morality' then, conservative or liberal?
Incidentally, this is probably more relevant to your next article, but doesn't your constructivism imply that there's no such thing as a valid moral argument against the existing social convention? If morality is nothing more or less than the rules as they exist in a given place and time, there is nothing to appeal to in order to condemn those existing rules, and no possible 'moral' reason to change them: the existing rules are tautologically good and true. That does not seem a particularly congruent account of how morality actually works in the real world.
I basically agree with all of this, but I wonder about the role of empathy. Would a social contract be an equilibrium without empathy? Does empathy lead us to prefer some social contracts over others? Are people who lack empathy more likely to break the social contract?
Empathy is actually a key concept in Binmore's framework. He gives it a particular definition: being able to put oneself in the shoes of others. That is different from sympathy (caring about others' fate). Empathy is what helps people understand the perspective of others and anticipate what they would consider fair.
Thanks for yet another interesting and crystal clear text. However, one note on your rejection of moral realism.
You write:
Moral, legal and ethical systems are conventions, but they are not arbitrary: they are constrained by the structure of conflict and cooperation in society.
This fact can, I think, be described as a limited version of moral realism. Humans are confronted with the problems of living together and cooperating in various ways. There are constraints on the possible solutions. These constraints are built into the fabric of our universe, given the specifications of the relevant game scenario. The available strategies are absolute, "out there"; they are fully determined by each game specification. They are discovered, not invented.
This is not full-blown moral realism if it is required that there must be a unique best solution to all moral problems, which may be what some expect. There may be multiple solutions.
However, if one adds the constraint that as many of the actors as possible should be able to live reasonable lives, then one may be able to rank the different solutions. Then we are nearer to the goal that I think many have in mind for moral realism.
Anyway, these are some conjectural thoughts of mine. BTW, thanks for recommending Richard Alexander's The Biology of Moral Systems. I am reading it, so far it is brilliant.
The omertà is a moral code within the mafia. Organised crime requires cooperation, and cooperation comes with moral rules: principles to respect, trust, no betrayal, and so on. Your term “collapse” might suggest any equilibrium is moral. Instead, morality is about specific equilibria where we can gain from cooperation but where anybody could try to get away with higher individual payoffs by not cooperating while others cooperate. These types of equilibria activate our moral feelings: trust, anger at betrayal, and so on. These feelings help us play the game right. In particular, they help trigger punishment for deviations, which keeps deviations unprofitable.
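The mechanism described here (punishment keeping deviations unprofitable) can be illustrated with a standard repeated prisoner's dilemma sketch. All numbers are assumptions chosen for the example, not taken from the post: `T` is the one-shot temptation to defect, `R` the mutual-cooperation payoff, `P` the punishment payoff once cooperation breaks down, and `delta` how much players weight future rounds:

```python
# Illustrative repeated prisoner's dilemma with permanent punishment
# of deviators (grim-trigger style). Payoffs are invented for the sketch.
T, R, P = 5.0, 3.0, 1.0   # temptation > reward > punishment
delta = 0.9               # discount factor on future rounds

def payoff_cooperate() -> float:
    # Cooperate every round against punishing partners: R forever.
    return R / (1 - delta)

def payoff_defect() -> float:
    # Grab T once, then face the punishment payoff P in every later round.
    return T + delta * P / (1 - delta)

# With enough weight on the future, deviating is unprofitable,
# so cooperation is an equilibrium sustained by punishment.
assert payoff_cooperate() > payoff_defect()
```

The same arithmetic shows why enforcement matters: shrink `delta` (a short horizon, or a low chance of ever being punished) and the inequality reverses, so the deviation pays.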
I think I see what you’re saying more clearly now.
If I’ve got you right, you’re saying morality isn’t any equilibrium, but a specific kind.
It’s one where cooperation benefits the group, individuals are tempted to defect, and emotions & punishment help keep cooperation stable.
I get that. But here’s my puzzle.
The mafia case certainly checks those boxes. And man, I love learning about the mafia lol. Omertà is really fascinating to me, especially because of how binding it was. Well at least until new constraints were introduced (RICO) and then it wasn’t.
But yeah, there’s cooperation, temptation to defect, trust, anger at betrayal, and real punishment at enforcing the rules.
But what exactly rules it out as moral rather than just effective?
It seems like the difference can’t come from the equilibrium itself, since structurally it looks the same.
I’m not sure if this is quietly appealing to something outside the game — like what the cooperation is for or who it harms.
(So that’s why this has made me curious, just for reference.)
But once we do that, isn’t morality doing more work than just stabilizing cooperation?
I might be missing something, but I’m not seeing where that extra “ought” comes from in the model itself.
Happy New Year, by the way.
I like your name too lol.
“Lionel Page” — sounds like a famous author.
It actually sounds like a name I may have chosen as a pseudonym back when I used to write fiction haha.
Thanks for the kind comment on my name. It’s my real one 🙂!
To answer your point: you ask what exactly rules it out as moral rather than just effective. The framework explains morality as the type of rules (and the feelings about those rules) that organise social interactions and permit cooperation. That is it. I think your comment suggests that this explanation is underwhelming. I can see how it can be perceived that way, but these rules and their respect matter to us because they are the rules of the game of life that matters to us.
Even without being a moral realist, I read and agree with your constructivist account as a social psychological description, and yet in ethical terms am left where I began, thinking it just isn't talking about the sort of axiology I'm talking about when I judge things morally better or worse. If every time someone drove on the right side of a road, a thousand dog-level sentient beings were tortured on a distant planet, then having that equilibrium would be vastly worse than having the left side equilibrium, without any descriptive difference between the stabilities of the respective social contracts.
Hi J., I think what you experience is your moral sense firing up at the thought of innocent beings suffering. That is totally normal, because the example you describe is ecologically unusual (such situations do not arise in the environment we experience), and your moral sense was not designed to discriminate between them and our commonly experienced situations. Another example is seeing ice cream on TV: it can make you salivate even though there is no ice cream for you to eat. TV did not exist in ancient times, and our senses are not designed to discriminate between fake and true images of high-energy content.
Note that it is totally reasonable for you to dislike the equilibrium you describe with your moral preferences. It is just that there is no moral truth out there saying it is worse in an absolute way, independently of what you think.
The feelings you might have about the fact that it is worse in an absolute way are understandable, but we need to appreciate that such feelings are not a window into some truth. Similarly, our naive intuition is that the Sun rotates around the Earth. But once these intuitions have an explanation within a framework where the Earth actually rotates around the Sun, they do not have evidential weight anymore. We can accept the supported theory while recognising that it does not match all our intuitions (think also of how unintuitive quantum theory and general relativity are).
Thanks, Lionel. I think I agree with this comment in its entirety. Nevertheless, the moral axiology that these processes have produced in me exists as what it is, just as taste preferences and aesthetic preferences do. I don't think they're "a window into some truth" in a stance-independent sense; they are descriptions of my stances, embedded within various communities of stance-builders. I don't see why I would want to purge my moral stances of evolutionary or cultural-game-theoretical spandrels. Just as with aesthetic preferences, if my reflective subjective stances include spandrels, then so be it.
The gap here is less the permissibility of terrible social arrangements than that the argument is self-undermining. If two Glauconian contractualists would naturally trust one another's adherence to (implicit or explicit) contracts LESS than two purity-testing, moralizing absolutists would, then the latter IS the pragmatic solution. If the larger "equation" only relates motives, incentives and outcomes, then contracts which are reinforced by eternal rewards or punishments, and purity-tested with Santa Claus fallacies, are favored. "If you do not also think it would be terrible not to believe in Santa, then this implies you are more likely to defect, not just from our Santa coalition, but from other makeshift coalitions as well."
If "truth" is not moralized, then conventions-by-fiat are just as permissible as high-effort, falsifiable conventions, and are likely even more sustainable "solutions" to the original problem posited under your premises. Worse still, co-opting the fruits of high-effort convention (weaponizing science) is also permissible. "Terrible if true" cannot be ruled out as a valid contribution without moralizing some arbitrarily chosen strictness of "truth value."
To use a term from one of Dan Williams' recent posts, those who moralize the truth need be wary of "3rd person naive realism." Even if mapping motives, incentives and outcomes seems coherent and predictive, they are no more strictly real than "selection pressures" or "market forces." These are conventions of cognitive compression (via metaphor) that are immensely useful but strictly false. The compressions feel comprehensive, but they regularly neglect (or explain away) System 2 function, rather than setting the table for System 2 function.
I see three points in your comment. Here are quick answers.
On absolutist beliefs as a cooperation technology:
I agree that absolutist beliefs can help cooperation, and that this may be one reason they are cultural attractors. If people genuinely believe that defections are punished (even when unobserved), that belief can widen the set of stable cooperative equilibria. That said, I also think that, in Western countries, we may be overly influenced here by the Judeo-Christian tradition with its moralising God. This is the topic of a future post.
On “truth needs policing” and why that is still conventional:
I agree that truth claims need policing, but that policing is itself conventional. The rules for accepting and rejecting arguments, and what counts as evidence, are socially maintained norms. They differ across disciplines, institutions, and groups, and they evolve over time. That does not make them arbitrary: they face the pressure of making factual predictions that work. This is exactly the topic of the paper I mentioned in a previous discussion.
On incentives and motives:
I don't remember this post by Dan, but I suppose he is not saying that incentives and motives do not exist, rather that we don't have direct access to them (while often assuming we do).
Dan was sharing Jeffrey Friedman's insight as it relates to blue team's relationship to experts. I coopted Friedman's term/phrase "3rd person naive realism" as I think the temptation also applies when experts defer to one another.
To clarify what I mean by "arbitrary," I mean arbitrary-in-principle, even if it is non-trivial-in-practice. This actually is meant to reflect your own insights regarding what Stephen Pinker called "arbitrary but coordinated" conditions: which side of the road to drive on, the legal age of adulthood, etc. What I am claiming is "arbitrary" in this particular sense is the "strictness" of what counts as "truth" (or the standards of evidence as you've just described). That you rightly point out that the strictness may be a function of "pressures" that "evolve" over time is a perfect example of in-practice, fantastically useful metaphors. We can pretty easily prove, through brain-scans for example, that "pressure" and "forces" are reified, embodied metaphors that nevertheless enable mutual calibration in our species. We bottom out on strictly arbitrary foundations, which is itself an implication of evolutionary theory. I will nevertheless join you and others in the pragmatic use of "the best available foundations" in good faith. Nothing is worse than obstructionist philosophy, and I am trying to ride that fine line.
My friendly challenge to your positions taken here is that if "truth-policing" can be done without moralizing truth, and especially if the argument is that truth's value is in its instrumental utility, then you will incur paradoxical conditions under which selective and willful ignorance (i.e. using Santa Claus fallacies as purity tests) is prescribable based on the relative disutility of the strict truth.
Personally, I think we should moralize "effort," not truth per se. But that demands walls of text and borrowed time.
Yes, one is asking for something over and above merely being imperfectly working norms, that's what people generally understand 'Morality' to BE.
"My ought is not meant to be unconditional. It is a conditional one. If you are participating in a practice and claiming the standing of a compliant participant, then you ought to follow the practice's rules." ... "Pointing out that some people violate moral constraints and sometimes prosper does not undermine the account. Norms are sustained by enforcement, and enforcement is imperfect."
Respectfully, these two statements necessarily contradict each other. The probability of detection and enforcement is unavoidably a major factor in your 'conditional' ought, which means that whether one 'ought' to follow the practice's rules is determined only by the projected outcome and one's own priorities and risk tolerance. The persistence of successful criminals aptly demonstrates that, even in real life, falsely claiming legitimacy while secretly cheating is often the optimal strategy to maximize personal gain, while encouraging the societal maintenance of the very morals you're secretly violating merely serves to reduce your own competition while avoiding punishment. As I said, it reduces down to 'it's only wrong if you get caught or feel guilty'. Game theory abounds with examples of games of imperfect or asymmetric information where misleading the other players into adopting a cooperative approach from which you then defect is the best strategy. Conditional oughts make morality dependent on success more than anything else: the rebel who wins becomes a noble patriot whereas the rebel who loses becomes a vile traitor, with the distinction emerging only after the fact rather than being determinable before choosing whether to rebel. It makes morality 'arbitrary' not merely in the sense of being a pure act of will, but even in the sense of being purely random, subject to chance even more so than choice.
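The cost-benefit calculus this comment describes can be sketched numerically. The following is an illustrative toy model, not part of either commenter's argument; all numbers are arbitrary assumptions chosen to show how the decision flips with the detection probability:

```python
# Toy model of the "conditional ought": secretly defecting while claiming
# compliance pays off whenever expected gain exceeds expected punishment.
# All values are illustrative assumptions, not empirical estimates.

def expected_gain_from_cheating(gain, detection_prob, penalty):
    """Expected payoff of cheating: keep the gain if undetected,
    forfeit it and pay the penalty if caught."""
    return (1 - detection_prob) * gain - detection_prob * penalty

# Weak enforcement: cheating pays in expectation.
print(expected_gain_from_cheating(gain=100, detection_prob=0.1, penalty=300))  # 60.0

# Strong enforcement: cheating is a losing bet.
print(expected_gain_from_cheating(gain=100, detection_prob=0.6, penalty=300))  # -140.0
```

On this sketch, whether one "ought" to comply is indeed a function of the detection probability, which is exactly the commenter's point.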
"That being said, a society must have a broadly respected overarching social contract to be functional."
Respectfully, unless you've smuggled in some unstated conception of 'The Good Society', it does NOT. A tyrant ruling successfully over an enslaved nation may well consider his social order 'functional' on the grounds that it does indeed serve his intended aims. 'Functional' is not an objective term in isolation; much like 'optimization', it can only be used in reference to some specific intended outcome. Without a Theory of the Good or Value, you don't really present any justified societal outcome against which functionality may be measured and assessed. Even Appeal to Consensus necessarily works this way if you rely on unbounded majority rule: a society with a 60/40 split on some controversial topic would have the 60 be 'right' and the 40 'wrong' only until the 40 kill 30 of the 60; then the 40 outnumber the remaining 30 and therefore the 40 are 'right' and the surviving 30 are 'wrong'. This isn't merely abhorrent, it's incoherent. Even without genociding your way to a majority, the same problem occurs due to multiple levels of scale: the conservative household in the liberal city in the conservative state in the conservative country in the liberal multinational alliance in the conservative world... Which is the correct 'morality' then, conservative or liberal?
Incidentally, this is probably more relevant to your next article, but doesn't your constructivism imply that there's no such thing as a valid moral argument against the existing social convention? If morality is nothing more or less than the rules as they exist in a given place or time, there's nothing to appeal to to condemn those existing rules or any possible 'moral' reason to change them, the existing rules are tautologically good and true. That does not seem a particularly congruent account of how morality actually works in the real world.
I basically agree with all of this, but I wonder about the role of empathy. Would a social contract be an equilibrium without empathy? Does empathy lead us to prefer some social contracts over others? Are people who lack empathy more likely to break the social contract?
Empathy is actually a key concept in Binmore's framework. He gives it a particular definition: being able to put oneself in the shoes of others. That is different from sympathy (caring about others' fate). Empathy is what helps people understand the perspective of others and anticipate what they would consider fair.
This fancy sociology works, but I don't think it's true
fun stuff tho! thanks
Thanks for yet another interesting and crystal clear text. However, one note on your rejection of moral realism.
You write:
Moral, legal and ethical systems are conventions, but they are not arbitrary: they are constrained by the structure of conflict and cooperation in society.
This fact can, I think, be described as a limited version of moral realism. Humans are confronted with the problems of living together and cooperating in various ways. There are constraints on the possible solutions. These constraints are built into the fabric of our universe, given the specifications of the relevant game scenario. The available strategies are absolute, "out there"; they are fully determined by each game specification. They are discovered, not invented.
This is not full-blown moral realism if it is required that there must be a unique best solution to all moral problems, which may be what some expect. There may be multiple solutions.
However, if one adds the constraint that as many of the actors as possible should be able to live reasonable lives, then one may be able to rank the different solutions. Then we are nearer to the goal that I think many have in mind for moral realism.
Anyway, these are some conjectural thoughts of mine. BTW, thanks for recommending Richard Alexander's The Biology of Moral Systems. I am reading it, so far it is brilliant.
If a violent gang has a stable, self-enforcing code (“snitches get stitches”), does that count as morality on your account?
If yes, you’ve collapsed morality into mere equilibrium.
If no, what non-equilibrium standard are you using to exclude it?
The omertà is a moral code within the mafia. Organised crime requires cooperation, and cooperation comes with moral rules: principles to respect, trust, no betrayal, and so on. Your term "collapse" might suggest any equilibrium is moral. Instead, morality is about specific equilibria where we can gain from cooperation but where anybody could try to get away with higher individual payoffs by not cooperating while others cooperate. These types of equilibria activate our moral feelings: trust, anger at betrayal, and so on. These feelings help us play the game right. In particular, they help trigger punishment for deviations, which keeps deviations unprofitable.
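The claim that punishment keeps deviations unprofitable is the standard logic of repeated games. A minimal sketch, using textbook Prisoner's Dilemma payoffs and a grim-trigger punishment strategy (both illustrative assumptions, not part of the comment above): cooperation is stable only when players value the future enough that one round of temptation is not worth permanent punishment.

```python
# Repeated Prisoner's Dilemma with grim-trigger punishment.
# Textbook payoffs (illustrative): Temptation > Reward > Punishment.
T, R, P = 5.0, 3.0, 1.0

def cooperation_payoff(delta):
    """Present value of cooperating forever, discount factor delta."""
    return R / (1 - delta)

def defection_payoff(delta):
    """One round of temptation, then punished (mutual defection) forever."""
    return T + delta * P / (1 - delta)

def cooperation_stable(delta):
    """Cooperation is an equilibrium iff deviating does not pay.
    Algebraically: delta >= (T - R) / (T - P), here 0.5."""
    return cooperation_payoff(delta) >= defection_payoff(delta)

print(cooperation_stable(0.6))  # True: patient players keep cooperating
print(cooperation_stable(0.4))  # False: impatient players defect
```

The punishment is what makes deviation unprofitable: remove it (set the post-deviation payoff equal to the cooperative one) and defection always pays.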
This is helpful, thanks.
I think I see what you’re saying more clearly now.
If I’ve got you right, you’re saying morality isn’t any equilibrium, but a specific kind.
It’s one where cooperation benefits the group, individuals are tempted to defect, and emotions & punishment help keep cooperation stable.
I get that. But here’s my puzzle.
The mafia case certainly checks those boxes. And man, I love learning about the mafia lol. Omertà is really fascinating to me, especially because of how binding it was. Well at least until new constraints were introduced (RICO) and then it wasn’t.
But yeah, there’s cooperation, temptation to defect, trust, anger at betrayal, and real punishment at enforcing the rules.
But what exactly rules it out as moral rather than just effective?
It seems like the difference can’t come from the equilibrium itself, since structurally it looks the same.
I’m not sure if this is quietly appealing to something outside the game — like what the cooperation is for or who it harms.
(So that’s why this has made me curious, just for reference.)
But once we do that, isn’t morality doing more work than just stabilizing cooperation?
I might be missing something, but I’m not seeing where that extra “ought” comes from in the model itself.
Happy New Year, by the way.
I like your name too lol.
“Lionel Page” — sounds like a famous author.
It actually sounds like a name I may have chosen as a pseudonym back when I used to write fiction haha.
Thanks for the kind comment on my name. It’s my real one 🙂!
To answer your point, you ask what exactly rules it out as moral rather than just effective. The framework explains morality as the type of rules (and the feelings about these rules) that organise social interactions and permit cooperation. That is it. I think your comment suggests that this explanation is underwhelming. I can see how it can be perceived that way, but these rules and their respect matter to us because they are the rules of the game of life that matters to us.
Even without being a moral realist, I read and agree with your constructivist account as a social psychological description, and yet in ethical terms am left where I began, thinking it just isn't talking about the sort of axiology I'm talking about when I judge things morally better or worse. If every time someone drove on the right side of a road, a thousand dog-level sentient beings were tortured on a distant planet, then having that equilibrium would be vastly worse than having the left side equilibrium, without any descriptive difference between the stabilities of the respective social contracts.
Hi J., I think what you experience is your moral sense firing up at the thought of innocent beings suffering. It is totally normal, because the example you describe is non-ecological (such situations do not arise in the environment we experience), and your moral sense was not designed to discriminate between such situations and our commonly experienced situations. Another example is seeing ice cream on TV: it can make you salivate even though there is no ice cream for you to eat. TV did not exist in ancient times, and our senses are not designed to discriminate between fake and true images of high-energy food.
Note that it is totally reasonable for you to dislike the equilibrium you describe with your moral preferences. It is just that there is no moral truth out there saying it is worse in an absolute way, independently of what you think.
The feelings you might have about the fact that it is worse in an absolute way are understandable, but we need to appreciate that such feelings are not a window into some truth. Similarly, our naive intuition is that the Sun rotates around the Earth. But once these intuitions have an explanation within a framework where the Earth actually rotates around the Sun, they do not have evidential weight anymore. We can accept the supported theory while recognising that it does not match all our intuitions (think also of how unintuitive quantum theory and general relativity are).
Thanks, Lionel. I think I agree with this comment in its entirety. Nevertheless, the moral axiology that these processes have produced in me exists as what it is, just as taste preferences and aesthetic preferences do. I don't think they're "a window into some truth" in a stance-independent sense; they are descriptions of my stances, embedded within various communities of stance-builders. I don't see why I would want to purge my moral stances of evolutionary or cultural-game-theoretical spandrels. Just as with aesthetic preferences, if my reflective subjective stances include spandrels, then so be it.
The gap here is less the permissibility of terrible social arrangements than that the argument is self-undermining. If two Glauconian contractualists would naturally trust one another's adherence to (implicit or explicit) contracts LESS than two purity-testing, moralizing absolutists would, then the latter IS the pragmatic solution. If the larger "equation" only relates motives, incentives and outcomes, then contracts which are reinforced by eternal rewards or punishments, and purity-tested with Santa Claus fallacies, come out ahead. "If you do not also think it would be terrible to not believe in Santa, then this implies you are more likely to defect, not just from our Santa coalition, but from other makeshift coalitions as well."
If "truth" is not moralized, then conventions-by-fiat are just as permissible as high effort, falsifiable conventions, and are likely even more sustainable "solutions" to the original problem posited under your premises. Worse still, coopting the fruits of high effort convention (weaponizing science) is also permissible. "Terrible if true" cannot be ruled out as a valid contribution without moralizing some arbitrarily chosen strictness of "truth value."
To use a term from one of Dan Williams' recent posts, those who moralize the truth need be wary of "3rd person naive realism." Even if mapping motives, incentives and outcomes seems coherent and predictive, they are no more strictly real than "selection pressures" or "market forces." These are conventions of cognitive compression (via metaphor) that are immensely useful but strictly false. The compressions feel comprehensive, but they regularly neglect (or explain away) System 2 function, rather than setting the table for System 2 function.
I see three points in your comment. Here are quick answers.
On absolutist beliefs as a cooperation technology:
I agree that absolutist beliefs can help cooperation, and that this may be one reason they are cultural attractors. If people genuinely believe that defections are punished (even when unobserved), that belief can widen the set of stable cooperative equilibria. That said, I also think that, in Western countries, we may be overly influenced here by the Judeo-Christian tradition with its moralising God. This is the topic of a future post.
On “truth needs policing” and why that is still conventional:
I agree that truth claims need policing, but that policing is itself conventional. The rules for accepting and rejecting arguments, and what counts as evidence, are socially maintained norms. They differ across disciplines, institutions, and groups, and they evolve over time. That does not make them arbitrary: they face the pressure of making factual predictions that work. This is exactly the topic of the paper I mentioned in a previous discussion.
On incentives and motives:
I don't remember this post by Dan, but I suppose he is not saying that incentives and motives do not exist, rather that we don't have direct access to them (while often assuming we do).
To clarify (and to be sure not to misrepresent Dan), here is the post to which I was referring: https://open.substack.com/pub/conspicuouscognition/p/americas-epistemological-crisis-reprise?utm_source=share&utm_medium=android&shareImageVariant=overlay&r=3nwud0
Dan was sharing Jeffrey Friedman's insight as it relates to blue team's relationship to experts. I coopted Friedman's term/phrase "3rd person naive realism" as I think the temptation also applies when experts defer to one another.
To clarify what I mean by "arbitrary," I mean arbitrary-in-principle, even if it is non-trivial-in-practice. This actually is meant to reflect your own insights regarding what Steven Pinker called "arbitrary but coordinated" conditions: which side of the road to drive on, the legal age of adulthood, etc. What I am claiming is "arbitrary" in this particular sense is the "strictness" of what counts as "truth" (or the standards of evidence as you've just described). That you rightly point out that the strictness may be a function of "pressures" that "evolve" over time is a perfect example of in-practice, fantastically useful metaphors. We can pretty easily show, through brain scans for example, that "pressure" and "forces" are reified, embodied metaphors that nevertheless enable mutual calibration in our species. We bottom out on strictly arbitrary foundations, which is itself an implication of evolutionary theory. I will nevertheless join you and others in the pragmatic use of "the best available foundations" in good faith. Nothing is worse than obstructionist philosophy, and I am trying to ride that fine line.
My friendly challenge to your positions taken here is that if "truth-policing" can be done without moralizing truth, and especially if the argument is that truth's value is in its instrumental utility, then you will incur paradoxical conditions under which selective and willful ignorance (i.e. using Santa Claus fallacies as purity tests) is prescribable based on the relative disutility of the strict truth.
Personally, I think we should moralize "effort," not truth per se. But that demands walls of text and borrowed time.
Thank you for your quality response.