Really nice post, though of course I can't help but pick some nits with the taxonomy.
First, I'd dispute that Humean constructivism is all that marginalized in contemporary philosophy. Sharon Street, a pretty prominent contemporary philosopher at NYU, explicitly characterizes her view as Humean Constructivism, which she contrasts with Kantian constructivism. (So it's not as if the only options for the constructivist are Humean, or relativist; Kantian constructivists are no relativists.) While she doesn't emphasize game theory/social equilibria, I do think it could fit pretty neatly into the stuff she *does* say.
I'd also insist that expressivists are missing from the chart, and I think they're significant enough that it's a big gap. While they're intellectual descendants of emotivists, in my view they don't inherit the vulnerabilities you rightly object to here. (They will say that moral claims are true or false, and that's an important part of how they deal with the Frege-Geach problem, though they'll have a distinctive understanding of what they're doing in so saying.) I pick the nits since I'm sympathetic both to expressivism and to the substantive picture you lay out here, and it's not at all clear to me that there's any tension in adopting expressivism as your basic account of what we're doing when we endorse fundamental moral principles, but also endorsing the game theory/social equilibria story as a causal/historical account of *which* fundamental moral principles we find attractive. I don't see any conflict.
Humean constructivism’s place in metaethics. You surely have a better sense than I do of the relative importance of different perspectives in metaethics. From what I have seen, even though Street is very prominent and explicitly describes her view as Humean constructivism, most of the discussion of her work focuses on the Darwinian debunking argument and on the contrast with Kantian constructivism. What seems much less discussed in the standard metaethics literature is a positive, worked-out theory of morality of the Binmore/Sugden/Skyrms/Bicchieri sort, which really uses game theory and equilibria to fill out the “constructive” part. That more economic, equilibrium-focused version of Humean constructivism is what I was thinking of when I said the view is marginal.
Expressivism / quasi-realism. I did include a short discussion of quasi-realism in the post (footnote 7). I like Blackburn’s perspective a lot. It is very sympathetic to the Humean outlook, and I would also like to think that he could easily be convinced of the kind of game-theoretic take I am sketching. There is, however, a notable difference between the two perspectives. The expressivist/quasi-realist approach starts from moral claims as attitudes first and then asks how those attitudes can be organised into shared norms. The truth-apt aspect of moral claims then has to be reconstructed, given that attitudes are not truth-apt per se.
My own view is different here. Once you build a positive theory of morality based on moral norms as equilibria of games, the truth-apt behaviour of moral claims comes quite naturally: within a given equilibrium moral system, claims can come out true or false relative to the rules of that system. The fact that we have strong feelings/attitudes about moral claims comes in addition to this truth-apt aspect. If I say “you can’t do that” in a board game, I can be right and prove it to you by showing you the rulebook, and I can also be angry at you for having taken liberties with the rules. My understanding is that expressivists put things the other way round: they start from the attitudes and then explain how rule-like patterns and truth-talk emerge from those attitudes. Having said that, I agree the difference here is smaller than the differences with many other positions, and quasi-realists are probably among the philosophers most amenable to the kind of perspective I develop in the post. The distance between my view and quasi-realism (at least in Blackburn’s version, which I know best) may be quite small.
To be perfectly precise about how I see things, I should clarify that talking about the truth-apt nature of moral claims relative to a moral code is a description of an idealised situation where the moral code is transparent to all. In practice, a “moral code” is embodied in messy, imperfectly shared expectations. So when someone says “X is wrong”, that person is often making a bid for X to be recognised as wrong on the basis of the shared common ground with other speakers. The end result might be to amend or nudge the shared understanding of that moral system. That is part of how cultural evolution takes place. Common-law systems, which use precedent rulings as a basis for future ones, are an excellent example of this logic.
These views are not entirely trivial and connect to questions about what we do when we discuss the truth or falsity of a claim. Here, my view is consistent: we do so using social norms that are equilibria of argumentative games. The whole picture is, in my view, coherent, but it is somewhat rich. I have an upcoming paper on that question, so I am not planning to go too deep into these issues early in this series of posts.
Hear, hear: "I'm sympathetic both to expressivism and to the substantive picture you lay out here, and it's not at all clear to me that there's any tension in adopting expressivism as your basic account of what we're doing when we endorse fundamental moral principles, but also endorsing the game theory/social equilibria story as a causal/historical account of *which* fundamental moral principles we find attractive. I don't see any conflict."
Makes me think of the proximate-ultimate distinction.
Is there an official name for what I gave the makeshift label of "naturalistic moral pluralism" I described in my comment? Boiled down, the idea is that the persistence of a diversity of moral intuitions, stances or what have you, may be strongly instrumental in preventing collapse to runaway selection in any one direction. I would posit this is compatible with agnosticism and ignosticism about the "realism" of moral truths and/or our access to them.
I don't think exactly? There are certainly positions out there called "pluralism", eg Isaiah Berlin's, that tend to be skeptical of neat, systematic moral theories that boil ethics down to one fundamental principle (eg, utilitarianism, or kantianism), whose supporters I suspect would tend to be sympathetic to the position you're describing.
So I think that the best place to go if you are attracted to Humean constructivism is not modern game theory, but in fact, another 18th century source: Adam Smith's Theory of Moral Sentiments.
Hume is the OG, but Smith is the genius in the next generation who works it all out fully and in the most convincing way. For me, the TMS is the single greatest work in the history of moral philosophy. But if you think Hume is unfairly neglected, well...!
There are no real shortcuts into TMS, and it is a challenging book. But my God, it's rewarding. However, if I was to try and convince you, perhaps you could start here? :)
Hi Paul, I find Smith’s insights impressive (and have read the TMS). I think your statement nonetheless underestimates what game theory can deliver on the issue of morality. Here I am rather bullish, and the series I am writing will try to convince readers like you.
Thank you for writing this so clearly, especially for the flow charts that label the positions, which help those new to the field see which term means which position. To me, a moral realist should be (ought?) someone who thinks morals are a description of what a given society thinks good - in the sense of being real about what happens in practice. I enjoy that in philosophy it is completely the other way round.
I do worry about ‘murder’. As someone who has learnt the older languages the term comes from, murder was quite specifically a wrong kind of killing, whereas a slaying could be justified as a right kind. Which killing is which is, to me, a social ordering decision. Hence ‘murder is wrong’ is somewhat proof by definition: if there is a murder, it has to be wrong. As a historian, my observation is the array of different societal views about what counts as a justified or unjustified killing. The statement ‘killing is wrong’ sounds like a clear moral statement, but it is one that no society I know of has actually endorsed. And I say that limiting killing to humans only, not the rest of the ecological world.
"However, moral realists have to – at some point – smuggle in an ought when they argue that some natural properties – like well-being – count as morally important (e.g. “have value”)."
You seem to be doing this yourself: I see no meaningful distinction made here between the Theory of the Good and the Theory of the Seemly if what is Seemly is explicitly and implicitly tied directly to a rather consequentialist estimation of the expected outcome of social cooperation. This strikes me as "Social Cooperation is Good" as a Value. Even if you argue that the proximate mechanism is something more like an intuitive cost/benefit calculation at the individual level that merely contingently favors social cooperation as a generally reliable means to the end of personal benefit (reputation and resources?), that merely seems to bring you back to the Harris-style position of asserting that each individual may consider their own human flourishing a moral Good and then extend that to collective human flourishing as Good.
I don't quite know whether to call this utilitarian or pragmatist or what else, but it's strangely consequentialist for something that seems curiously muted about answering the most fundamental moral questions, "What ought I do?" and "Why?". It's unclear on reading this whether your answer is essentially a Social Darwinism of "you ought to do whatever gives you the best chance of successfully propagating your genes," with "Why?" treated as an essentially meaningless question regarding that ought. Seriously, what is the optimal end state within your proposed moral theory here? Game Theory can optimize strategies for obtaining particular outcomes, but Game Theory itself cannot determine which outcomes are to be considered good or bad.
Hi Steven, I see how you could read my discussion of cooperation as implying that it is good in an absolute way. But the meaning of my statements is more down-to-earth. If you and I can gain from cooperating, it is good from our own point of view. And we might want to make it happen. It is not good in an external way, independent of our views and interests. I am not saying "Social Cooperation is Good as a Value." Rather, I am saying "Social cooperation leads to outcomes we find desirable."
That's not really a moral system then, it's merely substituting saying 'good' for something like 'profitable' or even just 'desired', which seems to put it back in the 'yay / boo' school of thought, essentially just 'It is Good to satisfy my desires'. It then seems like nothing more than hedonism with a layer of long term pragmatism on it at that point, almost 'Effective Selfishness' (like 'Effective Altruism' but without even the pretense of any moral obligation to others).
Game Theory must optimize for a particular outcome, but cannot determine in itself what that outcome ought to be. For you to call this morality, it needs to provide an "ought" somehow, and I still haven't seen you present a solution to the Naturalist Fallacy by explaining how an is can provide an ought.
I read the debate between Steven and Milosz in its entirety with keen interest as I think it gets to the crux of my problem with this series of essays *so far*. (I highlight the “so far” part because Lionel has promised he will tackle these issues in future posts and I will take him at his word.) I am no philosopher so I don’t think I’m adding anything substantive to that part of the discussion but, as far as I can tell, these posts have *so far* been about DE-constructing moral realism and moral systems based on “sky-hooks.” I have not yet detected any construction or RE-construction (to borrow from Milosz’s terminology). Our moral systems may be castles built on sand, but my assumption is that Lionel still thinks we need castles. In his initial post he claimed he was going to suggest a way forward to deal with the complexity of our time (mega-cities, etc.). So he is presumably at some point going to construct (reconstruct?) a new moral system not beholden to “sky-hooks,” yes, or am I wrong in this presumption? (To use Steven’s terminology, he is at some point going to slip in an “ought,” right?)
His chart theoretically differentiates his theory here from asserting that all moral claims are either Impossible or False, therefore his theory must support actually making moral claims and resolving at least some of them to true, as a distinguishable category of claims and resolution method separate from other domains. He must therefore be able to explain how his moral constructivism can make a claim that something is morally 'good', in a true/false sense, rather than merely a calculus of risk/reward (domain of economics rather than morality) or psychological sense/description of 'positive affect / negative affect' (Emotivism). I am not seeing that he has done so.
I do not understand where the confusion is. I suspect that you might expect more from this theory than it tries to do. Constructivism explains moral claims in terms of the rules of the game, as it is actually played in the given society. You are correct that it's much deeper and more complicated than mere risk/reward or simple affect. But it's still a part of antirealism. Those claims do not have a validity outside of the rules of the game. So you cannot take a claim made by a moral constructivist, assess it in the language used by realists - "but is it really good" - and then show a problem for constructivists. Binmore claims that there's no qualitative difference between 'you shall not murder' and who goes first when we meet at the door; there's only a difference of degree. But it's still 'true' that we should let the people exiting go first.
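Binmore's doorway example can be made concrete as a tiny coordination game. A minimal sketch, with payoff numbers that are my own illustrative assumptions rather than anything from Binmore:

```python
# Toy coordination game for the doorway example.
# Strategies: "yield" (let the other pass first) or "go".
# Illustrative payoffs: coordinating on a convention beats colliding.
payoffs = {
    ("yield", "go"): (1, 2),    # convention A: the other goes first
    ("go", "yield"): (2, 1),    # convention B: I go first
    ("go", "go"): (0, 0),       # collision in the doorway
    ("yield", "yield"): (0, 0), # both wait awkwardly
}

strategies = ["yield", "go"]

def is_nash(s1, s2):
    """A profile is a Nash equilibrium if neither player gains by deviating."""
    u1, u2 = payoffs[(s1, s2)]
    best1 = all(payoffs[(d, s2)][0] <= u1 for d in strategies)
    best2 = all(payoffs[(s1, d)][1] <= u2 for d in strategies)
    return best1 and best2

equilibria = [(a, b) for a in strategies for b in strategies if is_nash(a, b)]
print(equilibria)  # → [('yield', 'go'), ('go', 'yield')]
```

Both conventions come out as equilibria; which one is "true" in a given society is just the one that society has settled on.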
For it to be 'constructed' it must be constructed FROM something, which is not clearly explicated here as premise or warrants. I'm clear that an antirealist rejects any universal morality as that starting point, but they must still HAVE a starting point and logic chain from that starting point to their end point, and for that end point to count as a 'moral' claim it must be distinguishable from claims that are NOT 'moral claims'. If it cannot be distinguished as belonging to a separate category, then it's an empty set, not belonging to the category of 'Morality' at all, and therefore does not support his claim that morality can be constructed.
He's claimed that his constructivism is distinct from moral relativism that asserts that good and evil are merely whatever each culture arbitrarily says they are, but I'm not seeing how appealing to game theory can successfully replace that appeal to authority. Appeal to Consensus at least provides a value hierarchy that arguably answers 'ought' questions at the first step, even if it breaks down when you inquire more deeply, but appeal to game theory doesn't even get you that far. Game Theory requires those values as inputs, it does not produce them as outputs. For someone who started this series by protesting against 'skyhooks' as the basis of conventional morality, he seems to have committed that same practice of unsupported argument himself.
Hi Lionel, this is a great post and I agree with your general position. One point that I would like however to see addressed in the follow ups is this: in Binmore's books, in addition to the general methodological principles of moral naturalism, his main substantial contribution is a universal argument for Rawlsian ethics based on a slightly extended version of axiomatic bargaining theory. However, I would expect this methodology to lead to a more flexible and varied set of moral arrangements. Anthropologists have found for example that the ultimatum game is played very differently in different societies, and this variety reflects differences in ecological and social conditions, as one would expect based on the principles that you discussed so clearly in your post. I find Binmore's universal argument for Rawlsian egalitarianism quite surprising as a corollary to his moral naturalism (although I find it interesting from the point of view of normative ethics).
You’re entirely right that Binmore’s conclusion lands on something close to Rawlsian egalitarianism. But he also does exactly what you’re asking for: he explains the diversity of fairness norms across times and places.
The key is that the “egalitarian” solution he lands on is *conditional on bargaining power*. In societies where bargaining power is roughly equal, that procedure yields something that looks very much like Rawls’s prescriptions. In societies where bargaining power is very unequal, the same “egalitarian” bargaining solution can lead to quite unequal allocations. It’s definitely something I’ll discuss in the series.
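To make that point concrete, here is a minimal numerical sketch using the generalized (asymmetric) Nash bargaining solution over a unit pie with disagreement payoffs of zero. The numbers and the helper name `nash_share` are illustrative assumptions, not Binmore's actual model:

```python
# Generalized (asymmetric) Nash bargaining over a pie of size 1,
# disagreement payoffs (0, 0). alpha encodes player 1's bargaining power.
# The solution maximizes x**alpha * (1 - x)**(1 - alpha); analytically x = alpha.
def nash_share(alpha, grid=100001):
    """Grid search for player 1's share under bargaining power alpha."""
    best_x, best_val = 0.0, -1.0
    for i in range(1, grid - 1):
        x = i / (grid - 1)
        val = x**alpha * (1 - x)**(1 - alpha)
        if val > best_val:
            best_x, best_val = x, val
    return best_x

print(round(nash_share(0.5), 3))  # equal power  → 0.5 (even split)
print(round(nash_share(0.8), 3))  # skewed power → 0.8 (unequal split)
```

The same "egalitarian" solution concept delivers an even split only when bargaining power is even; shift the power parameter and the very same procedure ratifies an unequal allocation.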
I'm no philosopher, but does this not amount to a kind of moral relativism that could play out over time? This theory would seem to have to accept that as societal and cultural norms shift, the rules of the game of life shift with them, and thus our concepts of what is morally right shift too. Imagine, if you possibly can, that a political philosophy takes hold whereby there is no respect for the rights of others, where force and power is used to override the rule of law, and where pro-social bargaining is for wimps. If such a philosophy were to endure, so that these became cultural and societal norms, would that not mean that the actions typical of such a society (arbitrary violence, persecution of opponents, dispossessing of minorities) would be considered morally acceptable?
On the “Hobbes nightmare” you describe: from a game-theoretic point of view, a stable social order based purely on arbitrary force and fear is very unlikely. Hobbes thought you could have a durable equilibrium of mutual terror. Binmore (and I) think he largely got that wrong. In the long run, people trapped in that kind of world have very strong incentives to carve out islands of trust and cooperation, so you’d expect other, more cooperative norms to keep re-emerging.
That said, you’ve put your finger on the hard bullet that Binmore and I are prepared to bite: there is no external, view-from-nowhere moral standpoint from which we can declare, in an objective sense, that some past society’s practices were wrong. What we can do is judge them from our current standpoint. So we can say: “human sacrifice in that society was an awful thing”--meaning “given the moral norms in our society, I really don’t like it”--but not “it is an objective moral truth, independent of any standpoint, that those sacrifices were wrong.”
To be clear, I would be very happy to have such an objective stance if I thought it made sense. I just don’t think it does. The next post in the series will address that issue.
Thanks for your response. When I studied jurisprudence a long time ago my tutor was convinced (and convinced most of us) that Gewirth’s principle of generic consistency did provide an objective basis for a moral underpinning of legal rules.
Gewirth simplified: as a human being I have agency that allows me to act with purpose. I need the basic goods of freedom and well-being to do so. Therefore I am rationally committed to claiming those rights for myself. And, in order that I may be afforded those rights, I am rationally committed to respecting and defending those rights for others. From this the principle is: Every agent must act in accordance with their own and all other agents’ generic rights to freedom and well-being.
Thus, the ‘ought’ is derived from and is intrinsic to our human status as purposive agents.
It does sound very appealing, and I can see why it persuaded people. To my ear, though, it follows a broadly Kantian line: start from the fact that we are purposive agents, then use a principle of consistency to move from “I must claim these rights for myself” to “I must recognise the same rights for everyone else”.
From my naturalistic perspective, I see two issues. Either there is an unconditional “ought” being smuggled in at some point—that I must treat my agency, and the agency of others, as morally binding—or the argument is really about what is needed for a “contract” between agents to work. But contracts that work in practice are not necessarily symmetric or egalitarian. Many stable arrangements in history have been quite unequal. So I like the conclusion as a moral ideal, but I’m not convinced it follows just from agency and consistency without one of these extra assumptions. I’ll write a post later that unpacks this criticism of Kant’s argument.
I am no philosopher either. My answer though is that if arbitrary violence is the accepted norm, then yes, in that society, violence would be morally acceptable, pretty much by definition.
The reason that this seems so counterintuitive to us is that we evolved for over a million years in small forager groups of only moderate kinship, and those of us that thrived did so by avoiding arbitrary violence and by establishing a reputation as a loyal and dependable person who followed norms which penalized sociopathic behavior. People who were not disturbed by your scenario (we now call them sociopaths) were less prone to surviving and thriving than those that were disturbed.
Fair point except that the people we now call sociopaths are in charge of most of the most powerful countries on earth. This suggests those sociopath genes were more successful than most.
The book “Political Ponerology” is on my reading list and explores exactly this issue: that those with sociopathic genes tend to rise in hierarchical structures (business, the corporate ladder, politics…). Might be worth reading.
I think this Binmorean-Humean constructivism is a type of minimal moral realism. The institutional facts of the game of social cooperation lead to cognitivism (first and second theses below), rejecting non-cognitivism and error theory.
Like Dennett's real patterns are what he called mild realism, rejecting Fodor's stronger realism and Rorty's more conventionalist view regarding physical ontology. The real pattern emerges at higher levels, like with the "gliders" of the Conwayian Game of Life, as it becomes a stable, predictive pattern. The rules and penalties of the game of social cooperation become relatively stable, predictive prescriptions and expectations for all moral players of a community over time; they track something that isn't noise in the real world.
"Ethical subjectivism is a form of moral anti-realism that denies the "metaphysical thesis" of moral realism (the claim that moral truths are ordinary facts about the world).[7] Instead ethical subjectivism claims that moral truths are based on the mental states of individuals or groups of people. The moral realist is committed to some version of the following three statements:[8][9]
1) The semantic thesis: Moral statements have meaning, they express propositions, or are the kind of things that can be true or false.
2) The alethic thesis: Some moral propositions are true.
3) The metaphysical thesis: The metaphysical status of moral facts is robust and ordinary, not importantly different from other facts about the world.
Moral anti-realism is the denial of at least one of these claims.[5] Ethical subjectivists deny the third claim, instead arguing that moral facts are not metaphysically ordinary, but rather dependent on mental states (individuals' beliefs about what is right and wrong).[3] Moral non-cognitivists deny the first claim, while error theorists deny the second claim.[10]
There is some debate as to whether moral realism should continue to require the metaphysical thesis, and therefore if ethical subjectivists should be considered moral realists.[11] Geoffrey Sayre-McCord argues that moral realism should not require mind-independence since there are morally relevant psychological facts which are necessarily mind-dependent, which would make ethical subjectivism a version of moral realism. This has led to a distinction being made between robust moral realism (which requires all three of the theses) and minimal moral realism (which requires only the first two, and is therefore compatible with ethical subjectivism).[12]"
I’m basically with you on this. On your three-thesis picture I’m happy to keep (1) and (2), and drop (3): moral claims are truth-apt and some are true, but what they track are mind- and practice-dependent facts about equilibria and social contracts, not stance-independent moral properties. If you want to call that “minimal moral realism”, I’m fine with the label--the target of the series is the robust, “out there” version.
Fascinating work. Not sure I fully agree, but don’t disagree either. From a neuroscience perspective, the notion that the various faculties required to reach a moral determination are complex, and crucially, can weigh up different moral priorities to try to come up with the best, morally conducive answer, holds for me. In this way, the sense that each “moral question” must work through this weighing up makes sense. Indeed, humans are capable of balancing personal values with family values and societal norms in the rostral PFC. An incredible feat!
But whether there are immutable moral truths or not, I’m not sure. We are likely not pre-programmed with morals (assuming we mean higher considerations). However, if we rely on the continuation of the human species at a level that can engage in moral questions and higher thought, then we must treat some social norms as immutable. That is, theft from someone of equal wealth during times of peace and prosperity would be an absolute moral wrong. If it were not, then the risk is compromising social cohesion and therefore, eventually, the ability for humans to be moral or to even attain general intelligence. If the end point is to be able to be moralistic (or weigh moral decisions), then the basic morality of society must be accounted for and may need to be absolute at times.
Hi Dan, I’m with you that some fundamental structures of human interaction might make some norms (and our preference for them) universal for creatures like us. I think we need, however, to be careful not to slide from that into the idea that these rules are objective truths “out there” with a built-in normative force. Even if they are universal, they are still conditional oughts: if we want to do well in the games we play, we need to respect them. They are not unconditional oughts.
One way to see the difference is with Glaucon’s thought experiment about the ring of invisibility. Suppose you had a device that allowed you to cheat without ever being detected. Would you still have a moral duty not to cheat? The Humean perspective, perhaps sadly, says there is no extra, external duty over and above the social story. That doesn’t mean the Humean condones cheating. Cheating has to be prevented for society to work (something everyone wants), so cheaters need to be punished, and social institutions need to be designed so that cheating is not advantageous. But that is still an if–then story, not an unconditional command written into the fabric of the universe.
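The if–then character of that story can be sketched with a toy repeated prisoner's dilemma: honesty is an equilibrium only when cheating is likely enough to be detected and punished. The payoffs, the detection probability `p`, and the grim-trigger convention below are all illustrative assumptions, not a claim about Glaucon's text:

```python
# Repeated prisoner's-dilemma sketch of the "ring of invisibility" point.
# Illustrative payoffs: T (cheat against a cooperator), R (mutual cooperation),
# P (mutual punishment). delta = discount factor, p = detection probability.
# Under a grim-trigger convention, honesty pays iff the one-shot gain from
# cheating is outweighed by the expected loss of future cooperation.
T, R, P = 5.0, 3.0, 1.0

def honesty_pays(p, delta=0.9):
    one_shot_gain = T - R
    future_loss = p * (delta / (1 - delta)) * (R - P)
    return one_shot_gain <= future_loss

print(honesty_pays(p=0.5))  # → True: detectable cheating, honesty is an equilibrium
print(honesty_pays(p=0.0))  # → False: Glaucon's ring, cheating pays
```

With the ring (detection probability zero), the future-loss term vanishes and cheating pays, which is exactly why institutions have to be designed to keep that probability high.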
Thank you. That's very clear and helpful. I'm just a provincial lawyer, but I've read and thought a lot about the topic over the course of my long life, and I've settled on a Popperian solution that I explain in the paper cited below. While the paper focuses on laws, the analysis applies to moral precepts as well. Both are forms of "objective practical knowledge," and in the paper I suggest that "those of us who work in practical disciplines can learn from our mistakes in much the same way that scientists learn from theirs—by treating all knowledge as conjectural and by using reason and experience critically to help us discover and eliminate errors."
Thanks Jon, That’s a really interesting way to put it. The idea of laws and moral precepts as objective practical knowledge that grows by conjecture and error-correction fits very naturally with the Humean/game-theoretic picture I’m trying to develop: norms as fallible “designs” for social cooperation that we test in practice and revise when they fail, rather than as principles we can get right once and for all from first philosophy. Your Popperian contrast between direct design and evolutionary design for legal institutions feels very close to what I want to say about moral systems as well.
This is all very helpful. I now see what you mean about Humean constructivism; the actual working out of why one or another moral system might be the one we construct is given much less emphasis than the defense of the abstract position against metaethical alternatives.
And on the contrast or not with expressivism, I also think I have a clearer sense of what's going on. Everyone should agree that claims about what's in some rulebook are straightforwardly true in a boring, descriptive sense, no expressivism required. Moral claims can be interpreted as descriptive claims about what's true in some rulebook we're collectively presupposing is the relevant one to discuss (your emphasis), or they can be interpreted as exhortations to adopt this or that rulebook (the expressivist emphasis). In fact, our use of moral claims is messy enough that both pictures strike me as fruitful idealizations that illuminate different aspects of our practice.
Great article. I'm very amenable to this approach myself.
My only minor quibble is that both J.L. Mackie's reform proposal for moral language and Gilbert Harman's conventionalism seem to me quite close to your own approach, which someone might not expect from the flowchart.
Yes, I thought about that when I did the flowchart. I considered adding a cross-path from Mackie in the graph. Instead I just pointed out the link in the text.
Harman’s position is also close in spirit, but I think we lose something important if we don’t have game theory as a backbone. In particular, the kind of cultural relativism that is popular in some political circles can, I think, lead to misguided policy prescriptions (I’ll discuss that in a post later on).
I don't know if Harman ever referenced game theory, but in "Moral Relativism Defended," he writes:
"Indeed, it is essential to the proposed explanation of this aspect of our moral views to suppose that the relevant moral understanding is thus the result of *bargaining.* It is necessary to suppose that, in order to further our interests, we form certain conditional intentions, hoping others will do the same. The others, who have different interests, will form somewhat different conditional intentions. After implicit bargaining, some form of compromise is reached."
I agree that Harman's position has a lot of commonalities with Humean constructivism. He just does not follow the positive theory of how such solutions emerge, which leaves his theory more open-ended than Binmore's. In fairness, my "radical cultural relativism" label has bundled him with positions that are often found outside of metaethics (e.g. in anthropology).
While I am very much in agreement with the reasoning as presented, there is a prescriptive leap that may need further examination.
"We don't have good reasons to assume there are moral truths 'out there'."
Agreed. However, that is not the same as "abandoning" moral realism. Do we have good reasons to also reject or abandon moral agnosticism or ignosticism? Consider the following, and how it strikes as a prescription:
"However grandiose the theories of the Right and the Good might be, they will only lead us to err endlessly in conceptual mazes if they are misguided. From that perspective, having the epistemic courage to abandon these absolute views and accept the social nature of morality opens the way for us to attain a greater clarity on what morality is and how it works."
This is a very large claim. Consider an alternative we might call "naturalistic moral pluralism," in which the diversity of moral stances, or sensitivities to them, might itself be a makeshift solution to the largest-scale repeated coordination games. Might the pluralism be the currently afforded Nash equilibrium that prevents dogmatic, runaway collapse in any particular direction?
Even if everyone recognized the game being played, would it not just open up a metagame in which what is debated is the correct safeguards or alterations to those rules? Is a full equilibration in this space, or in any other, recognizably better than a Nash equilibrium?
There do exist many systems that maintain tolerable "randomness" to hedge against rigidity. Do we have good reasons to assume "moral pluralism" is not one of those systems? Do we have good reasons to assume that "grandiosity" does not enable access, for certain cognitive or social types, or for those who are situationally impaired, to participate in "ethics games" and societal design?
To me, this "naturalistic moral pluralism" would be a recursive extension of what you have developed here, not an indictment of it. But I feel it does resist prescriptive "abandonment" of moral realism, at least without further development to address options of agnosticism, ignosticism and pluralism.
First, I agree there can be a large-scale “game” across communities with different moral systems--the international order is the obvious example, where states with very different internal norms still have to coordinate on trade, war/peace, treaties, etc. But for the diversity of moral systems to be a functional feature in the strong sense you suggest, we’d need a fairly specific selection story: not just competition between individual communities, but meta-communities of communities, where the ones that survive and prosper are precisely those that contain a certain diversity of internal moral codes. I don’t see much historical evidence of that kind of selection structure. In particular, most of the time, large polities seem to have imposed, or generated, some internal coherence rather than moral pluralism between sub-groups.
Second, I completely agree that there is a “metagame”. As Bourdieu stressed, the rules of the game are themselves at stake in the struggles between agents. That is fully compatible with a Humean constructivist perspective, especially in Binmore’s version, where bargaining over time plays a central role in the evolution of fairness norms.
Well-taken, but there will always be a metagame of "preparedness for the next game, without knowledge of its particular rules." One limitation, which you are also invoking, is that we can only understand the fuller game in retrospect, once we recognize the pattern of selection that has already happened. That is, unless we can inoculate ourselves against an underlying pattern.
What I am saying is that, much like the hedging implicit in immune responses, trading on specificity and sensitivity in pattern recognition and response, the outer fringes of moral reactivity may operate as if by the smoke detector principle, with others more inward acting as canaries in the coalmine against manipulation rather than truth per se. Treating "moral realism" may just be treating the symptoms of strategic under/overfitting. However, I do suspect that your trajectory will be helpful precisely because it addresses the underlying, non-productive tensions also. I just mean to say that any prescript of abandoning moral realism is superfluous to your task and insights.
The most obvious objection against such stable-social-equilibria accounts is the plausible conceivability of stable equilibria that are also morally repugnant. This is the same kind of objection raised against utilitarian theories on the grounds that some option might be utility maximizing but also morally repugnant.
It seems entirely possible that under the right empirical conditions it might be the case that some practice of enslavement, torture, and utter domination of a particular group could also represent a stable equilibrium.
If there are such socially stable equilibria that are also morally repugnant, then being a stable equilibrium cannot be identical with being morally good.
(If one attempts some kind of Rawlsian idealization move, then one will have to provide some account of why certain conditions of idealization are required/justified in order to exclude certain equilibria that are in fact stable given things like the psychological constitution of the relevant agents.)
---
Shouldn’t the Failure of Divine Command/Stance Theories Imply the Failure of All Command/Stance Theories of Morality?
It seems likely that realists and anti-realists are equally mystified by each other’s accounts of moral (and epistemic) normativity. Personally, it never really made sense to me how the Divine Command theorist thought the mere fact that God commanded a thing could suffice to MAKE that thing moral. Nor do I see how any other Human Command theory or Human Stance theory could fare any better.
From a Humean–Binmore point of view, there is no external stance from which we can say “this equilibrium is objectively repugnant” in the realist sense. There are just different equilibria. The fact that we find some of them horrible is not a contradiction of the theory. It is exactly our current moral code and preferences speaking.
What we can do is two things:
- Judge from our own standpoint: from where we stand, a slave equilibrium or a terror equilibrium is awful, and we are fully entitled to say so and to try to move away from it.
- Assess them functionally: some equilibria do a much better job at sustaining cooperation and avoiding misery for most players. Others are brittle or keep large groups permanently at the bottom. That gives us good reasons, again from our standpoint, to favour some patterns over others.
On the second point: I am not saying that any “command” or “stance” magically makes things moral in an absolute sense. The only “oughts” I recognise are conditional: If we want social arrangements that leave people generally better off than the realistic alternatives, then we have reason to favour some equilibria over others and to resist others.
This seems more a historical question than a philosophical one, so I feel a tad more comfortable responding to this comment. General “moral” repugnance of slavery is a relatively recent phenomenon in human history (more recent even than the first global ban on trading humans as slaves, which came in 1807). Since there are many societies around the globe that still practice outright chattel and sexual slavery, I would also conjecture that the repugnance is not universally shared, even today. From a “functional” perspective slavery worked (and apparently still works in narrower regional contexts) to keep societies in perfect equilibrium. There was (and is) no functional reason to end it from a societal equilibrium perspective. Similarly, the most stable societies historically (at least in terms of internal stability) have been those that keep “…large groups permanently at the bottom.” From a functional perspective those societies are basically perfect. Are we therefore arguing that we should be circling back to Middle Kingdom Egypt or Qin China? (Also, who is the “we” here? Does this functional system of moral philosophy work like Plato’s Republic? Are the people in this comments section the philosopher kings/pharaohs who will lead humanity into this return to a more functional past?)
> The most obvious objection against such stable-social-equilibria accounts is the plausible conceivability of stable equilibria that are also morally repugnant.
Hume said, in the conclusions of his Enquiry, that “the notion of morals implies some sentiment, so universal and comprehensive as to extend to all mankind, and render the actions and conduct, even of persons the most remote, an object of applause or censure.” This universalism is very different from the notion of a morality which flows from the rules of whichever particular game a society has chosen to play.
Hi Charles. I wrote in another comment “some fundamental structures of human interaction might make some norms (and our preference for them) universal for creatures like us. I think we need, however, to be careful not to slide from that into the idea that these rules are objective truths “out there” with a built-in normative force. Even if they are universal, they are still conditional oughts: if we want to do well in the games we play, we need to respect them. They are not unconditional oughts.”
It is perfectly in line with what I am presenting to say that some general considerations about fairness and morality are likely universal. Binmore for instance points to the Golden Rule as a rule for which we likely have an innate preference.
This is a great article, and I'm entirely convinced by your account of Humean constructivism. I have two problems with it, both of which are visible in this sentence: "Instead, [Humean constructivism] posits that our moral code is a human creation that serves to foster social cooperation."
The first problem is your use of the term "moral." "Morality" has traditionally referred to the kind of objectivist, stance-independent ethical theory that you are criticizing here. The term has connotations that are inconsistent with your viewpoint. So I would suggest finding another word for the type of theory that you're formulating. "Ethics" works.
The second problem is that fostering social cooperation is only one of many functions of an ethical code. The ethical code that I live by as a contemporary American demands that I treat people decently even if I will never have any cooperative relationship with them. The function of ethics is to regulate our social life in general, not merely to enable cooperative relationships.
In response to "We ought to respect moral rules because they are the rules of the social games we play," it seems to me that the word "ought" is one with connotations of absolute, transcendent rules, whereas the soccer analogy is likely a more accurate one. (Disclaimer: these thoughts are just conjecture off the top of my head, not carefully reasoned moral philosophy). I tend to look at the rules of the social game of life and other such rules in a descriptive manner, where in my mind "if you want to win at the social game, you ought to follow rule x" = "following rule x leads to desired result y." It is then the case that I follow moral rules for pragmatic reasons, just as I go to the gym regularly because I want to have the social and biological benefits of large muscles and wish to avoid physically and cognitively falling apart in the homestretch of my life. I will not beat children, because beating children will have social and legal repercussions, but moreover (even without those strictures) because beating children feels atrocious to me due to my moral intuitions. A counterargument I often encounter when expressing this view is that "if you're a psychopath who feels no moral guilt, want to murder, and can do so knowing you'll fully get away with it, what's stopping you?"
Hi Daniel,
Thanks a lot for the kind appraisal!
Here are my answers to the points you raise.
Expressivism / quasi-realism. I did include a short discussion of quasi-realism in the post (footnote 7). I like Blackburn’s perspective a lot. It is very sympathetic to the Humean outlook, and I would also like to think that he could easily be convinced of the kind of game-theoretic take I am sketching. There is, however, a genuine difference between the two perspectives. The expressivist/quasi-realist approach starts from moral claims as attitudes first and then asks how those attitudes can be organised into shared norms. The truth-apt aspect of moral claims then has to be reconstructed, given that attitudes are not truth-apt per se.
My own view is different here. Once you build a positive theory of morality based on moral norms as equilibria of games, the truth-apt behaviour of moral claims comes quite naturally: within a given equilibrium moral system, claims can come out true or false relative to the rules of that system. The fact that we have strong feelings/attitudes about moral claims comes in addition to this truth-apt aspect. If I say “you can’t do that” in a board game, I can be right and prove it to you by showing you the rulebook, and I can also be angry at you for having taken liberties with the rules. My understanding is that expressivists put things the other way round: they start from the attitudes and then explain how rule-like patterns and truth-talk emerge from those attitudes. Having said that, I agree the difference here is smaller than the differences with many other positions, and quasi-realists are probably among the philosophers most amenable to the kind of perspective I develop in the post. The distance between my view and quasi-realism (at least in Blackburn’s version, which I know best) may be quite small.
To be perfectly precise about how I see things, I should clarify that talking about the truth-apt nature of moral claims relative to a moral code is a description of an idealised situation where the moral code is transparent to all. In practice, a “moral code” is embodied in messy, imperfectly shared expectations. So when someone says “X is wrong”, that person is often making a bid for X to be recognised as wrong on the basis of the shared common ground with other speakers. The end result might be to amend or nudge the shared understanding of that moral system. That is part of how cultural evolution takes place. Common-law systems, which use precedent rulings as a basis for future ones, are an excellent example of this logic.
These views are not entirely trivial and connect to questions about what we do when we discuss the truth or falsity of a claim. Here, my view is consistent: we do so using social norms that are equilibria of argumentative games. The whole picture is, in my view, coherent, but it is somewhat rich. I have an upcoming paper on that question, so I am not planning to go too deep into these issues early in this series of posts.
Hear, hear: "I'm sympathetic both to expressivism and to the substantive picture you lay out here, and it's not at all clear to me that there's any tension in adopting expressivism as your basic account of what we're doing when we endorse fundamental moral principles, but also endorsing the game theory/social equilibria story as a causal/historical account of *which* fundamental moral principles we find attractive. I don't see any conflict."
Makes me think of the proximate-ultimate distinction.
Is there an official name for what I gave the makeshift label of "naturalistic moral pluralism" I described in my comment? Boiled down, the idea is that the persistence of a diversity of moral intuitions, stances or what have you, may be strongly instrumental in preventing collapse to runaway selection in any one direction. I would posit this is compatible with agnosticism and ignosticism about the "realism" of moral truths and/or our access to them.
I don't think exactly? There are certainly positions out there called "pluralism", eg Isaiah Berlin's, that tend to be skeptical of neat, systematic moral theories that boil ethics down to one fundamental principle (eg, utilitarianism, or kantianism), whose supporters I suspect would tend to be sympathetic to the position you're describing.
So I think that the best place to go if you are attracted to Humean constructivism is not modern game theory, but in fact, another 18th century source: Adam Smith's Theory of Moral Sentiments.
Hume is the OG, but Smith is the genius in the next generation who works it all out fully and in the most convincing way. For me, the TMS is the single greatest work in the history of moral philosophy. But if you think Hume is unfairly neglected, well...!
There are no real shortcuts into TMS, and it is a challenging book. But my God, it's rewarding. However, if I was to try and convince you, perhaps you could start here? :)
https://www.paulsagar.com/_files/ugd/ec3ee6_daf5b69e4db3401f84b3c14228879856.pdf
Hi Paul, I find Smith’s insights impressive (and have read the TMS). I think your statement nonetheless underestimates what game theory has to deliver on the issue of morality. Here I am rather bullish, and the series I am writing will try to convince readers like you.
I wrote an article about Smith before:
https://www.optimallyirrational.com/p/adam-smith-revisited-beyond-the-invisible
Which is a roundabout way of me agreeing with what Daniel has already said, I suppose.
Couldn’t agree more - of Smith's two books, this is the interesting one. I quote it quite extensively in the essay on my Substack.
oh they are both interesting!
https://aeon.co/essays/we-should-look-closely-at-what-adam-smith-actually-believed
I should have said the more interesting. There's a lot of tedious stuff among the insights in W of N. Excellent essay at your link
You find the 70 page discussion of the historical price of silver in Cornwall tedious?!
Thank you for writing this so clearly, especially for the flow charts that label the positions, which help those new to the field know which term means which position. To me a moral realist should be (ought?) someone who thinks morals are a description of what a given society thinks good - in the sense of let's be real about what happens in practice. I enjoy that in philosophy it is completely the wrong way round.
I do worry about ‘murder’. As someone who has learnt the older languages the term comes from, murder was quite specifically a wrong kind of killing, whereas a slaying could be justified as a right kind. Which killing is which is, to me, a social ordering decision. Hence ‘murder is wrong’ is somewhat proof by definition: if there is a murder, it has to be wrong. As a historian, my observation is the array of different societal views about what would be a justified or unjustified killing. The statement ‘killing is wrong’ sounds like a clear moral statement, but one that no society I know of has actually endorsed. And I say that limiting killing to humans only, not the rest of the ecological world.
"However, moral realists have to – at some point – smuggle in an ought when they argue that some natural properties – like well-being – count as morally important (e.g. “have value”)."
You seem to be doing this yourself, I see no meaningful distinction made here between Theory of the Good and Theory of the Seemly if what is Seemly is explicitly and implicitly tied directly to a rather consequentialist estimation of expected outcome of social cooperation. This strikes me as "Social Cooperation is Good" as a Value. Even if you argue that the proximate is something more like an intuitive cost/benefit calculation at the individual level that merely contingently favors social cooperation as a generally reliable means to the end of personal benefit (reputation and resources?), that merely seems to bring you back to the Harris style position of asserting that each individual may consider their own human flourishing as a moral Good and then extend that to the collective human flourishing as Good.
I don't quite know whether to call this utilitarian or pragmatist or something else, but it's strangely consequentialist for something that seems curiously muted about answering the most fundamental moral questions: "What ought I do?" and "Why?". It's unclear on reading this whether your answer is a kind of social Darwinism - just "you ought to do whatever gives you the best chance of successfully propagating your genes" - with "Why?" treated as essentially a meaningless question regarding that ought. Seriously, what is the optimal end state within your proposed moral theory here? Game Theory can optimize strategies for obtaining particular outcomes, but Game Theory itself cannot determine what outcomes are to be considered good or bad.
Hi Steven, I see how you could read my discussion of cooperation as implying that it is good in an absolute way. But the meaning of my statements is more down-to-earth. If you and I can gain from cooperating, it is good from our own point of view. And we might want to make it happen. It is not good in an external way, independent of our views and interests. I am not saying "Social Cooperation is Good as a Value." Rather, I am saying "Social cooperation leads to outcomes we find desirable."
That's not really a moral system then, it's merely substituting saying 'good' for something like 'profitable' or even just 'desired', which seems to put it back in the 'yay / boo' school of thought, essentially just 'It is Good to satisfy my desires'. It then seems like nothing more than hedonism with a layer of long term pragmatism on it at that point, almost 'Effective Selfishness' (like 'Effective Altruism' but without even the pretense of any moral obligation to others).
Game Theory must optimize for a particular outcome, but cannot determine in itself what that outcome ought to be. For you to call this morality, it needs to provide an "ought" somehow, and I still haven't seen you present a solution to the Naturalistic Fallacy by explaining how an is can provide an ought.
I read the debate between Steven and Milosz in its entirety with keen interest as I think it gets to the crux of my problem with this series of essays *so far*. (I highlight the “so far” part because Lionel has promised he will tackle these issues in future posts and I will take him at his word.) I am no philosopher so I don’t think I’m adding anything substantive to that part of the discussion but, as far as I can tell, these posts have *so far* been about DE-constructing moral realism and moral systems based on “sky-hooks.” I have not yet detected any construction or RE-construction (to borrow from Milosz’s terminology). Our moral systems may be castles built on sand, but my assumption is that Lionel still thinks we need castles. In his initial post he claimed he was going to suggest a way forward to deal with the complexity of our time (mega-cities, etc.). So he is presumably at some point going to construct (reconstruct?) a new moral system not beholden to “sky-hooks,” yes, or am I wrong in this presumption? (To use Steven’s terminology, he is at some point going to slip in an “ought,” right?)
I think you are assuming a realist conception of morality. Binmore and all other antirealists will not give you one.
His chart theoretically differentiates his theory here from asserting that all moral claims are either Impossible or False, therefore his theory must support actually making moral claims and resolving at least some of them to true, as a distinguishable category of claims and resolution method separate from other domains. He must therefore be able to explain how his moral constructivism can make a claim that something is morally 'good', in a true/false sense, rather than merely a calculus of risk/reward (domain of economics rather than morality) or psychological sense/description of 'positive affect / negative affect' (Emotivism). I am not seeing that he has done so.
I do not understand where the confusion is. I suspect that you might expect more from this theory than it tries to deliver. Constructivism explains moral claims in terms of the rules of the game as it is actually played in the given society. You are correct that it's much deeper and more complicated than mere risk/reward or simple affect, but it's still a part of antirealism. Those claims do not have any validity outside of the rules of the game. So you cannot take a claim made by a moral constructivist, assess it in the language used by realists - "but is it really good" - and then show a problem for constructivists. Binmore claims that there's no qualitative difference between 'you shall not murder' and who goes first when we meet at the door; there's only a difference of degree. But it's still 'true' that we should let the people exiting go first.
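To make the door example concrete, here is a toy sketch (my own illustrative payoffs, not anything from the post or from Binmore): both "A goes first" and "B goes first" are Nash equilibria of the same coordination game, and nothing inside the payoffs singles out one convention as the "really correct" one.

```python
# Toy "door" coordination game (hypothetical payoffs of my own choosing):
# two players choose GO or WAIT; coordinating (one goes, one waits) pays
# each player 1, while colliding or both waiting pays 0.
GO, WAIT = 0, 1

# payoffs[(row action, col action)] = (row payoff, col payoff)
payoffs = {
    (GO, GO): (0, 0), (GO, WAIT): (1, 1),
    (WAIT, GO): (1, 1), (WAIT, WAIT): (0, 0),
}

def is_nash(a, b):
    """True if neither player gains by unilaterally deviating from (a, b)."""
    pa, pb = payoffs[(a, b)]
    return all(payoffs[(a2, b)][0] <= pa for a2 in (GO, WAIT)) and \
           all(payoffs[(a, b2)][1] <= pb for b2 in (GO, WAIT))

equilibria = [(a, b) for a in (GO, WAIT) for b in (GO, WAIT) if is_nash(a, b)]
print(equilibria)  # the two asymmetric conventions: [(0, 1), (1, 0)]
```

Both conventions come out as equilibria; which one a society lands on is settled by its history, not by the game itself.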
For it to be 'constructed' it must be constructed FROM something, which is not clearly explicated here as premise or warrants. I'm clear that an antirealist rejects any universal morality as that starting point, but they must still HAVE a starting point and logic chain from that starting point to their end point, and for that end point to count as a 'moral' claim it must be distinguishable from claims that are NOT 'moral claims'. If it cannot be distinguished as belonging to a separate category, then it's an empty set, not belonging to the category of 'Morality' at all, and therefore does not support his claim that morality can be constructed.
He's claimed that his constructivism is distinct from moral relativism that asserts that good and evil are merely whatever each culture arbitrarily says they are, but I'm not seeing how appealing to game theory can successfully replace that appeal to authority. Appeal to Consensus at least provides a value hierarchy that arguably answers 'ought' questions at the first step, even if it breaks down when you inquire more deeply, but appeal to game theory doesn't even get you that far. Game Theory requires those values as inputs, it does not produce them as outputs. For someone who started this series by protesting against 'skyhooks' as the basis of conventional morality, he seems to have committed that same practice of unsupported argument himself.
Hi Lionel, this is a great post and I agree with your general position. One point that I would however like to see addressed in the follow-ups is this: in Binmore's books, in addition to the general methodological principles of moral naturalism, his main substantial contribution is a universal argument for Rawlsian ethics based on a slightly extended version of axiomatic bargaining theory. However, I would expect this methodology to lead to a more flexible and varied set of moral arrangements. Anthropologists have found for example that the ultimatum game is played very differently in different societies, and this variety reflects differences in ecological and social conditions, as one would expect based on the principles that you discussed so clearly in your post. I find Binmore's universal argument for Rawlsian egalitarianism quite surprising as a corollary to his moral naturalism (although I find it interesting from the point of view of normative ethics).
Hi Marcos,
You’re entirely right that Binmore’s conclusion lands on something close to Rawlsian egalitarianism. But he also does exactly what you’re asking for: he explains the diversity of fairness norms across times and places.
The key is that the “egalitarian” solution he lands on is *conditional on bargaining power*. In societies where bargaining power is roughly equal, that procedure yields something that looks very much like Rawls’s prescriptions. In societies where bargaining power is very unequal, the same “egalitarian” bargaining solution can lead to quite unequal allocations. It’s definitely something I’ll discuss in the series.
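To illustrate the conditionality on bargaining power, here is a minimal numerical sketch (my own toy objective, not Binmore's full model): the asymmetric Nash bargaining solution over a unit pie, which maximises x^a (1-x)^(1-a), where a is player 1's bargaining power and disagreement payoffs are zero. The maximiser is x = a, so the allocation tracks bargaining power directly.

```python
def nash_share(power_a: float, grid: int = 10001) -> float:
    """Numerically maximise the asymmetric Nash product
    x**a * (1 - x)**(1 - a) over shares x in (0, 1).
    With zero disagreement points the maximiser is x = a."""
    best_x, best_v = 0.0, -1.0
    for i in range(1, grid):
        x = i / grid
        v = x ** power_a * (1 - x) ** (1 - power_a)
        if v > best_v:
            best_x, best_v = x, v
    return best_x

print(nash_share(0.5))  # ~0.5: equal bargaining power, equal split
print(nash_share(0.8))  # ~0.8: the stronger bargainer takes most of the pie
```

Same "egalitarian" procedure in both cases; only the bargaining power differs, and with it the fairness of the resulting allocation.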
I'm no philosopher, but does this not amount to a kind of moral relativism that could play out over time? This theory would seem to have to accept that as societal and cultural norms shift, the rules of the game of life shift with them, and thus our concepts of what is morally right shift too. Imagine, if you possibly can, that a political philosophy takes hold whereby there is no respect for the rights of others, where force and power are used to override the rule of law, and where pro-social bargaining is for wimps. If such a philosophy were to endure, so that these became cultural and societal norms, would that not mean that the actions typical of such a society (arbitrary violence, persecution of opponents, dispossessing of minorities) would be considered morally acceptable?
On the “Hobbes nightmare” you describe: from a game-theoretic point of view, a stable social order based purely on arbitrary force and fear is very unlikely. Hobbes thought you could have a durable equilibrium of mutual terror. Binmore (and I) think he largely got that wrong. In the long run, people trapped in that kind of world have very strong incentives to carve out islands of trust and cooperation, so you’d expect other, more cooperative norms to keep re-emerging.
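The incentive to carve out islands of trust can be illustrated with a toy repeated prisoner's dilemma (my own illustration, using standard Axelrod-style payoffs, not a model from the post): a pair of conditional cooperators who find each other earn far more over repeated play than a pair of unconditional defectors.

```python
# Row player's payoffs: both cooperate 3, both defect 1,
# sucker's payoff 0, temptation to defect 5 (standard PD values).
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def play(strat_a, strat_b, rounds=100):
    """Total payoffs of two strategies over repeated play. A strategy
    maps the opponent's previous move (None on the first round) to a move."""
    score_a = score_b = 0
    prev_a = prev_b = None
    for _ in range(rounds):
        a, b = strat_a(prev_b), strat_b(prev_a)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        prev_a, prev_b = a, b
    return score_a, score_b

tit_for_tat = lambda prev: 'C' if prev in (None, 'C') else 'D'
always_defect = lambda prev: 'D'

coop_pair, _ = play(tit_for_tat, tit_for_tat)        # mutual trust
defect_pair, _ = play(always_defect, always_defect)  # mutual terror
print(coop_pair, defect_pair)  # 300 100
```

The gap between the two scores is exactly the incentive that makes a pure-terror equilibrium fragile: any pair of players who manage to establish reciprocal trust do much better than those stuck in mutual defection.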
That said, you’ve put your finger on the hard bullet that Binmore and I are prepared to bite: there is no external, view-from-nowhere moral standpoint from which we can declare, in an objective sense, that some past society’s practices were wrong. What we can do is judge them from our current standpoint. So we can say: “human sacrifice in that society was an awful thing”--meaning “given the moral norms in our society, I really don’t like it”--but not “it is an objective moral truth, independent of any standpoint, that those sacrifices were wrong.”
To be clear, I would be very happy to have such an objective stance if I thought it made sense. I just don’t think it does. The next post in the series will address that issue.
Thanks for your response. When I studied jurisprudence a long time ago my tutor was convinced (and convinced most of us) that Gewirth’s principle of generic consistency did provide an objective basis for a moral underpinning of legal rules.
Gewirth simplified: as a human being I have agency that allows me to act with purpose. I need the basic goods of freedom and well-being to do so. Therefore I am rationally committed to claiming those rights for myself. And, in order that I may be afforded those rights, I am rationally committed to respecting and defending those rights for others. From this the principle is: Every agent must act in accordance with their own and all other agents’ generic rights to freedom and well-being.
Thus, the ‘ought’ is derived from and is intrinsic to our human status as purposive agents.
Would love to know what you think.
It does sound very appealing, and I can see why it persuaded people. To my ear, though, it follows a broadly Kantian line: start from the fact that we are purposive agents, then use a principle of consistency to move from “I must claim these rights for myself” to “I must recognise the same rights for everyone else”.
From my naturalistic perspective, I see two issues. Either there is an unconditional “ought” being smuggled in at some point—that I must treat my agency, and the agency of others, as morally binding—or the argument is really about what is needed for a “contract” between agents to work. But contracts that work in practice are not necessarily symmetric or egalitarian. Many stable arrangements in history have been quite unequal. So I like the conclusion as a moral ideal, but I’m not convinced it follows just from agency and consistency without one of these extra assumptions. I’ll write a post later that unpacks this criticism of Kant’s argument.
I am no philosopher either. My answer though is that if arbitrary violence is the accepted norm, then yes, in that society, violence would be morally acceptable, pretty much by definition.
The reason that this seems so counterintuitive to us is that we evolved for over a million years in small forager groups of only moderate kinship, and those of us that thrived did so by avoiding arbitrary violence and by establishing a reputation as a loyal and dependable person who followed norms which penalized sociopathic behavior. People who were not disturbed by your scenario (we now call them sociopaths) were less likely to survive and thrive than those who were disturbed.
Fair point except that the people we now call sociopaths are in charge of most of the most powerful countries on earth. This suggests those sociopath genes were more successful than most.
The book “Political Ponerology” is on my reading list and explores exactly this issue: the tendency of those with sociopathic genes to rise in hierarchical structures (business, the corporate ladder, politics…). Might be worth reading.
Or perhaps it is better seen as a high risk, high reward recessive?
This is probably the case in rural Afghanistan or war torn Yemen or Sudan or Congo right now, with no need for political philosophy to justify it.
I think this Binmorean-Humean constructivism is a type of minimal moral realism. The institutional facts of the game of social cooperation lead to cognitivism (first and second theses below), rejecting non-cognitivism and error theory.
It is like Dennett's "real patterns", a position he called mild realism, rejecting Fodor's stronger realism and Rorty's more conventionalist view regarding physical ontology. The real pattern emerges at higher levels, like with the "gliders" of Conway's Game of Life, as it becomes a stable, predictive pattern. The rules and penalties of the game of social cooperation become relatively stable, predictive prescriptions and expectations for all moral players of a community over time; they track something that isn't noise in the real world.
On ethical subjectivism (non-objectivism, or stance-dependence), per the Wikipedia page: https://en.wikipedia.org/w/index.php?title=Ethical_subjectivism#Relationship_to_moral_anti-realism
"Ethical subjectivism is a form of moral anti-realism that denies the "metaphysical thesis" of moral realism, (the claim that moral truths are ordinary facts about the world).[7] Instead ethical subjectivism claims that moral truths are based on the mental states of individuals or groups of people. The moral realist is committed to some version of the following three statements:[8][9]
1) The semantic thesis: Moral statements have meaning, they express propositions, or are the kind of things that can be true or false.
2) The alethic thesis: Some moral propositions are true.
3) The metaphysical thesis: The metaphysical status of moral facts is robust and ordinary, not importantly different from other facts about the world.
Moral anti-realism is the denial of at least one of these claims.[5] Ethical subjectivists deny the third claim, instead arguing that moral facts are not metaphysically ordinary, but rather dependent on mental states, (individual's beliefs about what is right and wrong).[3] Moral non-cognitivists deny the first claim, while error theorists deny the second claim.[10]
There is some debate as to whether moral realism should continue to require the metaphysical thesis, and therefore if ethical subjectivists should be considered moral realists.[11] Geoffrey Sayre-McCord argues that moral realism should not require mind-independence since there are morally relevant psychological facts which are necessarily mind-dependent, which would make ethical subjectivism a version of moral realism. This has led to a distinction being made between robust moral realism (which requires all three of the theses) and minimal moral realism (which requires only the first two, and is therefore compatible with ethical subjectivism).[12]"
I’m basically with you on this. On your three-thesis picture I’m happy to keep (1) and (2), and drop (3): moral claims are truth-apt and some are true, but what they track are mind- and practice-dependent facts about equilibria and social contracts, not stance-independent moral properties. If you want to call that “minimal moral realism”, I’m fine with the label--the target of the series is the robust, “out there” version.
Fascinating work. Not sure I fully agree, but don’t disagree either. From a neuroscience perspective, the notion that the various faculties required to reach a moral determination are complex, and crucially, can weigh up different moral priorities to try and come up with the best, morally conducive answer holds for me. In this way, the sense that each “moral question” must work through this weighing up, makes sense. Indeed, humans are capable of balancing personal values, with family values, with societal norms in the rostral PFC. An incredible feat!
But whether there are immutable moral truths or not, I’m not sure. We are likely not pre-programmed with morals (assuming we mean higher considerations). However, if we rely on the continuation of the human species at a level that can engage in moral questions and higher thought, then we must treat some social norms as immutable. That is, theft from someone of equal wealth during times of peace and prosperity would be an absolute moral wrong. If it were not, then the risk is compromising social cohesion and therefore, eventually, the ability for humans to be moral or to even attain general intelligence. If the end point is to be able to be moralistic (or weigh moral decisions), then the basic morality of society must be accounted for and may need to be absolute at times.
Hi Dan, I’m with you that some fundamental structures of human interaction might make some norms (and our preference for them) universal for creatures like us. I think we need, however, to be careful not to slide from that into the idea that these rules are objective truths “out there” with a built-in normative force. Even if they are universal, they are still conditional oughts: if we want to do well in the games we play, we need to respect them. They are not unconditional oughts.
One way to see the difference is with Glaucon’s thought experiment about the ring of invisibility. Suppose you had a device that allowed you to cheat without ever being detected. Would you still have a moral duty not to cheat? The Humean perspective, perhaps sadly, says there is no extra, external duty over and above the social story. That doesn’t mean the Humean condones cheating. Cheating has to be prevented for society to work (something everyone wants), so cheaters need to be punished, and social institutions need to be designed so that cheating is not advantageous. But that is still an if–then story, not an unconditional command written into the fabric of the universe.
Thank you. That's very clear and helpful. I'm just a provincial lawyer, but I've read and thought a lot about the topic over the course of my long life, and I've settled on a Popperian solution that I explain in the paper cited below. While the paper focuses on laws, the analysis applies to moral precepts as well. Both are forms of "objective practical knowledge," and in the paper I suggest that "those of us who work in practical disciplines can learn from our mistakes in much the same way that scientists learn from theirs—by treating all knowledge as conjectural and by using reason and experience critically to help us discover and eliminate errors."
https://docs.google.com/document/d/1CrtC9yvBPx06DDd7ULQ6--gDvMmpBJZO/edit
Thanks Jon. That’s a really interesting way to put it. The idea of laws and moral precepts as objective practical knowledge that grows by conjecture and error-correction fits very naturally with the Humean/game-theoretic picture I’m trying to develop: norms as fallible “designs” for social cooperation that we test in practice and revise when they fail, rather than as principles we can get right once and for all from first philosophy. Your Popperian contrast between direct design and evolutionary design for legal institutions feels very close to what I want to say about moral systems as well.
I look forward to reading more of your work.
This is all very helpful. I now see what you mean about Humean constructivism; the actual working out of why one or another moral system might be the one we construct is given much less emphasis than the defense of the abstract position against metaethical alternatives.
And on the contrast or not with expressivism, I also think I have a clearer sense of what's going on. Everyone should agree claims about what's in some rulebook are straightforwardly true in a boring, descriptive sense, no expressivism required. Moral claims can be interpreted as descriptive claims about what's true in some rulebook we're collectively presupposing is the relevant one to discuss (your emphasis), or they can be interpreted as exhortations to adopt this or that rulebook (the expressivist emphasis). In fact, our use of moral claims is messy enough that both pictures strike me as fruitful idealizations that illuminate different aspects of our practice.
Great article. I'm very amenable to this approach myself.
My only minor quibble is that both J.L. Mackie's reform proposal for moral language and Gilbert Harman's conventionalism seem to me to be quite close to your own approach, which someone might not expect from the flowchart.
Yes, I thought about that when I did the flowchart. I considered adding a cross-path from Mackie in the graph. Instead I just pointed out the link in the text.
Harman’s position is also close in spirit, but I think we lose something important if we don’t have game theory as a backbone. In particular, the kind of cultural relativism that is popular in some political circles can, I think, lead to misguided policy prescriptions (I’ll discuss that in a post later on).
I don't know if Harman ever referenced game theory, but in "Moral Relativism Defended," he writes:
"Indeed, it is essential to the proposed explanation of this aspect of our moral views to suppose that the relevant moral understanding is thus the result of *bargaining.* It is necessary to suppose that, in order to further our interests, we form certain conditional intentions, hoping others will do the same. The others, who have different interests, will form somewhat different conditional intentions. After implicit bargaining, some form of compromise is reached."
I agree that Harman's position has a lot of commonalities with Humean constructivism. He just does not follow the positive theory of how such solutions emerge, which leaves his theory with more freedom than Binmore's. In fairness, my "radical cultural relativism" label has bundled him with positions that are often found outside of metaethics (e.g. in anthropology).
While I am very much in agreement with the reasoning as presented, there is a prescriptive leap that may need further examination.
"We don't have good reasons to assume there are moral truths 'out there'."
Agreed. However, that is not the same as "abandoning" moral realism. Do we have good reasons to also reject or abandon moral agnosticism or ignosticism? Consider the following, and how it strikes one as a prescription:
"However grandiose the theories of the Right and the Good might be, they will only lead us to err endlessly in conceptual mazes if they are misguided. From that perspective, having the epistemic courage to abandon these absolute views and accept the social nature of morality opens the way for us to attain a greater clarity on what morality is and how it works."
This is a very large claim. Consider an alternative we might call "naturalistic moral pluralism" in which the diversity of moral stances, or sensitivities to them, might itself be a makeshift solution to the largest scale of repeat coordination games. Might the pluralism be the currently afforded Nash equilibrium that prevents dogmatic, runaway collapse in any particular direction?
Even if everyone recognized the game being played, would it not just open up a metagame in which what is debated is the correct securities or alterations to those rules? Is a full equilibration in this space, or in any other, recognizably better than a Nash equilibrium?
There do exist many systems that maintain tolerable "randomness" to hedge against rigidity. Do we have good reasons to assume "moral pluralism" is not one of those systems? Do we have good reasons to assume that "grandiosity" does not enable access, for certain cognitive or social types, or for those who are situationally impaired, to participate in "ethics games" and societal design?
To me, this "naturalistic moral pluralism" would be a recursive extension of what you have developed here, not an indictment of it. But I feel it does resist prescriptive "abandonment" of moral realism, at least without further development to address options of agnosticism, ignosticism and pluralism.
Looking forward to the next!
I’d answer in two points.
First, I agree there can be a large-scale “game” across communities with different moral systems--the international order is the obvious example, where states with very different internal norms still have to coordinate on trade, war/peace, treaties, etc. But for the diversity of moral systems to be a functional feature in the strong sense you suggest, we’d need a fairly specific selection story: not just competition between individual communities, but meta-communities of communities, where the ones that survive and prosper are precisely those that contain a certain diversity of internal moral codes. I don’t see much historical evidence of that kind of selection structure. In particular, most of the time, large polities seem to have imposed, or generated, some internal coherence rather than moral pluralism between sub-groups.
Second, I completely agree that there is a “metagame”. As Bourdieu stressed, the rules of the game are themselves at stake in the struggles between agents. That is fully compatible with a Humean constructivist perspective, especially in Binmore’s version, where bargaining over time plays a central role in the evolution of fairness norms.
Well-taken, but there will always be a metagame of "preparedness for the next game, without knowledge of its particular rules." One limitation, which you are also invoking, is that we can only understand the fuller game in retrospect once we recognize the pattern of selection that has already happened. That is, unless we can inoculate ourselves to an underlying pattern.
What I am saying is that much like the hedging implicit in immune responses, trading on specificity and sensitivity in pattern recognition and response, the outer fringes of moral reactivity may operate as if by the smoke detector principle, with others more inward acting as canaries in the coal mine against manipulation rather than truth per se. Treating "moral realism" may just be treating the symptoms of strategic under/overfitting. However, I do suspect that your trajectory will be helpful precisely because it addresses the underlying, non-productive tensions also. I just mean to say that any prescript of abandoning moral realism is superfluous to your task and insights.
Cheers!
Objection from Repugnant Equilibria.
The most obvious objection against such stable-social-equilibria accounts is the plausible conceivability of stable equilibria that are also morally repugnant. This is the same kind of objection raised against utilitarian theories on the grounds that some option might be utility maximizing but also morally repugnant.
It seems entirely possible that under the right empirical conditions it might be the case that some practice of enslavement, torture, and utter domination of a particular group could also represent a stable equilibrium.
If there are such socially stable equilibria that are also morally repugnant, then being a stable equilibrium cannot be identical with being morally good.
(If one attempts some kind of Rawlsian idealization move, then one will have to provide some account of why certain conditions of idealization are required/justified in order to exclude certain equilibria that are in fact stable given things like the psychological constitution of the relevant agents.)
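The structure of the objection can be made concrete with a toy example (my own, chosen only for illustration): a best-response check on a 2x2 stag-hunt-style game shows that a Pareto-dominated outcome can be a perfectly stable Nash equilibrium, so "stable" and "good" come apart even in the simplest cases.

```python
# A 2x2 stag hunt: both hunting stag (cooperating) pays (4, 4), both
# hunting hare pays (2, 2), and a lone stag hunter gets 0. Both outcomes
# on the diagonal are Nash equilibria -- including the worse one.

import itertools

# payoffs[(row strategy, column strategy)] = (row payoff, column payoff)
payoffs = {
    ("stag", "stag"): (4, 4), ("stag", "hare"): (0, 3),
    ("hare", "stag"): (3, 0), ("hare", "hare"): (2, 2),
}
strategies = ["stag", "hare"]

def nash_equilibria(payoffs):
    """Enumerate pure-strategy Nash equilibria by best-response check."""
    eqs = []
    for r, c in itertools.product(strategies, repeat=2):
        u_r, u_c = payoffs[(r, c)]
        row_ok = all(payoffs[(r2, c)][0] <= u_r for r2 in strategies)
        col_ok = all(payoffs[(r, c2)][1] <= u_c for c2 in strategies)
        if row_ok and col_ok:
            eqs.append((r, c))
    return eqs

# Both the good and the bad outcome are stable:
print(nash_equilibria(payoffs))  # [('stag', 'stag'), ('hare', 'hare')]
```

Nothing in the stability check itself privileges the (4, 4) outcome over the (2, 2) one; that is exactly the gap the objection points to.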
---
Shouldn’t the Failure of Divine Command/Stance Theories Imply the Failure of All Command/Stance Theories of Morality?
It seems likely that realists and anti-realists are equally mystified by each other’s accounts of moral (and epistemic) normativity. Personally, it never really made sense to me how the Divine Command theorist thought the mere fact that God commanded a thing could suffice to MAKE that thing moral. Nor do I see how any other Human Command theory or Human Stance theory could fare any better.
I appreciate the intuitive pull of this argument.
From a Humean–Binmore point of view, there is no external stance from which we can say “this equilibrium is objectively repugnant” in the realist sense. There are just different equilibria. The fact that we find some of them horrible is not a contradiction of the theory. It is exactly our current moral code and preferences speaking.
What we can do is two things:
- Judge from our own standpoint: from where we stand, a slave equilibrium or a terror equilibrium is awful, and we are fully entitled to say so and to try to move away from it.
- Assess them functionally: some equilibria do a much better job at sustaining cooperation and avoiding misery for most players. Others are brittle or keep large groups permanently at the bottom. That gives us good reasons, again from our standpoint, to favour some patterns over others.
On the second point: I am not saying that any “command” or “stance” magically makes things moral in an absolute sense. The only “oughts” I recognise are conditional: If we want social arrangements that leave people generally better off than the realistic alternatives, then we have reason to favour some equilibria over others and to resist others.
This seems more a historical question than a philosophical one so I feel a tad more comfortable responding to this comment. General “moral” repugnance of slavery is a relatively recent phenomenon in human history (more recent even than the first global ban on trading humans as slaves, which came in 1807). Since there are many societies around the globe that still practice outright chattel and sexual slavery, I would also conjecture that the repugnance is not universally shared, even today. From a “functional” perspective slavery worked (and apparently still works in narrower regional contexts) to keep societies in perfect equilibrium. There was (and is) no functional reason to end it from a societal equilibrium perspective. Similarly, the most stable societies historically (at least in terms of internal stability) have been those that keep “…large groups permanently at the bottom.” From a functional perspective those societies are basically perfect. Are we therefore arguing that we should be circling back to Middle Kingdom Egypt or Qin China? (Also, who is the “we” here? Does this functional system of moral philosophy work like Plato’s Republic? Are the people in this comments section the philosopher kings/pharaohs that will lead humanity into this return to a more functional past?)
> The most obvious objection against such stable-social-equilibria accounts is the plausible conceivability of stable equilibria that are also morally repugnant.
See *Sick Societies* for examples.
https://www.thepsmiths.com/p/review-sick-societies-by-robert-b
Hume said, in the conclusions of his Enquiry, that “the notion of morals implies some sentiment, so universal and comprehensive as to extend to all mankind, and render the actions and conduct, even of persons the most remote, an object of applause or censure.” This universalism is very different from the notion of a morality which flows from the rules of whichever particular game a society has chosen to play.
Hi Charles. I wrote in another comment “some fundamental structures of human interaction might make some norms (and our preference for them) universal for creatures like us. I think we need, however, to be careful not to slide from that into the idea that these rules are objective truths “out there” with a built-in normative force. Even if they are universal, they are still conditional oughts: if we want to do well in the games we play, we need to respect them. They are not unconditional oughts.”
It is perfectly in line with what I am presenting to say that some general considerations about fairness and morality are likely universal. Binmore for instance points to the Golden Rule as a rule for which we likely have an innate preference.
This is a great article, and I'm entirely convinced by your account of Humean constructivism. I have two problems with it, both of which are visible in this sentence: "Instead, [Humean constructivism] posits that our moral code is a human creation that serves to foster social cooperation."
The first problem is your use of the term "moral." "Morality" has traditionally referred to the kind of objectivist, stance-independent ethical theory that you are criticizing here. The term has connotations that are inconsistent with your viewpoint. So I would suggest finding another word for the type of theory that you're formulating. "Ethics" works.
The second problem is that fostering social cooperation is only one of many functions of an ethical code. The ethical code that I live by as a contemporary American demands that I treat people decently even if I will never have any cooperative relationship with them. The function of ethics is to regulate our social life in general, not merely to enable cooperative relationships.
On the meaning of morality see my article here: https://open.substack.com/pub/eclecticinquiries/p/bernard-williams-against-morality?r=4952v2&utm_campaign=post&utm_medium=web&showWelcomeOnShare=false
Well written as usual.
In response to "We ought to respect moral rules because they are the rules of the social games we play," it seems to me that the word "ought" is one with connotations of absolute, transcendent rules, whereas the soccer analogy is likely a more accurate one. (Disclaimer: these thoughts are just conjecture off the top of my head, not carefully reasoned moral philosophy). I tend to look at the rules of the social game of life and other such rules in a descriptive manner, where in my mind "if you want to win at the social game, you ought to follow rule x" = "following rule x leads to desired result y." It is then the case that I follow moral rules for pragmatic reasons, just as I go to the gym regularly because I want to have the social and biological benefits of large muscles and wish to avoid physically and cognitively falling apart in the homestretch of my life. I will not beat children, because beating children will have social and legal repercussions, but moreover (even without those strictures) because beating children feels atrocious to me due to my moral intuitions. A counterargument I often encounter when expressing this view is that "if you're a psychopath who feels no moral guilt, want to murder, and can do so knowing you'll fully get away with it, what's stopping you?"
"Nothing. C'est la vie."
Curious to read your thoughts.