118 Comments
Paul S's avatar

Something that always amuses me about moral realists: they assume that the "objective moral truths" that they hold so dearly just happen to correspond precisely to their own moral values.

But what if we (somehow) found out that the moral truth was that Genghis had it right: it is objectively best to slaughter one's enemies and rape their women? Would the moral realists all start saddling up the horses and riding out accordingly? Or would they perhaps start coming up with long-winded explanations as to why that can't really be the moral truth?

Kind of like how if you found out that Santa Claus wasn't coming down the chimney to give you presents, but to kill your parents... so you started trying to prove that Santa *has* to give out presents rather than deliver gruesome endings.

A problem not faced by those of us who concluded that Santa doesn't exist.

Lionel Page's avatar

Hi Paul, thanks for joining the discussion. I didn’t know about your book. I’ve just got it from the university library, and I think there is quite a bit of convergence with the view I’m developing here.

On the topic of equality, Binmore does not propose a historical theory of why societies became more egalitarian in outcomes (I offered a conjecture in my post on 28 October), but he has a very interesting explanation of why we have egalitarian intuitions in terms of morality. In short, egalitarianism emerges because social contracts are not enforceable, so the least favoured always need to be on board, otherwise they can drop out of the “contract”. This leads to a justification of Rawlsian egalitarianism following a logic that is, in my view, much more convincing than Rawls’s own arguments. You might find his book Natural Justice interesting. The view I develop in this series of posts largely walks in his footsteps on this question.

Paul S's avatar

If you're interested in that line of thought, I strongly recommend chapter 3 in Joseph Heath's book Cooperation and Social Justice!

Lionel Page's avatar

Thanks, I was aware he had related views. I’ve ordered his book.

What Follows from What's avatar

I don't think I really understand this argument but I would like to. Here are two stabs at it.

I use the term 'property' throughout in a literal sense that would imply realism. The first stab I don't think really works because I don't see how the rejection of moral realism follows:

1. There is an action which many people believe is morally wrong.

2. IF it had the property of being morally wrong, THEN if it were good, someone should do it.

3. Conclude that it does not have the property of being morally wrong, ergo moral realism is false.

How to get to (3) I don't know. Here is another version that I think is better:

1. There is an action which has the property of being morally wrong, call this action A.

2. If it turned out that A was morally good, then someone should do A.

3. It could turn out that A is morally good.

4. So it could turn out that someone should do A. From 2 and 3.

5. If an action has the property of being morally wrong, then no one should ever do it.

6. Necessarily no one should ever do A. From 1 and 5.

7. Therefore, no action has the property of being morally wrong. (Reductio: lines 4 and 6 contradict, so reject assumption 1, given assumptions 2, 3 and 5.)

See reply for more...

What Follows from What's avatar

I think the problem with this argument is that the assumption set in the conclusion is inconsistent in tandem with a few other assumptions that seem very plausible:

(8) Necessarily, if no one should ever do A, then A is morally wrong.

(9) Necessarily, if A is morally wrong, then A is not morally good.

(10) Necessarily, if no one should ever do A, then A is not morally good. From 8 and 9.

(11) Necessarily, A is not morally good. From 1 and 10.

(11) contradicts (3) which is in the assumption set in line 7. Hence, if (8) and (9) are true the conclusion to the above argument is trivial. This is not to say that the argument you presented initially is wrong, but at the least this way of putting it does not work.
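One way to check this inconsistency claim mechanically: read premises 2 and 5 as necessary conditionals, and read "necessarily"/"could" as quantification over worlds with universal accessibility (an S5-style reading, which is one formalisation among several). Under that reading, a brute-force search finds no model of the set {1, 2, 3, 5}:

```python
from itertools import product

ATOMS = ("W", "G", "S")  # A is wrong, A is good, someone should do A

def models(n_worlds):
    """All models with n worlds: a valuation of W, G, S at each world."""
    world_vals = list(product([False, True], repeat=len(ATOMS)))
    for vals in product(world_vals, repeat=n_worlds):
        yield [dict(zip(ATOMS, v)) for v in vals]

def satisfiable(premises, max_worlds=3):
    """Does any model (up to max_worlds worlds) satisfy every premise?"""
    for n in range(1, max_worlds + 1):
        for m in models(n):
            if all(p(m) for p in premises):
                return True
    return False

# Premises 1, 2, 3 and 5 of the argument, evaluated at world 0,
# with necessity/possibility read as all/some worlds:
premises = [
    lambda m: m[0]["W"],                                      # (1) A is wrong
    lambda m: all(not w["G"] or w["S"] for w in m),           # (2) nec. (good -> should do)
    lambda m: any(w["G"] for w in m),                         # (3) A could be good
    lambda m: (not m[0]["W"]) or all(not w["S"] for w in m),  # (5) wrong -> nec. no one should do
]

print(satisfiable(premises))  # False: the set {1, 2, 3, 5} has no model
```

Dropping premise 1 restores satisfiability, which matches the reductio structure: the contradiction is blamed on 1 given 2, 3 and 5.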

David Pinsof's avatar

Excellent post, Lionel. Very much agreed about Santa Claus fallacies and appeals to intuition being unproductive. I used to be an antirealist, but then I started getting curious why we talk about morality as if it were objective. It seems to me we have no good naturalist theory for why moral talk is objective if it’s not. Why do we need so much false talk? Wouldn’t evolution favor an accurate view of the kind of thing morality is? Then I realized that a good naturalist explanation for why moral talk is objective is that we are referring to objective things in the world: the objective triggers of our moral emotions, absent any biases, defects, or misinformation. This makes sense of our moral talk and explains why people who don’t share our moral judgments seem “inhumane,” like defective humans. Their moral emotions aren’t working properly—they’re psychopaths. Or they were fed bad information—they’re brainwashed. Curious what you think of this view. It is a weaker, more human-centric kind of moral realism than one typically sees, but it is a realism no less. And it requires no skyhooks.

Lionel Page's avatar

Thanks David! I think the question of "why do we feel morality is objective" is a very interesting question. As you point out, it seems to conflict with a functional perspective which would require aspects of our cognition to be "right".

My take here is that Nature (evolution as the metaphor of a designer) faces a difficult problem in designing organisms so that they make good decisions. On the one hand, rationality is good for making good decisions; on the other hand, rationality can be detrimental in strategic situations: a purely rational agent lacks the credibility to carry out crazy threats or to keep saintly promises. So Nature might benefit from locking in some beliefs and emotions that limit our purely rational behaviour. That is the argument from Robert Frank in Passions within Reason. Beliefs in the objectivity of morality can, from that perspective, be useful: they make you more trustworthy and therefore more worthy of being selected as a partner in social interactions. This is the topic of a post I'll publish after the next one (which is going to first develop that naturalistic view of morality).
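Frank's commitment point can be illustrated with a toy ultimatum-game calculation (the pie size and the "anger" threshold below are arbitrary illustrations, not anyone's published model): a responder who is credibly committed to rejecting stingy offers ends up with more than a coolly rational one.

```python
def best_offer(pie, accepts):
    """Proposer's payoff-maximising offer, given the responder's acceptance rule.
    Returns (proposer_payoff, responder_payoff)."""
    best = (0, 0)  # both get nothing if every offer is rejected
    for offer in range(pie + 1):
        if accepts(offer) and pie - offer > best[0]:
            best = (pie - offer, offer)
    return best

# A coolly rational responder accepts any positive amount...
rational = lambda offer: offer > 0
# ...while an "angry" responder is credibly committed to rejecting stingy offers.
committed = lambda offer: offer >= 4

print(best_offer(10, rational))   # (9, 1): the rational responder gets the minimum
print(best_offer(10, committed))  # (6, 4): the commitment extracts a fairer split
```

The irrational-looking disposition pays off only because the proposer believes it will actually be acted on, which is exactly the credibility problem emotions are argued to solve.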

THE POSTLIBERAL CYBORG's avatar

This is an excellent, tough-minded demolition of moral realism, especially of the “Santa Claus” motivations behind it. The way you line up Parfit, Harris, Huemer, Enoch, etc., and show how much work is being done by a desire for moral objectivity rather than by argument, is both philosophically sharp and psychologically honest.

Coming from a broadly similar position, I’ve been thinking about one further step your piece almost reaches: if morality is an evolved device for coordinating behaviour in repeated social games, then “morality” is not only contingent on human nature – it is also, in principle, exportable beyond it. Once we drop the idea of objective moral truths “out there”, what remains is something like functional alignment: patterns of regulation that stabilise a system, given some target configuration (survival, flourishing, stability, growth, etc.).

Seen that way, moral norms are not so much “beliefs about value” as memetic control protocols: bundles of dispositions, narratives, sanctions and rewards that help a group approximate viable equilibria. They are engineering solutions discovered by cultural evolution. This has two consequences that your naturalism seems to invite:

A system composed solely of machines could, in principle, be “moral” in exactly the same naturalistic sense as humans are: if it implements stable regulatory patterns that minimise certain kinds of breakdown (analogues of conflict, collapse, exploitation, etc.) relative to its own architecture and aims.

The interesting question ceases to be “Are there objective moral truths?” and becomes “What kinds of regulatory architectures best sustain complex systems under different conditions?” – where human moral talk is one local, historically-specific vocabulary for such architectures.

In other words, your rejection of moral realism doesn’t only undercut metaphysics; it also opens the door to a post-human or at least post-anthropocentric ethics, where the key notion is not truth but functional design. I’d be very interested to see your game-theoretic approach pushed explicitly in that direction.

Lionel Page's avatar

Thanks, this perspective is indeed not restricted to humans. What we call moral norms emerge from the gains from cooperation and the need to regulate and organise cooperation in order to tap into those gains. It’s reasonable to assume that any group of agentic units would have to converge on some version of these principles if they are to benefit from cooperation.

Humans are, in the end, also machines, just very complex ones that currently play the game of life far better than any other machines we can observe. So the features of morality that we find intuitive are, on this view, reflections of (approximate) optimal behaviour in social games, given our characteristics as a species (including our impressive but still limited processing power).

That suggests we should expect both fundamental commonalities and substantial differences in the rules adopted in equilibrium by other kinds of agentic units. Agents with very different lifespans, computational power, or cost–benefit structures in interaction would stabilise different rules of cooperation and require a different cognitive architecture to support them.

THE POSTLIBERAL CYBORG's avatar

Thank you, Lionel — your reply confirms the structural point.

If moral norms emerge from cooperation among agentic units, then morality is no longer a set of truths but a design principle — an architecture that stabilizes interaction under specific constraints (lifespan, computational capacity, cost–benefit ratios, etc.).

That is precisely where the rupture begins: once morality is understood as an evolutionary coordination device rather than a transcendent domain, the entire liberal moral framework — dignity, equality, the sovereign individual — becomes a contingent artifact of a particular type of agent, not a universal grammar.

Different agents → different equilibria → different “moralities.”

At that threshold, moral metaphysics dissolves and what remains is systems design: coherence, viability, operability.

Your analysis opens the door.

The post-liberal horizon begins on the other side of it.

For anyone interested in how this conclusion unfolds, here is the expanded essay:

https://albertocarrillocanan.substack.com/p/the-liberal-trinity-dignity-equality

Steve Phelps's avatar

Frank is absolutely the right starting point. Anger and guilt are irrational in the narrow utility sense, but they solve commitment problems logic cannot: anger makes retaliation credible, guilt makes promises hard to break. These emotions stabilise cooperation where rational agents alone would collapse into defection.

But what that machinery produces depends on context.

Frank explains how emotional commitment stabilises reciprocity when trust holds, yet the same evolved hardware can support a very different cooperative architecture — including religion, which you put aside in the original essay. Dawkins treated religion as a maladaptive meme or “virus of the mind,” but David Sloan Wilson, in Darwin’s Cathedral (University of Chicago Press), argues that religious systems often function as group-level cooperation devices — built not on contract or reputation, but on shared identity, ritual, and costly signalling. When (in)direct reciprocity falters, religious moral orders can flourish precisely because commitment is enforced pre-rationally.

History bears this out. During the Antonine and Cyprian plagues, civic reciprocity collapsed — courts, markets, welfare — yet Christian networks grew. They fed widows, buried the dead, nursed plague victims. Cooperation did not disappear: it reconfigured along identity lines when fairness-based reciprocity failed.

This suggests that humans operate two cooperation regimes:

1. Reciprocal / Institutional Fairness

stabilised by direct/indirect reciprocity, reputation, law, and trust in institutions — cooperation conditional on behaviour.

2. Parochial Altruism

stabilised by loyalty, obligation, costly signalling, and ritual — cooperation conditional on membership.

The second regime is powerful, cohesive, and ancient — and identity is its organising principle. Ingroup membership, not behaviour, determines who receives trust and altruism. Ritual, myth, symbols, slogans, purity signals: these mark who counts as “one of us.” Under threat or perceived unfairness, herding instincts activate, identity cues intensify, cooperation narrows, and loyalty becomes the highest virtue. Strong ingroup altruism is favoured when groups compete, because helping insiders and withholding benefits from outsiders increases the relative fitness of group members.

In institutional settings, punishment is directed toward defectors — those who violate norms of fairness and reciprocity. In intergroup-competitive settings, punishment shifts toward outsiders, because excluding rivals increases ingroup success. Neither is selected over the other because it is intrinsically *moral*; each is *adaptive* under different selective pressures.

We are watching this play out now. Polarisation is driven less by disagreement than by identity signalling — flags, hashtags, partisan vernacular, purity tests. The moral circle contracts. Cooperation becomes conditional on tribe rather than principle. Institutions lose authority as arbiters of fairness.

So I think your conclusion can be extended.

Religion is not required for morality — but neither is reciprocity uniquely foundational. Both regimes are expressions of the same evolved commitment machinery, activated under different ecological conditions. Evolution explains how both succeed; it cannot tell us which we should cultivate.

Lionel Page's avatar

I agree with many of the things you wrote. The game theoretic perspective I'll develop in that series gives an overarching framework to understand these intertwined dynamics of cooperation and competition.

Daniel Greco's avatar

Have you read "quasi-realists" like Simon Blackburn, or Allan Gibbard? I'd be curious to hear what you'd make of them. Very roughly, the project is starting with a naturalistic picture like the one you're describing, and giving a largely non-revisionary account of moral thought and talk, including thought and talk about "objectivity".

Quasi-realism is a descendant of an earlier view called "emotivism", which said that claims like "it's wrong to kick puppies" were mere expressions of emotion, akin to "Boo to puppy-kicking!" Emotivists denied that moral claims could be true or false, or objective, or anything like that.

The first step from emotivism to quasi-realism is embracing deflationism about truth. In general, for declarative sentences, "p" and "it's true that p" say the same thing. So if "it's wrong to kick puppies" expresses disapproval of puppy-kicking, we shouldn't say "it's true that it's wrong to kick puppies" is confused or ill formed; rather, we should say that it *also* expresses disapproval of puppy kicking.

Quasi-realists are fine with talk of morality as objective, but they'll take "objectivity" talk to express features of our preferences; eg, some of my preferences I also prefer to be universally shared, and prefer to bind myself to keeping, while others I don't. What I'm doing when I call a value "objective" is expressing a preference of the former sort. Quasi-realism is also called "expressivism," since it explains moral talk, including talk about objectivity, in terms of the distinctive states of mind--maybe preferences, states of approval and disapproval, or plans--that it expresses.

This just gives the flavor, but I'm curious about whether the view, so described, still strikes you as too realist, or whether it's consistent with the picture you'll develop here.

Lionel Page's avatar

Hi Daniel, thanks a lot for your thoughtful question/comment.

I am familiar with Blackburn/Gibbard-style quasi-realism. As you say, the project is to start from a non-cognitivist, naturalistic picture (emotivism/expressivism) and then recover as much as possible of ordinary moral thought and talk, including its logical structure and its “objectivity” rhetoric.

The core challenge they’re addressing is the Frege–Geach problem: if moral judgements are not truth-apt propositions but just expressions of attitudes (“boo!”, “hurrah!”, plans, etc.), then why do they behave in arguments as if they were? Why can we meaningfully use them in conditionals, negations, and valid inferences (“If murder is wrong, then getting your brother to murder is wrong; murder is wrong; therefore…”)? That problem arises specifically for non-cognitivist theories that come out of classic emotivism.

My own stance (and Binmore’s) is actually cognitivist. Moral statements, on our view, are truth-apt: they can be true or false within a moral code, where the code itself is a system of rules that sustains an equilibrium in the “game of life”. Those codes are shaped by the logic of game theory. A certain kind of consistency is therefore built into them. Consider the rules of football/soccer. They clearly do not refer to an external objective truth, but they are also not just personal statements of preferences. Because they implement an equilibrium of the game, they have some in-built consistency. You can get implications like:

if this is a penalty, then that similar case must also be a penalty

Similarly, in society, it makes sense to make such logical implications:

if murder is ruled out, then murder by proxy / robot / hired hitman must also be ruled out.

So when we see moral reasoning with conditionals and so on, there is no special mystery to be solved of the quasi-realist sort. We don’t start from “just attitudes” and then try to reconstruct logical structure. We start from descriptive claims about whether an action fits or breaks the rules of a morality game. Once you see moral rules that way, the logic comes for free: they have to feature some in-built consistency to work, just like the rules of any other game or practice. I am going to flesh out this naturalistic/cognitivist view in the next post.
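The "consistency comes for free" idea can be caricatured in a few lines: treat a moral code as a base set of prohibitions plus a closure rule, and claims become truth-apt relative to that code. (The closure rule and action names here are purely illustrative, not part of the original argument.)

```python
# A "moral code" as a base set of prohibitions plus a closure rule:
# anything accomplished via an intermediary inherits the prohibition.
BASE_PROHIBITIONS = {"murder", "theft"}

def wrong(action):
    """Truth-apt relative to the code: is this action ruled out by it?"""
    # ("via", agent, action) models doing something through a proxy.
    if isinstance(action, tuple) and action[0] == "via":
        return wrong(action[2])  # murder by hired hitman is still murder
    return action in BASE_PROHIBITIONS

print(wrong("murder"))                     # True
print(wrong(("via", "hitman", "murder")))  # True: the implication is built in
print(wrong("jaywalking"))                 # False under this (toy) code
```

The point of the sketch is that the conditional "if murder is ruled out, then murder by proxy is ruled out" is not an extra metaphysical fact; it falls out of the structure of the rules themselves.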

Paul S's avatar

Crucially, what is attractive about the quasi-realist perspective (and here it is really very much an evolution of the basic Humean take) is that it provides room for us to think that our moral values (whilst ultimately subjective) are importantly not reducible *simply* to their evolutionary origins. That is, they don't end up quasi-debunked simply because the evolutionary origins are acknowledged upfront. This strikes me as a significant advantage of the Blackburn style approach compared to the kind of error theory that overly simplistic evolutionary accounts of morality tend to end up with (e.g. Joyce).

Daniel Greco's avatar

Yes I also like it because I think it takes a lot of the wind out of the sails of the putative realist opponent. If you set things up the way most anti-realists do, you're stacking the deck against yourself. "I know you guys believe in things like motherhood, apple pie, and objective morality, but I'm here to tell you it's all bullshit!" No surprise people are skeptical. I don't think motherhood, apple pie, or objective morality are bullshit.

Instead, you get to say: "I believe in motherhood, apple pie, and objective morality too. But I don't think all this 'objective morality' talk is nearly as spooky as you make it out to be. I'm going to give you what looks like a pretty modest, unassuming, naturalistically kosher account of what we're doing when we say that something is objectively morally wrong. Now, if you realists agree, great! (Though you'll have to stop treating the objectivity of morality as in tension with a broadly naturalist worldview.) On the other hand, if you think my account leaves something out, the ball is in your court to explain what talk about objective morality amounts to in a way that can't be made sense of in the way I suggest. And historically, your track record isn't great (i.e., you tend to just resort to synonyms)..."

Paul S's avatar

The drawback of the quasi-realist view is that it takes the realist view as the default, that which sets the parameters of debate...and casts the subjectivist in the role of showing they are able to match up to parameters set by people who, if the subjectivist is right, are not entitled to be setting the parameters. In the final instance, it is not clear why things should be this way round.

Having said that, I take what is essentially a quasi-realist view in my last book, but about basic equality, rather than morality as a whole:

https://press.princeton.edu/books/hardcover/9780691255347/basic-equality?srsltid=AfmBOoqebfPrQo4xpPVCeXrQ-IBiwszyzFJMGmti8WgBpCe--3L2o4z-

Charles E's avatar

My hesitation with the quasi-realist project is that it seems to breach discursive norms. When a quasi-realist says that morality is objective, or mind-independent, or beyond our sentiments, they mean it in a stipulated sense. Using conventional terms in unconventional ways seems distasteful. They could talk about quasi-objectivity, quasi-mind-independence, etc, but that might take away some of the force of the position.

I'd like something less critical than the error theoretic "all moral claims are false," and more critical than the quasi-realist "we should talk exactly as the realist does."

Daniel Greco's avatar

I don't think Blackburn would concede that. That is, I don't think he'd say: "there's the way everybody already uses these terms, and then there's this novel way I'm stipulating we should use them instead." (Though that is, roughly, what Mackie does.)

Rather, I think Blackburn would say: "I think this is the best account of how we *already* use the terms. It's the realist who has the unrealistically inflated picture of the kinds of metaphysical commitments they involve."

Charles E's avatar

I guess I'm confused on that point. Some of Blackburn's work is highly technical. If it's meant as the best account of everyday practice, is the claim that everyone is implicitly committed to the naturalistic and Humean picture that Blackburn paints? *I* like naturalistic and Humean analyses, but many are motivated to argue with me over these things. Or maybe their self-conception doesn't matter, and the project is more: *if* we can explain the realist appearances of first-order discourse, then that shows that first-order moral practice need not be committed to non-naturalist metaphysics?

Daniel Greco's avatar

I can't remember how much of this he actually says, and how much is the vague recollection I have, but I suspect he'd like the picture that the "man on the street" does not have super determinate meta-ethical inclinations, and is much more committed to saying it's "really wrong" to kick puppies, or torture people, or what have you, than he is to any particular meta-ethical account of what sort of speech act that is.

So it's not exactly that they're *already* committed to the Humean picture, but they're also not committed to rejecting it.

Wild Pacific's avatar

Emotions as described are indeed mechanisms that support growing their host towards the outcome. Pleasure/suffering is the basis of decision-making, and intellectually arriving at a conclusion that matches an emotion is akin to mapping one's path on a map before/after arriving at the desired location: a projection of thoughts in a structured way.

Yes, emotions are necessary components of morals. The finer emotion of satisfying and coherent thought is too, as a roundabout verification of the emotional impetus.

Lionel is right to challenge "absolute" moral structures, in the strict definition of "absolute". Morals are vague and changing. But at any given time, morals (grown from emotional righteousness) are the trunk from which complex decision-making grows, and it is absolutely directional.

We can deduce the direction too: constructive (at the macro level) development of the complexity of life. This fractal retreats sometimes but overall grows in this direction endlessly. The desire to think about things in general is part of this core pre-human, pre-life value.

Jonathan Tweet's avatar

Yes, an evolutionary perspective seems to clarify the question. It’s true that “blue” exists, but only in brains adapted to concocting that perception and not out there in the external world of light waves. Yes, “right & wrong” exist, but again only in brains selected to concoct that perception.

Or else when in our evolutionary history did objective morality appear, and how?

Jess Merrick's avatar

I think Sam Harris's position addresses this.

Jonathan Tweet's avatar

Thanks. I've read Moral Landscape, although it's been 15 years. How does Harris address when in our evolutionary past objective morality developed? If I recall, his main argument is that the academics who studiously avoid talking about objective morality end up saying ridiculous things, like it's OK to blind children if your culture says it's OK.

Lance S. Bush's avatar

You say "One specific aspect of morality is that we tend to experience moral norms as objective and externally imposed, rather than as mere personal preferences."

I’m broadly sympathetic to your aims but I don’t think this is true, and don’t think Stanford demonstrated that it was true (I’m familiar with the article in question). Stanford relied primarily on Goodwin and Darley’s (2008; 2012) studies, which have significant methodological shortcomings. David Moss and I critique them here:

Bush, L. S., & Moss, D. (2020). Misunderstanding metaethics: Difficulties measuring folk objectivism and relativism. Diametros 17(64): 6-21.

I also include an extensive critique of Stanford’s other sources, such as Nichols and Folds-Bennett (2003) in my dissertation. Others, including Beebe, Davis, Pölzler, and Wright have identified methodological shortcomings with earlier studies as well.

The best available empirical evidence does not support the conclusion that most people tend to experience morality as objective or externally imposed. As the methods used to assess how nonphilosophers think about metaethics have improved, we find a shift towards increasingly larger proportions favoring antirealist response options and those responses have become more consistent. You can see this in some of Pölzler and Wright’s work (which I see was mentioned).

I don’t think this means most people are antirealists. Rather, I believe variation across studies reveals deep and pervasive methodological shortcomings with the methods used to assess how nonphilosophers think about moral realism/antirealism, and that at present we don’t have valid measures. My dissertation consisted in part of a comprehensive critique of the methods used in these studies and I argue, and gather a wealth of empirical evidence, that there are no valid metaethics paradigms. I then argue that the best explanation for this is that in general people are neither realists nor antirealists, but rather have no determinate metaethical positions.

At best, we just don’t have strong evidence that people experience morality in any particular way. However, I think there are a number of studies that hint at a disposition towards objectivism/realism not being universal. People struggle to understand metaethical distinctions and they may simply not be necessary for facilitating the sociofunctional role ordinary moral discourse plays in our societies. At the same time, there is some evidence that cultures conceptualize morality differently from WEIRD populations and that some populations may have no cognate terms for morality at all, and instead have quite different conceptions of normativity. Efforts to offer an adequate conceptual distinction between moral and nonmoral norms have also proven largely unsuccessful. As such, it's unclear there is any universal, shared capacity for distinctively moral cognition, much less one that is a product of natural selection, that would even be a reasonable candidate for a universal tendency towards moral realism.

Instead, I and some others suspect the very notion of morality is culturally constructed and historically contingent, and that it is not the case that a capacity for distinctive moral cognition is a product of natural selection and is universally shared by all human populations. In other words, I don't just deny that most people are moral realists; I'm skeptical that historical populations consistently even thought in distinctively moral terms at all.

Lionel Page's avatar

Hi Lance, that’s a very good point. I remember seeing psychological studies with ambiguous results. We may be overly influenced by our cultural background featuring moralising gods. I plan to talk about this topic again. Your references and pointers are very helpful.

Lance S. Bush's avatar

Great. Fwiw I don't think this threatens your general points. This is just my area of research so I'm always vigilant to comment on it.

Lionel Page's avatar

Totally fair. I am a stickler for precision, and my assessment might have over-relied on Stanford.

Liam Riley's avatar

I agree a cultural analysis blows the assumption of a common belief in objective morality apart. Would be interesting to see how such ideas are treated in non-Western populations (and if the concept even translates at all)

Jamie Freestone's avatar

This is a structural pattern in philosophy: find a thing you hope is real (morality, teleology, intentionality, free will) & dedicate your considerable research & argumentative skill to defending it ingeniously. It’s completely natural & understandable, but I think it’s a huge exercise in Santa Claus fallacy-ing. & people are open about it. They often disclose somewhere in their writing that they believe in the importance of some concept — as you note with Huemer et al. It’s telling that no one goes the other way: hoping that something isn’t real but reluctantly accepting that the evidence says there is free will, morality, etc.

Lionel Page's avatar

I think it is actually the path taken by many naturalists who initially shared beliefs in God, moral truths and free will, and then progressively accepted that they had to drop these beliefs if they wanted to stick to the evidence they had.

Adam Reith's avatar

Michael Tomasello is one of a small handful of psychologists who has done extensive research with both chimpanzees and human toddlers. He has written a number of excellent books describing the cognitive and behavioral differences between us and chimpanzees and the likely evolutionary pressures that produced these. One such book is

A Natural History of Human Morality

A chapter in this book is devoted to the human impulse to see social norms as objectively true. We can think of this as a stratagem that makes social conformity easier. He uses the example of some young children inventing a game. When they later teach this game to a newcomer they will say "This is how the game is played" and not "These are the arbitrary rules we just made up for a new game". Moral "games" work similarly.

Lionel Page's avatar

Thanks Adam. Indeed, I will discuss this chapter in a later post.

Ichabod Fox's avatar

I've never really understood the fuss about this. Moral values are obviously subjective in the sense that they assume an evaluator. As Ayn Rand said, "Value presupposes an answer to the question: of value to whom and for what?"

That said, if you subjectively value happiness (as conscious beings seem to), you can reason objectively about how to achieve it. To use an analogy, if you subjectively value health, modern medicine will (objectively) help you get there.

Richard Fulmer's avatar

We’re players in an ongoing, multi-generational coordination problem. Morality isn’t arbitrary; it’s the evolved and refined solution to that problem - constrained by facts about human vulnerability, our capacity for suffering and flourishing, and our interdependence. It’s not waiting “out there” to be discovered. It’s something we build, but we’re not building on nothing. The authority of moral norms comes not from their independent existence, but from their necessity for the kind of lives we’re trying to live together.

Nicolas Procel's avatar

Excellently put into words. I'll be adopting the Santa Claus fallacy.

Adam Reith's avatar

In a footnote, you wrote: "As I’ll discuss in a later post, our cognition has likely been shaped by evolution to find the idea of objective moral truths compelling."

Michael Tomasello makes this exact point in his terrific book A Natural History of Human Morality, in which he discusses the differing moral behaviors of chimpanzees and humans and the likely evolutionary pressures that produced these differences.

We have evolved to instinctively adopt the social norms of our group. Falsely viewing these norms as objectively true makes such adaptation easier; it's the "spoonful of sugar that makes the medicine go down."

When a group of children, having invented a new game, teach it to a newcomer, they invariably say "These are the rules of the game" and not "These are the arbitrary rules we chose for a game we just made up." Morality is similar.

Lionel Page's avatar

Yes, Tomasello's view is very compatible with what I present. I was not aware he was talking about our perception of moral rules as objective. Very useful, I'll look into it for my post on this topic.

Adam Reith's avatar

Chapter 4 of A Natural History of Morality is entitled “Objective” Morality and is largely devoted to the processes by which mere norms become “objectified.” My game anecdote comes from that chapter.

Shane O'Mara's avatar

Super piece 👌 I learned a lot from it, and am looking forward to reading the next one.

THE POSTLIBERAL CYBORG's avatar

I agree. Please read this: https://reflexionesmarginales.com/blog/category/numero78/page/2/

Any AI can translate it, but the second part is enough.

Torches Together's avatar

Nice piece, I especially like the appeal to consequences section!

Oddly, I was arguing about moral realism on another post (https://ibrahimdagher.substack.com/p/there-are-no-arguments-for-moral/comment/181836502) , and this post came up on my feed!

A few questions:

1) As someone who rejects moral realism, do you think that the statement "There exist stance-independent moral facts" is false, or incoherent? I lean towards the latter, but that's just because I don't understand what they would look like.

2) My "lay person" understanding is that we can (and perhaps should) define morality in such a way that we assume moral truths (just as we have facts in models of economics, law, sports etc.), but we shouldn't pretend that this is metaphysically privileged in any way. Apparently these are called "institutional facts" (Searle) - is this relevant to the discussion, do you think?

3) I'm not sure I'm with you on the Sam Harris "skyhook" claim. He's simply making the conditional statement: "If there is value in the world, it must be tied to consciousness"; he's not (in this line at least) rejecting the absence of value (nihilism). I feel I agree with all your other points, but agree with Sam here. I think that I define "moral value" in such a way that it can only be measured with regards to conscious experience.

Lionel Page's avatar

Thanks, here are quick answers:

1. On my view, “There exist stance-independent moral facts” is intelligible but false. I understand what is being claimed – facts about what there is reason to do that don’t depend on any creature’s attitudes or practices – I just think we have no good reason to believe such facts exist.

2. If we read Searle’s “institutional facts” as facts inside a practice (“this counts as offside,” “this counts as a breach of contract”), then yes: moral “facts” are facts within moral codes that have evolved to facilitate cooperation and stabilise social equilibria. They’re not metaphysically privileged, just rules of the game for creatures like us. I'll develop this view in the next post.

3. On Harris: I’m happy with the conditional “if there is value, it must be tied to consciousness.” Where I see the skyhook is in the next step. When he says that value can’t be about non-consciousness so it must be about consciousness, he smuggles in a “value exists” that was not present in his initial conditional statement. He assumes that if value is not in A it must be in non-A, and thereby excludes the possibility that it is in neither A nor non-A (i.e. that value does not exist).

TS10's avatar
Jan 1 (edited)

Do you feel this way about rationality? I think that certain things are rational to do or believe - e.g. it’s wrong to take the one box in a Newcomb case. Plenty of people disagree with that substantive judgment, but it’s really hard for me to believe that there isn’t a fact of the matter about whether I should take one or two boxes.

The same with rationality of belief. If p then q; p; I ought to infer q. Objectively, necessarily, even if everyone thought otherwise, etc etc etc.

Lionel Page's avatar

“Rationality” covers a few different things, and I think that matters here.

1. Logic and maths.

If you mean basic logic (and maths), I agree there is a very strong sense in which it is objective. But that is because it is tied to what it is to reason coherently about a world with stable regularities. If you want beliefs that track what happens, you cannot accept contradictions and you have to accept valid inference patterns. The “ought” here is conditional: if you want to think consistently and make reliable predictions, you should reason this way. That is why these rules are so hard to escape, and why we keep converging on them.

2. Newcomb.

On Newcomb, I’m not convinced it’s a good test of “rationality” in the first place. Binmore’s point is basically that the standard story is not a well-posed problem: it quietly combines assumptions about timing, predictability, and free choice that cannot all be true at once. If you tighten the description and make the information structure explicit, you have to relax at least one of these assumptions, and then the “paradox” largely disappears. Different answers typically reflect that people are solving different versions of the problem, because they have made different implicit modelling choices. So I would not treat disagreement over one-boxing versus two-boxing as evidence that there must be some deep, stance-independent fact of the matter about what one “ought” to do.

3. Why this doesn’t get you moral realism.

Even if you think there are objective norms of reasoning, that does not automatically give you objective moral facts. Epistemic norms can be grounded in the aim of truth and successful prediction. Decision norms can be grounded in the aim of achieving your goals given a model. Moral realism claims something stronger: that there are stance-independent reasons to act that bind you regardless of your aims, and that these reasons have a special authority. That is the step I do not see a good route to.

So yes, I’m comfortable being more realist about logic than about morality. The former is anchored by coherence and by how inquiry succeeds. The latter, to me, keeps coming back to intuitions and social functions, for which we already have very good explanations that do not require “moral laws out there”.

Alexei Kapterev's avatar

Although I agree that there are no absolute moral laws that exist independently of human nature and context, rejecting Platonic moral rules does not entail that morality lacks objectivity altogether. We can identify objective moral truths grounded in real facts about human beings. A universal prohibition on arbitrary killing follows from our vulnerability, dependence on cooperation, and need for predictable social space. All societies that didn't follow that law (presumably there were some) collapsed. From what I see, that kind of empirically grounded objectivity survives the critique that there are no metaphysical moral laws “out there”. Some of these objective laws could be temporary, true. But absolute is not the same as objective. Objective truth can be local while still remaining both objective and true.