26 Comments
Roger Sweeny

Re universality: The Illinois constitution requires laws to be of general applicability. So the legislature used to (and for all I know, still does) pass laws applicable to "all cities over 500,000 people." That universe consisted of one place: Chicago.

Maria Trepp

Yes, I definitely like morality as good cooperation much more than as following strange, unbalanced imperatives that harm the morally acting individual!

Max More

Here's a claim that I question: "behaviour is rational if it can be rationalised, in the sense that one can give reasons for it." I would rather say that behavior is reasonable if it can be rationalized. You can give reasons for why you did something. That does not mean that those reasons are actually the real ones, nor that they are sensible or effective. I would reserve "rational" for something better considered than merely being able to provide reasons.

Lionel Page

I think you can see this definition both as minimal (we would agree that rationalisability is required for rationality) and based on observables (a person's statements). It has the merit of being clear and not metaphysical. After decades of thinking about rationality, it is a definition that has gained some traction in decision theory. You are free to prefer another one, but the task of crafting a compelling alternative definition is not simple. I have a full chapter on that discussion in my book “Optimally Irrational”.

Max More

"If we assume that the players do not care for each other and will not interact in the future, the only rational strategy (the best strategy for them to get what they care about) is to defect."

Not quite. GIVEN the situation setup, then yes. But sometimes in real life you get to look ahead and shape the game. If you are already locked into the restrictive situation described in the Prisoner's Dilemma, then it seems that defecting is the rational choice. But in a life with repeated choices and the potential for future planning, players can influence the rules and the environment in which they are applied. You probably agree with this, so please forgive the nitpick.
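The one-shot logic being conceded here can be checked mechanically. A minimal sketch, using the conventional illustrative payoffs T=5, R=3, P=1, S=0 (these particular numbers are an assumption, not taken from the post):

```python
# One-shot Prisoner's Dilemma, row player's view.
# Payoffs are the conventional illustrative values T=5 > R=3 > P=1 > S=0.
PAYOFF = {
    ("C", "C"): (3, 3),  # mutual cooperation: reward R for each
    ("C", "D"): (0, 5),  # sucker's payoff S vs temptation T
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual defection: punishment P for each
}

def best_reply(opponent_move):
    """Move maximising the row player's payoff against a fixed opponent move."""
    return max(("C", "D"), key=lambda m: PAYOFF[(m, opponent_move)][0])

# Defection is a best reply whether the opponent cooperates or defects,
# so in the one-shot game (D, D) is the unique Nash equilibrium.
assert best_reply("C") == "D"
assert best_reply("D") == "D"
```

Because "D" is strictly better against either opponent move, no appeal to the opponent's behaviour can rescue cooperation inside the one-shot game itself — which is exactly why the comment's point about shaping the game beforehand matters.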

Max More

"Kant’s answer is misguided because the contractarian answer to the social question is not an answer to the individual question." An important observation. This is why it can be perfectly consistent to support a law but -- in certain specific circumstances -- to be personally justified in breaking it.

Max More

Excellent piece!

I have never been a fan of Kant. I endured having to understand and explain his metaphysical and epistemological views as (sadly) a necessary part of earning my doctorate in philosophy, but I had to really push myself through the tedium. I do agree that the most appealing part of Kant in ethics is this: "If these rules have to be accepted by all, and if we have similar bargaining power, a simple logic emerges: we cannot defend rules that give rights to ourselves and impose duties on others in a way that does not apply symmetrically. Hence, in a society of equal members, the only rules of behaviour that we can claim to be allowed to follow are those that we must accept could be followed by others."

Along similar lines, such reasoning leads to supporting "negative" rights and opposing "positive" rights. Freedom-from can be universalized, whereas freedom-to (using other people's lives and resources) cannot. Contractarian approaches to morality seem the most promising. Many years ago, I enjoyed books by David Gauthier and Jan Narveson.

Blurtings and Blatherings

Are you familiar with Kant's nebular hypothesis? One needn't subscribe to his metaphysics or admire his literary style to appreciate that the man was an inspired theoretical cosmologist.

Laura Creighton

I have a fundamental problem with basing moral principles on the notion that they should be universal imperatives, because so many problems we have are a matter of scalability. If you have a handful of people with a problem that you can fix with money, paying out the money is often your best choice. But now you have created an incentive for more people to behave in ways that produce the problem. If people respond to the incentive, then at some point you will have to say "enough is enough". That doesn't mean that your initial payout was a bad idea, or that stopping it when you did was a bad idea either. Being flexible is necessary for being moral, but that doesn't seem to fit very well with absolute moral oughts.

Squirrel House

People have been pretending to refute Kantian deontology using rational choice theory for a century. But a prerequisite to refuting an argument is rendering it correctly, which hasn't been done here, unfortunately. I have nothing against a game theoretic account of morality offered as an alternative to Kant's account, indeed, it is even a helpful comparison for bringing out what is unique about Kant's view of the moral law.

From Kant's point of view, the game theoretic cooperative rule is just a hypothetical imperative: if I want to cooperate then I must adopt a universal maxim. But as Kant argues in Groundwork chapters one and two, morality deals with categorical imperatives.

At the end of the day, what the game theorist cannot account for, by Kant's lights, is the unconditional authority of the law, a feature of its modality that Kant thinks no reflective person can honestly deny. You can reject this premise and the argument which it grounds; but to reject an argument is not to refute it.

A more compelling way to frame your argument from a game theoretic perspective than the one presented here would be to start by summarizing Kant's position on the interest we take in the moral law, which he takes to be an interest we have a priori -- tackled in Groundwork III and the Fact of Reason argument in the Analytic of the Second Critique -- and then you could attempt to explain away this pro tanto interest we take in the moral law by citing evolutionary pressures toward prosociality. This would amount to denying that the interest we take in the categorical imperative is a priori. Kant himself considers this possibility -- he says such a thought would reduce moral necessity to "physical necessity", i.e. 'psychological necessity'. But because he equates morality with unconditional obligation, he suggests that such a possibility, if true, would render morality an empty subject.

Lionel Page

Thanks a lot for this thoughtful answer. Here are some comments.

You write that “from Kant’s point of view, the game theoretic cooperative rule is just a hypothetical imperative”. I should be clear that, if by a “game theoretic cooperative rule” you mean a rule like “cooperate in order to get the benefits from cooperation and avoid the punishments of defection”, then I completely agree: that is a hypothetical imperative. My point in the post is precisely that all the “oughts” that rational choice theory gives us are of that form. There are only hypothetical imperatives; there are no categorical imperatives that are imposed on us by rationality in this thin, decision-theoretic sense.

I take the crux of your comment to be the claim that “what the game theorist cannot account for, by Kant’s lights, is the unconditional authority of the law, a feature of its modality that Kant thinks no reflective person can honestly deny.” Here it seems to me that the Kantian move is simply to assume what is at issue: to say, in effect, “come on, you cannot seriously reject unconditional moral authority”. My answer is that, actually, I can—and many arguably reflective people have done so: Hume, Nietzsche, contemporary error theorists, Binmore, and I would place myself in that camp. The idea that “no reflective person can honestly deny” unconditional obligation is not a neutral starting point, it is precisely what needs to be argued for.

I appreciate your suggestion to frame the argument more explicitly within Kant’s conceptual framework—starting from our interest in the moral law and then denying that this interest is a priori. There are two reasons I don’t do this in the post. First, I do not sacralise past philosophers. They said interesting things, but it makes sense to question their categories in the light of our modern knowledge. In particular, I follow Binmore in challenging the claim that “rationality” leads to the categorical imperative, using our modern (and laboriously constructed) understanding of rationality as coherence of choice under uncertainty and strategic interaction. If Kantians want to say that Kant is using a different, thicker notion of rationality, that is fair enough, but then the burden is on them to spell out that alternative and show why we should accept it.

Second, my Substack is not about doing arcane academic philosophy. I try to keep things as simple and easy to understand as possible, and this post might already be testing the limits of that goal. That being said, I am happy to discuss these issues here with you.

Very briefly, on the a priori/a posteriori point: I think Kant’s distinction is intuitively appealing at a superficial level, but much less helpful once we take modern cognitive science seriously. To be fair to Kant, he was working without two centuries of work on the brain and cognition. From a naturalistic perspective, organisms making decisions face informational problems: they must use available information to choose as well as they can in their environment. Some information is “a posteriori” in the everyday sense – we observe things. But what looks like “a priori” structure in our thinking is, on this view, information encoded in us by evolution, which has “learnt” from trial and error over very long periods.

So, for example, as a baby, my DNA has already encoded that I should stay close to my mother and call for attention if she moves away. There is no mysterious vacuum in which pure a priori knowledge floats; there are embodied, evolved solutions. On this picture, the sharp question “is morality a priori or a posteriori?” loses much of its bite. Like Binmore, I think the right answer is: it is both. Some of our morality is learned in our current environment (family, society, culture), but some fundamental aspects are likely deeply encoded in our cognitive architecture because they solved very general problems for social animals. The near-universality of golden-rule-type norms across cultures is suggestive in this respect: it looks more like a deeply entrenched solution than a theorem of pure reason.

So I am not denying that we feel a special kind of authority in moral demands. I am denying that this feeling is best explained as the “unconditional authority of an a priori moral law”. My broader project is to explain that felt authority in naturalistic terms, and to show why, from that standpoint, Kant’s attempt to derive a categorical imperative from “rationality” in his sense is not something we need to accept.

Marco Senatore

With all due respect, applying the economic notion of rationality to Kant’s Categorical Imperative means not having understood his moral philosophy at all.

Moreover, he proposed several versions of the Imperative, e.g. the one based on autonomy, a notion ignored in the article.

It is autonomy that connects the individual dimension of self-determination with the universal dimension of a kingdom of ends: the notion of social contract is not at all present in the Critique of Practical Reason.

Ken Binmore, beyond totally misunderstanding Kant’s moral philosophy, applying to it some concepts developed centuries later, has also been proven wrong in his own field - economics - for instance by behavioural economics.

David Potts

The key problem, I think, with the author's (and Binmore's) criticism of Kant is a failure to keep separate Kant's talk of "reason" from the concept of instrumental rationality (or rational decision making) in deriving the categorical imperative. Kant does not think, and does not say, that moral rules can be derived by asking what the principles of rational decision-making would recommend. And he *certainly* doesn't think they are derived by asking what would be (materially) "best" for you (or anyone), either individually or given what everyone else is doing.

This is what makes so much of the criticism seem like a straw man attack. Kant doesn't imagine that an individual can "will" others to behave as he expects or would like. He doesn't fail to understand that the outcome of one's actions depends on what others do also. He doesn't fail to understand that a person who behaves morally in the world as it is may well be taken advantage of. (!!)

I know we would never want to "sacralize" the great thinkers of the past (and I agree about that!). But if one's critique of a serious thinker depends on requiring him to miss (even implicitly) the most bog simple, incontestable truisms, maybe one should reconsider whether one's critique really matches its target.

Kant's appeal to reason in deriving the categorical imperative is simple. By "reason," he means the faculty of symbolic cognition (as opposed to imagination, associative memory, stimulus–response, passion, emotion, desire). He thinks it works by "laws," both rules of reasoning and universal principles. Laws are universal, so any rational rule must be universal. Thus, the categorical imperative, being a rule of reason, must be universal. He thinks the constraint of universality, weak as it may seem, is nonetheless strong enough to disqualify any immoral behavior. His argument for this is basically his derivation from the categorical imperative of his four famous example "duties" (ban on suicide, ban on promise-breaking, duty to develop one's talents, duty to help those in need). In each of these examples, he tries to show that to violate the duty would necessarily involve one in a *contradiction*. Not an unfortunate or unwanted outcome, but a logical error. That is the sense in which the moral law is a requirement of reason.

Does Kant's system work? I don't think so (though as castles in the air go, it is a beautiful one). But that doesn't mean we can say whatever we like about it or that Kant's whole problem was just that modern game theory had yet to be invented.

The post states that we can see Kant's project as attempting to provide a rearguard, secular basis for Judeo-Christian moral absolutism. And there's an implication, I think, that it's weird to regard moral rules as unconditional, as though only someone steeped in the Judeo-Christian tradition would think that way. But in fact, the treatment of moral rules as unconditional is completely normal. To think of them any other way is what's unusual. There's now a large social science literature about this. When issues become "moralized," they cease to be treated as matters of taste or matters of cost and benefit, and become "absolute" matters of right and wrong for which considerations of cost are inappropriate or even despicable. Think of famous cases like the Ford Pinto gas tank and disposable cigarette lighters, where companies making cost–benefit calculations came under fire—and fierce moral condemnation—for just that reason (i.e., making cost–benefit calculations). Again, all talk of "rights," as in a right to gun ownership or a right to health care, has the effect of banishing cost considerations from the discussion. (That's just *why* people like to frame their desires in terms of "rights.") Of course, this is impractical, and when faced with practical realities, people become conflicted and evasive in interesting ways—and there's a psychological literature on this, too (e.g., Tetlock)!

I have an idea that the function of moral rules is precisely to foreclose any further discussion about what to do, including rational decision-making, on a given question. There are several reasons this might be useful. One would be that we aren't very good at calculating, and often a rule of thumb is more accurate on average (not to mention being quicker and easier). "Honesty is the best policy" comes to mind as an example. But more fundamentally, I think there are some problems that individual rational decision-making calculations can't solve.

Consider a diet plan, for example. You are determined to lose 30 pounds by the end of one year. You decide to cut out all desserts. But on your birthday? Well, you know perfectly well that eating one dessert will not make the difference in whether you meet your weight loss goal. But what applies on one "special" occasion goes also for the next "special" occasion. And so what counts as "special" becomes ever more lax. The fact is that, although you may say you have an absolute rule forbidding dessert, the truth is that it's just a rule you made up, and it's not really absolute. One exception more or less won't determine the outcome. But that is a slippery slope that, once you've started down, reason has no way of stopping. I know that the conventional explanation of this is hyperbolic discounting, but what I'm saying is that I disagree with that explanation. I don't think it fits the subjective facts. Yes, we discount, but not that much. We know the deadline is coming, we haven't forgotten, and we do care about meeting our goal. More important, I suspect, is the fact that it *is rational* to have dessert on any one occasion. It will feel great, and it won't prevent meeting your goal. It breaks the rule, but you know the "rule" is just something you made up. You can break the rule if it's in your best interest to do so. And—on this occasion—it is! The trouble, of course, is that this will also be true on the next occasion. And on the next. Etc. Reason, it turns out, is unable to defend any rule that depends on an arbitrary cut off. (This shouldn't surprise; "arbitrary" is an antonym of "rational.") The only solution is to treat the rule as *absolute*.

Long term goals, like weight loss or writing deadlines, aren't the only sort of problem rational decision-making can't solve. Cooperative dilemmas are another. Public goods are another. Of course, depending on the particulars of a case, cooperation can certainly be individually rational. But as a general matter, it seems to me that the last several decades of research into this matter, in decision theory, game theory, evolutionary biology, economics, political science, and so forth, have settled on the conclusion that large-scale cooperation among anonymous non-kin individuals cannot be explained by the rational pursuit of individual self-interest. Evidence for this is that large-scale cooperation among anonymous non-kin individuals does not *exist* in the animal kingdom in any but the human species. This is ultimately because gene selection in evolution is a matter of individual gene "self-interest."

David Potts

[PART 2]

Of course, the human race has overcome the problem of large-scale cooperation. How? In large part, by inventing morality. Moral rules are just the rules you are supposed to obey even when your calculated self-interest would have you break them, even when no one is strong enough to punish you for breaking them, even when no one is looking, even when the probability of getting caught is nugatory. The whole point of morality is that it consists of rules that are supposed to be beyond the reach of cost–benefit calculations. A person who says, "I could have killed/defrauded/stolen from you to get X, but I didn't because I thought the risk of getting caught was too great," is not a moral person! Indeed, this is someone who, if you thought they were serious, you would seek to distance yourself from as much as possible in the future. These aren't rules people are supposed to follow "because it is best for them given what others are doing" or "in order to avoid social sanctions."

By and large, people treat moral rules in a Kantian way, not in Binmore's way: You are supposed to follow moral rules because it is the right thing to do, and decent people do the right thing. That is, people in general suppose this, not just Judeo-Christians. I think this is a prerequisite of any well-functioning, large-scale society.

Of course, it is still true that people stand to gain materially by violating moral rules when they can get away with it. It is even true that the more "moral" a society becomes—the more full of trusting and trustworthy, kindly and agreeable people—the greater the opportunities that arise for successful rule-breaking. Why then don't the temptations to malefaction undermine people's adherence to the moral rules? The answer is that they do, but various mechanisms have evolved in the human race to counteract the temptations and stabilize adherence to the rules within a given society. The one alluded to most often by the author, I think, is punishment. People can negate the rewards of rule-breaking by identifying rule-breakers and punishing them when caught. This raises a well-known problem of second-order free riding: since administering punishment is costly, why won't people shirk and let others do it? The answer is supposed to be second-order punishment of non-punishers. The prospect of an infinite regress is supposed to be blocked by ever-diminishing costs of punishment at higher levels: in a society where most people follow the rules, there will be few rule-breakers, thus requiring few punishers; since few punishers are needed, there will be even fewer shirking non-punishers, thus requiring even fewer second-order punishers of first-order shirkers, and so on. Perhaps this works, but I am skeptical. Just because the total social cost of punishment declines at each level, that doesn't mean the personal cost for individual punishers is any less. In fact, for individual punishers, the cost and incentive to shirk must seem high. Also, looking around at the amount of punishment that is actually ever administered in our society versus the amount of wrongdoing, it is hard to believe that the cost inflicted even comes close to counterbalancing the expected benefits of rule-breaking.

More significant, I suspect, are the various elements of "norm-psychology" that have evolved in our species. We are strongly disposed to crave the good opinion of others and live in terror of their disapproval, not to mention their condemnation. Indeed, I think the threat of social disapproval looms larger in the minds of most people than most material punishments. Speaking of punishment, it is not merely a rational strategy for counter-balancing rule-breaking, but a positive disposition. People by nature keep a sharp eye peeled for rule-breaking and take positive glee in righteous punishment. I think this is what actually overcomes the free rider problem for punishment. Again, we enter every new social environment primed with the expectation that it is governed by an array of norms; keen to discover those norms and highly skilled at doing so; and prone to adopt and then internalize the norms, once found. Once the norms are internalized, we feel guilt when we do not live up to them and shame when our deviations are exposed. These are highly negative feelings. Again, from infancy, as early as three months of age—about as early as it is feasible to conduct experiments—people prefer those who are friendly and helpful to those who are aggressive and mean, choose to associate with the former and avoid the latter, and so forth. These are all innate, genetically based dispositions.

In short, we have evolved to be good little rule-followers. How? The answer is a long story of culture–gene coevolution. I think the rule-following part of the story is largely secondary to the story of how we became a cultural species; that is, a species whose master trait is the accumulation of cultural adaptations. Once this is in place or on its way, the emergence of interpersonal norms as one type of cultural norms seems more-or-less inevitable. The various traits of norm-psychology become selected for once interpersonal norms become sufficiently common.

The point is that the existence of norm-psychology means that the Nash equilibria that form the linchpin of Binmore's account of morality are misleadingly described as everybody doing what is "best for themselves" given what everybody else is doing. This makes it seem as if the equilibria of moral rules are like other equilibria, in which people maximize their material interests and it would be fair to say that people are pursuing their self-interest. This is misleading because, in the case of moral equilibria, what is best for themselves turns out to consist not in material goods so much as in avoiding social disapproval and feelings of guilt and shame. That is, following moral rules turns out to be a matter of self-interest—to the extent it is—only because human nature saddles us with a set of psychological impulses that is unique in the natural world and moreover that is specifically designed by evolution to induce us to conform to moral rules! This is reminiscent of defenses of egoism that claim that when an individual sacrifices himself for others, it's really egoistic because he wanted to do it, where "wanted" means no more than that he was psychologically motivated. By this definition, it is a tautology that everyone is an egoist. It seems that Binmore is likewise in danger of thus "proving" that all moral rules are Nash equilibria, even among "homo economicus" individuals.

In fact, following moral rules is not necessarily in our individual self-interest, as that is normally understood, and it is important to acknowledge this. For instance, if you could take a pill that would enable you to perform like Glaucon's perfectly unjust man in Republic, Book II, free from the encumbrances of our norm-psychology, it would be in your interest to take it. One of Kant's most noteworthy innovations in the history of moral philosophy was to separate morality from the pursuit of happiness. All previous moral systems in Western philosophy (I'm not well read in other traditions), from Socrates onward, insisted on an alignment between the moral life and self-interest. Kant is the first to say that actually, obedience to moral duty is sometimes hard and not at all in one's self-interest. According to Kant, the function of morality is not to make us happy, but to make us *worthy* of happiness. I think there is truth in this, and a proper social theory should take account of it.

I think there is actually very little in the above that the author would disagree with. Indeed, most of it he has written himself, in this post and earlier ones. And I have enjoyed reading his posts over the last year. Our differences are mostly a matter of emphasis. Still, that doesn't mean they are unimportant.

Lionel Page

Hi David, thanks for these excellent comments. As you note at the end, there is very little I actually disagree with. This series of posts will flesh out a framework that makes sense of many of the ideas you mention. The difference is more a matter of angle: I’m trying to give a different, and I hope unifying, perspective on the points you raise. Here are a few quick reactions.

You say that Kant’s reason is not instrumental rationality. I agree. My point is not to start by saying “instrumental rationality” is right by definition and Kant is wrong for not using it. Instead, it is to question his claim that the rational way to behave is the one he suggests. The Prisoner’s Dilemma is important because we do have the intuition that cooperating is the right or rational thing to do. But I (like Binmore) argue that this intuition is misguided, as it leads to bad outcomes for us and even for others. Once we see this, we are entitled to ask in what sense this is a reasonable way to behave. Kant’s view of human reason can legitimately be questioned if it leads to such problematic conclusions.

I mostly agree with what you say about norm psychology, about how people often experience morality in a Kantian way, and about the long story of culture–gene evolution. Indeed, this is the core of Binmore’s view. There is no contradiction here with self-interest, because norms are social equilibria and cooperation (behaving in a seemly manner in society) is individually rational given those equilibria. I would add that the problem of an infinite regress of punishment is often overblown, because it assumes a specific type of interaction (e.g. repeated Prisoner’s Dilemmas) while models of partner choice offer ways of “punishing” at low cost (simply not extending cooperation), and the possibility of social communication (gossip) allows for coordinated punishment that basic models ignore (because it is hard to formalise).
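The point that withdrawing cooperation is a cheap form of "punishment" can be illustrated with the standard repeated-game algebra: with a continuation probability (or discount factor) δ, the mere prospect of losing future cooperation deters defection once δ is high enough. A minimal sketch, using illustrative payoffs T=5, R=3, P=1 (hypothetical numbers, not from the discussion):

```python
# Grim-trigger check in the infinitely repeated Prisoner's Dilemma.
# Illustrative payoffs: temptation T, reward R, punishment P (T > R > P).
T, R, P = 5.0, 3.0, 1.0

def cooperation_sustainable(delta):
    """Compare cooperating forever with a one-shot deviation followed by
    permanent withdrawal of cooperation by the other player."""
    cooperate_value = R / (1 - delta)            # R in every round
    deviate_value = T + delta * P / (1 - delta)  # T once, then P forever
    return cooperate_value >= deviate_value

# Rearranging gives the familiar threshold delta >= (T - R) / (T - P),
# which equals 0.5 with these payoffs.
threshold = (T - R) / (T - P)
assert not cooperation_sustainable(0.4)
assert cooperation_sustainable(0.6)
```

The "punisher" here pays nothing beyond forgoing a relationship that has already soured, which is why partner-choice models escape the second-order free-rider problem that afflicts costly-sanction models.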

Notwithstanding our potential differences about Kant’s notion of rationality, I think the framework I am going to present in this series offers a coherent perspective that can make sense of many of the ideas you put forward.

David Potts

Thanks for your reply! It was very clarifying.

In regard to Kant's argument for the categorical imperative, the problem I'm trying to point out is not that you say Kant is wrong for not employing instrumental rationality in his argument, but that you present his argument as though he *were* employing instrumental rationality. Phrases like, "Kant tells us it is rational to follow a rule that would work well if everybody else were following it" and "Kant's test tempts us to choose the 'best' action by imagining a world in which everyone complies" betray this. "Work well" and "best" here can only be understood in terms of payoffs. The whole discussion—of *Kant's* theory—is pitched in terms of things being "unpleasant," "working well," "better off," "good," "great," "a better world," and so on. But Kant simply does not say we arrive at the categorical imperative by asking what rules would produce a better world.

However, I would not deny that you can seriously dent Kant's theory if you can show that following the categorical imperative would have disastrous consequences. But that seems doubtful. The categorical imperative does not say we have to cooperate in prisoner's dilemma situations, or even that we should. In the gangster exchange at the railway station example you present, the gangsters have exchanged promises. So, yes, the CI says you have to do what you promised. But the gangsters were under no Kantian moral obligation to make those promises, and I imagine Kant would say they were crazy to do it, given the circumstances. In a world where people commonly broke their promises, disrespected property rights—or acknowledged none to begin with—and so on, the CI would not oblige you to be a patsy (though it would oblige you not to initiate aggression on your own, or deceit, thievery, etc.). Indeed, I don't see anything inconsistent with the CI if a person were to say they refuse to engage in any time-staggered cooperative exchanges with anybody at any time ever, unless they were somehow in a strong position to enforce compliance. Ditto for never trusting what anyone says. This would be a miserable, dysfunctional world indeed, but I don't think it would violate the CI.

This shouldn't be surprising. All the CI says, in essence, is that you must strictly adhere to a set of consistent principles (i.e., the principles must be a logically coherent set). How can that entail that you cooperate in prisoner's dilemmas (not to mention being a trusting fool and making unwise bargains)?

Of course, the subtext of Kant's program is to provide a non-consequentialist reason for cooperative behavior, including in prisoner's dilemma and public good type situations. I do not deny this. There's a rich irony here: saying payoffs don't matter—only duty matters—in order to reap the sweet payoffs of cooperation! He does this precisely because he is well aware of the way the pursuit of self-interest undercuts cooperation. (Formal game theory and mathematical modeling of evolution have sharpened and deepened our understanding of this problem, without doubt, but the basic structure of the problem has been well known and much discussed since antiquity, as I'm sure you know.) So, rather than view him as an enemy, in this light you could view him as an ally! He is providing a rationale, in the form of moral duty, for people to keep their promises in prisoner's dilemma situations. This can be added to guilt, shame, punishment, and the other mechanisms that induce people to cooperate.

You could say his philosophy is ultimately useless if his arguments don't really work and his philosophy isn't true—that's what I would say—but still, his moral philosophy mandates honesty, promise-keeping, and respect for the rights of others, which furthers the cause of making a cooperative world. Of course, his meta-ethics is another matter.

In regard to punishment, your remark brings to mind something I recently read in Robert Boyd's book, A Different Kind of Animal. As you probably know, he is a big proponent of punishment regimes as a means of stabilizing equilibria. In this book, it's the main mechanism; others are mentioned only in passing. He defends this by saying that at scale, withdrawing cooperation, gossip, and the like, such as you mention, are ineffective. The reasons are that mistaken perceptions of defection are likely to be common, leading to cascades of defection, and that merely withdrawing reciprocity won't induce people to cooperate in a world that isn't already highly cooperative. Targeted sanctions are much more effective (he says), for several reasons: (1) errors and uncertainty won't lead to a cascade of defections, because even though errors will still happen, the supposed defector won't be thought to have gotten away with it; (2) a minority of punishers can motivate a large number of people to cooperate, so cooperation can get going with a relatively small number of enforcers (instead of requiring a large number of reciprocators to withdraw reciprocation); (3) since sanctions are a deterrent, and deterrence means rarely having to actually sanction, sanctions are lower cost than withdrawing cooperation; (4) the cost to a defector of withdrawal of cooperation by a single actor is small relative to the cost a single actor can impose by punishment. Food for thought. The passage is pp. 83-87.
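Boyd's second reason can be made concrete with a tiny back-of-the-envelope comparison. The numbers below are my own toy values, not Boyd's, and the `defection_profitable` function is just an illustrative sketch: defection yields a fixed gain, but each punisher a defector happens to meet imposes a fine.

```python
# Toy sketch (my own numbers, not Boyd's model) of the claim that a
# minority of punishers can make defection unprofitable for everyone.
def defection_profitable(frac_punishers, gain=2.0, fine=5.0):
    """Return True if a single act of defection has positive expected payoff.

    The defector pockets `gain`, but with probability `frac_punishers`
    the encounter involves a punisher who imposes `fine`.
    """
    return gain - frac_punishers * fine > 0

# With few punishers, cheating pays; past a modest threshold, it does not.
assert defection_profitable(0.1)        # 2.0 - 0.5 > 0
assert not defection_profitable(0.5)    # 2.0 - 2.5 < 0
```

By contrast, a single reciprocator withdrawing cooperation only costs the defector that one relationship, which is why, on Boyd's account, sanctions scale better than withdrawal.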

Micah Sadoy's avatar

As you yourself seem to recognize, the notion of rationality that you are working with is not the same as the conception of rationality deployed by Kant. So, your argument could be valid, but it is unsound as a refutation of Kant’s position. (One might show that morality does not follow from I-rationality while failing to show that morality does not follow from K-rationality.)

-

The properly REFLECTIVE nature of human rationality (implying, as discussed by Velleman, something like a demand or interest in “self-understanding”) is something that a purely instrumental conception of rationality fundamentally fails to capture. There is something like an “autonomy” constraint on the rational mind such that it must, in some manner, understand and endorse the principles under which it thinks and acts. Taking some hypothetical imperative as supreme would lead to heteronomy (i.e. a subjection and pacification of the mind to an alien power) insofar as the antecedent goal, and why we should have the antecedent goal, would present itself as brute and mysterious. Any such hypothetical imperative opens itself to something like an OQA objection: “Why should the desire or goal be pursued or have authority?” A categorical imperative – including some clause to “think for oneself” – says: insofar as you are rational, insofar as you are human, insofar as you can ask yourself any further “Why”, you do and must command yourself. That is: you must be a law unto yourself. This is a command of Reason itself, a command of your own Humanity/Personality. Reflective self-understanding is simply a demand of one’s own rationality, and it is non-contingent and non-hypothetical in the sense that rational nature demands this of itself.

Lionel Page's avatar

Hi Micah, I’m not trying to show that, given Kant's own definition of rationality, his deduction misfires at some technical step. I am questioning whether his way of building rationality into morality is itself a good account of what it is rational to do.

The Prisoner’s Dilemma matters here because many of us have the instinct that “the rational thing to do” is to cooperate. But as Binmore and others have argued, that instinct leads to very bad outcomes for you, and often for others as well, when the incentives are what they are. Once you see that, you are entitled to ask: in what sense is this really a rational way to behave? That is where I think Kant’s take on “reason” can and should be questioned – not because it fails to match some arbitrary thin definition, but because it gives prescriptions that do not hold up under careful scrutiny of their consequences in well-understood situations.
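For readers who like to see the incentive structure spelled out, the point can be checked in a few lines. The payoff numbers below are illustrative (the standard textbook values), not taken from any particular source:

```python
# One-shot Prisoner's Dilemma with illustrative payoffs.
# Each entry maps (row action, column action) to (row payoff, column payoff).
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation
    ("C", "D"): (0, 5),  # lone cooperator is exploited
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual defection
}

def best_response(opponent_action):
    """Action maximising the row player's payoff against a fixed opponent action."""
    return max(["C", "D"], key=lambda a: PAYOFFS[(a, opponent_action)][0])

# Defection pays more whatever the other player does: it is strictly dominant.
assert best_response("C") == "D"
assert best_response("D") == "D"
```

This is exactly the tension: mutual cooperation yields (3, 3), which beats mutual defection's (1, 1), yet neither player's individual incentives point towards cooperating.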

Micah Sadoy's avatar

I am not terribly interested in defending Kant’s first-order normative views (he has some obviously problematic views around sex, gender, and race), but I do think it is just a misreading of Kant’s ethics to think he would recommend unilateral disarmament in a Mexican standoff.

KANT WAS PERFECTLY FINE WITH SELF-DEFENSE. You are not required to sing Kumbaya if someone points a gun in your face.

Kant: “if a certain use of freedom is itself a hindrance to freedom in accordance with universal laws (i.e., wrong), coercion that is opposed to this (as a hindering of a hindrance to freedom) is consistent with freedom in accordance with universal laws, that is, it is right.”

~

The categorical imperative does not say to avoid actions that would result in disaster if everyone did the same. Universal celibacy would lead to the destruction of humanity, and Kant himself was probably celibate. Still, he would have regarded universal celibacy as bad insofar as it rooted out the source of value in the world.

Nor does the categorical imperative say to do what would lead to the best consequences if everyone did the same. It does not suggest unilateral disarmament or voting for Nader.

The principle “Act only in accordance with that maxim through which you can at the same time will that it become a universal law (of nature)” is consistent with taking into consideration whatever game-theoretic situation one wishes to postulate. Just as one can will lethal self-defense as universal law, so one can will something like “IF you find yourself in a Mexican standoff, maintain your posture (while also trying to get out of the Mexican standoff)” as universal law. Indeed, Kant would say that allowing yourself to be murdered would be inconsistent with a respect for the humanity in your person. And, if in some situation describable by game theory a given course of action is expected to result in a violation of one’s humanity, you have a responsibility to defend the humanity in your person.

Blurtings and Blatherings's avatar

Moral realism, I suspect, will always tend to win out over moral nihilism in the minds of most people, not because it's true but because it's more socially adaptive.

Mark Reichert's avatar

I'm curious. Does anyone take Kant seriously any more? Will anyone defend his views? Kant's take on moral philosophy belongs in the trash heap of history, in my opinion.

On the other hand, I am not all in with Hume's "social conventions" either. Certainly there is a social aspect of morality but I believe the core of morality is in the heart of each individual.

Suppose I am a white guy living in the antebellum South. I would be morally opposed to the institution of slavery (at least I hope so, hard to say what I would feel like in a situation I have never experienced) but the power structure of that particular society sees slavery as not immoral. The power structure of any society does not necessarily reflect the morality of all individuals in that society, particularly those who are enslaved or otherwise harmed by the dominant "social convention."

But of course that is not the end of the story. Internal strife can overturn morality established as a social convention, or, probably more likely, outside influences can upset the balance of power. When the North finally decided that slavery in the South was morally wrong, or at least should not spread, a war was started and institutional slavery in the US was ended.

For more current examples, look at what is going on with Russia and the United States right now. The power structure in Russia seems to believe that using force to take over former Soviet republics is a morally acceptable thing to do. Other nations disagree, and the war in Ukraine continues on. The current power structure in the United States seems to believe treating immigrants harshly is not immoral. Other Americans not in power disagree. How each of these cases will be resolved remains uncertain, however I guarantee they will not resolve with all parties agreeing on one universal morality.

Eugine Nier's avatar

The problem with many of your examples is that your game-theoretic formulation leaves out important elements.

For example, the "lying to the murderer" example only works because the murderer expects most people to tell him the truth.

The problem with your "voting for a third party" example is that it presumes that the outcome of this specific election is the only thing that matters and neglects the effect your vote will have on the future behavior of politicians and voters.

Lev's avatar

Is this overly emphasising 'what is realistic', especially by emphasising game theory? Would you not consider morality to refer to something ideal to pursue?

Lionel Page's avatar

My point is really about the argument that morality, and what it should be, can be deduced from the rules of behaviour a rational being would need to follow in order to be rational. You can say that morality is about "something ideal to pursue", but that would not really be Kant's argument for it. You would need to define what is ideal and why we ought to pursue it, and I think you would then face the problem of defining a Good or Right way to behave ideally. I have written here against such views. https://www.optimallyirrational.com/p/there-are-no-moral-laws-out-there?r=7eiyw

User's avatar
Comment deleted
Feb 6
Lionel Page's avatar

Thanks a lot for your thoughtful comment.

I agree with you on something important: my post does not try to engage with Kant’s metaphysical scaffolding: transcendental idealism, intelligible world, transcendental freedom, and so on. Like you, I think that whole picture is ultimately misguided. It is not my goal to reconstruct that deduction step by step and show where it fails; you do that much better, and I’m happy to refer readers to your work for that internal critique.

What I do instead is look at the end result Kant wants and the label he puts on it. He says, in effect: a rational being, as such, must act on the categorical imperative; the moral law is “rationally necessary” for any rational will. My reaction is simply: wait a minute, why should we accept that claim about rationality?

Here the point is that, over the last two centuries, through a lot of hard work in decision theory, economics, game theory, psychology, etc., our thinking about “rational behaviour” has been tamed and disciplined. We now have a widely used, operational notion of rationality (coherence of preferences, consistency of choice, best response to others, etc.) which is not just a philosopher’s stipulation. It’s the notion that underpins how multiple disciplines actually analyse choice and interaction. Kantians may say “that’s not what Kant meant by rationality”, but the fact remains: we do not have a serious, worked-out alternative that plays the same role across domains.

From that standpoint, the claim that a rational agent must act on the CI simply does not follow. In cases like the Prisoner’s Dilemma or mutual deterrence, the CI tells you to act as if your individual choice could move you into the “everyone cooperates” world, even though our best understanding of rational choice under strategic interdependence says that is not how incentives work. So my critique is not: “Kant failed to solve the PD, therefore he’s wrong.” It is: given what we now mean by rationality, there is no categorical imperative that rationality itself forces on us.
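To make the contrast concrete, here is a toy sketch (my framing, with illustrative payoffs) of the two decision rules side by side: a "Kantian" test that evaluates an action as if everyone chose it, versus a best response to what the other player actually does:

```python
# One-shot Prisoner's Dilemma, illustrative payoffs:
# (row action, column action) -> (row payoff, column payoff).
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def kantian_choice():
    """Pick the action that does best when universalised (everyone plays it)."""
    return max(["C", "D"], key=lambda a: PAYOFFS[(a, a)][0])

def best_response(opponent_action):
    """Pick the action that does best against the opponent's actual action."""
    return max(["C", "D"], key=lambda a: PAYOFFS[(a, opponent_action)][0])

assert kantian_choice() == "C"            # universalisation favours cooperating
assert best_response(kantian_choice()) == "D"  # but incentives reward defecting
```

The universalisation test recommends cooperation because it only ever compares the "everyone cooperates" world with the "everyone defects" world; it cannot register that your individual choice does not move you between those worlds.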

In that sense, I see our criticisms as complementary rather than conflicting. I had a look at your careful post on his arguments. You take Kant’s own premises about spontaneity, intelligible world and transcendental freedom as seriously as possible, and argue from within that system that the deduction of the CI does not succeed. I largely set that system aside and ask a different question: if we use the notion of rationality that has emerged from our best current work on decision and interaction, does Kant’s claim that the CI is “rationally necessary” survive? My answer is no, not because I deny that his deduction is sophisticated, but because I do not grant a special status to his metaphysical starting point.