I basically agree with all of this, but I wonder about the role of empathy. Would a social contract be an equilibrium without empathy? Does empathy lead us to prefer some social contracts over others? Are people who lack empathy more likely to break the social contract?
Empathy is actually a key concept in Binmore's framework. He gives it a particular definition: being able to put oneself in the shoes of others. That is different from sympathy (caring about others' fate). Empathy is what helps people understand the perspective of others and anticipate what they would consider fair.
You fail to answer the Ring of Gyges objection successfully.
Per Binmore, only self-enforcing social contracts count as social contracts. Gyges is immune to social punishment of any kind (that's the entire point of the ring) and displays no internal guilt or regret in the scenario either, therefore we cannot say that any social contract has proved externally OR self-enforcing on him. Indeed, the point of the argument made is that social contracts are NOT 'self'-enforcing at all, that people obey them only out of fear of external enforcement.
Likewise, your version of 'ought' fails for the same reason. People routinely violate what the rest of society regards as moral constraints and often prosper nonetheless in any cost-benefit calculation. Your formulation still leaves you effectively endorsing gangs, organized crime, drug lords, and human trafficking as viable societies and moral equilibria.
I point that out not simply because they are abhorrent (you're quite correct that unpalatable outcomes are not necessarily incorrect conclusions), but because they present you with a fork you've been ignoring: at what point do you draw the line between dissidents within a society, who may be legitimately punished by that society for 'moral' violations, versus what amounts to two 'morally' distinct societies with conflicting 'moral' imperatives? When the law-abiding society says "you must cooperate with police" and the gang society says "snitches get stitches", your argument doesn't answer what the person in the prisoner's dilemma genuinely 'ought' to do except as a function of conditional risk/reward (which, assuming the gang is willing to kill defectors, would presumably lead you to the counterintuitive position that it is morally wrong for a gang member to testify against his fellow gang members).
Appeal to 'consensus' is a mirage. There isn't one. People move in and out of multiple societies and sub-cultures with different and conflicting norms all the time. The smallest unit of 'society' reduces to the individual, and if there is no external objective morality to which the individual must defer, then the individual is effectively God, entitled to define his own morality and impose it on others as far as his own will and personal power make practical. 'Practical' may be a limited subset, but you still end up with the Übermensch under your theory: the conclusion that social conventions are only truly binding on those who willingly accept them, and that anyone who simply rejects them is not subject to moral censure under them.
To return to a previous comment you made: if 'morality' is restricted to within each society because it relies on that society's particular consensus at that particular time, such that we cannot meaningfully condemn a Holocaust occurring within another society as 'wrong' or 'evil' in any sense beyond 'we wouldn't allow that here and now', any more than whether to condemn a player as 'wrong' for holding the ball in his hands varies between football and soccer, then you don't have a coherent morality. It really IS 'anything goes' so long as anyone else can be persuaded, tricked, or even forced to agree to it.

You push back by noting that such 'arbitrary' arrangements aren't stable equilibria (not necessarily true), but under your Appeal to Consensus they don't technically NEED to be long-term sustainable; you're smuggling in 'it's "Good" to continue playing the game' as a value even while denying a theory of the good. Without that hidden assumption, even a cult that has everyone take out massive loans, do drugs and orgies and whatever else until the money runs out, then commit mass suicide to avoid any downsides to their hedonistic ways, would constitute a valid society with its own internal moral code and reasonable risk/reward calculus where such behavior is 'morally right' within your framework.

You can't coherently draw your lines in terms of relevant population, place, or time, so it really does reduce to 'all things are permissible, but not all things are beneficial' at the individual level and 'it's only really wrong if you get caught or feel guilty' at the society level.
Your constructivism seems an insubstantial illusion, a mere rebranding of 'strategy' as 'morality', that breaks down whether applied to prominent philosophical hypotheticals or to the real world as it is. I'm still not seeing anything that successfully differentiates this supposed 'morality' from 'How to Grow Your Small Business' or 'The Unofficial Player Guide to World of Warcraft'. 'Real Life is the Ultimate Game' runs into the problem that if there are no ultimate rules for life, then you're left with a meta-game over who invents those rules and how; that meta-game has no rules, and all imposed rules are essentially dependent on punitive force and/or bribes. It's ultimately 'Might makes Right', even if that 'might' is often communal rather than individual.
In the Ring of Gyges scenario, a contractarian account predicts defection. But that is not a refutation of the view. It is the view’s core claim made explicit: there is no mysterious categorical force that binds an agent independently of incentives and enforcement. What keeps rules in place is the combination of external sanctions and internalised mechanisms that evolved in an environment where impunity was rarely certain.
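The incentive logic of this reply can be made concrete with a toy sketch. This is only an illustration, not Binmore's model: the payoff numbers and the `best_reply` helper are invented for the example. The point it shows is the one stated above: whether cooperation is a best reply tracks incentives and enforcement, not a categorical force.

```python
# Toy one-shot prisoner's dilemma (payoffs invented for illustration).
# Row player's payoff as a function of (my move, their move).
payoff = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def best_reply(their_move, sanction=0.0):
    """Best reply when defection carries an expected external sanction cost."""
    return max(("C", "D"),
               key=lambda m: payoff[(m, their_move)] - (sanction if m == "D" else 0))

# Without enforcement (the Ring of Gyges case), defection is the best reply
# even against a cooperator...
assert best_reply("C", sanction=0.0) == "D"
# ...but a sufficiently large expected sanction makes cooperation the best reply.
assert best_reply("C", sanction=3.0) == "C"
```

With the ring, the expected sanction drops to zero and the model predicts defection, which is exactly the concession made in the paragraph above rather than a refutation of it.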
What “ought” means in this framework. My “ought” is not meant to be unconditional. It is a conditional one: if you are participating in a practice and claiming the standing of a compliant participant, then you ought to follow the practice’s rules. The objection you are pressing is that this is not a categorical ought that binds even under perfect impunity. I agree. That is exactly where my view parts ways with moral realism. If one wants an ought that binds Gyges even when nothing can touch him, one is asking for something over and above the social mechanisms that make norms work.
You mention gangs, organised crime, and conflicting “societies”. Pointing out that some people violate moral constraints and sometimes prosper does not undermine the account. Norms are sustained by enforcement, and enforcement is imperfect. There will always be subgroups that try to carve out local equilibria with their own sanctions and rewards, including criminal organisations. Note that describing those equilibria is not endorsing them. It is describing the strategic logic of how they persist. And the fact that people move between overlapping communities with conflicting norms is not a bug in the social-contract picture. The world is a patchwork of coalitions and institutions whose rules compete, and individuals often face genuine conflicts of loyalty and risk. That being said, a society must have a broadly respected overarching social contract to be functional. Failed states are those in which different groups do not abide by shared rules of cooperation.
Your point about the Holocaust raises the question of comparing moral systems across societies. This is a whole question in itself, which I will address in the next post.
Your point about “might makes right” is an important one because bargaining, and bargaining power, play an important role in this contractarian view. But it will not be a simple version of “bullies get what they can and the weak suffer what they must” in every interaction. Once again, the Humean constructivist view I present aims at explaining morality as it works in reality. So we should not expect it to conclude that moral rules should look very different from what they are. Binmore’s conclusion is not naïve about power. Like Marx, he would agree that the moral rules of a society have typically served to justify the social order. In the end, he argues that there will be a strong egalitarian aspect in stable social contracts, but that this is a result of his analysis of the role of social bargaining in social contracts. I will develop that in future posts.
Yes, one is asking for something over and above merely imperfectly working norms; that's what people generally understand 'Morality' to BE.
"My ought is not meant to be unconditional. It is a conditional one. If you are participating in a practice and claiming the standing of a compliant participant, then you ought to follow the practice's rules." ... "Pointing out that some people violate moral constraints and sometimes prosper does not undermine the account. Norms are sustained by enforcement, and enforcement is imperfect."
Respectfully, these two statements necessarily contradict each other. The probability of detection and enforcement are unavoidably major factors in your 'conditional' ought, which means that whether one 'ought' to follow the practice's rules is determined only by the projected outcome and one's own priorities and risk tolerance. The persistence of successful criminals aptly demonstrates that, even in real life, falsely claiming legitimacy while secretly cheating is often the optimal strategy to maximize personal gain, while encouraging the societal maintenance of the alleged morals you're secretly violating merely serves to reduce your own competition while avoiding punishment. As I said, it reduces down to 'it's only wrong if you get caught or feel guilty'. Game theory abounds with examples of games of imperfect or asymmetric information in which misleading the other players into adopting a cooperative approach, from which you then defect, is the best strategy.

Conditional oughts make morality depend more on success than anything else: the rebel who wins becomes a noble patriot whereas the rebel who loses becomes a vile traitor, with the distinction emerging only after the fact rather than being determinable before choosing whether to rebel. That makes morality 'arbitrary' not merely in the sense of being a pure act of will, but even in the sense of being purely random, subject to chance even more so than choice.
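The detection-probability argument in this objection is at bottom an expected-value comparison, which a minimal sketch can make explicit (all numbers and the `expected_cheat_payoff` name are invented for illustration):

```python
# Illustrative only: expected payoff of secretly cheating versus complying,
# as a function of the probability of detection. The objection is that a
# "conditional ought" ends up tracking exactly this calculation.

def expected_cheat_payoff(p_detect, gain=10.0, penalty=20.0):
    """Keep the gain if undetected; pay the penalty if caught."""
    return (1 - p_detect) * gain - p_detect * penalty

comply_payoff = 2.0  # modest payoff from honest compliance

# Under lax enforcement, cheating beats complying in expectation...
assert expected_cheat_payoff(0.1) > comply_payoff
# ...while strong enforcement reverses the comparison.
assert expected_cheat_payoff(0.5) < comply_payoff
```

On these (made-up) numbers, the rational move flips purely with the detection probability, which is the sense in which the objection says the 'ought' collapses into risk management.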
"That being said, a society must have a broadly respected overarching social contract to be functional."
Respectfully, unless you've smuggled in some unstated conception of 'The Good Society', it does NOT. A tyrant ruling successfully over an enslaved nation may well consider his social order 'functional', on the grounds that it does indeed serve his intended aims. 'Functional' is not an objective term in isolation; much like 'optimization', it can only be used in reference to some specific intended outcome. Without a Theory of the Good or Value you don't really present any justified societal outcome against which functionality may be measured and assessed.

Even Appeal to Consensus works this way if you rely on unbounded majority rule: a society with a 60/40 split on some controversial topic would have the 60 be 'right' and the 40 'wrong' only until the 40 kill 30 of the 60; then the 40 outnumber the remaining 30, and therefore the 40 are 'right' and the surviving 30 are 'wrong'. This isn't merely abhorrent, it's incoherent. Even without genociding your way to a majority, the same problem occurs across multiple levels of scale: the conservative household in the liberal city in the conservative state in the conservative country in the liberal multinational alliance in the conservative world... Which is the correct 'morality' then, conservative or liberal?
Incidentally, this is probably more relevant to your next article, but doesn't your constructivism imply that there's no such thing as a valid moral argument against the existing social convention? If morality is nothing more or less than the rules as they exist in a given place and time, there's nothing to appeal to in order to condemn those existing rules, and no possible 'moral' reason to change them; the existing rules are tautologically good and true. That does not seem a particularly congruent account of how morality actually works in the real world.
This fancy sociology works, but I don't think it's true
fun stuff tho! thanks
The gap here is less the permissibility of terrible social arrangements than that the argument is self-undermining. If two Glauconian contractualists would naturally trust one another's adherence to (implicit or explicit) contracts LESS than two purity-testing, moralizing absolutists would, then the latter IS the pragmatic solution. If the larger "equation" only relates motives, incentives and outcomes, then contracts which are reinforced by eternal rewards or punishments, and purity-tested with Santa Claus fallacies, will win out. "If you do not also think it would be terrible to not believe in Santa, then this implies you are more likely to defect, not just from our Santa coalition, but from other makeshift coalitions as well."
If "truth" is not moralized, then conventions-by-fiat are just as permissible as high-effort, falsifiable conventions, and are likely even more sustainable "solutions" to the original problem posited under your premises. Worse still, co-opting the fruits of high-effort convention (weaponizing science) is also permissible. "Terrible if true" cannot be ruled out as a valid contribution without moralizing some arbitrarily chosen strictness of "truth value."
To use a term from one of Dan Williams' recent posts, those who moralize the truth need to be wary of "3rd person naive realism." Even if mapping motives, incentives and outcomes seems coherent and predictive, these mappings are no more strictly real than "selection pressures" or "market forces." They are conventions of cognitive compression (via metaphor) that are immensely useful but strictly false. The compressions feel comprehensive, but they regularly neglect (or explain away) System 2 function rather than setting the table for it.
I see three points in your comment. Here are quick answers.
On absolutist beliefs as a cooperation technology:
I agree that absolutist beliefs can help cooperation, and that this may be one reason they are cultural attractors. If people genuinely believe that defections are punished (even when unobserved), that belief can widen the set of stable cooperative equilibria. That said, I also think that, in Western countries, we may be overly influenced here by the Judeo-Christian tradition with its moralising God. This is the topic of a future post.
On “truth needs policing” and why that is still conventional:
I agree that truth claims need policing, but that policing is itself conventional. The rules for accepting and rejecting arguments, and what counts as evidence, are socially maintained norms. They differ across disciplines, institutions, and groups, and they evolve over time. That does not make them arbitrary: they face the pressure of making factual predictions that work. This is exactly the topic of the paper I mentioned in a previous discussion.
On incentives and motives:
I don't remember this post by Dan, but I suppose he is not saying that incentives and motives do not exist, rather that we don't have direct access to them (while often assuming we do).
To clarify (and to be sure not to misrepresent Dan), here is the post to which I was referring: https://open.substack.com/pub/conspicuouscognition/p/americas-epistemological-crisis-reprise?utm_source=share&utm_medium=android&shareImageVariant=overlay&r=3nwud0
Dan was sharing Jeffrey Friedman's insight as it relates to blue team's relationship to experts. I coopted Friedman's term/phrase "3rd person naive realism" as I think the temptation also applies when experts defer to one another.
To clarify what I mean by "arbitrary," I mean arbitrary-in-principle, even if it is non-trivial-in-practice. This actually is meant to reflect your own insights regarding what Steven Pinker called "arbitrary but coordinated" conditions: which side of the road to drive on, the legal age of adulthood, etc. What I am claiming is "arbitrary" in this particular sense is the "strictness" of what counts as "truth" (or the standards of evidence as you've just described). That you rightly point out that the strictness may be a function of "pressures" that "evolve" over time is a perfect example of in-practice, fantastically useful metaphors. We can pretty easily show, through brain scans for example, that "pressure" and "forces" are reified, embodied metaphors that nevertheless enable mutual calibration in our species. We bottom out on strictly arbitrary foundations, which is itself an implication of evolutionary theory. I will nevertheless join you and others in the pragmatic use of "the best available foundations" in good faith. Nothing is worse than obstructionist philosophy, and I am trying to ride that fine line.
My friendly challenge to the positions you have taken here is that if "truth-policing" can be done without moralizing truth, and especially if the argument is that truth's value is in its instrumental utility, then you will incur paradoxical conditions under which selective and willful ignorance (i.e., using Santa Claus fallacies as purity tests) is prescribable based on the relative disutility of the strict truth.
Personally, I think we should moralize "effort," not truth per se. But that demands walls of text and borrowed time.
Thank you for your quality response.