In my previous post, I described the tension between cooperation and conflict embedded in our daily communications. I pointed out that reputation plays a key role in fostering incentives for honesty and diligence in our statements. Here I look at how reputational incentives vary across social settings.
Nowadays, political discourse is often accused of having been taken over by PR. Helped by communication experts, politicians have adopted ways of speaking that sound confident and convincing but often mask vacuous or misguided positions.1
This concern is not entirely new. More than 2,000 years ago, it was already at the heart of one of Plato’s dialogues, the Gorgias (c. 380 BC). In it, Socrates confronts three sophists, expert orators who earn an income by teaching rich Athenians how to argue convincingly in the Agora. Socrates chides them for being masters of spin, helping people win arguments with rhetorical tricks instead of arguing to find the truth.
One question not discussed in the dialogue is: why does the sophists’ line of business work? Why are poor arguments, masquerading as good ones, rewarded in the Agora, while they are methodically dismantled and criticised in the circles of philosophers and intellectuals? We may be tempted to say that today, like yesterday, people are quite gullible and easily swayed by poor arguments. As psychologist Hugo Mercier argues in his book Not Born Yesterday (2020), this is likely the wrong answer. People aren't just passive receptacles for any half-baked idea that comes along; we're actually quite discerning. The real question is why people seem to get away with bad arguments in some contexts and not others.
In today’s post, I’ll discuss how variations in the incentives for intellectual rigour likely explain the differences in the quality of arguments across social settings.
Epistemic rigour varies across social settings and their incentives
The notion of epistemic rigour (literally rigour in the domain of knowledge) refers to the care and effort individuals put into ensuring that their communications are well-founded, precise, and honest. It encompasses both the truthfulness of their statements (avoiding deception or misleading claims) and the diligence with which they gather, evaluate, and present information (being thorough, accurate, and clear in their reasoning and evidence).
The existence of reputational incentives in communication games helps us understand why the degree of epistemic rigour of people's statements will, on average, vary across social settings as a function of the structure of their incentives.
These reputational incentives for epistemic rigour are stronger when the costs of low epistemic rigour exceed its benefits. For that to be the case, three conditions need to hold: people must be accountable for past mistakes, with a faithful historical record of their statements available; real social costs must be attached to those past mistakes; and the potential benefits of persuading people with poor arguments must not be too high.
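To make this trade-off concrete, here is a minimal toy calculation of my own (not a model taken from the literature): being sloppy is tempting only when the short-term benefit of persuasion exceeds the probability of being caught multiplied by the reputational cost. All the names and numbers below are purely illustrative assumptions.

```python
# Toy model of the incentive for epistemic rigour (illustrative only).

def expected_gain_from_sloppiness(persuasion_benefit: float,
                                  detection_probability: float,
                                  reputational_cost: float) -> float:
    """Expected net payoff of making a persuasive but poorly founded claim."""
    return persuasion_benefit - detection_probability * reputational_cost

# A setting with good records and real social sanctions (science-like):
print(expected_gain_from_sloppiness(1.0, 0.8, 5.0))   # -3.0: sloppiness does not pay

# A setting with short-lived interactions and no record (snake-oil-like):
print(expected_gain_from_sloppiness(1.0, 0.05, 5.0))  # 0.75: sloppiness pays
```

When the record of past statements is faithful (high chance of being caught) and social sanctions are real (high reputational cost), the expected gain from sloppiness turns negative and rigour becomes the better strategy.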
Business
In some social settings, records of the factual accuracy of past statements are absent or hard to find. Interactions can be short-lived, and the reputational costs of making inaccurate statements may not be high. This is, for instance, the case with people moving from one social circle to another.
Take the classic example of the travelling snake oil salesman, peddling his wares from town to town. He could make all sorts of outrageous claims about his miraculous elixirs, knowing full well that he'd be long gone before anyone could call him out. It's no wonder he became a symbol of epistemic untrustworthiness.
In markets where customers have more repeated experiences with businesses, reputational concerns limit the temptation to engage in all-out deception. In that case, businesses will often prefer to state convenient half-truths that unduly increase the appeal of their product while providing plausible deniability. One common approach is paltering: saying something true, but in a way that tends to mislead.
Imagine an ad for a sugary cereal that proudly declares it's "part of a balanced breakfast!" while conveniently failing to mention that the "balance" comes from the other, much healthier foods on the table. It's not a lie, per se, but it's hardly the whole truth.2
This form of deception can persist because pinpointing it is costly: it requires time and effort that most people facing corporate communication do not want to expend. Because they are hard to call out, such techniques are effective enough to have an impact but not blatant enough to be denounced in most cases.
Politics
Another social setting where accountability for epistemic errors is lower than in science is politics. One possible reason is that the public is often said to have a short memory, which keeps politicians’ accountability below what could be achieved. The voters’ short memory may partly come from the way the media work, with attention focused on what is happening right now rather than on what was said before.
Another reason why political arguments can have lower epistemic rigour is that the reputational costs of using bad arguments take time to materialise, while the short-term benefits of convincing others (even with bad arguments) can be high. A politician who wants to win an election may be tempted to throw caution to the wind: if a powerful argument proves somewhat misleading, they may be better off dealing with the issue after winning the election rather than worrying about it now. Politicians are famously prone to promising a wide range of positive outcomes if put into office, and they may have little incentive to make only promises they are sure they can deliver. French President Jacques Chirac famously quipped, “Promises only bind those who believe them.”
Finally, politics is a domain where coalitional thinking is often prevalent. In any situation where politicians are seen by a large part of their supporters as championing “us” against “them”, arguments become instruments for winning the public over rather than attempts to establish the truth.3 Coalitional thinking fosters a selective demand for rigour: we rigorously scrutinise arguments that clash with our group’s positions, seeking counterarguments, while those in harmony with our views are often accepted with less critical examination.
Coalitional thinking is one of the reasons why discussions on social media are often so acrimonious and arguments so often made in bad faith. The incentives on social media can foster collecting approval from select groups, and sometimes chasing their views with more one-sided arguments than one would initially have made, a phenomenon called audience capture.
Science
Why science works
Science is the best example of a social setting where ideas are carefully examined and selected for their quality. It is not perfect, but it provides strong social incentives for epistemic rigour. Scientists’ reputations rely on their peers’ assessments of their epistemic contributions. Their past publications form a public record whose findings and interpretations are continuously reassessed by peers. For that reason, academic scholars are mindful of managing their reputation as communicators.
The reason you can in general “trust science” is not that scientists have found the truth. Indeed, a scientific consensus one day is often found to be wrong later on. However, the ideas that become consensual in a scientific domain were typically generated by researchers with great concern for their long-term reputations, and they have withstood the criticisms of other researchers who had equally strong reputational incentives to put forward sound counterarguments.
The philosopher of science Karl Popper famously described science as a place where the transparency and scrutiny of intellectual debate eliminate weak ideas, as in a process of natural selection.
We choose the theory which best holds its own in competition with other theories; the one which, by natural selection, proves itself the fittest to survive. - Popper (1934)
Why scientists can be overly cautious
The same incentives that make scientists careful to be rigorous can also push them to be overly cautious, so as not to be faulted in their judgement. Notably, hedging is much more common in academic writing than in other genres.
Hedging is the expression of tentativeness and possibility and it is central to academic writing where the need to present unproven propositions with caution and precision is essential… The fact that the public reputation and professional position of every scientist depends on the work and acceptance of peers means a failure to observe appropriate norms of conduct will not merely prevent individuals securing goals, but will incur sanctions with concrete consequences. - Hyland (1996)
These reputational concerns of academic scholars may conflict with the rules of other social settings. It is, for instance, often said that scholars can be very poor at writing for a general audience. In a discussion of this topic, Steven Pinker (2014) attributed the main issue not to expertise or a desire to sound profound but to scholars’ anxiety about losing their reputation as rigorous communicators among their peers.
Other domains where academics’ tendency to hedge is often criticised are those where practical decisions need to be made quickly, like politics or business. Speaking of his frustration with economists’ inability to give him a clear answer, US President Truman famously asked for a one-handed economist, so as not to have to face their “on the one hand, on the other hand” answers.
When science fails
Let’s assume that science works in large part due to the reputational incentives it gives to scientists. In that case, we can predict that the quality of scientists’ contributions will deteriorate when these reputational incentives for epistemic rigour are weaker.
One such situation is when high rewards exist for academic stardom: lucrative book contracts and speaking tours. It is hard to produce bestsellers out of books filled with “on the one hand, on the other hand” statements. Bold conjectures and dazzling perspectives are more likely to generate the required wow factor in the audience. The recent downfall of very famous behavioural scientists who became public stars can be seen as reflecting how the high rewards of public stardom distort academics’ incentives for epistemic rigour.
Another way science may fail is when scientific debates overlap with political issues. Coalitional thinking is unlikely to interfere with debates in particle physics. But for many topics with social and political relevance, the answers to factual questions can become endowed with political meaning: is climate change real, are vaccines safe, are gender differences biological? In such cases, coalitional logic can interfere with the selection of the most epistemically rigorous ideas. A recent article in Skeptic Magazine made this point:
All else equal, the more relevant science is for policy, the less reliable it will likely be. This is because scientists, like everyone else, are individuals with their own values, biases, and incentives. They probably already had strong views about policy before they analyzed any data, which means they’re even more likely than normal to report results selectively, publish biased studies, herd on a politically desirable conclusion, and so on. Unfortunately, this means that we should be more skeptical of scientific findings when that question is particularly politicized or policy-relevant. - Fowler (2014)
An example of the effect of changing reputational incentives
A striking illustration of how the reputational incentives of a social setting can shape the quality of epistemic exchanges comes from a study of the comment section of the Huffington Post website as the rules for registering and posting comments changed (Moore et al., 2021).4 Initially, commenters were fully anonymous and could choose any username. If a username was banned for bad behaviour, they could create a new one right away. Under that system, the comment section was described as a troll’s paradise.
Later, the comment section required commenters to identify themselves when registering, and being banned made it much harder to create a new username. This simple change had a significant impact on the exchanges on the forum. To start with, the usage of swear words and offensive terms decreased, as would be expected if commenters were trying to reduce the risk of being banned. In addition, the use of words that manage the epistemic validity of statements increased: words giving explanations, like “because”; words indicating the strength of one’s confidence, like “believe”; and qualifying words, like “perhaps”, all became more frequent.
Under the new regime, usernames became durable. As a consequence, even though users’ real identities were not displayed, they may have had stronger incentives to invest in and preserve the reputation of their online persona on the forum. This led to greater use of expressions that manage the reputational costs and benefits of epistemic statements, statements that can later be contradicted or supported by others. These results illustrate that the quality of social exchanges can be improved by shaping the incentives of a social setting, an insight relevant to how discussions in public spaces might be improved.5
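To give a concrete, much-simplified sense of the kind of shift involved, one can count how often comments contain such epistemic markers. The word lists and example comments below are purely illustrative and are not the actual categories or data used by Moore et al. (2021).

```python
# Illustrative sketch: rate of epistemic markers per 100 words in a set of comments.
import re

# Hypothetical word lists, not those used in the actual study.
EPISTEMIC_MARKERS = {
    "explanation":   ["because", "since", "therefore"],
    "confidence":    ["believe", "think", "suspect"],
    "qualification": ["perhaps", "maybe", "possibly"],
}

def marker_rate(comments: list[str]) -> float:
    """Return the number of epistemic markers per 100 words across the given comments."""
    words = [w for c in comments for w in re.findall(r"[a-z']+", c.lower())]
    markers = {m for group in EPISTEMIC_MARKERS.values() for m in group}
    hits = sum(1 for w in words if w in markers)
    return 100 * hits / max(len(words), 1)

# Made-up examples of the two regimes:
anonymous_era = ["this is garbage", "you have no idea what you are talking about"]
pseudonym_era = ["perhaps this holds because incentives changed, I believe"]
print(marker_rate(anonymous_era), marker_rate(pseudonym_era))  # 0.0 vs 37.5
```

The study’s actual outcome measure, as its title indicates, is the cognitive complexity of comments; the sketch above only gestures at the kind of surface markers that feed into such measures.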
At the end of the Gorgias, Socrates fails to change the views of the sophists. While he scolds them for chasing money and power instead of truth, they laugh at him for being an idealist.6 In a way, one of the lessons of that dialogue is that you cannot convince people to engage in good-faith argument if they have strong incentives not to.
So next time you find yourself exasperated by the sophists in the modern-day Agora, remember that the fault lies not just with individual actors, but with the flawed incentive structures in which they operate. For exchanges and debates to be characterised by epistemic rigour, the incentives of a social setting need to reward people for making good arguments and to raise the reputational costs of making misleading statements.
References
Fowler, A., 2014. Who should you trust? Why appeals to scientific consensus are often uncompelling. Skeptic Magazine.
Hyland, K., 1996. Writing without conviction? Hedging in science research articles. Applied Linguistics, 17(4), pp.433-454.
Mercier, H., 2020. Not born yesterday: The science of who we trust and what we believe. Princeton University Press.
Moore, A., Fredheim, R., Wyss, D. and Beste, S., 2021. Deliberation and identity rules: The effect of anonymity, pseudonyms and real-name requirements on the cognitive complexity of online news comments. Political Studies, 69(1), pp.45-65.
Pinker, S., 2014. Why academics stink at writing. The Chronicle of Higher Education, 61(5), pp.2-9.
Popper, K., 1934. The logic of scientific discovery.
An illustration of this is the evolution of British political sitcoms. In Yes Minister (1980-1988), politicians are manipulated by senior civil servants (bureaucrats). In The Thick of It (2005-2012), politicians are under the control of the government’s Director of Communications (a spin doctor).
It is for instance reported that many people support Donald Trump in spite of their negative opinion about his personality.
The last sophist arguing with Socrates, Callicles, tells him, in substance, that it is good to be a philosopher at 20, but that if you are still one at 40 you are misguided: “Philosophy, Socrates, is a graceful thing if someone engages in it in moderation at the right time of life; but if he spends too much time on it, it's the undoing of a human being. […] when I see an older man still engaging in philosophy and not giving it up, that man, Socrates, is in need of a flogging”.