Preamble (Lionel):
This is a guest post from David Pinsof. David has a PhD in psychology from UCLA and blogs at Everything is Bullshit, where he deconstructs the stories we tell ourselves (and others). His Substack is a great read that I recommend to the readers of Optimally Irrational. In this post, David looks at self-deception, a topic I have discussed myself at length. I like his take on why and how self-deception works. It is somewhat richer and more insightful than the way it has often been discussed (including by me).
Everyone knows what it means to lie. You say something false, and you know it’s false, but you say it anyway, because you’re a liar. Here’s a classic example:
And here’s a more sobering example from the subsequent president:
I could go on all day, but you get the idea. It’s pretty obvious why people lie: self-interest. They’re trying to protect their reputations or advance their agendas.
But why do people lie to themselves? Lying to yourself doesn’t protect your reputation or advance your agenda. It doesn’t do much of anything. It might even hurt you. It hurts other people when you lie to them, so why wouldn’t it hurt you to lie to yourself? We generally don’t punch ourselves in the face, yet it seems like we often lie to ourselves. What a strange thing to do! Why do we do this?
According to the evolutionary biologist Robert Trivers, we do it to more effectively lie to others. Here’s how Trivers famously put it in the foreword to The Selfish Gene:
“If… deceit is fundamental to animal communication, then there must be strong selection to spot deception and this ought, in turn, to select for a degree of self-deception, rendering some facts and motives unconscious so as not to betray—by the subtle signs of self-knowledge—the deception being practiced. Thus, the conventional view that natural selection favors nervous systems which produce ever more accurate images of the world must be a very naïve view of mental evolution.”
I love this idea. It gives me chills. It was used to great dramatic effect in the show Battlestar Galactica, about a war between humans and evil robots called “cylons.” A major plot twist (spoiler alert) was that many of the human protagonists turned out to be cylons in disguise who had been programmed to sincerely believe they were humans—and were even given fake human memories.
Awesome twist! But we’re not here to entertain—we’re here to gain insight—and I need to point out what Trivers’ idea gets wrong, and what it gets right.
The evolution of deceptive signals
Let’s start with what it gets wrong. Deception isn’t really “fundamental” to animal communication. One would be on firmer ground in making the opposite claim—that truth (or signal reliability) is fundamental to animal communication. The reason is that if signalers send deceptive signals, recipients will evolve to ignore those signals, and signalers will evolve to stop wasting their time sending them. In order for a signaling system to evolve by natural selection, it has to be beneficial for both the signaler and the recipient, enhancing the pair’s fitness relative to their less communicative rivals.
Now deception can evolve within a signaling system, but only when it’s mixed with mutually beneficial communication. For instance, if I lie to you once—e.g., saying I liked your screenplay when I thought it sucked—it might still benefit you to listen to me on other topics. But as I transition into bigger and bolder lies, moving beyond fake compliments and into fake jobs and fake relationships, it will eventually no longer benefit you to listen to me. At that point, you’ll stop listening to me, and I’ll stop talking to you. The same cost-benefit logic applies to the game theory of animal communication.
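To make that cost-benefit logic concrete, here is a minimal toy sketch in Python (the payoff numbers `gain_if_true` and `loss_if_false` are arbitrary assumptions, not figures from the signaling literature). It illustrates why a receiver can tolerate occasional lies but should stop listening once lying passes some threshold:

```python
# Toy model: when does it pay a receiver to keep listening to a signaler
# who lies with probability p_lie? All payoff values are arbitrary assumptions.

def value_of_listening(p_lie, gain_if_true=1.0, loss_if_false=3.0):
    """Expected payoff of acting on a signal that is honest with
    probability (1 - p_lie) and deceptive with probability p_lie."""
    return (1 - p_lie) * gain_if_true - p_lie * loss_if_false

for p in [0.0, 0.1, 0.3, 0.5]:
    v = value_of_listening(p)
    verdict = "keep listening" if v > 0 else "ignore the signaler"
    print(f"p(lie) = {p:.2f}: expected value = {v:+.2f} -> {verdict}")
```

With these made-up numbers the break-even point falls at a lie rate of 25%; the exact threshold depends entirely on the assumed payoffs, but the qualitative point stands: past some level of dishonesty, the receiver does better to tune the signaler out.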
But deception can also evolve amidst a background of otherwise beneficial inferences, which are a kind of one-way communication with the environment. For example, if I’m a bird, and I’m flying around looking for tasty bugs to eat, I might make the inference: “That looks like a stick—don’t eat it.” That’s a good inference for me to make, because 99% of the time, things that look like sticks are, in fact, sticks. But 1% of the time, the thing that looks like a stick is actually a bug, cleverly disguised as a stick—you know, a stick bug.
That’s okay, because my inference “That looks like a stick—don’t eat it” still benefits me most of the time. That’s how critters like stick bugs evolve: by slipping under the radar of other animals’ inferences. Which means deception is a kind of mimicry. It’s not about making stuff up; it’s about exploiting the (communicative) inferences of other animals. And it cannot be too widespread, or else the animals getting exploited will evolve to stop making those maladaptive inferences.
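The same point can be made with numbers. Below is a small toy sketch (all values are invented for illustration) of the bird’s inference “looks like a stick, don’t eat it”: the rule keeps paying off while mimics are rare, but flips once stick bugs make up too large a share of stick-looking objects, which is why the deception cannot become too widespread:

```python
# Toy model of the bird's rule "looks like a stick -> don't eat it".
# The payoff numbers are invented for illustration, not measured values.

def payoff_of_pecking(p_bug, food_gain=5.0, handling_cost=0.5):
    """Expected payoff of pecking a stick-looking object, where p_bug is
    the fraction of stick-looking objects that are really stick bugs."""
    return p_bug * food_gain - (1 - p_bug) * handling_cost

for p_bug in [0.01, 0.05, 0.10, 0.25]:
    peck = payoff_of_pecking(p_bug)
    best = "peck it" if peck > 0 else "skip it (the inference still pays)"
    print(f"{p_bug:.0%} of 'sticks' are bugs: pecking payoff = {peck:+.2f} -> {best}")
```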
But nature is complicated, and there is one special case where deception can become widespread, at least temporarily.
To see how, let’s imagine that bird populations are starving and cannot survive without stick bugs as a source of protein. But they also cannot go around chomping on sticks, because that’s not good for their survival either. You might say (forgive me) that the birds are in a sticky situation.
It is at times like this, when animals face an obstacle to their survival and reproduction—known as a “selection pressure”—that evolution kicks into gear. Adaptations emerge. In the birds’ case, they might evolve keener eyesight, and the ability to better discriminate between sticks and stick bugs. Birds with such an ability will survive and reproduce in greater numbers than their less perceptive rivals, passing on their superior eyesight to their offspring.
But of course, the stick bugs are also facing a selection pressure, namely predation by birds. So in response, they might evolve an even more cunning disguise to fool the birds. Round and round it goes, with birds evolving keener and keener eyesight, and stick bugs evolving better and better disguises, and so on. At any point in this back-and-forth, the stick bugs (i.e., the liars) might have the upper hand, explaining how deception could become widespread, at least temporarily. This is what’s known as an evolutionary arms race.
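For readers who like to see the dynamic, here is a purely illustrative simulation of such an arms race (the traits, increments, and detection function are all invented assumptions, not biological estimates): each generation both sides improve a little, and the advantage swings back and forth.

```python
import math
import random

# Purely illustrative arms-race simulation: bird eyesight vs. stick-bug
# camouflage. Trait values, increments, and the detection function are
# invented assumptions, not biological estimates.

random.seed(1)
eyesight, camouflage = 0.0, 0.0

def detection_rate(eyesight, camouflage):
    """Chance a bird spots a stick bug, given the gap between the traits."""
    return 1 / (1 + math.exp(-(eyesight - camouflage)))

for generation in range(1, 11):
    # Each side gains a small random improvement every generation,
    # so neither holds the upper hand for long.
    eyesight += random.uniform(0.0, 0.5)
    camouflage += random.uniform(0.0, 0.5)
    rate = detection_rate(eyesight, camouflage)
    leader = "birds ahead" if rate > 0.5 else "stick bugs ahead"
    print(f"gen {generation:2d}: detection rate = {rate:.2f} ({leader})")
```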
The nuclear weapon of self-deception
According to Trivers, it was precisely this kind of arms race that shaped the evolution of the human mind. The advent of language kicked off an arms race between liars and lie detectors, with increasingly cunning liars competing against increasingly perceptive lie detectors. The result was the capacity to detect the most subtle nonverbal cues of deceit—beads of sweat, vocal tremors—and the ability to conceal these cues with a powerful new weapon: self-deception. George Costanza summed up the logic in an episode of Seinfeld:
Again, this is a really cool idea, but is there any evidence for it? Shouldn’t we expect humans to be extremely good at detecting lies by now—the culmination of a million-year-long arms race? Well, are we?
A lot of research suggests that the answer is no: humans are mediocre lie detectors. Most purported cues of lying—the nervous stammer, the shifty eyes—are weak or unreliable. Polygraphs don’t seem to work. Neither do “micro-expressions”—i.e., quick changes in facial expressions. And I’m constantly fooled by the dazzling lies we call “fiction.” Chris Hemsworth always tricks me into thinking he’s Thor—in fact, he seems way more like the god of thunder than an actor named Chris Hemsworth.
It turns out that the only semi-reliable way to tell when a person is lying is to actually listen to what they’re saying, check it for consistency with what they said before (or with what you independently know is true), figure out if they have an incentive to lie to you, and listen to whether they are openly giving you detailed information or being cagey and handwavey. Detecting lies is therefore not a distinct skill: it’s just called “being good at reasoning.”
Unfortunately, lying to yourself does not protect you from people who are good at reasoning. No matter how fully you deceive yourself, it won’t transform your lie into something that is consistent with what you said before—or with observable reality. Nor will it make your lie more plausible or coherent. Nor will it endow your fib with all the sumptuous detail of a Faulkner novel. To successfully lie to someone, your story has to be believable, consistent, logical, and realistically detailed.
You might even get the details from reality itself, fixating on the real-life information that fits your story and ignoring the information that doesn’t. In the same way, you might get the logical consistency from reality itself, embracing the logical inferences that support what you want to believe and rejecting the ones that don’t. But once you start doing these things, you begin doing something altogether different from lying. You begin doing something more akin to spin, truth-bending, embellishing, propagandizing, or—my favorite—bullshitting.
The philosopher Harry Frankfurt defines bullshitting as communicating without regard for the truth. Bullshitters aren’t interested in accurately describing reality; they’re interested in some other goal, like gaining status, sympathy, or approval. The implication is that bullshit can sometimes be true. Bullshitters will tell the truth (or embellish it) when it serves their wily goals and ignore, dismiss, or minimize the truth when it doesn’t. When a bullshitter tells the truth, they do so as a side effect of pursuing their real goal: to manipulate you in some way—e.g., to persuade you, impress you, or flatter you. Given this definition of bullshit, it becomes clear that our lives are full of it.
So maybe there was an arms race in our evolutionary past, but a slightly different one than what Trivers had in mind. It wasn’t an arms race between liars and lie detectors, but between bullshitters and bullshit detectors. The culmination of this arms race wasn’t cylon-esque self-deception, but virtuosic bullshitting and paranoid bullshit detection.
This is another cool idea, but is there any evidence for it? Shouldn’t we expect humans to be very good at detecting bullshit by now—the culmination of a million-year-long arms race? Well, are we?
A lot of research, reviewed by the psychologist Hugo Mercier in his excellent book Not Born Yesterday, suggests that the answer is yes. People are generally skeptical of other people’s arguments, monitoring them carefully for inconsistencies, especially when the arguments come from people they have reason to be suspicious of. Gullibility is not our problem: hyper-skepticism and isolated demands for rigor are the problem. Conspiracy theories don’t arise from gullibility but from its opposite: suspicion of the conventional wisdom and an overeagerness to avoid being duped.
And even though this hyper-skepticism can look irrational on its face, it has a hidden rationality to it. We’re hyper-skeptical of claims made by the outgroup—the people we fear, dislike, and distrust. And we believe ingroup-flattering absurdities (e.g., “We were chosen by God”) because it is instrumentally rational for us to do so. The benefits of status and tribal solidarity often outweigh the costs of false beliefs, particularly if those beliefs are vague, unactionable, or unfalsifiable.
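As a rough way to see that hidden rationality, here is a toy cost-benefit sketch (the quantities are made up for illustration; nothing here comes from Mercier or the post). The practical cost of a false belief only bites to the degree the belief actually guides action, so a vague, unactionable slogan can be a net win:

```python
# Toy cost-benefit model of holding a group-flattering false belief.
# All quantities are made-up assumptions, purely for illustration.

def value_of_belief(social_benefit, error_cost, actionability):
    """Net value of a false-but-flattering belief: the social payoff minus
    the practical cost of being wrong, discounted by how much the belief
    actually guides day-to-day behavior."""
    return social_benefit - error_cost * actionability

# A vague, unfalsifiable slogan ("we were chosen") vs. an actionable error
# ("that berry is safe to eat"): same social payoff, very different stakes.
print(value_of_belief(social_benefit=2.0, error_cost=10.0, actionability=0.0))  # +2.0
print(value_of_belief(social_benefit=2.0, error_cost=10.0, actionability=0.8))  # -6.0
```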
Besides, it’s not even obvious that people sincerely “believe” their group-serving bullshit at all. They might just pay it lip service—as a kind of “reflective belief” or “credence”—while ignoring it in their day-to-day interactions with the world. This allows them to have their cake and eat it: they can continue navigating the world effectively while showing allegiance to their tribe at the same time.
Our hyper-social species
And now we have arrived at the part that Trivers got 100% right: the importance of other people—i.e., the members of our tribe—in our evolutionary past. We bullshit ourselves to bullshit others.
This is worth emphasizing. We don’t just bullshit ourselves for its own sake. It’s not a strategy for feeling warm and happy inside. This is the mistake that a lot of lay people and social scientists (who should know better) make. Many scholars think we have a fundamental need to feel good about ourselves, protect our fragile egos, pursue happiness, and avoid bad vibes. But we don’t have such desires—they make no evolutionary sense—because they’re inside our heads, when fitness is out there in the world. The people we’re trying to manipulate and persuade—they’re out there in the world, and their judgments matter for our fitness. Their judgments might even be the single greatest thing that matters for our fitness. If you want to understand the nature of our hyper-social species, you cannot take your eye off the ball of reputation and status.
And when you keep your eye on this ball, everything starts to make sense. It becomes clear why we bullshit ourselves. If you spend a lot of time convincing yourself of some claim (e.g., “I’m a good person,” “My tribe is superior”), you’re going to have a much easier time convincing other people of that claim later on. The excuses and rationalizations will more quickly leap to mind. The holes in your arguments will be conveniently patched up. The good points you made to yourself will rise to the surface, the bad points forgotten. You’ll speak more smoothly and less hesitantly, dazzling your listeners with all the colorful details of your one-sided story. Just as any good lawyer prepares their opening statement in advance of the trial, you prepare your many opening statements in advance of your many trials, including trials that never end up taking place, because it’s better to be over-prepared than caught off guard.
You might even take this a step further. You might perform actions in the world for the purpose of convincing yourself of something—say, donating to charity to convince yourself that you’re a good person, or composting to convince yourself that you care about the environment. We all do this sort of thing—behaving in ways that fit the stories we want to tell about ourselves. Again, the lawyer analogy is helpful: just as a lawyer carefully gathers evidence to build their case, we carefully gather evidence to build our own cases, and we often do that by creating such evidence.
Have you ever felt guilty about something and tried to convince yourself that you’re not a bad person? I’m guessing you probably have. And it wasn’t lying. You weren’t making stuff up. But you weren’t exactly seeking the truth either. You were occupying the middle-ground between lying and truth-seeking, between science and superstition, between the pit of man’s fears and the summit of his knowledge: the bullshit zone.
In the end, the idea that we bullshit ourselves to bullshit others is at once more relatable, less scintillating, and more depressing than Trivers’ original idea. It doesn’t call to mind Battlestar Galactica or Cold War spy movies; it merely calls to mind the actual, unflattering stuff that humans do in the real world, and the flattering stories we tell ourselves about it.
"I could go on all day, but you get the idea. It’s pretty obvious why people lie: self-interest. They’re trying to protect their reputations or advance their agendas."
That’s an oversimplification. People also often lie to protect themselves from physical harm or to avoid legal consequences. There’s also a large literature on prosocial lying (i.e., lying to protect or otherwise benefit others). The same goes for evasive bullshitting: most everyday bullshitting is evasive (rather than persuasive) and is largely done to avoid hurting someone else’s feelings or reputation. Rather than showing a “disregard for the truth,” it’s actually 100% focused on avoiding revealing a truth (same with prosocial lying).
Your description of “the bullshit zone” just sounds like another way of describing the rationalization process brought on by cognitive dissonance (its recent replication failure notwithstanding). I suppose “bullshit zone” might be catchier to some people, but it feels less explanatory than cognitive dissonance theory.
In terms of lying, the majority of research on lie detection shows that we're pretty bad at spotting it (no better than chance). The research on bullshit shows that some people are pretty bad at detecting it and some are better, but this largely varies as a function of intelligence (and thinking quality): both of the person hearing the bullshit and the person doing the bullshitting. Generally, we're only really good at spotting bad bullshitters, just as it's easier to spot bad liars.
The bad bullshitters (and liars) also tend to be the ones who bullshit more frequently (Littrell et al., 2021), which might contribute to people’s inaccurate beliefs that they’re good at spotting deception in general (I’d argue that’s partly driven by an availability heuristic: they only recall the times they spotted the (bad) liars/BSers because they’re unaware of all the times they were actually duped by good liars/BSers).
In a way, this might be argued to be similar to the example of dishonest signalling you gave (the stick bug). Stick bugs are actually eaten by some species of birds quite frequently, rather than avoided as you stated above. That is not necessarily because the birds are always looking for food, but because they’re looking for sticks for nest building (or perhaps they’ve developed better detection skills). Purportedly, being eaten even helps the stick bugs transport their eggs to new areas: the eggs are coated in calcium oxalate, which birds can’t digest, so the birds poop out the eggs, which later hatch in new areas (Suetsugu et al., 2018). So the dishonest signalling of stick bugs is effective at bullshitting some species of birds and other predators, but it is an example of bad bullshitting against the (more perceptive?) species of birds who are their natural predators.
To extend your idea about "evolution making us better at things," as people get better at detecting BS and lies, people also get better at BSing and lying. For example, people with higher intelligence are not only better at spotting BS, but are able to craft better, more convincing lies and bullshit (Turpin et al., 2021). In other words, smarter people are better able to detect bullshit but smarter people are also able to bullshit better (and get away with it more often).
I agree that people sometimes bullshit themselves as a consequence of trying to bullshit others. I have some preliminary data showing that a sort of self-directed illusory truth effect might occur, where someone repeats some bullshit enough times that they start to believe it. I think people also sometimes bullshit themselves as a form of self-motivation (e.g., overconfidence).
This was a great update to Trivers’ ideas! And more believable than how he describes things. I remember reading Trivers’ self-deception book back in 2011, and if I recall correctly it had two major categories of problems: 1. I think he was prone to some “over-adaptationism” in how he framed how the mind was formed by evolution. If there’s a plausible adaptive need for something, just add a specialized brain module! 2. I think he cited a bunch of findings on how people supposedly self-deceive and detect lies that in hindsight seem like typical pre-replication-crisis psychology. I remember something about people drawing letters on their forehead in the wrong direction in a way that proves they’re self-centred, something like that. I’ve never looked up the study, but it’s exactly the sort of thing that I would bet doesn’t replicate and also doesn’t necessarily imply what he argued it did.
But even so, the book has affected how I think about psychology quite a lot (at the time I thought it was amazing). It’s nice to see someone sort of salvage the useful essence of it. I still wonder how much of this requires us to invoke specific evolutionary pressures, though, or if self-bullshitting can be adequately explained by simple learning. Like, is it “social status is rewarding, the end,” or is it “social status is rewarding and threats to it evoke certain (more-or-less) specific responses”?