13 Comments
Feb 11 · Liked by David Pinsof

"I could go on all day, but you get the idea. It’s pretty obvious why people lie: self-interest. They’re trying to protect their reputations or advance their agendas."

That's an oversimplification. People also often lie to protect themselves from physical harm or to avoid legal consequences. There's also a large literature on prosocial lying (i.e., lying to protect or otherwise benefit others). Same with evasive bullshitting. Most everyday bullshitting is evasive (rather than persuasive) and is largely done to avoid hurting someone else's feelings or reputation. Rather than having a "disregard for the truth," it's actually 100% focused on avoiding revealing a truth (same with prosocial lying).

Your description of "the bullshit zone" just sounds like another way of describing the rationalization process brought on by cognitive dissonance (its recent replication failure notwithstanding). I suppose "bullshit zone" might be catchier to some people, but it feels less explanatory than cognitive dissonance theory.

In terms of lying, the majority of research on lie detection shows that we're pretty bad at spotting it (no better than chance). The research on bullshit shows that some people are pretty bad at detecting it and some are better, but this largely varies as a function of intelligence (and thinking quality): both of the person hearing the bullshit and the person doing the bullshitting. Generally, we're only really good at spotting bad bullshitters, just as it's easier to spot bad liars.

The bad bullshitters (and liars) also tend to be the ones who bullshit more frequently (Littrell et al., 2021), which might contribute to people's inaccurate beliefs that they're good at spotting deception in general (I'd argue that's partly driven by an availability heuristic: they only recall the times they spotted the (bad) liars/BSers because they're unaware of all the times they were actually duped by good liars/BSers).

In a way, this might be similar to the example of dishonest signalling you gave (the stick bug). Stick bugs are actually eaten quite frequently by some species of birds, rather than avoided as you stated above. Not necessarily because the birds are always looking for food, though, but because they're looking for sticks for nest building (or perhaps they've developed better detection skills). Purportedly, being eaten helps transport the stick bugs' eggs to new areas: the eggs are coated in calcium oxalate, which birds can't digest, so the birds poop out the eggs, which later hatch in new areas (Suetsugu et al., 2018). So, the dishonest signalling of stick bugs is effective at bullshitting some species of birds and other predators, but it's an example of bad bullshitting against the (more perceptive?) species of birds that are their natural predators.

To extend your idea about "evolution making us better at things," as people get better at detecting BS and lies, people also get better at BSing and lying. For example, people with higher intelligence are not only better at spotting BS, but are able to craft better, more convincing lies and bullshit (Turpin et al., 2021). In other words, smarter people are better able to detect bullshit but smarter people are also able to bullshit better (and get away with it more often).

I agree that people sometimes bullshit themselves as a consequence of trying to bullshit others. I have some preliminary data showing that a sort of self-directed illusory truth effect might occur, where someone repeats some bullshit enough times that they start to believe it. I think people also sometimes bullshit themselves as a form of self-motivation (e.g., overconfidence).

author

Thanks, Shane. I agree we often lie or bullshit to benefit others, though if we’re doing this to benefit our cronies, our tribe, or our family, it gets blurry on whether this counts as “self-interested.” Or if we’re doing it to avoid hurting someone’s feelings, presumably there’s a self-interested component in that we do not want to face a backlash from the person. I’m generally skeptical of the idea that we bullshit ourselves to boost our own motivation, or to achieve any other kind of internal state (I’ve written a ton about this—I link to several pieces in the post). I’d be more partial to the idea that we bullshit ourselves to get other people jazzed up and persuade them to support us in our endeavors (and maybe our own motivation tracks our estimate of how likely others will support us). Thanks for your comment and looking forward to reading about your research on self-bullshitting—sounds fascinating.

Feb 11 · edited Feb 11 · Liked by David Pinsof

Yeah, I agree with several of your points. The toughest issue in lying/bullshitting research (and philosophy) is that there's really no way to know if someone is lying or bullshitting (both of which are intentional acts) without the ability to read people's minds. What might be a lie from one person's mouth would represent a sincere belief from another's. So, it can often be tricky to parse which rhetoric is self-interested or group-interested (e.g., partisan cheerleading) and which reflects the mindset of a deluded true believer. From an outside perspective, we might mislabel one for the other more often than we realize.

As to self-bullshitting as motivation, it depends on how you construe both concepts. For example, in 1995, Peter McNeeley was clearly outmatched when he stepped into the ring with an at-his-peak Mike Tyson. I'm sure part of his motivation was financial (I'm guessing he got a good payday from it), but all fighters have to psych themselves up before a fight (many athletes do this). As such, it's reasonable to assume McNeeley had to engage in some arguably bullshitty (if not delusional) self-talk to convince himself he had a chance (NARRATOR: He did not have a chance). To an outside observer, I think that could reasonably be construed as self-bullshitting-as-motivation.

Complete digression here....but to your point about the blurriness of determining whether certain acts are self-interested...I kind of feel the same way about altruism and whether "true altruism" really exists. If a person engages in an altruistic act, by definition they should get nothing in return. But if being altruistic makes a person feel good (i.e., they get that hit of dopamine reward), then they're not really being altruistic for non-self-interested reasons. They want that neuropsychological reward. Thus, that's not truly "altruism," at least not in the strictest sense (which I unfairly feel it should be judged by, lol).

P.S. Hope to have the self-bullshitting paper submitted by this summer. I'll post on X if it gets published.

Feb 22 · Liked by David Pinsof

This was a great update to Trivers' ideas! And more believable than how he describes things. I remember reading Trivers' self-deception book back in 2011, and if I recall correctly it had two major categories of problems: 1. I think he was prone to some "over-adaptationism" in how he framed the mind's formation by evolution: if there's a plausible adaptive need for something, just add a specialized brain module! 2. I think he cited a bunch of findings on how people supposedly self-deceive and detect lies that in hindsight look like typical pre-replication-crisis psychology. I remember something about people drawing letters on their foreheads in the wrong direction in a way that supposedly proves they're self-centred, something like that. I've never looked up the study, but it's exactly the sort of thing I would bet doesn't replicate, and that doesn't necessarily imply what he argued it did.

But even so the book has affected how I think about psychology quite a lot (at the time I thought it was amazing). It's nice to see someone sort of salvage the useful essence of it. I still wonder how much of this requires us to invoke specific evolutionary pressures though, or if self-bullshitting can be adequately explained by simple learning. Like is it: "social status is rewarding, the end" or is it "social status is rewarding and threats to it evoke certain (more-or-less) specific responses"?

author

Thanks, Vilgot. I too was a fan of the Trivers theory, but it kept kind of collapsing in my head, and this was my best attempt to salvage what was right about it. Yea, there doesn’t necessarily need to be an adaptation for self-bullshitting; it’s possible this could arise through reinforcement learning as we figure out which bullshitting tactics are successful and which ones aren’t. Maybe we get a nudge toward self-bullshitting from evolution; maybe we don’t. It would be interesting to see if there’s cultural variation in self-bullshitting or whether it’s universal. I’d love to run a cross-cultural, international poll asking people questions like “have you ever felt guilty and tried to convince yourself that you’re not a bad person?” I honestly don’t have a strong prediction about what you’d find. Could be everyone says “yes, of course”; or could be you’d find some interesting cultural variation.

Feb 7 · Liked by David Pinsof

Shortly after I read this post, Google's generative AI wrote out Python code calling a function mean() from the linalg sub-library of the NumPy library to calculate the mean of a 2D array. I give the details only to stress that such a function either exists or doesn't, and there is no mean() in NumPy's linalg sub-library (NumPy's real mean() lives in the top-level namespace, and the related pandas library defines a mean() method of its own). The AI made a "mashup" of two objects. Amazon's Alexa often responds honestly to a question with "I don't know that one." An LLM can be a good bullshitter because it has been trained on texts written by human beings - the master bullshitters. An AI may be able to pass a Turing test for the group dynamics you mentioned. It seems to care that I think its answers are correct.
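The mashup is easy to verify directly; a minimal sketch (assuming only that NumPy is installed):

```python
import numpy as np

# The AI's hallucinated call, numpy.linalg.mean(), does not exist:
print(hasattr(np.linalg, "mean"))   # False

# NumPy's real mean() lives in the top-level namespace:
arr = np.array([[1.0, 2.0], [3.0, 4.0]])
print(np.mean(arr))                 # 2.5

# pandas, by contrast, does define a mean() method on its DataFrame
# objects (e.g. pd.DataFrame(arr).mean()), which is the likely other
# half of the "mashup" described above.
```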


‘Bullshitting’ is a much better term than the cool marketing language of ‘hallucinating’ for what LLMs do.

Feb 7 · Liked by David Pinsof

“The benefits of status and tribal solidarity often outweigh the costs of false beliefs, particularly if those beliefs are vague, unactionable, or unfalsifiable.” True.


I don’t think you use the words dogma or religion in this piece, which I find curious. Not sure if that was on purpose.

I would also add that education and religion have much in common. Here’s a project I’m working on, relating the two. https://open.substack.com/pub/scottgibb/p/religion-education-and-identity?r=nb3bl&utm_campaign=post&utm_medium=web

author

Sort of on purpose. I don’t see a sharp discontinuity between religious belief and other kinds of dogma or bullshit (like ideologies or subcultures). I think most of our beliefs have a social, bullshitty element to them and religion is not unique in that regard. Will check out the piece—thanks.


So your definition of bullshit is? Pretty similar to dogma?

author

Yea pretty similar, though I think of bullshit as a bit broader, encompassing wily hypocrisy and bait-and-switch maneuvers in addition to stubbornly clinging to a single set of beliefs (dogmatism).


Got it. I’ll keep reading your Substack.
