I was redirected here from a more recent post. Interesting post as always. I have a slight pushback and a reframing based on my current understanding. Curious to know what you think about these today:
“For overconfidence to increase our chances of success relative to an appropriate level of confidence (in non-social context), you would need to explain why the highest chances of success cannot be reached with the appropriate level of confidence.”
I would imagine that doubting oneself in most activities would not help in completing them. If I were forced to perform a feat, I would always want to feel confident. So, I would rather say confidence is the default and doubt acts primarily as a guide for whether one should engage in a behavior at all. Overconfidence seems to offer a benefit in risk-taking situations when the payoff is “worth” the risk. Calibrating confidence to the level of correct risk-taking seems plausible. There’s also the problem of determining what counts as an appropriate level of confidence. If you’ve climbed hundreds of mountains in all climates, is it overconfident to think you can do it again, or is it just the neutral best-guess estimate? There are risks in overconfidence, but calibrating toward doubt in every new situation might be even more problematic.
In a social setting: “If we were consistently objective in our self-assessments, we would be at a disadvantage compared to others who confidently put inflated claims forward.”
I approach this mainly from the Enigma of Reason perspective. Also, I don’t think overconfidence helps in lying, as I recall studies and reviews suggesting that humans cannot reliably distinguish lies from truth (I might be wrong about this). In this context too, I think confidence is always preferable for performance, but the strategic question is how confident I can appear, because the more confident I am, the more responsible I will be held for my claims. So, we probably appear confident in what we can justify. About ourselves, we would like to make the most inflated claims we can that remain believable. The more we actively search for and interpret evidence for these claims, the more confident we feel? This would be overconfidence to a neutral observer. However, I would assume this confidence to be fairly well calibrated to our social justification power: not to the truth, but to the amount of evidence we were able to find supporting our claims. Consequently, overconfident people tend to be more convincing, since they have more supporting arguments.
Some findings might also be explained more parsimoniously through the fundamental bias of Oeberst (2023). For example, in task reporting, being more aware of one’s own chores than of a partner’s could lead to misestimation without invoking motivated overconfidence. It may not be an evolved design to inflate one’s perception of one’s work; it could simply be a side effect of being self-focused by necessity. I think this leads back to the same idea of calibrating to the choice that is optimal for us rather than to an optimal representation of the situation. It echoes your linked post, where the subjective reference point selects for efficient coding rather than objectivity.
Thanks, these are very good questions. My view is the following.
On confidence and performance, I agree that confidence can help action and that it may increase one’s willingness to take risks. Since some risks pay off, overconfidence can sometimes generate large successes. But this does not come for free. For every winning lottery ticket obtained because I was overconfident, there are also losing tickets that I would not have taken with properly calibrated beliefs. In situations involving non-strategic risk, where outcomes do not depend on other people’s beliefs and reactions, calibrated beliefs should minimise the costs of error. A decision-maker who assesses his chances correctly is in the best position to choose the option with the highest expected utility. In that sense, miscalibrated beliefs move choices away from what is actually optimal.
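A toy way to see this (entirely my own numbers and assumptions, just a sketch rather than an argument from data): compare an agent with calibrated beliefs to one with inflated beliefs over many independent gambles.

```python
import random

# Sketch: each gamble costs 1 and pays `gain` on success. The true success
# probability p varies across gambles; the agent only acts on its belief p_hat.
# A calibrated agent has p_hat = p; an overconfident one believes p_hat = p + bias.
# Either agent takes a gamble only when the *believed* expected value is positive,
# i.e. when p_hat * gain - 1 > 0.

def total_payoff(bias: float, gain: float = 3.0, n: int = 100_000, seed: int = 1) -> float:
    rng = random.Random(seed)
    payoff = 0.0
    for _ in range(n):
        p = rng.random()                   # true chance of success for this gamble
        p_hat = min(p + bias, 1.0)         # the agent's (possibly inflated) belief
        if p_hat * gain - 1.0 > 0:         # take the gamble if it *looks* worthwhile
            payoff += (gain if rng.random() < p else 0.0) - 1.0
    return payoff

print("calibrated   :", round(total_payoff(bias=0.0)))
print("overconfident:", round(total_payoff(bias=0.2)))
```

The overconfident agent wins every gamble the calibrated agent would also have taken, but it additionally accepts gambles whose true expected value is negative (p just below 1/gain), so its total payoff comes out lower. Those extra gambles are the losing lottery tickets mentioned above.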
This is also where my evolutionary perspective matters methodologically. Evolution is a process pushing towards functional efficiency. Unless we have good reasons to think that a limitation is hard to avoid, my default is to assume that the system is doing something sensible rather than malfunctioning.
That is why I find Trivers’ point so interesting. In strategic interactions, the logic I described above is no longer straightforward, because my beliefs can affect other people’s beliefs and behaviour. In that case, it is no longer obvious that calibrated beliefs lead to the best outcome. Trivers’ insights are backed by later results in game theory.
On your second point, I think the issue is indeed connected to plausible deniability and, more broadly, to strategic interaction in communication. But I would be cautious with the claim that humans are simply bad at detecting lies. A lot of the early evidence comes from neutral, low-stakes settings where people tell lies in artificial conditions and observers try to spot them from limited cues. Real social interactions are richer than that. People can question each other, probe for inconsistencies, and pick up signs of cognitive load and nervousness. In those settings, self-deception may still help by making deception more credible (I have a paper on this question).
On the third point, I agree that processes such as belief-consistent information processing may explain part of what we observe. But my methodological instinct is to ask why our minds work that way. Without a plausible account of why such a limitation would persist, I am reluctant simply to assume the limitation. I would rather start asking what the good reasons behind the puzzling pattern of behaviour we observe might be. Often, we find that when the problem we face is properly understood, a behaviour that initially seemed puzzling is actually a very good way to navigate the situation.
Thanks for the reply! Trying to update after reading it and your paper:
Whatever ‘calibrated beliefs’ might mean precisely with respect to overconfidence, the core idea seems to be that the system is calibrated with respect to fixed parameters (or ‘facts’).
In social settings, however, the parameters change partly as a function of one’s own calibration, because your beliefs influence others. In this context it might be strategic to inflate beliefs for greater persuasive effect, even if it makes them less accurate.
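A toy way to put that (the listener model and all the numbers are my own assumptions, only there to make the trade-off concrete): suppose the listener acts with probability equal to my displayed confidence c, their acting gives me a benefit of 1, and being caught wrong costs me reputation that grows with how confident I appeared.

```python
# Sketch with made-up parameters: the true probability of my claim being right is p,
# and the reputation penalty for a confident claim that proves wrong is k * c**2.
p = 0.6   # true probability the claim is correct
k = 1.5   # how harshly confident-but-wrong claims are punished

best_c, best_payoff = 0.0, float("-inf")
for i in range(1001):
    c = i / 1000                                # candidate displayed confidence
    payoff = c * 1.0 - (1 - p) * k * c ** 2     # persuasion benefit minus expected penalty
    if payoff > best_payoff:
        best_c, best_payoff = c, payoff

print(f"true probability: {p:.2f}, payoff-maximising displayed confidence: {best_c:.2f}")
```

With these made-up numbers the optimum (about 0.83) sits above the true probability (0.60): the displayed confidence is pinned down by the persuasion/penalty trade-off rather than by accuracy, which is one way of reading “inflate beliefs for persuasion”.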
I am not (in my mind at least) arguing against a functional account by appealing to limitations. I think ‘truth’ or correct estimation is not self-evident, and ‘biases’ can be very effective ways of navigating the world. So effective, in fact, that I wonder whether we need anything more than the argumentative function of reasoning, belief-consistency, and self-as-reference heuristics to explain overconfidence. Evidence that beliefs inflate when people are asked to convince others points to an additional, very local, motivational effect. So yes, it seems we do.
It also seems that the baseline is often already shifted toward overconfidence, even before persuasion stakes arise. Is this a more general overshoot-to-persuade-more, or is there another possible function? If our beliefs exist primarily to guide action rather than to map reality accurately, then believing ourselves to be (slightly) more capable, competent, or worthy than we currently are naturally encourages us to aim higher. I would expect this second kind to apply in non-social contexts as well.
To avoid altering my own beliefs too much, could I say that lie detection is not very effective because people are actually monitoring confidence (your point, I think) and coherence cues (Mercier’s point)? The usefulness of those cues depends on how closely they track the truth, and that relation varies across settings.
Would you agree with this reframing? I also hope I’m not imposing with my comments.
As Gilbert said,
"My boy, you may take it from me,
That of all the afflictions accurst
With which a man’s saddled
And hampered and addled,
A diffident nature’s the worst."