Human ability to spot liars and falsehoods
“We analyze the accuracy of deception judgments, synthesizing research results from 206 documents and 24,483 judges. In relevant studies, people attempt to discriminate lies from truths in real time with no special aids or training. In these circumstances, people achieve an average of 54% correct lie-truth judgments, correctly classifying 47% of lies as deceptive and 61% of truths as nondeceptive. Relative to cross-judge differences in accuracy, mean lie-truth discrimination abilities are nontrivial, with a mean accuracy d of roughly .40. This produces an effect that is at roughly the 60th percentile in size, relative to others that have been meta-analyzed by social psychologists. Alternative indexes of lie-truth discrimination accuracy correlate highly with percentage correct, and rates of lie detection vary little from study to study. Our meta-analyses reveal that people are more accurate in judging audible than visible lies, that people appear deceptive when motivated to be believed, and that individuals regard their interaction partners as honest. We propose that people judge others’ deceptions more harshly than their own and that this double standard in evaluating deceit can explain much of the accumulated literature.”
I have been unable to find a non-gated version of this study by Bond and DePaulo. The main result above (‘54%’) means that, on average, people are hardly better than chance at identifying deception. This is the result of an analysis of 206 studies which have looked at this, with almost 25,000 participants – it’s not just a fluke, we really are that bad at telling whether people are telling us the truth or not. This link has more:
“There are a number of reasons for this poor ability; among them poor feedback in daily life (i.e. a person only knows about the lies they have caught); the general tendency among people to believe others until proven otherwise (i.e. a “truth bias”); and especially a faulty understanding of what liars actually look like (i.e. the difference between people’s perceived clues to lying, compared to the actual clues). […]
Most of the studies reviewed were laboratory based and involved observers judging strangers. But similar results are found even when the liars and truth tellers are known to the observers (also reviewed by ). If the lies being told are low stakes, so that little emotion is aroused and the lie can be told without much extra cognitive effort, there may be few clues available on which to base a judgment. But even studies of high stakes lies, in which both liars and truth tellers are highly motivated to be successful, suggest an accuracy level that is not much different from chance.”
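It may help to see how the headline numbers hang together. A minimal sketch, assuming (as in most of the reviewed studies) an even 50/50 mix of truths and lies presented to judges:

```python
# Rates reported in the Bond & DePaulo meta-analysis:
lie_acc = 0.47    # lies correctly classified as deceptive
truth_acc = 0.61  # truths correctly classified as nondeceptive

# Under equal base rates, overall percent correct is simply the
# average of the two classification rates.
overall = (lie_acc + truth_acc) / 2
print(f"overall accuracy: {overall:.0%}")  # -> 54%

# The asymmetry between the two rates reflects the "truth bias":
# the share of all statements judged truthful exceeds 50%.
judged_truthful = (truth_acc + (1 - lie_acc)) / 2
print(f"share judged truthful: {judged_truthful:.0%}")  # -> 57%
```

So judges say “truth” noticeably more often than “lie”, which buys them accuracy on truths at the cost of accuracy on lies, while the overall hit rate stays barely above the 50% chance level.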
All of this is of course complicated greatly by the problem that the truth/lie variable often isn’t binary in our everyday lives – another way to think about it is to regard any statement* as having a truth component, a continuous variable ranging from 0 to 1 and spanning the entire range in between. I also suspect that padding a non-obvious lie with confounding material that is actually true is one of several common strategies for making lies harder to spot.
*if we use Popperian terminology and add ‘basic’ in front of ‘statement’, we also take care of the problem that some statements, e.g. value judgments, have an undefined truth component. But most statements aren’t basic statements, so anyway…