constantly blunder into disasters. It would be better for the species if no one exaggerated, but our brains were not selected for the benefit of the species, and no individual can afford to be the only honest one in a community of self-enhancers.89
Overconfidence makes the tragedy of predation even worse. If people were completely rational, they would launch an act of predatory aggression only if they were likely to succeed and only if the spoils of success exceeded the losses they would incur in the fighting. By the same token, the weaker party should concede as soon as the outcome was a foregone conclusion. A world with rational actors might see plenty of exploitation, but it should not see many fights or wars. Violence would come about only if the two parties were so closely matched that a fight was the only way to determine who was stronger.
But in a world with positive illusions, an aggressor may be emboldened to attack, and a defender emboldened to resist, well out of proportion to their odds of success. As Winston Churchill noted, “Always remember, however sure you are that you can easily win, that there would not be a war if the other man did not think he also had a chance.”90 The result can be wars of attrition (in both the game-theoretic and military sense), which, as we saw in chapter 5, are among the most destructive events in history, plumping out the tail of high-magnitude wars in the power-law distribution of deadly quarrels.
Military historians have long noted that leaders make decisions in war that are reckless to the point of delusion.91 The invasions of Russia by Napoleon and, more than a century later, by Hitler are infamous examples. Over the past five centuries, countries that initiated wars have ended up losing them between a quarter and a half of the time, and when they won, the victories were often Pyrrhic.92 Richard Wrangham, inspired by Barbara Tuchman’s The March of Folly: From Troy to Vietnam and by Robert Trivers’s theory of self-deception, suggested that military incompetence is often a matter not of insufficient data or mistakes in strategy but of overconfidence.93 Leaders overestimate their prospects of winning. Their bravado may rally the troops and intimidate weaker adversaries, but it may also put them on a collision course with an enemy who is not as weak as they think and who may be under the spell of an overconfidence of its own.
The political scientist Dominic Johnson, working with Wrangham and others, conducted an experiment to test the idea that mutual overconfidence could lead to war.94 They ran a moderately complicated war game in which pairs of participants pretended to be national leaders who had opportunities to negotiate with, threaten, or mount a costly attack on each other in competition for diamonds in a disputed border region. The winner of the contest was the player who had more money at the end of several rounds of play, provided his or her nation survived at all. The players interacted by computer and could not see each other, so neither knew whether the opponent was a man or a woman. Before they began, participants were asked to predict how well they would do relative to everyone else playing the game. The experimenters got a nice Lake Wobegon Effect: a majority thought they would do better than average. Now, in any Lake Wobegon Effect, it’s possible that not many people really are self-deceived. Suppose 70 percent of people say they are better than average. Since half of any population really is above average, the genuinely superior half could account for most of the claims, leaving perhaps only 20 percent who think too well of themselves. That was not the case in the war game: the more confident a player was, the worse he or she did. Confident players launched more unprovoked attacks, especially when playing each other, which triggered mutually destructive retaliation in subsequent rounds. It will come as no surprise to women that the overconfident and mutually destructive pairs of players were almost exclusively men.
To evaluate the overconfidence theory in the real world, it’s not enough to notice in hindsight that certain military leaders proved to be mistaken. It has to be shown that at the time of making a fateful decision, a leader had access to information that would have convinced a disinterested party that the venture would probably fail.
In Overconfidence and War: The Havoc and Glory of Positive Illusions, Johnson vindicated Wrangham’s hypothesis by looking at the