to get good perspective.” Eastman described the core trait of the best forecasters to me as: “genuinely curious about, well, really everything.”
Ellen Cousins researches fraud for trial lawyers. Her research naturally roams from medicine to business. She has wide-ranging interests on the side, from collecting historical artifacts to embroidery, laser etching, and lock picking. She conducts pro bono research on military veterans who should (and sometimes do) get upgraded to the Medal of Honor. She felt exactly the same as Eastman. Narrow experts are an invaluable resource, she told me, “but you have to understand that they may have blinders on. So what I try to do is take facts from them, not opinions.” Like polymath inventors, Eastman and Cousins take ravenously from specialists and integrate.
Superforecasters’ online interactions are exercises in extremely polite antagonism, disagreeing without being disagreeable. Even on a rare occasion when someone does say, “‘You’re full of beans, that doesn’t make sense to me, explain this,’” Cousins told me, “they don’t mind that.” Agreement is not what they are after; they are after aggregating perspectives, lots of them. In an impressively unsightly image, Tetlock described the very best forecasters as foxes with dragonfly eyes. Dragonfly eyes are composed of tens of thousands of lenses, each with a different perspective, which are then synthesized in the dragonfly’s brain.
One forecast discussion I saw was a team trying to predict the highest single-day close for the exchange rate between the U.S. dollar and Ukrainian hryvnia during an extremely volatile stretch in 2014. Would it be less than 10, between 10 and 13, or more than 13? The discussion started with a team member offering percentage predictions for each of the three possibilities, and sharing an Economist article. Another team member chimed in with a Bloomberg link and online historical data, and offered three different probability predictions, with “between 10 and 13” favored. A third teammate was convinced by the second’s argument. A fourth shared information about the dire state of Ukrainian finances. A fifth addressed the broader issue of how exchange rates change, or don’t, in relation to world events. The teammate who started the conversation then posted again; he was persuaded by the previous arguments and altered his predictions, but still thought they were overrating the possibility of “more than 13.” They continued to share information, challenge one another, and update their forecasts. Two days later, a team member with specific expertise in finance saw that the hryvnia was strengthening amid events he thought would surely weaken it. He chimed in to inform his teammates that this was exactly the opposite of what he expected, and that they should take it as a sign of something wrong in his understanding. In contrast to politicians, the most adept predictors flip-flop like crazy. The team finally homed in on “between 10 and 13” as the heavy favorite, and they were correct.
In separate work, from 2000 to 2010 German psychologist Gerd Gigerenzer compiled annual dollar-euro exchange rate predictions made by twenty-two of the most prestigious international banks—Barclays, Citigroup, JPMorgan Chase, Bank of America Merrill Lynch, and others. Each year, every bank predicted the end-of-year exchange rate. Gigerenzer’s simple conclusion about those projections, from some of the world’s most prominent specialists: “Forecasts of dollar-euro exchange rates are worthless.” In six of the ten years, the true exchange rate fell outside the entire range of all twenty-two bank forecasts. Where a superforecaster quickly highlighted a change in exchange rate direction that confused him, and adjusted, major bank forecasts missed every single change of direction in the decade Gigerenzer analyzed.
• • •
A hallmark of interactions on the best teams is what psychologist Jonathan Baron termed “active open-mindedness.” The best forecasters view their own ideas as hypotheses in need of testing. Their aim is not to convince their teammates of their own expertise, but to encourage their teammates to help them falsify their own notions. In the sweep of humanity, that is not normal. Asked a difficult question—for example, “Would providing more money for public schools significantly improve the quality of teaching and learning?”—people naturally come up with a deluge of “myside” ideas. Armed with a web browser, they don’t start searching for why they are probably wrong. It is not that we are unable to come up with contrary ideas, it is just that our strong instinct is not to.
Researchers in Canada and the United States began a 2017 study by asking a politically diverse and well-educated group of adults to read arguments