55 pages • 1 hour read
Philip E. Tetlock, Dan Gardner
“We are all forecasters. When we think about changing jobs, getting married, buying a home, making an investment, launching a product, or retiring, we decide based on how we expect the future will unfold. These expectations are forecasts.”
The authors begin by democratizing the idea of forecasting. By drawing attention to the fact that making predictions is something we all do daily, they seek to make the topic of their book accessible and relatable. It is as though the reader has already taken the first step toward becoming a superforecaster.
“Every day, corporations and governments pay for forecasts that may be prescient or worthless or something in between. And every day, all of us—leaders of nations, corporate executives, investors, and voters—make critical decisions on the basis of forecasts whose quality is unknown.”
Here, the authors set out the problem that the Good Judgment Project seeks to fix: the low quality of current forecasts, which governments and institutions nevertheless use to make crucial decisions that affect millions of lives. The authors assert that the unknown quality of these forecasts should set alarm bells ringing.
“Accuracy is seldom even mentioned. Old forecasts are like old news—soon forgotten—and pundits are almost never asked to reconcile what they said with what actually happened. The one undeniable talent that talking heads have is their skill at telling a compelling story with conviction, and that is enough.”
This quote centers on a main preoccupation of the authors: the impossibility of improving without sufficient feedback. The fact that forecasters in the media are rarely held accountable for their inaccuracies seemingly minimizes the stakes of their enterprise and allows them to get away with mediocrity. The book will go on to contrast this lack of accountability with the feedback-heavy culture of the Good Judgment Project.
“Our desire to reach into the future will always exceed our grasp. But debunkers go too far when they dismiss all forecasting as a fool’s errand. I believe it is possible to see into the future, at least in some situations and to some extent, and that any intelligent, open-minded, and hardworking person can cultivate the requisite skills.”
While Tetlock acknowledges that the future is unknowable and always slightly beyond our grasp, he argues it is an exaggeration to say that all forecasting is futile. Instead, he advocates for a middle ground, where it is possible to see into the future some of the time. He also suggests there are techniques and mental aptitudes that make it more possible to conquer this seemingly insurmountable task. When he states that “any intelligent, open-minded, and hardworking person can cultivate the requisite skills,” he demonstrates a growth mindset.
“Gathering all evidence and mulling it over may be the best way to produce accurate answers, but a hunter-gatherer who consults statistics on lions before deciding whether to worry about the shadow moving in the grass isn’t likely to live long enough to bequeath his accuracy-maximizing genes to the next generation. Snap judgments are sometimes essential.”
This quote aims to explain the prevalence of spontaneous, System-1-style thinking, including snap judgments. From an evolutionary perspective, such fast decision-making was essential to our hunter-gatherer ancestors who continually faced immediate threats to their survival. Although most of us live in a very different world from these primitive ancestors, we typically maintain their mode of thinking.
“If we are serious about measuring and improving […] forecasts must have clearly defined terms and timelines. They must use numbers. And one more thing is essential: we must have lots of forecasts.”
Tetlock asserts the importance of measuring forecasts with numbers instead of vague verbal terms like “maybe” or “probably not.” Numeric quantification can test the accuracy of forecasts and hold forecasters accountable. It also makes forecasting a more scientific process, as does having multiple forecasts, so that the data from those forecasts can be aggregated.
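The numeric scoring the authors call for is the Brier score, which Tetlock’s forecasting tournaments use to measure accuracy. A minimal sketch in Python (the 70% forecast below is an invented example):

```python
def brier_score(prob_yes: float, happened: bool) -> float:
    """Two-category Brier score, as used in Tetlock's tournaments:
    sum of squared errors over both possible outcomes.
    0.0 = perfect foresight, 0.5 = a 50/50 hedge, 2.0 = confidently wrong."""
    outcome_yes = 1.0 if happened else 0.0
    return (prob_yes - outcome_yes) ** 2 + ((1 - prob_yes) - (1 - outcome_yes)) ** 2

# A forecaster who said 70% "yes" on a question that resolved yes:
print(round(brier_score(0.7, True), 2))  # 0.18
```

Because the score is a concrete number, forecasts stated as “maybe” or “probably not” become testable claims, and many scored forecasts can be averaged to rank forecasters.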
“They deploy not one analytical idea but many and seek out information not from one source but many. Then they synthesize it all into a single conclusion. In a word, they aggregate.”
The authors argue that considering multiple perspectives is essential to good forecasting. The best forecasters do not seize upon their first intuition but instead seek out as many perspectives as possible. Then, such forecasters aggregate, giving due weight to each side of an argument.
“Think how shocking it would be to the intelligence professionals who have spent their lives forecasting geopolitical events—to be beaten by a few hundred ordinary people and some simple algorithms. It actually happened.”
The authors announce the extraordinary power of superforecasters and invite the reader to imagine what it would be like for untrained “ordinary people” to beat trained professionals in their sphere of forecasting. The short, emphatic statement that follows, “It actually happened,” delivers the payoff with dramatic force.
“Doug has no special expertise in international affairs, but he has a healthy curiosity about what’s happening. He reads the New York Times. He can find Kazakhstan on a map. So he volunteered for the Good Judgment Project. Once a day, for an hour or so, his dining room table became his forecasting center, where he opened his laptop, read the news, and tried to anticipate the fate of the world.”
Here, the authors introduce a typical superforecaster in an understated, casual way, much like they present other superforecasters throughout the book. The authors emphasize that Doug is not a subject specialist. His habit of reading the news and his curiosity about the fate of the world could be characteristics of any intelligent retiree; however, the difference is that Doug is contributing to a professional forecasting body.
“For superforecasters, beliefs are hypotheses to be tested, not treasures to be guarded. It would be facile to reduce superforecasting to a bumper-sticker slogan, but if I had to, that would be it.”
The emphasis on the fact that beliefs are hypotheses rather than treasures draws attention to the mental flexibility required of superforecasters. The authors go on to contrast such open-mindedness with the common tendency to fuse beliefs with personal identity, which makes people defend their views rather than test them.
“It is a huge mistake to belittle belief updating. It is not about mindlessly adjusting forecasts in response to whatever is on CNN. Good updating requires the same skills used in making the initial forecast and is often just as demanding. It can even be more challenging.”
In response to the criticism that superforecasters are so good at their jobs only because they keep a constant eye on the news, Tetlock argues that updating a forecast to account for new information is not as easy as it seems. It requires the same deliberation used in the initial forecast, as forecasters must decide how much weight to give to new facts. Finding the right balance places even more demands on their skills.
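The disciplined updating the authors describe is essentially Bayesian: new evidence shifts a forecast in proportion to how diagnostic it is, rather than being ignored or overweighted. A hedged sketch, with all the numbers invented for illustration:

```python
def bayes_update(prior: float, p_evidence_if_true: float,
                 p_evidence_if_false: float) -> float:
    """Bayes' rule: revise the probability of a hypothesis given
    how likely the new evidence is under each possibility."""
    numerator = prior * p_evidence_if_true
    denominator = numerator + (1 - prior) * p_evidence_if_false
    return numerator / denominator

# A forecaster starts at 40%. A news report appears that is twice as
# likely if the event is coming (likelihoods here are assumptions):
updated = bayes_update(prior=0.40, p_evidence_if_true=0.8,
                       p_evidence_if_false=0.4)
print(round(updated, 3))  # 0.571
```

The judgment lies in estimating the two likelihoods: deciding how diagnostic a new fact really is demands the same skill as the original forecast.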
“I understand the desire for fail-safe rules that guarantee good results. It is the source of our fatal attraction to hedgehog pundits and their false certainty. But there is no magical formula, just broad principles with lots of caveats. Superforecasters understand the principles but also know that their application requires nuanced judgments. And they would rather break the rules than make a barbarous forecast.”
Here, the authors show that at its core, good forecasting requires flexibility. Superforecasters look for a middle way that respects helpful rules but still allows for those rules to be broken when necessary. The authors contrast such flexibility with a general discomfort with uncertainty and the resulting attention paid to hedgehog pundits who deliver a false sense of certainty at a great cost to accuracy.
“The strongest predictor of rising into the ranks of superforecasters is perpetual beta, the degree to which one is committed to belief updating and self-improvement. It is roughly three times as powerful a predictor as its closest rival, intelligence.”
Here, the authors show that a growth mindset is what most ensures that forecasters rise into the ranks of superforecasters. Rather than being satisfied with the knowledge they have, these successful individuals strive for improvement via the process of continually updating what they know and how they think.
“Of course aggregation can only do its magic when people form judgments independently, like the fairgoers guessing the weight of the ox. The independence of judgments ensures that errors are more or less random, so they cancel each other out […] In so many ways, a group can get people to abandon independent judgment and buy into errors. When that happens, the mistakes will pile up, not cancel out.”
The authors highlight the importance of maintaining each group member’s independent judgment. Without it, a herding effect can take hold, in which members echo one another and the same errors multiply instead of cancelling out. Thus, aggregation works only when the people in a group form their judgments as individuals.
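The ox-weighing logic can be simulated directly: when errors are independent, they largely cancel in the average. The simulation below assumes Galton’s famous ox, which weighed 1,198 pounds; the number of fairgoers and the size of their random errors are assumptions for illustration:

```python
import random

random.seed(42)
TRUE_WEIGHT = 1198  # pounds; Galton's ox, recounted in the book

# Each fairgoer guesses independently, with random error around the truth:
guesses = [TRUE_WEIGHT + random.gauss(0, 100) for _ in range(800)]

crowd_average = sum(guesses) / len(guesses)
typical_individual_error = sum(abs(g - TRUE_WEIGHT) for g in guesses) / len(guesses)

print(f"average guess: {crowd_average:.0f} lb")  # lands very near 1198
print(f"typical individual error: {typical_individual_error:.0f} lb")
```

If the guessers instead copy one another, their errors become correlated and the averaging advantage disappears, which is exactly the failure mode the quote warns against.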
“Consensus is not always good; disagreement not always bad. If you do happen to agree, don’t take that argument—in itself—as proof that you are right. Never stop doubting. Pointed questions are as essential to a team as vitamins are to a human body.”
Here, the authors reveal that a superforecaster’s secret ingredient is doubt. Without this essential ingredient (which can be nurtured by others disagreeing with us), we can fall into complacency, and our forecasts suffer. The simile likening pointed questions to vitamins vividly conveys how essential doubt is to a team’s intellectual health.
“Leaders must decide, and to do that they must make and use forecasts. The more accurate those forecasts are, the better, so the lessons of superforecasting should be of intense interest to them.”
The authors posit that superforecasting should be essential to good leadership, as leaders must make decisions about not only their own future but also the futures of other people. Thus, accurate forecasts should play a key role in leaders’ decisions. Such assertions underscore that Tetlock and Gardner are interested in more than science or information for its own sake; they want to change the world on a practical level.
“The humility required for good judgment is not self-doubt—the sense that you are untalented, unintelligent, or unworthy. It is intellectual humility. It is a recognition that reality is profoundly complex, that seeing things clearly is a constant struggle, when it can be done at all, and that human judgment must therefore be riddled with mistakes.”
Here, the authors define the quality of humility, which is useful to aspiring superforecasters. Rather than having doubt in one’s innate abilities, the type of humility that benefits forecasting comprises acknowledging the complexity of a situation and the limits of one’s knowledge. Such humility fosters the healthy doubt that is essential to good judgment.
“We may have no evidence that superforecasters can foresee events like those of September 11, 2001, but we do have a warehouse of evidence that they can forecast questions like: Will the United States threaten military action if the Taliban don’t hand over Osama bin Laden? Will the Taliban comply? […] To the extent that such forecasts can anticipate the consequences of events like 9/11, and these consequences make a black swan what it is, we can forecast black swans.”
Here, the authors show that while shocking black swan events like 9/11 are beyond the scope of prediction, forecasters can still offer insight into other ancillary events that might lead up to the black swan. This means that paying attention to the likelihood of small but significant events can help forecast the big ones, even with the latter’s aura of unpredictability.
“Brushing off surprises makes the past look more predictable than it was—and this encourages the belief that the future is much more predictable than it is.”
The human discomfort with uncertainty leads us to hindsight bias, in which we mistakenly declare that past events could have been forecast. Like much of the book, this passage suggests the mind is a double-edged sword, capable of both extraordinary insight and cognitive distortion.
“If you are as mathematically inclined as Taleb, you get used to the idea that the world we live in is but one that emerged, quasi-randomly, from a vast population of once-possible worlds. The past did not have to unfold as it did, the present did not have to be what it is, and the future is wide open. History is a virtually infinite array of possibilities.”
Whereas superforecasters are humble about their predictive abilities, Nassim Taleb goes one step further, asserting that the future is so unpredictable that everything emerges as a matter of chance. The reference to a “vast population of once-possible worlds” gives the impression of chaotic randomness.
“I also believe that humility should not obscure the fact that people can, with considerable effort, make accurate forecasts about at least some developments that really do matter. To be sure, in the big scheme of things, human foresight is puny, but it is nothing to sniff at when you live on that puny human scale.”
The authors challenge Taleb’s view that forecasting is futile, even as they acknowledge that human foresight is minimal in the “big scheme of things.” However, being humans in a human world, the authors suggest, gives us some insight into human affairs. Thus, we have the equipment to predict the actions of our fellow beings.
“Forecasters themselves will realize […] that […] higher expectations will ultimately benefit them, because it is only with the clear feedback that comes from rigorous testing that they can improve their foresight. This could be huge—an ‘evidence-based forecasting’ revolution similar to the ‘evidence-based medicine’ revolution, with consequences every bit as significant.”
Tetlock and Gardner repeatedly emphasize the importance of feedback to improving forecasting standards. They compare forecasting to medicine, an area where the public has come to expect evidence-based testing. However, by reminding us that there was a “revolution” in the standardization of evidence-based testing in medicine, the authors show how what is now common sense was not always the case.
“It follows that the goal of forecasting is not to see what’s coming. It is to advance the interests of the forecaster and the forecaster’s tribe. Accurate forecasts may help do that sometimes, and when they do accuracy is welcome, but it is pushed aside if that’s what the pursuit of power requires.”
Here, the authors outline the vested interests that any establishment has in keeping accuracy a secondary consideration in forecasting. Instead, those in power typically want forecasts to present a favorable picture of their own future and so further their power. This desire is a serious obstacle to quality control in forecasting.
“Another critical dimension of good judgment is asking good questions. Indeed, a farsighted forecast flagging disaster or opportunity can’t happen until someone thinks to pose the question. What qualifies as a good question? It’s one that gets us thinking about something worth thinking about. So one way to identify a good question is what I call the smack-in-the-forehead test: when you read the question after time has passed, you smack your forehead and say, ‘If only I had thought of that before!’”
Here, the authors show that big ideas and the questions they lead to are an important part of forecasting, as such questions ensure that predictions are about fundamentally important topics. The colloquial “smack-in-the-forehead test” suggests that good questions should register on a visceral level, signaling that they are worth answering.
“The ‘try, fail, analyze, adjust, try again’ cycle—and […] the grit to keep at it and keep improving. Bill Flack is perpetual beta. And that, too, is why he is a superforecaster.”
At the end of the book, the authors return to superforecaster Bill Flack, who was also the first to be mentioned, and underscore that Flack’s commitment to perpetual learning and improvement is the reason behind his success. They show that a humble tolerance for frustration, together with a willingness to learn from past mistakes, is paramount to the art and science of predicting the future.