2.1 Betting as training
- Test your understanding: According to this section, who is expected to make more accurate predictions: subject-matter experts, or people with a general “superforecasting” skill set?
- Explore: What would be a way to combine the different strengths of gamblers, subject-matter experts, and superforecasters?
2.2 Honest forecasting vs. extremism
- Test your understanding: In a forecasting contest with a small number of entrants, if you believe an event is only 11% likely to happen, what is the problem with rounding that down and guessing 0%?
- Explore: What is an example of a business situation that calls for black-or-white forecasting, that is, committing strongly to the position that an event will certainly happen or not happen?
2.3 Updating on new facts
- Test your understanding: This section, written in the earliest days of the COVID-19 pandemic, shows a graph of case counts in China by day. Based on that graph, what was an argument for saying that the risk of COVID-19 would be minimal?
- Explore: When you are attempting to predict the outcome or timing of a project within your organization, what is an example of new evidence that might cause you to adjust your forecast?
2.4 Mitigating risk
- Test your understanding: According to this section, why does a volatile stock market or an emerging pandemic suggest a strategy of making more modest forecasts—that is, closer to 50%—even for seemingly unrelated propositions?
- Explore: Inside your organization, who plays the role of chief risk officer? How would you ensure that this person is taking into account a sufficiently broad set of risks to the enterprise?
2.5 Distinguishing luck from skill
- Test your understanding: According to this section, what is wrong with using a short track record (that is, the accuracy of a small number of binary forecasts) to determine how skillful a forecaster is?
- Explore: What is an example of a prediction that you have made in your life where, in retrospect, you believe you got the right answer for the wrong reason? How can you tell?
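The point behind the first question in 2.5 can be made numerically: over a small number of binary forecasts, pure luck can easily mimic skill. A minimal sketch using the binomial distribution (the hit rates and sample sizes here are illustrative, not from the text):

```python
from math import comb

def prob_at_least(n, k, p):
    """Probability of at least k successes in n independent trials,
    each with success probability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# How often a pure guesser (50% accurate) matches a "7 out of 10" track record:
short_record = prob_at_least(10, 7, 0.5)   # 176/1024, about 17% of the time
# The same 70% hit rate sustained over 100 forecasts is far harder to fake:
long_record = prob_at_least(100, 70, 0.5)  # well under 1 in 10,000
```

With ten forecasts, roughly one guesser in six looks as good as a genuinely skilled forecaster; only longer track records separate the two.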
2.6 Political forecasting
- Test your understanding: Why do political pundits have a mediocre track record in making accurate, time-bound predictions?
- Explore: Do you consider yourself more of a hedgehog or a fox? If a little bit of both, what are the contexts in which you are more foxlike?
2.7 Teaming up
- Test your understanding: This section argues that most forecasters would benefit from having a collaborator, even one selected at random. Why is it better to work independently and then combine forecasts, as opposed to collaborating closely along the way?
- Explore: When making a forecast within your organization, how might you find a collaborator who thinks differently from you, has other expertise or information sources, or uses a different forecasting style?
2.8 Managing uncertainty
- Test your understanding: Assume you believe a political election is genuinely a 50/50 tossup. How can you use this belief to your advantage?
- Explore: Consider this argument: “The world is an inherently unknowable place. A butterfly may flap its wings and eventually cause a hurricane somewhere else. Therefore, there is never any benefit to making any kind of prediction.” How would you critique this argument?
2.9 Betting revisited
- Test your understanding: Why is it a good idea to practice making bets on your beliefs, “proportional to an amount of money that it would slightly hurt you to lose”?
- Explore: The investor Peter Thiel, in his 2014 book Zero to One, argued that the best interview question is “What important truth do very few people agree with you on?” Think about your answer to this question. How might you translate this answer into a proposition that you can bet on?
2.10 Algorithms vs. experts
- Test your understanding: Which method is more likely to give you an accurate prediction of how the Supreme Court will decide a case: talking to legal experts, or using a simple algorithm trained on hundreds of prior decisions?
- Explore: One risk of using a simple algorithm is that conditions may have changed enough—such as by a change in the Justices on the Court—that the old algorithm has less predictive power. How can you adjust your forecasting technique when you believe this has happened?
2.11 Managing underconfidence
- Test your understanding: According to this section, what is the problem with underconfidence: having a strong basis for making a confident prediction but not acting on it?
- Explore: The section says, “Public equities are annoyingly efficient for the retail investor, but it’s less crazy to think you can make an advantageous bet on residential real estate or when playing poker against a well-known set of bozos.” Within your own industry, what is an area where you have a meta-rational belief that you can beat the market, and why?
2.12 Forecasting crime
- Test your understanding: If you believe that a recent change in the crime rate has been caused by a temporary change in circumstances that will soon end, would you rather use a mean-reversion or a trend-following technique to forecast future crime rates?
- Explore: Some metrics (such as the example given in this section, homicides in New York City) have a data history going back hundreds of years. In what circumstances would a very long-term data history be useful for making forecasts today, and when might the long-term data lead you astray?
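One toy way to contrast the two techniques named in 2.14's first question: a mean-reversion forecast pulls back toward a longer-run average, while a trend-following forecast extrapolates the most recent movement. The specific implementations and the crime-count numbers below are my own illustration, not the section's:

```python
def mean_reversion_forecast(history, baseline_window=8):
    """Forecast the next value as the longer-run average,
    expecting recent deviations to fade."""
    window = history[-baseline_window:]
    return sum(window) / len(window)

def trend_following_forecast(history):
    """Forecast the next value by extrapolating the latest change."""
    return history[-1] + (history[-1] - history[-2])

# Hypothetical monthly crime counts with a recent temporary spike:
rates = [300, 310, 305, 295, 300, 298, 380, 390]
mean_reversion_forecast(rates)   # averages the spike away, toward ~322
trend_following_forecast(rates)  # extrapolates the spike: 400
```

If you believe the spike is driven by circumstances that will soon end, the mean-reversion number is the better bet; if the change is structural, trend-following wins.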
2.13 Calibration and error correction
- Test your understanding: Why is it useful for forecasters to examine their biggest misses: predictions that were the farthest off the mark?
- Explore: Consider this sentence: “The funny thing about probabilistic forecasting is that ‘ex-post mistakes’ can be dug up and studied one at a time, as I did above, while ‘ex-ante mistakes’ are revealed only by stepping back and looking at groups of forecasts together.” What practical steps does this imply for someone who wants to get better at forecasting?
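The quoted distinction in 2.13, that ex-ante mistakes are revealed only by looking at groups of forecasts together, is what a calibration table makes visible. A minimal sketch (the bucketing scheme and example forecasts are my own, assumed for illustration):

```python
from collections import defaultdict

def calibration_table(forecasts):
    """Bucket (stated probability, binary outcome) pairs by decile and
    compare the average stated probability with the observed frequency."""
    buckets = defaultdict(list)
    for p, outcome in forecasts:
        buckets[min(int(p * 10), 9)].append((p, outcome))
    report = {}
    for decile in sorted(buckets):
        pairs = buckets[decile]
        avg_p = sum(p for p, _ in pairs) / len(pairs)
        hit_rate = sum(o for _, o in pairs) / len(pairs)
        report[decile] = (avg_p, hit_rate, len(pairs))
    return report

# Four "80%" forecasts of which only half came true: each looks defensible
# one at a time, but the group reveals an ex-ante overconfidence.
report = calibration_table([(0.8, 1), (0.8, 1), (0.8, 0), (0.8, 0)])
```

Here `report[8]` shows an average stated probability of 0.8 against an observed hit rate of 0.5, a gap no single forecast could have exposed.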
2.14 Diverging forecasts
- Test your understanding: How would you summarize the forecasting technique used by Superforecasters in predicting whether Greta Thunberg would win the Nobel Peace Prize?
- Explore: This section suggests six possible reasons why prediction markets gave a much higher probability of a 2020 Donald Trump win than poll aggregators did. Which of these reasons, or others, do you find most persuasive? What does this imply for making better forecasts in the future?
2.15 Lessons, part I
- Test your understanding: Why is the author skeptical of the idea that artificial intelligence will inevitably lead to more accurate forecasts?
- Explore: This section argues that you should “size your positions based on your confidence level,” placing comparatively larger bets where you are more confident that you have an accurate belief that is different from the consensus. How would you put this idea into action within a business (as opposed to investing or gambling) context?
2.16 Lessons, part II
- Test your understanding: When is it appropriate to “abstain” from a bet (e.g., in a forecasting contest, giving a forecast of 50% for a binary proposition)?
- Explore: This section observes that an “extremized Superforecasters” ensemble forecast, taking the average of all Superforecaster predictions for each proposition and moving it 1/3 toward the nearest extreme of 0% or 100%, typically performs well. Why do you think it is useful to do the “extremizing” step, as opposed to just looking at the straight average of these forecasters?
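The extremizing recipe described in 2.16's second question can be written out directly: average the individual forecasts, then move the result one third of the way toward the nearest extreme. A minimal sketch (the example probabilities are invented for illustration):

```python
def extremize(probs, factor=1/3):
    """Average a set of probability forecasts, then move the average
    `factor` of the way toward the nearest extreme (0.0 or 1.0)."""
    avg = sum(probs) / len(probs)
    if avg >= 0.5:
        return avg + factor * (1.0 - avg)
    return avg - factor * avg

# Three forecasters at 70%, 80%, and 75% average to 75%;
# extremizing moves that a third of the way toward 100%, giving ~83%.
extremize([0.70, 0.80, 0.75])
```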