The Signal and the Noise

The Signal and the Noise: Why So Many Predictions Fail-but Some Don't, by Nate Silver. The Penguin Press. Hardcover, 544 pages. $27.

Political forecaster Nate Silver, who has made the frontiers of digital speculation his comfort zone, wants you to learn one thing above all else from The Signal and the Noise: Just because a prediction is wrong, that doesn’t mean it’s a bad prediction. And just because it’s right, that doesn’t mean the person who made it is smart.

Silver doesn’t offer one comprehensive theory for what makes a good prediction in his interdisciplinary tour of forecasting. But he does give us a well-worn literary analogy. Drawing on a pet image used by psychologist Philip Tetlock (who in turn adapted it from Isaiah Berlin, who cribbed it from the Greek poet Archilochus for his famous essay on Leo Tolstoy), Silver explains that there are two main types of prognosticators: the hedgehog and the fox.

Hedgehogs, Silver says, are those who believe “in governing principles about the world that behave as though they were physical laws.” Foxes, by contrast, “are scrappy creatures who believe in a plethora of little ideas and in taking a multitude of approaches toward a problem.”

The author casts himself as a fox, and he thinks you should be one, too. As Silver explains, predictions typically fail when people (hedgehog people, that is) ignore new information that conflicts with their worldview. And to remind us of how far afield the hedgehogs can wander, he cues up plenty of humiliating tape. There’s the economist who predicted a nine-percentage-point victory for Al Gore in the 2000 presidential election, based on an outmoded Vietnam-era model adapted from the computation of troop casualties. And there is the whole battery of Kremlinologists who missed the imminent decline of the Soviet Union because of their hidebound views of how communist leaders retained power. The heroes of The Signal and the Noise are those who stay nimble, forever incorporating new ideas and new information without drowning in a sea of extraneous data: people, in short, like Nate Silver.

Silver emerged from obscurity during the protracted 2008 Democratic primary; previously a baseball-statistics guru, he waded into the crowded but fairly amateur world of election forecasting with FiveThirtyEight.com, a site that aggregated and analyzed polling data. Silver took the statistical literacy he had developed predicting player performance in the major leagues and successfully applied it to the campaign cycle, combining shifts in the polling numbers with fund-raising data to arrive at a real-time picture of where the campaign was trending. He presented his findings as bell curves, which placed the most likely outcomes in the center and the remotest ones on the extremes. Those curves went a long way toward explaining the Democratic primary results that so many conventional pundits found so anomalous: the steady momentum building behind political neophyte Barack Obama’s campaign against the heavily favored Hillary Clinton juggernaut. Having run well ahead of the forecasting pack in 2008, Silver won the imprimatur that most online geeks at least secretly covet when the New York Times purchased the hosting rights to his blog in 2010.

The core of Silver’s approach is what’s known as probabilistic thinking. Rather than calling an outcome outright, this strategy works through the odds of many possible outcomes. As new information accrues, the forecaster updates those odds, weighing each fresh piece of evidence against the estimates made before it arrived.

Statisticians know this method as Bayesian reasoning, after a 250-year-old posthumously published paper by the English minister Thomas Bayes. Bayes’s rule is a simple morsel of algebra (the updated probability of a hypothesis is proportional to its prior probability times the likelihood of the observed evidence under that hypothesis), but it has become central to modern statistics. Silver could well have organized his entire book around Bayesian reasoning, though Bayes himself doesn’t put in an official appearance until midway through the book.
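
To see the rule at work, here is a minimal sketch in Python; the candidate, the poll, and every number in it are invented for illustration and appear nowhere in Silver’s book.

```python
def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Posterior probability of a hypothesis after one piece of evidence.

    prior:               P(hypothesis) before the evidence arrives
    likelihood_if_true:  P(evidence | hypothesis is true)
    likelihood_if_false: P(evidence | hypothesis is false)
    """
    numerator = prior * likelihood_if_true
    return numerator / (numerator + (1 - prior) * likelihood_if_false)

# Invented numbers: a candidate starts with a 30% chance of winning. A poll this
# favorable turns up in 70% of the races such a candidate wins and in 30% of the
# races she loses. One poll moves the forecast from 0.30 to 0.50.
print(bayes_update(prior=0.30, likelihood_if_true=0.70, likelihood_if_false=0.30))
# ~0.5
```

Feed the next poll that 0.50 as its new prior and you have the running revision described above: each fresh reading nudges the estimate rather than replacing it.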

The downside of Bayesian reasoning, of course, is the flatly embarrassing failure of a probabilistic prediction to come to pass. Predictions fail for many reasons, but Silver contends that the most frequent cause is a flaw in the forecaster’s assumptions, not an inherent shortcoming in the methodology. The now notorious ratings agencies that continued to stamp AAA ratings on collections of highly risky mortgages into 2008, he notes, would nonetheless present their findings with a full flourish of statistical exactitude, working out the odds of a security paying out to two decimal places. The larger problem lay with the foundational assumptions that shaped the universe of mortgage-backed securities: the supposition, for example, that the worst-case scenario was a manageable downturn in housing prices, one their models adequately provided for. Silver also notes the ways such flawed premises have distorted the debates over counterterrorism and global warming, though with less analytic rigor than he musters in his anatomy of the housing debacle.

You might call this species of cocked-up forecasting the tyranny of significant digits; more broadly, it is the cardinal mistake of dressing up uncertainty (an incalculable unknown) as risk (a highly calculable gamble with discrete odds). Risk is gambling on a flush in poker, knowing the odds are one in four of drawing the suit you need; uncertainty is playing poker without a clear idea of the rules or of the distribution of cards in the deck.
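
The flush example repays a closer look. One in four is the unconditional chance that a card drawn from a fresh deck lands in any named suit; a player who has already seen some of the cards conditions on them, which is exactly what makes risk calculable. A quick sketch (the five-card-draw setup is my illustration, not the book’s):

```python
from fractions import Fraction

# Unconditional: one card off a full 52-card deck is one in four
# to be any named suit.
naive = Fraction(13, 52)    # reduces to 1/4

# Conditional: holding four hearts in five-card draw, you have seen
# 5 cards, leaving 47 unseen, of which 9 are hearts.
flush_draw = Fraction(9, 47)

print(naive, float(flush_draw))    # 1/4 0.1914...
```

Either figure is risk in Silver’s sense: the deck is known, so the odds are exact. Uncertainty begins when the deck itself is in doubt.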

As Silver notes, the tension between risk and uncertainty is especially acute in the finance industry, which involves a vast amount of data and an effectively unlimited budget for the computing power to churn through it. Silver sets up this discussion nicely with an overview of the famous 1997 chess match between Deep Blue and Garry Kasparov, though the welter of inside-chess detail he supplies here may have some readers longing for Silver’s trademark, and far more accessible, inside-baseball computations.

The moral here is simple: Just as IBM’s software couldn’t rely only on superior processing power to beat the chess champion, the most sophisticated Wall Street algorithms are unable to reliably beat the market. Likewise, the best computer model is unable to flawlessly predict elections—and the same holds true, alas, for the best humans.

Still, Silver sees tremendous power at the interface of these two worlds—humans playing chess with a laptop or two at their sides. If Silver’s political predictions can be characterized as “computer-assisted reporting,” you might say he sees a future for human-assisted computing.

As the dust settles on another presidential campaign cycle, Silver’s book is a useful gloss on the tricky business of making predictions well. But as we look back over the vast wreckage of unforced human error that’s overtaken much of the past year’s political drama, from Mitt Romney’s leaked fund-raising video to Clint Eastwood’s empty-chair address before the Republican National Convention, we probably have more cause than ever to leave the messy business of predicting the course of human events to the professionals. That is, after all, what Nate Silver gets paid for.

Chris Wilson is a Yahoo! News columnist.