
Video: Blogger Nate Silver reveals how he called election

  Closed captioning of: Blogger Nate Silver reveals how he called election

    >>> Well, President Obama was not the only one who had a good night on Tuesday.

    >> In a race that came down to the wire, a blogger for the New York Times accurately predicted the outcome. Andrea, good morning.

    >> Good morning, guys. Well, he says he's a nerd, and he is trying to make math a little bit cool. Nate Silver is becoming something of a celebrity with his presidential predictions. His blog, called FiveThirtyEight, which represents the number of electoral votes, is quickly becoming the one to watch come election time. President Obama may have been the big winner this week, but coming in a close second: New York Times blogger, statistician, and self-described geek Nate Silver.

    >> Nate Silver. Lord and god of the algorithm.

    >> For the second straight presidential election, Silver's FiveThirtyEight blog was pretty close to perfect in predicting the result. He nailed the outcome in all 50 states. When you were a little kid, did you have all the answers?

    >> I think I always had a lot of questions. That's what smart people do.

    >> Reporter: Looking for a better way to pick the winning candidates, Silver created his own formula for playing the odds in political races. He detailed it in his new book, "The Signal and the Noise."

    >> People tend to see all this data, all the polls, and see the fluctuations and get very distracted by it. In the book I call it noise, as opposed to signal.

    >> Romney/Ryan will win with 281 electoral votes.

    >> Reporter: Avoiding the spin of talking heads, Silver sticks to the numbers: averaging statewide polls, factoring in uncertainty, and then running a variety of scenarios for how those probabilities would play out.

    >> When we say that Obama had, like, a 91% chance of winning the Electoral College, that meant that when we looked at all the combinations of different states, he won about 90% of those and Romney won the other 10%.

    >> Projecting that.

    >> Reporter: Like Jonah Hill's character in the movie "Moneyball," Silver got his start in baseball, predicting player performance. Do you think you're the Jonah Hill of politics?

    >> Maybe he's the Nate Silver of baseball. I'm not sure.

    >> Reporter: With four more years for the president, Nate has his own vision for the country.

    >> I think math does not need to be intimidating. It can be fun and interesting and useful.

    >> Reporter: You're making it kind of cool.

    >> Yeah, hopefully, right? Hopefully kids will pursue more math and science education. We need more of that to compete in this country.

    >> If sales of Silver's book "The Signal and the Noise" are any indication, math indeed took a big step up in the coolness factor this week. It rose to number two on Amazon. Behind... any idea what the number one book is?

    >> No.

    >> The latest installment of "Diary of a Wimpy Kid."

    >> Hey, very good. Nate Silver on his trail.

    >> Wimpy kid wins.

    >> And the nerdy kid.

    >> They're together.

By TODAY Books
Updated 11/8/2012 5:46:23 PM ET

New York Times blogger and statistician Nate Silver defied the experts when he correctly predicted the outcome of the presidential election. In “The Signal and The Noise,” Silver explains the mindset and methodology behind his forecasting. Here’s an excerpt.

The Promise and Pitfalls of “Big Data”

The fashionable term now is “Big Data.” IBM estimates that we are generating 2.5 quintillion bytes of data each day, more than 90 percent of which was created in the last two years.

This exponential growth in information is sometimes seen as a cure-all, as computers were in the 1970s. Chris Anderson, the editor of Wired magazine, wrote in 2008 that the sheer volume of data would obviate the need for theory, and even the scientific method.

This is an emphatically pro-science and pro-technology book, and I think of it as a very optimistic one. But it argues that these views are badly mistaken. The numbers have no way of speaking for themselves. We speak for them. We imbue them with meaning. Like Caesar, we may construe them in self-serving ways that are detached from their objective reality.

Data-driven predictions can succeed—and they can fail. It is when we deny our role in the process that the odds of failure rise. Before we demand more of our data, we need to demand more of ourselves.

This attitude might seem surprising if you know my background. I have a reputation for working with data and statistics and using them to make successful predictions. In 2003, bored at a consulting job, I designed a system called PECOTA, which sought to predict the statistics of Major League Baseball players. It contained a number of innovations—its forecasts were probabilistic, for instance, outlining a range of possible outcomes for each player—and we found that it outperformed competing systems when we compared their results. In 2008, I founded the Web site FiveThirtyEight, which sought to forecast the upcoming election. The FiveThirtyEight forecasts correctly predicted the winner of the presidential contest in forty-nine of fifty states as well as the winner of all thirty-five U.S. Senate races.
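The probabilistic approach described above can be illustrated with a toy Monte Carlo sketch. The state names, electoral-vote counts, and win probabilities below are invented for illustration and are not FiveThirtyEight's actual model, which averaged polls and accounted for correlated polling error across states.

```python
import random

def simulate_election(state_probs, trials=100_000, seed=42):
    """Estimate a candidate's chance of reaching 270 electoral votes
    by simulating many elections from per-state win probabilities."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        # In each simulated election, the candidate carries a state
        # with its assumed probability and collects its electoral votes.
        ev = sum(votes for votes, p in state_probs.values()
                 if rng.random() < p)
        if ev >= 270:
            wins += 1
    return wins / trials

# Hypothetical inputs: (electoral votes, probability the candidate
# wins that state). Totals sum to 538.
state_probs = {
    "Safe blue states": (237, 0.99),
    "Florida":          (29, 0.50),
    "Ohio":             (18, 0.75),
    "Virginia":         (13, 0.70),
    "Wisconsin":        (10, 0.65),
    "Colorado":         (9, 0.65),
    "Iowa":             (6, 0.60),
    "Nevada":           (6, 0.60),
    "New Hampshire":    (4, 0.60),
    "Safe red states":  (206, 0.01),
}

print(simulate_election(state_probs))
```

The fraction of simulated elections won is the headline "chance of winning" figure; with per-state probabilities like these, the simulated candidate wins far more often than the individual swing-state odds might suggest, which is the point of running the combinations rather than eyeballing the polls.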


After the election, I was approached by a number of publishers who wanted to capitalize on the success of books such as Moneyball and Freakonomics that told the story of nerds conquering the world. This book was conceived of along those lines—as an investigation of data-driven predictions in fields ranging from baseball to finance to national security.

But in speaking with well more than one hundred experts in more than a dozen fields over the course of four years, reading hundreds of journal articles and books, and traveling everywhere from Las Vegas to Copenhagen in pursuit of my investigation, I came to realize that prediction in the era of Big Data was not going very well. I had been lucky on a few levels: first, in having achieved success despite having made many of the mistakes that I will describe, and second, in having chosen my battles well.

Baseball, for instance, is an exceptional case. It happens to be an especially rich and revealing exception, and the book considers why this is so— why a decade after Moneyball, stat geeks and scouts are now working in harmony.

The book offers some other hopeful examples. Weather forecasting, which also involves a melding of human judgment and computer power, is one of them. Meteorologists have a bad reputation, but they have made remarkable progress, being able to forecast the landfall position of a hurricane three times more accurately than they were a quarter century ago. Meanwhile, I met poker players and sports bettors who really were beating Las Vegas, and the computer programmers who built IBM’s Deep Blue and took down a world chess champion.

But these cases of progress in forecasting must be weighed against a series of failures.

If there is one thing that defines Americans—one thing that makes us exceptional—it is our belief in Cassius’s idea that we are in control of our own fates. Our country was founded at the dawn of the Industrial Revolution by religious rebels who had seen that the free flow of ideas had helped to spread not just their religious beliefs, but also those of science and commerce. Most of our strengths and weaknesses as a nation—our ingenuity and our industriousness, our arrogance and our impatience—stem from our unshakable belief in the idea that we choose our own course.

But the new millennium got off to a terrible start for Americans. We had not seen the September 11 attacks coming. The problem was not want of information. As had been the case in the Pearl Harbor attacks six decades earlier, all the signals were there. But we had not put them together. Lacking a proper theory for how terrorists might behave, we were blind to the data and the attacks were an “unknown unknown” to us.

There also were the widespread failures of prediction that accompanied the recent global financial crisis. Our naïve trust in models, and our failure to realize how fragile they were to our choice of assumptions, yielded disastrous results. On a more routine basis, meanwhile, I discovered that we are unable to predict recessions more than a few months in advance, and not for lack of trying. While there has been considerable progress made in controlling inflation, our economic policy makers are otherwise flying blind.

The forecasting models published by political scientists in advance of the 2000 presidential election predicted a landslide 11-point victory for Al Gore. George W. Bush won instead. Rather than being an anomalous result, failures like these have been fairly common in political prediction. A long-term study by Philip E. Tetlock of the University of Pennsylvania found that when political scientists claimed that a political outcome had absolutely no chance of occurring, it nevertheless happened about 15 percent of the time. (The political scientists are probably better than television pundits, however.)

There has recently been, as in the 1970s, a revival of attempts to predict earthquakes, most of them using highly mathematical and data-driven techniques. But these predictions envisaged earthquakes that never happened and failed to prepare us for those that did. The Fukushima nuclear reactor had been designed to handle a magnitude 8.6 earthquake, in part because some seismologists concluded that anything larger was impossible. Then came Japan’s horrible magnitude 9.1 earthquake in March 2011.

There are entire disciplines in which predictions have been failing, often at great cost to society. Consider something like biomedical research. In 2005, an Athens-raised medical researcher named John P. Ioannidis published a controversial paper titled “Why Most Published Research Findings Are False.” The paper studied positive findings documented in peer-reviewed journals: descriptions of successful predictions of medical hypotheses carried out in laboratory experiments. It concluded that most of these findings were likely to fail when applied in the real world. Bayer Laboratories recently confirmed Ioannidis’s hypothesis. They could not replicate about two-thirds of the positive findings claimed in medical journals when they attempted the experiments themselves.

Big Data will produce progress—eventually. How quickly it does, and whether we regress in the meantime, will depend on us.

Why the Future Shocks Us

Biologically, we are not very different from our ancestors. But some stone-age strengths have become information-age weaknesses.

Human beings do not have very many natural defenses. We are not all that fast, and we are not all that strong. We do not have claws or fangs or body armor. We cannot spit venom. We cannot camouflage ourselves. And we cannot fly. Instead, we survive by means of our wits. Our minds are quick. We are wired to detect patterns and respond to opportunities and threats without much hesitation.

“This need of finding patterns, humans have this more than other animals,” I was told by Tomaso Poggio, an MIT neuroscientist who studies how our brains process information. “Recognizing objects in difficult situations means generalizing. A newborn baby can recognize the basic pattern of a face. It has been learned by evolution, not by the individual.”

The problem, Poggio says, is that these evolutionary instincts sometimes lead us to see patterns when there are none there. “People have been doing that all the time,” Poggio said. “Finding patterns in random noise.”

The human brain is quite remarkable; it can store perhaps three terabytes of information. And yet that is only about one one-millionth of the information that IBM says is now produced in the world each day. So we have to be terribly selective about the information we choose to remember.
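The "one one-millionth" figure follows directly from the two numbers already quoted, as a quick back-of-envelope check shows:

```python
# Back-of-envelope check of the "one one-millionth" claim.
brain_capacity = 3e12    # ~3 terabytes, the estimate quoted above
daily_data = 2.5e18      # IBM's 2.5 quintillion bytes produced per day

ratio = brain_capacity / daily_data
print(f"{ratio:.1e}")    # about 1.2e-06, i.e. roughly one-millionth
```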

Alvin Toffler, writing in the book Future Shock in 1970, predicted some of the consequences of what he called “information overload.” He thought our defense mechanism would be to simplify the world in ways that confirmed our biases, even as the world itself was growing more diverse and more complex.

Our biological instincts are not always very well adapted to the information-rich modern world. Unless we work actively to become aware of the biases we introduce, the returns to additional information may be minimal—or diminishing.

The information overload after the birth of the printing press produced greater sectarianism. Now those different religious ideas could be testified to with more information, more conviction, more “proof”—and less tolerance for dissenting opinion. The same phenomenon seems to be occurring today. Political partisanship began to increase very rapidly in the United States beginning at about the time that Toffler wrote Future Shock, and it may be accelerating even faster with the advent of the Internet.

These partisan beliefs can upset the equation in which more information will bring us closer to the truth. A recent study in Nature found that the more informed that strong political partisans were about global warming, the less they agreed with one another.

Meanwhile, if the quantity of information is increasing by 2.5 quintillion bytes per day, the amount of useful information almost certainly isn’t. Most of it is just noise, and the noise is increasing faster than the signal. There are so many hypotheses to test, so many data sets to mine—but a relatively constant amount of objective truth.

The printing press changed the way in which we made mistakes. Routine errors of transcription became less common. But when there was a mistake, it would be reproduced many times over, as in the case of the Wicked Bible.

Complex systems like the World Wide Web have this property. They may not fail as often as simpler ones, but when they fail they fail badly. Capitalism and the Internet, both of which are incredibly efficient at propagating information, create the potential for bad ideas as well as good ones to spread. The bad ideas may produce disproportionate effects. In advance of the financial crisis, the system was so highly levered that a single lax assumption in the credit ratings agencies’ models played a huge role in bringing down the whole global financial system.

Regulation is one approach to solving these problems. But I am suspicious that it is an excuse to avoid looking within ourselves for answers. We need to stop, and admit it: we have a prediction problem. We love to predict things— and we aren’t very good at it.

The Prediction Solution

If prediction is the central problem of this book, it is also its solution.

Prediction is indispensable to our lives. Every time we choose a route to work, decide whether to go on a second date, or set money aside for a rainy day, we are making a forecast about how the future will proceed—and how our plans will affect the odds for a favorable outcome.

Not all of these day-to-day problems require strenuous thought; we can budget only so much time to each decision. Nevertheless, you are making predictions many times every day, whether or not you realize it.

For this reason, this book views prediction as a shared enterprise rather than as a function that a select group of experts or practitioners perform. It is amusing to poke fun at the experts when their predictions fail. However, we should be careful with our Schadenfreude. To say our predictions are no worse than the experts’ is to damn ourselves with some awfully faint praise.

Prediction does play a particularly important role in science, however. Some of you may be uncomfortable with a premise that I have been hinting at and will now state explicitly: we can never make perfectly objective predictions. They will always be tainted by our subjective point of view.

But this book is emphatically against the nihilistic viewpoint that there is no objective truth. It asserts, rather, that a belief in the objective truth—and a commitment to pursuing it—is the first prerequisite of making better predictions. The forecaster’s next commitment is to realize that she perceives it imperfectly.

Prediction is important because it connects subjective and objective reality. Karl Popper, the philosopher of science, recognized this view. For Popper, a hypothesis was not scientific unless it was falsifiable—meaning that it could be tested in the real world by means of a prediction.

What should give us pause is that the few ideas we have tested aren’t doing so well, and many of our ideas have not or cannot be tested at all. In economics, it is much easier to test an unemployment rate forecast than a claim about the effectiveness of stimulus spending. In political science, we can test models that are used to predict the outcome of elections, but a theory about how changes to political institutions might affect policy outcomes could take decades to verify.

I do not go as far as Popper in asserting that such theories are therefore unscientific or that they lack any value. However, the fact that the few theories we can test have produced quite poor results suggests that many of the ideas we haven’t tested are very wrong as well. We are undoubtedly living with many delusions that we do not even realize.

But there is a way forward. It is not a solution that relies on half-baked policy ideas—particularly given that I have come to view our political system as a big part of the problem. Rather, the solution requires an attitudinal change.

This attitude is embodied by something called Bayes’s theorem, which I introduce in chapter 8. Bayes’s theorem is nominally a mathematical formula. But it is really much more than that. It implies that we must think differently about our ideas—and how to test them. We must become more comfortable with probability and uncertainty. We must think more carefully about the assumptions and beliefs that we bring to a problem.
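Bayes's theorem itself is compact: the posterior probability of a hypothesis H given evidence E is P(H|E) = P(E|H)·P(H) / P(E). A minimal sketch of the updating it implies follows; the diagnostic-test numbers are invented for illustration and are not an example from the book:

```python
def bayes_update(prior, p_evidence_given_true, p_evidence_given_false):
    """Posterior probability of a hypothesis after observing evidence,
    via Bayes's theorem with the denominator expanded over both cases."""
    numerator = p_evidence_given_true * prior
    denominator = numerator + p_evidence_given_false * (1 - prior)
    return numerator / denominator

# Hypothetical example: a test with a 90% true-positive rate and a
# 5% false-positive rate, applied to a condition with 1% prevalence.
posterior = bayes_update(prior=0.01,
                         p_evidence_given_true=0.90,
                         p_evidence_given_false=0.05)
print(round(posterior, 3))  # 0.154
```

Even a "90% accurate" positive result leaves the hypothesis far from certain when the prior is low — exactly the kind of probabilistic thinking the theorem forces on a forecaster.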

The book divides roughly into halves. The first seven chapters diagnose the prediction problem while the final six explore and apply Bayes’s solution.

Each chapter is oriented around a particular subject and describes it in some depth. There is no denying that this is a detailed book—in part because that is often where the devil lies, and in part because my view is that a certain amount of immersion in a topic will provide disproportionately more insight than an executive summary.

The subjects I have chosen are usually those in which there is some publicly shared information. There are fewer examples of forecasters making predictions based on private information (for instance, how a company uses its customer records to forecast demand for a new product). My preference is for topics where you can check out the results for yourself rather than having to take my word for it.

A Short Road Map to the Book

The book weaves between examples from the natural sciences, the social sciences, and from sports and games. The book builds from relatively straightforward cases, where the successes and failures of prediction are more easily demarcated, into others that require slightly more finesse.

Chapters 1 through 3 consider the failures of prediction surrounding the recent financial crisis, the successes in baseball, and the realm of political prediction—where some approaches have worked well and others haven’t. They should get you thinking about some of the most fundamental questions that underlie the prediction problem. How can we apply our judgment to the data— without succumbing to our biases? When does market competition make forecasts better—and how can it make them worse? How do we reconcile the need to use the past as a guide with our recognition that the future may be different?

Chapters 4 through 7 focus on dynamic systems: the behavior of the earth’s atmosphere, which brings about the weather; the movement of its tectonic plates, which can cause earthquakes; the complex human interactions that account for the behavior of the American economy; and the spread of infectious diseases. These systems are being studied by some of our best scientists. But dynamic systems make forecasting more difficult, and predictions in these fields have not always gone very well.

Chapters 8 through 10 turn toward solutions—first by introducing you to a sports bettor who applies Bayes’s theorem more expertly than many economists or scientists do, and then by considering two other games, chess and poker. Sports and games, because they follow well-defined rules, represent good laboratories for testing our predictive skills. They help us to a better understanding of randomness and uncertainty and provide insight about how we might forge information into knowledge.

Bayes’s theorem, however, can also be applied to more existential types of problems. Chapters 11 through 13 consider three of these cases: global warming, terrorism, and bubbles in financial markets. These are hard problems for forecasters and for society. But if we are up to the challenge, we can make our country, our economy, and our planet a little safer.

The world has come a long way since the days of the printing press. Information is no longer a scarce commodity; we have more of it than we know what to do with. But relatively little of it is useful. We perceive it selectively, subjectively, and without much self-regard for the distortions that this causes. We think we want information when we really want knowledge.

The signal is the truth. The noise is what distracts us from the truth. This is a book about the signal and the noise.

Excerpted from The Signal and The Noise by Nate Silver. Copyright © 2012 by Nate Silver. Excerpted by permission of Penguin. All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.

© 2012 MSNBC Interactive
