In Nate Silver's book about forecasting, The Signal and the Noise, and in an article he wrote for the New York Times called "The Weatherman Is Not a Moron", Silver tells the story of how weather forecasting has, as he would say, gotten better in recent years.
If you were an American in 1940 (most of Silver's data is from his home country), your chance of being killed by lightning was one in 400,000. Today it is one in 11 million. This is partly due to people spending less time outdoors, but it is also, Silver claims, thanks to better storm predictions.
In the 1970s, high temperature forecasts were wrong, on average, by about six degrees. Today they are wrong by half that amount, about three degrees. And when hurricane forecasters in the 1980s predicted where a hurricane would hit land, they were usually out by some 350 miles. Today, they are out by only 100 miles. These improvements in forecasting save millions of lives, millions of dollars, and many millions of us from wearing clothes that are too warm or too cold.
The point of telling this weather story is not only to show how good meteorologists have become, of course. It is to ask two questions: what have weather forecasters done to make their forecasts so much better? And what can we learn from them to improve our own forecasts?
There are a number of secrets to better forecasting, as Silver shows in The Signal and the Noise. Most important of all is that forecasts are much better if they do not rely solely on either quantitative or qualitative information, but combine both: the statistics created by large sets of data and the formulas for making sense of them, together with human intuition. The skill of the good weather forecaster lies in managing the balance between the two. Even with large data sets, humans make things better: they improve rain forecasts by 25% and temperature forecasts by 10%. I think the skill of the smart culture detective lies in applying the same principles to improve her or his social forecasting.
There are four principles in Silver's suggestions that I think are especially useful for cultural forecasting:
- we should accept the limitations of our forecasts. Rather than aiming for perfection, we should make our predictions probabilistic. This is hard to accept. It is hard for us to think probabilistically (and to say the word out loud, too), but we have to try. By making our predictions probabilistic, we give up neatness and precision for accuracy. Which would you rather have? It also means we are not seeking to be exactly right. Instead, we accept that there is a cone of uncertainty and seek to be as near right as possible. Silver's predictions work this way. To combat the ideal of perfection, we should therefore make multiple forecasts, create a list of possible outcomes, and test them.
- we should also accept that a forecast is not a concrete, unchanging statement. Rather, we should see it as a living thing. It is based on the best information we have at the moment we make it, but it is likely to change and be updated as new information comes to hand.
- we should look for consensus. Weather forecasters do this. If one of them thinks there is going to be rain and all their colleagues do not, it is time to recheck the thinking. Ideally, we should get anonymous input and have people offer their opinions before they know other people's thoughts. This is a way to reduce bias. If we know that a respected person has forecast a particular thing, for instance, that is more likely to affect our judgment.
- we should avoid over-fitting our prediction. This is an easy mistake to make. A prediction is a model, a way of representing the world, like a map. It should do, as Silver says, “an honest job of representing the underlying landscape”. Over-fitting happens when an over-eager researcher adds every data point into a model, making it more sophisticated and complex but also less accurate. It reports every little bit of noise, and by doing so, it loses the signal. This, when you consider it, is more confirmation of the importance of simplicity. “Needlessly complicated models may fit the noise in a problem rather than the signal,” Silver wrote, “doing a poor job of replicating its underlying structure and causing predictions to be worse.” What that means in practical terms is accepting that there will be plenty of noise around any signal, and being able to weigh the evidence at hand, with the confidence to discount the noise and describe the signal. This, at least, is what a good forecaster does.
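Silver's point about over-fitting can be made concrete with a small numerical sketch. This is my illustration, not an example from his book: the data set, the noise level, and the model degrees are all assumptions chosen for the demonstration. The "signal" is a straight line; the "noise" is random jitter. A simple model and a needlessly complicated one are fitted to the same noisy observations, then judged on fresh data from the same process.

```python
# A toy demonstration of over-fitting: signal = straight line, noise = jitter.
import numpy as np

rng = np.random.default_rng(42)

# Underlying signal: y = 2x + 1, observed with random noise.
x_train = np.linspace(0, 1, 10)
y_train = 2 * x_train + 1 + rng.normal(0, 0.2, size=x_train.size)

# Fresh observations from the same process, held out for testing.
x_test = np.linspace(0.05, 0.95, 10)
y_test = 2 * x_test + 1 + rng.normal(0, 0.2, size=x_test.size)

def fit_and_score(degree):
    """Fit a polynomial of the given degree; return (train error, test error)."""
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    return train_err, test_err

simple_train, simple_test = fit_and_score(1)    # matches the signal's shape
complex_train, complex_test = fit_and_score(9)  # chases every data point

# The degree-9 model passes through every training point, so its training
# error is essentially zero; it has memorised the noise, not the signal.
print(f"simple  model: train={simple_train:.4f}, test={simple_test:.4f}")
print(f"complex model: train={complex_train:.4f}, test={complex_test:.4f}")
```

The complicated model scores almost perfectly on the data it has already seen, precisely because it has absorbed every bit of noise; on new observations it typically does worse than the simple line, which captures only the underlying landscape. That is the practical meaning of Silver's warning.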