
A lot of words to say very little.

> What you are essentially saying is that every prediction that doesn't claim certainty is always correct.

This seems to be the crux of it. No, what I am saying is that when you want to laugh at the polls/pollsters, you should be arguing how their methodology was wrong and what they could have done to find the true proportion, +/- some standard error, at whatever confidence level. Simply laughing at pollsters when the minority wins is not good enough. A 20% chance of winning followed by an actual win does not seem all that unlikely to any reasonable person.
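For concreteness, here is roughly what that standard-error arithmetic looks like for a simple random sample (a sketch only; the sample size and split below are invented):

    import math

    # Margin of error for a sample proportion p from n respondents, at the
    # confidence level implied by z (1.96 for ~95%). Textbook formula for a
    # simple random sample; real polls need design corrections on top.
    def margin_of_error(p, n, z=1.96):
        return z * math.sqrt(p * (1 - p) / n)

    print(margin_of_error(0.48, 1000))  # ~0.031, i.e. +/- 3.1 points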



> you should be arguing how their methodology was wrong and what they could have done to find the true proportion, +/- some standard error, at whatever confidence level

There are countless arguments about how their methodology was wrong and what could be improved, all over the internet. Almost nobody disputes that the models were bad. I thought this part was obvious.

> 20% chance of winning and then actually winning does not seem all too unlikely to any reasonable person.

It seems about 80% unlikely. But the question isn't whether it seems unlikely or not, but whether it makes sense to call the prediction results "wrong". A 20% chance of an asteroid not hitting Earth, followed by it not hitting Earth, might not seem extremely unlikely, but the prediction that it had an 80% chance of hitting Earth is still a bad prediction.
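One concrete way to make "bad prediction" precise, without re-running the election, is a proper scoring rule over many forecasts. A sketch with simulated data (not real forecasts):

    import random

    random.seed(0)

    # A well-calibrated forecaster: events given a 20% chance should come
    # true about 20% of the time across many forecasts.
    forecasts = [0.2] * 10000
    outcomes = [random.random() < p for p in forecasts]
    print(sum(outcomes) / len(outcomes))  # ~0.2

    # Brier score: mean squared error between forecast and 0/1 outcome.
    # Lower is better; it judges forecasters over many events, not one.
    brier = sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)
    print(brier)  # ~0.16; a single 80% forecast that misses scores 0.64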


Is it really a bad prediction? Or just a wrong one?

Sure, you could say wrong predictions are bad. However, maybe it was the best prediction given the data available before the event. It's weird assigning a kinda subjective good/bad label to an event when it is just an assigned probability based on what is known. All we can really conclude is that the improbable happened; we didn't have enough data or the right methodologies to make a more accurate prediction. Learn what we can, and apply it to the next event.


I would say that predicting the chances of an asteroid hitting Earth based on looking at a crystal ball is a pretty bad prediction.

I agree with your general point. Whether a better prediction could've been made based on available data is really the key question here, and this is exactly what allows us to say that crystal ball predictions are bad.

In the context of predicting election results, I think it is fair to assume that, in principle, there should be enough available data (or an ability to collect such data) to make more accurate predictions. This also seems to be the assumption of all major polling agencies, and was also the assumption in investigating the results of Brexit polls. This is precisely where it differs from dice rolling. It therefore makes sense to assume that the predictions were inaccurate due to methodological reasons, as opposed to pollsters having no practical way of accessing the relevant data.


I'm not sure which ones you read. Nate Silver's analysis seemed pretty sound. His team pointed out that, just as with the housing crisis, if the models were wrong in one state, they were probably wrong in all states in the same way, giving Trump a decent chance at winning.
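A toy simulation of that correlated-error point (all margins and error sizes invented; this is not FiveThirtyEight's actual model):

    import random

    random.seed(1)

    MARGINS = [0.02] * 5   # favourite up 2 points in 5 swing states
    NEEDED = 3             # underdog must flip at least 3 of them

    def underdog_prob(shared_sd, state_sd, trials=100000):
        wins = 0
        for _ in range(trials):
            shared = random.gauss(0, shared_sd)  # error common to all states
            flips = sum(m + shared + random.gauss(0, state_sd) < 0
                        for m in MARGINS)
            wins += flips >= NEEDED
        return wins / trials

    # Same total error variance (~0.03 sd), split differently:
    print(underdog_prob(0.000, 0.030))  # independent errors: ~0.11
    print(underdog_prob(0.028, 0.010))  # correlated errors:  ~0.24

With the same total uncertainty, making the errors move together roughly doubles the underdog's chances in this toy setup.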

Note that Taleb criticized the speed at which Nate changed his estimates, not the estimates themselves.


> There are countless arguments about how their methodology was wrong and what could be improved, all over the internet.

Could you link to a couple of your favourites? None of the ones I've seen felt very convincing.


Sure:

[1] https://fivethirtyeight.com/features/the-polls-missed-trump-...

[2] http://www.pewresearch.org/fact-tank/2016/11/09/why-2016-ele...

While it will take more time to figure out the precise reasons for the failure of the polls, the consensus is that such a failure did indeed happen, and there are a number of competing hypotheses for why it happened. Other than on HN, nobody claims that the errors were due to some inherent unpredictability which cannot be addressed through better methodology.


I think everybody agrees that something went wrong, and many people think that something should be done (although I personally think that the media's inability to interpret and report on polling errors and uncertainty greatly exacerbated the situation). I just haven't seen any good articles making a strong argument about what went wrong and how to fix it.


I don’t know, I’m under the impression that people here think that winning with a predicted 20% chance isn’t indicative of anything wrong with the polls, because it’s like predicting the outcome of a die roll, and that therefore nothing should be done at all. This false analogy is precisely what I’m trying to disprove.

I agree that there are currently no strong arguments about what went wrong and how to fix it, but I think that's because it takes time to investigate these things. According to Pew, the American Association for Public Opinion Research has a committee investigating it, and they should release their report in May. It took a six-month investigation to produce the report on what went wrong with the Brexit polls, so this will probably take a similar amount of time.


You're conflating the polls with the predictive models. The polls never said Trump had a 20% chance of winning, because that is not how polls work.

There were a dozen or so models which took the polls as input (some added other inputs as well) and produced a probability of a candidate winning. Some of those models were garbage (Huffington Post had Trump at ~1.5-2%) and some were pretty good (FiveThirtyEight had Trump at ~30% and trending upwards). The question of how you should interpret these numbers is a more open one. What does it mean to give numeric probabilities to events which are completely unique and will only occur once? (Cue the Bayesian vs. frequentist inference discussion here.)
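As a sketch of that polls-to-probability step (numbers invented; real models are far more elaborate): a poll gives a margin, a model layers uncertainty on top and asks how often the true margin lands below zero.

    import math

    # P(true margin > 0) under a simple normal error model.
    def win_probability(poll_margin, total_sd):
        return 0.5 * (1 + math.erf(poll_margin / (total_sd * math.sqrt(2))))

    print(win_probability(0.02, 0.04))  # 2-pt lead, 4-pt sd -> ~0.69
    print(win_probability(0.02, 0.01))  # same lead, tiny sd  -> ~0.98

Crudely, how small an sd a model assumes is the difference between reading the same polls as a ~30% chance and a ~2% chance.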

Admittedly, the question of whether the model was right or wrong is difficult to disentangle from the question of bad polling. No model can work correctly if you feed it garbage data. The only criticism you could make is that they should have been even more critical of the data they were getting from certain polls than they were.

Now, was there something wrong with the polls? Obviously. But the interesting question is what went wrong. They were pretty good at forecasting the national popular vote, while at the same time getting certain midwestern swing states dramatically wrong. So there is obviously something in their methodology which seems to work fine when looking at the country as a whole but fails when looking at certain states.
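To illustrate how that can happen (weights and errors invented): misses that pull in opposite directions can cancel in the national total while individual states are way off.

    # name: (share of national vote, polling error in the margin)
    states = {
        "big_coastal": (0.40, -0.02),  # poll overstated the eventual winner
        "midwest_a":   (0.15, +0.05),  # poll badly understated them
        "midwest_b":   (0.15, +0.05),
        "rest":        (0.30, -0.03),
    }

    national_error = sum(w * e for w, e in states.values())
    print(round(national_error, 3))  # -0.002: nearly zero nationally,
                                     # yet two states each off by 5 points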

But like you, I look forward to seeing more detailed investigations coming out in the next few months.



