by probe on 11/3/20, 4:04 AM with 169 comments
by throwawaygh on 11/3/20, 4:38 AM
He's explicitly not doing this.
Here's how I think about it. Silver is answering the question: "how much would the polls (as an aggregate) need to differ from the final result in order for Candidate X to win/lose, conditioned on some reasonable priors?"
Taleb is pointing out that the polls could be really wrong in all sorts of ways that are impossible to predict a priori.
The whole argument is sort of pointless from an intellectual/academic perspective. It's a war of public personalities more than anything else.
It's both the case that Silver designed a good piece of software that does what it's supposed to do and also the case that Taleb's skepticism is valid. But then, that sort of skepticism of statistical models is always valid, and yet we use these models to great effect in all sorts of settings.
by fortenforge on 11/3/20, 4:41 AM
As others have pointed out, this is definitely not what Nate's model was doing. In fact, he did have a version of the model that did exactly this, the so-called "Now-Cast", and as expected it was even more volatile than the real model:
https://projects.fivethirtyeight.com/2016-election-forecast/...
by baron_harkonnen on 11/3/20, 4:56 AM
For his claim to make sense you would have to view probabilities as linear, which is simply not how probabilities work. An event with a 0.01 chance of happening is an order of magnitude more likely than one with a 0.001 chance, yet the difference between 0.5 and 0.501 is essentially negligible. Probability is not linear: a 0.001 difference does not mean the same thing everywhere.
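To put numbers on that, here's a quick Python sketch (illustrative only) comparing those two gaps on the log-odds scale, where equal distances correspond to equal multiplicative changes in odds:

    import math

    def log_odds(p):
        # logit transform: maps probabilities onto an additive scale
        return math.log(p / (1 - p))

    # A 0.009 move in the tail is enormous; a 0.001 move near the middle is noise:
    print(log_odds(0.01) - log_odds(0.001))  # ~2.31 (an order of magnitude in odds)
    print(log_odds(0.501) - log_odds(0.5))   # ~0.004 (essentially negligible)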
I've generally given Taleb a pass as a bit of a crank who doesn't care about the philosophical interpretation of probability, but after seeing him misunderstand probability in such a major way I've started writing him off entirely.
edit: I should add that I'm not particularly convinced Silver really understands all that much either; these are just two very loud people who don't know much. Neither should be taken seriously. This isn't Bradley Efron arguing with Andrew Gelman.
by smeeth on 11/3/20, 4:56 AM
Some of the top comments are making the same mistake, probably because they aren't aware of this twitter beef history (and I envy them), so I'm putting this in a separate comment.
These comments are all pointing out that Taleb's criticism is of something that Silver's models don't do:
> the election were to happen today, what is the probability of each candidate winning?
What people seem to be forgetting is that Silver's 2016 model explicitly did this. Taleb made his criticisms starting in 2018 (he even published an options-pricing paper on the subject), and then Silver changed his model for the 2020 cycle so that it was no longer an "if the election were today" model.
I can't say for sure whether or not Silver changed his model because of Taleb, buuuuut he deserves at least a little credit here. Yes, I know Taleb is a colossal asshole people love to hate, but he's usually right about these sorts of things.
Here is a pretty fair and balanced take on it for those who would like to read more: https://towardsdatascience.com/why-you-should-care-about-the...
by shalmanese on 11/3/20, 5:15 AM
by travisgriggs on 11/3/20, 4:46 AM
There's value in both types in programmerdom; is there not the same in the world of statisticians?
by dragonwriter on 11/3/20, 6:19 AM
Silver's model fairly explicitly does address both the likely direction of and the uncertainty in future poll movements, based on historical evidence. The 2016 model results were more volatile than you'd like to see, but it's at least plausible that's because 2016 was an unusual cycle, out in the statistical tail of behavior. The 2020 forecast has behaved in a more conventional manner, with mild noisy ups and downs early on and then a smooth progression toward greater certainty; the 2012 projection was similar. While this is obviously far too few cycles to generalize from (though better than trying to assess from the behavior of 2016 alone), it's consistent with the idea that the model usually behaves the way you'd expect a forecasting model to behave, and that 2016 was just a collision of unlikely events.
by MR4D on 11/3/20, 4:37 AM
If that’s true, then Nate Silver just does poll analysis, not predictions. Predictions should be on the day of the election, not the day you’re reading their blog post.
In other words, Taleb is right.
by tmsh on 11/3/20, 5:01 AM
* make predictions with a range
* or make predictions with a certainty coefficient (p value, etc.)
Predicting a single value without either is not a truly quantified "prediction." It's weird how not getting that can lead to confusion. I agree with others that those who don't communicate both pieces of information aren't being 100% clear.
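As a toy illustration (the vote-share numbers here are invented; the reporting format is the point), a quantified prediction carries an interval alongside the point estimate:

    import random, statistics

    random.seed(0)
    # Hypothetical forecast: simulate many outcomes, then report a point
    # estimate together with a range instead of a bare number.
    draws = sorted(random.gauss(52.0, 3.0) for _ in range(10_000))
    point = statistics.mean(draws)
    lo, hi = draws[int(0.05 * len(draws))], draws[int(0.95 * len(draws))]
    print(f"predicted share: {point:.1f} (90% interval: {lo:.1f} to {hi:.1f})")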
by handmodel on 11/3/20, 4:25 AM
"if the election were to happen today, what is the probability of each candidate winning?"
This is not at all what Silver's model does. Silver writes and makes clear that there is more variance in the model a few months out than a few days out. (In other words: Biden may be at 90% now, but four months ago, with identical polls, he would have been below 90%, since there would have been more time for voter opinion to change and for real-world events to intervene.)
I honestly don't even get what Taleb is truly arguing here. He is using a lot of statistical arguments but ultimately Silver is making a model to predict an outcome. It is like a weather forecast.
I may say that four months from now the chance it rains in Los Angeles is 2%. However, the day before, with more info, I may be able to say there is a 90% chance of rain. There isn't anything wrong with this, even though Taleb seems to suggest it is wrong (from my read).
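Here's a toy version of that point (a bare-bones random walk, not Silver's actual model): with the same lead, the win probability moves toward certainty as the event approaches, because there is less time left for the margin to drift.

    import math

    def win_prob(margin, daily_sigma, days_left):
        # Treat the polling margin as a random walk; the chance it ends
        # positive is Phi(margin / (daily_sigma * sqrt(days_left))).
        if days_left == 0:
            return 1.0 if margin > 0 else 0.0
        z = margin / (daily_sigma * math.sqrt(days_left))
        return 0.5 * (1 + math.erf(z / math.sqrt(2)))

    # Same 3-point lead, shrinking time horizon (sigma chosen arbitrarily):
    for days in (120, 30, 7, 1):
        print(f"{days:>3} days out: {win_prob(3.0, 0.4, days):.1%}")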
by thaumaturgy on 11/3/20, 5:21 AM
The majority of 538's ad impressions come from people who start F5-mobbing the site during election time because they want to know before anyone else who's going to win. (A tiny few might be from people trying to figure out which races are worth donating towards.)
So that puts 538 into this awkward position where their audience is demanding something that 538 is unable to provide, but if they get too in-your-face in saying, "look, that's not how this works", then they stand to lose a lot of ad revenue during peak season.
That leads them to do very silly things, like bury this:
> A 10 percent chance of winning, which is what our forecast gives Trump, is roughly the same as the odds that it’s raining in downtown Los Angeles. And it does rain there.
...in the same page as this:
> We simulate the election 40,000 times to see who wins most often.
And that's some bullshit.
Silver and the rest of his staff over the last few months have spent a CVS receipt's worth of text on what went wrong with their predictions -- er, sorry, "models" -- in 2016, and why this year is different, and how uncertainty works, and why they're offering no guarantees really, "but please do keep coming back, and hey check out this cool new simulator thingy where you too can make guesses that are about as accurate as ours".
I almost really wouldn't care, except for one kind of big problem with 538: It is a political observer-effect in action.
Journos, hacks, and other newspeople trying to get one more page impression or soundbite before the end of the day keep referencing 538. When 538 says a candidate is doing well according to their models, it can and almost certainly is changing people's behavior. Likewise when 538 says a candidate is doing poorly. That's hugely problematic and, annoyingly, the more accurate 538 becomes, the more pronounced and dangerous this effect will be.
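For anyone wondering what "simulate the election 40,000 times" means mechanically, it's just Monte Carlo, something like the sketch below. The states, margins, and error sizes are all made up; the shared national error term is the part that actually matters, since it's what lets one polling miss flip many states at once.

    import random

    random.seed(42)
    # Toy electoral-college Monte Carlo: state -> (electoral votes, poll margin)
    states = {"A": (20, 2.0), "B": (10, -1.5), "C": (15, 0.5)}
    total_ev = sum(ev for ev, _ in states.values())

    wins, N = 0, 40_000
    for _ in range(N):
        national_err = random.gauss(0, 2.5)   # correlated error across all states
        evs = 0
        for ev, margin in states.values():
            state_err = random.gauss(0, 3.0)  # state-specific noise
            if margin + national_err + state_err > 0:
                evs += ev
        wins += evs > total_ev / 2
    print(f"win probability: {wins / N:.1%}")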
by jgalt212 on 11/3/20, 11:23 PM
by egonschiele on 11/3/20, 4:39 AM
by Traster on 11/4/20, 12:13 PM
Even if Clinton were ahead at that point in the election, there was a greater than 10% chance of something changing before election day. But this is just the idiot's critique: you said the 90% outcome would happen, but the 10% outcome happened! Well, yes; that happens 10% of the time, and if you're making lots of predictions it happens often.
But let's examine the critique: was Clinton a 90% favourite to win? Probably, yes. For Trump to win from that position, not only did he need a decent polling error in his favour, he also needed the almost unique situation of the FBI director announcing he was re-opening an investigation into Clinton; we can actually see the effect this had in the polls. That shock, in combination with a polling miss and a favourable distribution of Trump's vote by state, combined to give Trump the win. It seems reasonable to me to file all of that firmly under a 10% probability. If that was on your list of likely scenarios at the point the model read 90%, then you are massively over-estimating how often shocks like that happen.
The 90% criticism of 538 only works if you can provide a reasonable discussion of why you think that what happened between the 90% and the result had more than a 10% chance of happening.
In fact I think 90% of Nassim's problem is that he says things like "When someone says an event and its opposite are extremely possible I infer either 1) 50/50 or 2) the predictor is shamelessly hedging, in other words, BS."
He's totally wrong. Nate says "extremely possible" because when he says X is 75%, people hear "X is 100%"; if Nassim had paid as much attention to 538 as he should have before making that criticism, he would know that. Every single election cycle, 538 have struggled against the fact that people don't understand probabilities intuitively.
by rramach on 11/3/20, 4:55 AM
Edward says that Nate's prediction should be interpreted as: "If nothing else changes between now and the election, Joe Biden has an 85% chance of winning" (Silver's argument). But the problem is that this prediction is not testable, because the election only occurs once, at the end. Thus, one should treat Nate's early predictions as pure entertainment.
Nate's final predictions have proven well-calibrated across all of his political forecasts, but it is hard to estimate how accurate his model is for presidential elections specifically, given he has predicted only a few so far (unless one assumes all elections have similar uncertainty, which is clearly not true).
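For the curious, "well-calibrated" has a concrete check: bucket forecasts by their stated probability and compare against the observed frequency. A sketch on synthetic data (calibrated by construction; the procedure is the point, along with how little a handful of presidential races can tell you):

    import random
    from collections import defaultdict

    random.seed(1)
    # 50,000 synthetic forecasts whose outcomes match their stated probabilities
    forecasts = [round(random.uniform(0.05, 0.95), 1) for _ in range(50_000)]
    outcomes = [random.random() < p for p in forecasts]

    buckets = defaultdict(lambda: [0, 0])  # stated prob -> [hits, total]
    for p, hit in zip(forecasts, outcomes):
        buckets[p][0] += hit
        buckets[p][1] += 1
    for p in sorted(buckets):
        hits, n = buckets[p]
        print(f"stated {p:.1f} -> observed {hits / n:.2f} over {n} forecasts")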
by kodyo on 11/3/20, 3:12 PM
by ceilingcorner on 11/3/20, 6:35 AM
The fact of the matter is that election polling is not really accurate anymore. People are afraid to publicly admit their support of a certain candidate, even in a supposedly anonymous phone call. Considering that people have been doxxed online for this, I really don’t blame them for being skeptical and private.
That’s not even mentioning the social aspects of the media. Even if Trump were ahead in all the polls, do you think Silver and other journalist-statisticians would report it as such? I doubt it.
Pundits seem to consistently miss larger geopolitical trends, like Brexit, the offshoring of jobs to East Asia, the rise of right-wing parties in Europe and India, and so on. These qualitative trends will end up being far more influential than a collection of polls.
by person_of_color on 11/3/20, 4:59 AM
by 75dvtwin on 11/3/20, 7:25 AM
Because they are using empirical evidence (e.g. previous data) from 'experiments' that are, effectively, unrelated to the current situation.
In my personal view, of course,
Most previous elections in the last 30+ years were between RINOs (Republicans In Name Only) and Democrats. Those are pretty much two factions of the same bribe-taking, accountability-avoiding global Cartel.
This Election is between the representative of citizens of the Republic (Trump) and the Cartel's candidate.
This really has not happened since the Bushes and then the Clintons took over both parties and turned them into co-existing factions of one party.
- - -
Another analogy: we do not use the same methods and design constraints when building anti-lock brake control software as we do when building a daily revenue reporting system.
- - -
Frame of reference is different basically. And we do not know the tensor(s) that help us to move between the coordinate systems.
- - -
So we are not going to be able to use even the 2016 polls (because back then folks did not realize how Corrupt the RINOs+Dems are)
- - -
My prediction is that Trump wins, GOP takes House and retains Senate.
This will also be the largest percent wise vote for a GOP president, by Latino and African American voters.
- - -
Will check back in 9 days or so, to see if I was right!
Cheers.
by ramraj07 on 11/3/20, 4:51 AM
Here's an alternative hypothesis that involves no fancy math, just anthropology: Taleb is right that the stats don't make sense, but Nate is also not an idiot. It's just in his financial best interest to run a website that throws out confidence-inducing pseudo-numbers that make people like us visit it every few hours, for months at a time, every two years. Perhaps he knows perfectly well the farce he's pulling (at least at a fundamental level, though he might have convinced himself otherwise, as academics always do) but plays the game to stay relevant. And thus, we are all the idiots.
by stopachka on 11/3/20, 4:49 AM
I think this is the key to Taleb's argument, and it's pretty damning for Mr Silver.
If there's high uncertainty, you can't claim that someone's probability of winning is low. The uncertainty drags the forecast toward 50/50.
For example, say I told you there's a 2% chance it will rain on a particular Monday next year in New York. That would _have_ to be a bogus prediction: so much could happen between now and then. I have _high_ uncertainty, and there's no way a flat 2% accounts for it.
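That's essentially the argument in Taleb's options-pricing paper: for a binary outcome, more uncertainty should drag the quoted probability toward a coin flip, not toward confidence. A minimal sketch under a Gaussian-noise assumption:

    import math

    def forecast_prob(signal, uncertainty):
        # P(outcome > 0) when outcome = signal + Gaussian noise:
        # as uncertainty grows, the forecast is pulled toward 50%.
        z = signal / uncertainty
        return 0.5 * (1 + math.erf(z / math.sqrt(2)))

    # Same 2-point signal, increasing uncertainty (numbers are arbitrary):
    for sigma in (1, 3, 10, 30):
        print(f"uncertainty {sigma:>2}: {forecast_prob(2.0, sigma):.1%}")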
by FandangoRanger on 11/3/20, 4:34 AM
Nate's already coping really hard on Twitter.