by skadamat on 5/28/25, 5:49 PM with 34 comments
by gcanyon on 5/31/25, 2:06 PM
The underlying issue (which the article discusses to some extent) is how confounding factors can make data misleading, or allow it to be misinterpreted.
To discuss "The Illusion of Causality in Charts" I'd want to consider whether one chart type is more susceptible to misinterpretation, or more misleading, than another. I don't know if that's actually true -- I haven't worked up examples to check -- but that's what I was hoping for here.
by nwlotz on 5/31/25, 3:59 PM
I think the issues described in this piece, and by other comments, are going to get much worse with the (dis)information overload AI can provide. "Hey AI, plot thing I don't like A with bad outcome B, and scale the axes so they look heavily correlated". Then it's picked up on social media, a clout-chasing public official sees it, and now it's used to make policy.
by djoldman on 5/31/25, 6:53 PM
1. In general, humans are not trained to be skeptical of data visualizations.
2. Humans are hard-wired to find and act on patterns, illusory or not, at great expense.
Incidentally, I've found that avoiding the words "causes," "causality," and "causation" is almost always the right path, or at least should be the rule rather than the exception. In my experience, they rarely clarify and are almost always overreach.
by justonceokay on 5/31/25, 3:12 PM
The shape of it is that there is a statistic about a population, and then that statistic is used to describe a member of that population. For example, a news story that starts with “70% of restaurants fail in their first year, so it’s surprising that new restaurant Pete’s Pizza is opening their third location!”
But it’s only surprising if you know absolutely nothing about Pete and his business. Pete’s a smart guy. He’s running without debt and has community and government ties. His aunt ran a pizza business and gave him her recipes.
In a Bayesian way of thinking, the newscaster's statement only makes sense if the only prior they have is the average success rate of restaurants. But that is an admission that they know nothing about the actual specifics of the current situation, or the person they are talking about. Additionally, there is zero causal relationship running from group statistics to individual outcomes; the causal relationship goes the other way. Pete’s success will slightly change that 70% metric, but the 70% metric never bound Pete to be “likely to fail”.
Other places I see the “bound by statistics” problem is in healthcare, criminal proceedings, racist rhetoric, and identity politics.
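The base-rate-versus-specifics point above can be sketched as a quick Bayesian update. This is a hedged illustration, not anything from the article: the likelihood ratios for Pete's situation (no debt, community ties, family recipes) are entirely made-up numbers, chosen only to show how individual evidence moves an estimate away from the population prior.

```python
def posterior_success(base_rate_success, likelihood_ratios):
    """Update P(success) with evidence-specific likelihood ratios.

    base_rate_success: P(success) taken from the population statistic.
    likelihood_ratios: for each piece of evidence about the individual,
        P(evidence | success) / P(evidence | failure)  (assumed values).
    """
    # Convert probability to odds, multiply in each likelihood ratio,
    # then convert back to a probability.
    odds = base_rate_success / (1 - base_rate_success)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1 + odds)

# Population prior: 70% of restaurants fail, so P(success) = 0.30.
prior = 0.30

# Hypothetical evidence about Pete: debt-free, community/government ties,
# inherited recipes from a working pizza business. Each ratio says how much
# more likely that evidence is among restaurants that succeed.
evidence = [2.0, 1.5, 1.5]

p = posterior_success(prior, evidence)  # roughly 0.66 with these numbers
```

With no evidence at all, the function just returns the base rate — which is exactly the newscaster's position. Each specific fact about Pete shifts the estimate, so the "70% fail" figure only governs someone you know nothing about.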
by NoTranslationL on 5/31/25, 1:40 PM
[1] https://apps.apple.com/us/app/reflect-track-anything/id64638...
by singularity2001 on 5/31/25, 10:18 PM
We do have a pretty good intuition for it, but if you look at the details and ask people what the difference between correlation and causality is, and how to distinguish them, things get rabbit-holey pretty quick.
by JackSlateur on 6/1/25, 10:04 PM
by qixv on 5/31/25, 7:53 PM