by gaius_baltar on 9/29/23, 4:11 PM with 91 comments
by bri3d on 9/29/23, 4:42 PM
Maybe:
* This account already scored poorly on spamminess, and attempting to post a bare link (with no content) on their page pushed them over the edge?
* The EFF Privacy Tips link was somehow used in a parallel spam campaign, or some characteristic of the page itself causes it to be flagged as spammy?
* Something about this specific browser session was flagged, preventing posting? (This might go along with the No Violations observation about the page.)
I find it much more interesting to try to reverse-engineer the strange behavior of spam detection systems than to chalk it up to a "Facebook hates the EFF" conspiracy theory. If this happened to me, I'd be running more tests to figure out what's actually going on; a toy sketch of the first hypothesis follows.
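To make the first hypothesis concrete, here is a minimal sketch of a cumulative spam-score-plus-threshold model. Every signal name and weight below is invented purely for illustration; nothing about Facebook's actual system is known:

```python
# Toy model of hypothesis 1: the account carries a running "spamminess"
# score, and one more weak signal tips it over a ban threshold.
# All signals and weights are hypothetical, invented for illustration.

BAN_THRESHOLD = 1.0

SIGNAL_WEIGHTS = {
    "new_account": 0.3,
    "few_friends": 0.2,
    "prior_flagged_post": 0.4,
    "bare_link_no_text": 0.25,  # a post that is only a URL, no commentary
}

def spam_score(signals):
    """Sum the weights of whichever signals have fired for this account."""
    return sum(SIGNAL_WEIGHTS[s] for s in signals)

history = ["new_account", "few_friends", "prior_flagged_post"]
before = spam_score(history)                         # 0.90 -- close to the edge
after = spam_score(history + ["bare_link_no_text"])  # 1.15 -- over it

print(f"before: {before:.2f}, after bare link: {after:.2f}, "
      f"banned: {after >= BAN_THRESHOLD}")
```

Under a model like this, the EFF link itself is irrelevant: any bare link would have tripped the same threshold, which is exactly the kind of prediction further testing could check.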
by clnq on 9/29/23, 5:12 PM
by jrmg on 9/29/23, 4:45 PM
And without more evidence, that seems more plausible to me than Facebook banning links to the EFF.
by jqpabc123 on 9/29/23, 4:12 PM
by harles on 9/29/23, 5:32 PM
by compiler-guy on 9/29/23, 5:59 PM
That's 10 false positives _every single day_. And in all likelihood their spam detector isn't nearly that accurate, and the real post volume is far, far higher.
Every now and then one of those false positives will be an interesting website, or one that feels really obviously wrong. But that's just how statistics works: when you take enough samples from a distribution, even very low-probability events happen.
Sometimes a result feels really out of the ordinary and wrong, yet it happened entirely by chance.
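For concreteness, here is the base-rate arithmetic behind a figure like "10 a day". The volume and accuracy are back-solved guesses chosen to produce 10, not Facebook's real figures, which are vastly larger:

```python
# Base-rate arithmetic behind "10 false positives a day".
# Both numbers are illustrative guesses, not Facebook's real figures.

posts_per_day = 100_000
false_positive_rate = 0.0001   # a classifier that is "99.99% accurate"

print(posts_per_day * false_positive_rate)   # 10.0 wrongly flagged posts/day

# At something closer to real scale, even a much better classifier
# still produces a steady stream of wrong calls:
posts_per_day = 1_000_000_000
false_positive_rate = 1e-6     # one mistake per million posts
print(posts_per_day * false_positive_rate)   # 1000.0 wrongly flagged posts/day
```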
If there were evidence of systematic decisions like this, it would be more of a story. But what we have here is just a big nothing-burger.
by factorymoo on 9/29/23, 5:53 PM
The reality is that moderation relies heavily on imperfect machine learning models and overworked human reviewers making rushed judgments on hundreds of cases per day. There's no meticulous strategy document mapping out the pros and cons before banning accounts that upset the company.
Mistakes are inevitable with that combination of flawed automation and stretched-thin reviewers. From the outside, the moderation policies may look arbitrary or politically motivated, but much of it comes down to hasty human error and buggy algorithms rather than some malicious scheme.
by boomboomsubban on 9/29/23, 6:02 PM
by gaius_baltar on 9/29/23, 4:13 PM
by distantsounds on 9/29/23, 4:40 PM
by tamimio on 9/30/23, 8:05 AM
by garba_dlm on 9/29/23, 7:08 PM
Well, it's subtly wrong, but it points in the right overall direction.
The capacity to manipulate this kind of stuff, to sabotage ideologies such as the EFF's privacy ideology, IS the product.
Governments spend money on this; it's a product sold by governments to other governments.
Though I suppose this is an instance of FB dogfooding.
by pacifika on 9/29/23, 5:04 PM
by smashah on 9/29/23, 4:38 PM
by whatamidoingyo on 9/29/23, 5:10 PM