by fnovd on 6/16/23, 6:17 PM with 50 comments
I understand that every message board needs some kind of content filter, both for ToS/legal reasons as well as for community norms. However, one question I have not been able to get out of my head, as Reddit loses favor and federated clones gain favor, is this: for what reason does a given community have only one set of moderators?
For those of you who use Reddit frequently, imagine this: you subscribe to a large subreddit about a topic that interests you. You enjoy memes about the subject, but other users do not, and it has become a subreddit rule that no memes can be posted. Not happy with that reality, you go to the moderation tab and see the default moderator group at the top of the list. You hit unsubscribe. Now, everything posted to that subreddit is visible to you, with no evidence of moderator activity. You go back to the subreddit and see a bunch of politically-charged comments you don't like. You return to the moderator group list, scroll down, and find a very minimalist group that says they only remove politically-charged comments. You hit subscribe. You now have the feed you want for the topic you want without needing to create a separate subreddit or motivate a huge userbase.
If users provide the content, then moderators provide the filter. They curate. So why, in all conventional models, do we only allow one set of curators? Doesn't the digital nature of the content make this entirely unnecessary? Is it even a technical challenge to provide a different view into the same slice of nested content?
The big problem I've had with the federated Reddit-likes is that, while your userbases can merge, and you can join your list of communities with other instances' lists, you can't ever merge communities. You will either end up with a bunch of fractured communities on the same topic, or one instance (or one community) will eat the others and become its own centralized location for the content you're looking for, with the same moderation problems. You haven't at all escaped the problems people have with Reddit as a platform, you've just moved them somewhere else.
If moderation (or, rather curation) was simply a filter on an existing dataset of content, there would be much less of an issue with merging communities. You could actually merge the programming board of one federation with another board. Maybe your instance mods hold all federated posts & comments for review, maybe they allow them by default, it doesn't matter because if you don't like how it's done, you can find another group doing it.
Is there any real reason why we only allow one singular set of curators to control a dataset of content created by users? What purpose does the marriage serve? Would it bother a community if there were more than one view of the content they create? Is it a technical challenge? Or is it just the way it always has been?
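The "moderation as a subscribable filter over a shared dataset" idea above can be sketched in a few lines. This is a toy model, not any real platform's API; the names (`Post`, `ModGroup`, `visible_feed`) are invented for illustration. The key point is that every moderator group is just a removal set, and a user's feed is the shared dataset filtered through whichever sets they subscribe to.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Post:
    post_id: int
    author: str
    body: str

@dataclass
class ModGroup:
    """A subscribable curation layer: the set of post IDs this group has removed."""
    name: str
    removed: set = field(default_factory=set)

def visible_feed(all_posts, subscribed_groups):
    """Union the removal sets of every subscribed group, then filter
    the shared dataset through that combined blocklist."""
    blocked = set().union(*(g.removed for g in subscribed_groups))
    return [p for p in all_posts if p.post_id not in blocked]

# One shared dataset, two independent moderator groups.
posts = [
    Post(1, "alice", "a meme"),
    Post(2, "bob", "a political rant"),
    Post(3, "carol", "an on-topic post"),
]
anti_meme = ModGroup("no-memes", removed={1})
anti_politics = ModGroup("no-politics", removed={2})

feed_a = visible_feed(posts, [anti_meme])      # hides the meme
feed_b = visible_feed(posts, [anti_politics])  # hides the political rant
feed_c = visible_feed(posts, [])               # unmoderated: sees everything
```

Nothing here is technically hard; the design question is whether a community can cohere when no two readers share the same view.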
by civilitty on 6/16/23, 6:25 PM
Unsatisfied with your intimate moments? Try our MAGICAL-BLUE-BOOST, a Viagra-like solution, and transform your love life from lackluster to legendary - because you're not just buying a product, you're securing a destiny!
by ksherlock on 6/16/23, 6:52 PM
In your model, moderators are in a sewer tunnel sitting next to a raging river of shit and when they see something that looks relevant to their interests, they scoop it out and wash it off.
by legrande on 6/16/23, 6:25 PM
https://en.wikipedia.org/wiki/Moderation
My view is that it is needed. Otherwise communities quickly devolve into cesspools. Even toxic places like 4chan have moderation, despite attracting the worst of humanity.
by Sohcahtoa82 on 6/16/23, 10:07 PM
What you're describing creates a lot of duplication of effort, produces a weird community that's fragmented yet all in one place, and makes a lot of extra work for moderators.
Using reddit as an example, say you have a game, with a dedicated subreddit for it at /r/game. A lot of people decide they don't want it filled with shitty memes, the moderators have everyone take a vote, and they decide to spin off memes to /r/game_memes.
It sounds like your example would instead rely on people basically choosing specific moderators to curate the sub, and you only see the results of actions taken from moderators you "subscribe" to, effectively. This would rely on moderators having to manually watch the sub for memes, and if a mod sees a meme, they remove it, and if you're not subscribed to an anti-meme moderator, you would not see the result, and would still see the meme.
Can you not see a problem with this setup?
You're relying on moderators to flag the memes. You could let users flag the memes on their own, but then you get people abusing flags.
It's best to just let moderators be what they really are -- not curators, but janitors. Moderators keep a community clear of abuse and spam. Yes, some subreddit moderators have extreme power trips and ban people for the stupidest shit, but there's not much you can do about it. Your solution just creates a ton of extra work.
by LinuxBender on 6/16/23, 7:03 PM
I've never had a Reddit account but in the spirit of the current Reddit discussions, moderators are required due to the sheer scale/size of Reddit. Automation is far from being able to keep threats, illegal content and abusive messages off the system. One set of global admins would just move moderation to a centralized location, much like how Facebook is moderated. There are pros and cons to distributed moderators, too many to name here.
I also understand that Reddit may not have invested much in human capital around moderation, meaning that they must have the volunteer distributed moderators to keep running or they would need to start building large centralized moderation teams like Facebook. From what I have read thus far I would not expect this to occur. If they centralize the volunteers then there would be a higher risk of power-tripping moderators managing communities they are not actually a part of. This also increases security risks. Pop one moderator in a centralized model and now the entire site can be nuked. The risk of this is lower in a paid centralized model, as the moderators are employees or contractors and their access can be limited with VPNs and other corporate tools. This type of moderation also requires really good audit trails. I have no idea if Reddit has this level of auditing.
Even in the old school forums phpBB and the like I had moderators for specific sub-categories that could only moderate the part of the forum that they focused on. Global moderators carry multiple risks and require a great deal of trust.
by neilk on 6/16/23, 7:51 PM
You're looking at this problem as if content just happens, like it's a vast jungle just growing up all around us and we only have to just pluck the best stuff.
But what induces people to post, or contribute, or write anything at all? A community has to exist that will appreciate it. This community will only grow through shared norms. Even if it's a shared norm that we're all here to just post nice cat pictures. So leadership and moderation is essential to create the audience, the aesthetic, the norms - everything that makes the community a rewarding place!
If you don't have that you just have a site with random content and no audience, so there's no reason for anyone to post anything.
by ZeroGravitas on 6/16/23, 7:17 PM
Users could rate things as Funny, Informative etc. And users could choose what comments they got to see.
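That rating-based approach (familiar from Slashdot-style tagging) can be sketched as a simple tag filter. This is a hypothetical example; the field names and the `my_view` function are invented for illustration: each comment carries user-applied tags, and each reader declares which tags they do and don't want in their view.

```python
# Hypothetical tag-filter: users rate comments, readers filter by rating.
comments = [
    {"id": 1, "text": "great joke",        "tags": {"Funny"}},
    {"id": 2, "text": "benchmark numbers", "tags": {"Informative"}},
    {"id": 3, "text": "flamebait",         "tags": {"Troll"}},
]

def my_view(comments, wanted, unwanted):
    """Keep a comment if it has at least one wanted tag and no unwanted ones."""
    return [c for c in comments
            if c["tags"] & wanted and not c["tags"] & unwanted]

jokes_only = my_view(comments, wanted={"Funny"}, unwanted={"Troll"})
serious = my_view(comments, wanted={"Informative"}, unwanted={"Funny", "Troll"})
```

The open problem, as other comments note, is that user-applied tags get abused just like flags do.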
by syntheweave on 6/16/23, 8:20 PM
by ianburrell on 6/16/23, 9:45 PM
Also, it will kill discussion, because positive filtering requires someone to approve posts before they are seen. If the moderators are sleeping, then no one can have their post seen until they wake up. Reddit's negative filtering means that moderators delete the bad stuff when they see it.
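The positive-vs-negative filtering distinction can be made concrete with a minimal sketch (hypothetical names): "positive" shows nothing until a moderator approves it, while "negative" shows everything until a moderator removes it.

```python
def visible(posts, approved, removed, mode):
    """Return the posts a reader sees under the given moderation mode."""
    if mode == "positive":  # approve-first: mods gate all visibility
        return [p for p in posts if p in approved]
    return [p for p in posts if p not in removed]  # remove-after (Reddit-style)

posts = ["p1", "p2", "p3"]
# Moderators asleep: no approvals and no removals yet.
stalled = visible(posts, approved=set(), removed=set(), mode="positive")  # nothing shows
flowing = visible(posts, approved=set(), removed=set(), mode="negative")  # everything shows
```

Under positive filtering, moderator latency is discussion latency; under negative filtering, latency only affects how long bad content lingers.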
People talk about Usenet killfiles and how each person did their own moderation. But having to do everything yourself was annoying. It also resulted in an individual view where you could miss stuff. The not-very-good solution was to quote content in every post, like email.
by iamflimflam1 on 6/16/23, 6:33 PM
Your suggestion relies on having sufficient people willing to moderate a forum.
by jprd on 6/16/23, 6:20 PM
by gizmo686 on 6/16/23, 9:53 PM
What the Reddit model provides, and your example lacks, is a shared reality. There is a bit of variation with comment orders (both different view modes and some fuzzing); but for the most part, two users viewing the same community at the same time see the same thing. Without that type of shared reality, it is essentially impossible to build the type of community/unique subreddit cultures we see in Reddit today.
Most social media doesn't do this. HN does. Twitter and Tumblr operate as more of a social web. Most content gets spread by users reposting it, propagating it through their network. In that way, the accounts you follow act primarily as the "moderators" for your view of reality. No two people see the same reality, and it is incredibly difficult for distinct subcultures to emerge; and even more difficult to participate in multiple subcultures, as your view of reality does not distinguish between them.
There is also the TikTok model of a super-charged algorithm employing black magic. Or Youtube's model of algorithmic discovery, combined with meaningful subscriptions to particular creators.
The other question is one of labor. The motivation for most moderators in Reddit is control over a community (which can be either a good or bad thing). If you take the community away, then all you are left with is unpaid labor. Unless you can have individual posters say "I want to post to the moderated list X", in which case you are back to the original model.
Most reddit communities have policies restricting cross-posting, and "brigading" across communities is generally not allowed by site-wide rules. Having a model where your content goes to everyone, and is then blacklisted by specific mod lists, is essentially auto-brigading everything by design.
Again, this is not necessarily a bad thing. Plenty of social media does it this way. But it is meaningfully different from how Reddit works.
by mikewarot on 6/16/23, 9:12 PM
Most people like a simple model, and it's easier to code. There are always corner cases, which is what this discussion is about. Multidimensional voting/moderation would help in those cases.
by carry_bit on 6/16/23, 8:01 PM
You can technically do it, but I don't see it being worth the hassle.
If the future model is going to be anything, I suspect it'll be pay-to-join networks where it is primarily the users that are moderated instead of the content.
by hackermatic on 6/16/23, 7:07 PM
I also think it will lead to the Nazi bar problem: https://twitter.com/IamRageSparkle/status/128089153745134387...
>"you have to nip it in the bud immediately. These guys come in and it's always a nice, polite one. And you serve them because you don't want to cause a scene. And then they become a regular and after awhile they bring a friend. And that dude is cool too.
>And then THEY bring friends and the friends bring friends and they stop being cool and then you realize, oh shit, this is a Nazi bar now. And it's too late because they're entrenched and if you try to kick them out, they cause a PROBLEM. So you have to shut them down.
>And i was like, 'oh damn.' and he said "yeah, you have to ignore their reasonable arguments because their end goal is to be terrible, awful people."
If someone posts totally innocuous, on-topic content one day, and absolutely heinous stuff the next, I don't want to only see the innocuous stuff -- I want to not have to be around someone who has shown that they are an awful person at heart, and I also don't want to be in a space that will quickly be filled with their friends (and which my friends will have to endlessly moderate away).
by bvisness on 6/16/23, 9:49 PM
The curation is what makes it a forum in the first place.
by stcroixx on 6/16/23, 8:19 PM
by eternityforest on 6/16/23, 9:22 PM
It doesn't solve the problem of views of the same content though.
by willcipriano on 6/16/23, 7:57 PM
by dredmorbius on 6/17/23, 5:05 AM
It's been discussed several times on Hacker News itself, of course, notably at the time (<https://news.ycombinator.com/item?id=495053>) as well as in 2012 (<https://news.ycombinator.com/item?id=4285333>) and 2019 (<https://news.ycombinator.com/item?id=19210923>).
This addresses general issues around moderation, including things that seem to be especially harmful: dilution, the distinction between behaviour and people ("It's bad behavior you want to keep out more than bad people"), "broken windows", the "two major types of problems a site like Hacker News needs to avoid: bad stories and bad comments", the Fluff Principle ("on a user-voted news site, the links that are easiest to judge will take over unless you take specific measures to prevent it"), and the "two main kinds of badness in comments: meanness and stupidity".
Graham doesn't get into why one set of moderators is essential, but I'll suggest this: editorial voice.
When you visit a traditionally-edited news source --- newspaper, television network, magazine, radio network --- what you'll find for most of these is a specific editorial voice. The Financial Times, Fox News, the New York Times, Breitbart, The Daily Caller, The PBS News Hour, the Daily Mail, the Guardian, the BBC, NPR's All Things Considered ... each have a distinctive voice. Like it, love it, despise it, or hate it, it's there.
For a discussion site, a problem with offering moderation-as-a-service is that the moderation you see is not that which others will see, and for at least some members, that means seeing a lot of what would be considered "dumb comments". On which Graham said:
Bad comments are like kudzu: they take over rapidly. Comments have much more effect on new comments than submissions have on new submissions. If someone submits a lame article, the other submissions don't all become lame. But if someone posts a stupid comment on a thread, that sets the tone for the region around it.
That is, the problem is that multiple moderation views means that those bad comments --- mean, stupid, or both --- are going to dominate the conversation. As it stands, HN tends to weed these out reasonably quickly (flags and downvotes really help, don't feed the trolls, and email the mods at hn@ycombinator.com in egregious cases).
In practice, different moderation philosophies are best enacted through multiple channels. For sites such as Reddit, this means different subreddits with different moderation policies. For Hacker News ... you'll have to go to a different discussion site, maybe lobste.rs, Tildes, Lemmy, or kbin, say. Keep in mind that even on Reddit there's often considerable "bleed-over" between different subreddits (a chief annoyance of mine there, and a reason I'd largely abandoned the site well before the current contretemps).
I've seen subreddits which have relaxed rules, or permitted certain types of content on specific occasions ("low-effort Fridays" being one variant), and ... it still strikes me as highly corrosive.
On Tildes, a text-only site, a newly-arrived Reddit refugee suggested enabling images on the site. That's ... not been well received: <https://tildes.net/~tildes/16fe/considering_image_posts_on_t...> (I've posted my own brief objection, though there are many well-considered ones by others.) This comment by a former mod of /r/photography explains how and why images can absolutely change the entire dynamic of a subreddit even one devoted to the topic of photography: <https://tildes.net/~tildes/16fe/considering_image_posts_on_t...>
Graham did open his essay with the observation that both the format of HN and HN itself were, at the time, young. I'd argue that neither is, with Hacker News now in its 17th year. However, Hacker News still remains in large part true to its founding vision. It is also still a useful and valuable online forum. Not perfect by any means. But far better than much else that's come (and often gone) in the meantime, and in particular durable in ways that few other fora have been.
Play with its formula with extreme caution.