Interesting topic - I’ve seen it surface a few times recently.
I’ve never been a mod anywhere, so I can’t really imagine what workflows/tools a mod needs to be satisfied w/ their, well, mod’ing.
For the sake of my education at least, can you elaborate on what you consider decent moderation tools/workflows? What gaps do you see between that and Lemmy?
PS: I genuinely want to understand this topic better but your post doesn’t provide any details. 😅
One of the major issues is the replication and propagation of illegal material. Because content is mirrored and replicated across the interconnected instances of the fediverse, attacks that flood communities with things like CSAM inevitably find their way to other federated sites.
Currently, the only response to these types of attacks, even if they’re not directed at you, is generally to defederate from the instance being attacked. This means whoever was attacking the site with CSAM has won, because they successfully made the community disjointed and disconnected from the rest of the fediverse, in the hope that it will die.
I see.
So what do you think would help w/ this particular challenge? What kinds of tools/facilities would help counter that?
Off the top of my head, do you think
- The sign up process should be more rigorous?
- The first couple of posts/comments by new users should be verified by the mods?
- Mods should be notified of posts/comments w/ poor score?
I can think of some things i could implement on the lemmy server side that could help with this. i’m pretty sure that the IWF maintains a list of file hashes for CSAM, and there are probably a few other hash sources you could draw from too.
so the process would be something like the following:
- create a local db for the CSAM hash list and update it periodically (like once a day)
- I would be very surprised if hashes for uploads aren’t already being created; compare each upload’s hash against the list of known harmful material
- if a match is found, reject the upload and automatically permaban the user, then, if feasible, automatically report as much information as possible about the user to law enforcement
so for known CSAM you don’t have to subject mods or users to it before it gets pulled.
for new/edited media with unrecognised hashes that does contain CSAM, a mod/admin would have to review and flag it, at which point the same permaban and law enforcement report could be triggered automatically.
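a rough sketch of what the upload check could look like (python just for illustration; the db/table names and the admin hooks are made up, and real lists like the IWF’s are perceptual hashes such as PhotoDNA/PDQ distributed through vetted channels, so the plain SHA-256 here is only a stand-in):

```python
import hashlib
import sqlite3

DB_PATH = "csam_hashes.db"  # hypothetical local mirror of a vetted hash list

def file_digest(data: bytes) -> str:
    # Plain SHA-256 for illustration; real matching would use a perceptual
    # hash (e.g. PhotoDNA or PDQ) so re-encoded copies still match.
    return hashlib.sha256(data).hexdigest()

def is_known_harmful(data: bytes) -> bool:
    # Look the upload's digest up in the locally mirrored hash list.
    con = sqlite3.connect(DB_PATH)
    try:
        row = con.execute(
            "SELECT 1 FROM known_hashes WHERE digest = ?",
            (file_digest(data),),
        ).fetchone()
        return row is not None
    finally:
        con.close()

def permaban(user_id: int) -> None:
    ...  # placeholder: call the instance's admin API to permaban the user

def queue_le_report(user_id: int) -> None:
    ...  # placeholder: gather what's known about the user and queue a
         # report to law enforcement / the relevant hotline

def handle_upload(user_id: int, data: bytes) -> bool:
    # Returns True if the upload may proceed.
    if is_known_harmful(data):
        permaban(user_id)
        queue_le_report(user_id)
        return False  # reject the upload; no mod or user ever sees it
    return True
```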
The federation aspect could be trickier though, which is why this would probably be better as an embedded lemmy feature rather than a third-party add-on.
I’m guessing it would be possible to create an automoderator that does all this at the community level and only approves a post to go live once it has passed the checks.
i’ve said it before and i’ll say it again: give me a spec and i’ll (try to) write you a tool.
i’m a competent coder, but i have no idea what kind of mod tools are needed.
For starters, a way to unban people would be nice. Then a way to easily see new content for their community: only new content, which stops showing once it has been marked as “reviewed” (except as context to unreviewed content, when unfolded). That means new posts, new comments, etc., with alerts. Also, a sudden-activity alert.
A way to match keywords, and bring up matching posts and comments.
Metrics about each user’s contributions to the community: are they new or seasoned? Did they contribute mostly popular or unpopular content? What words do they use most? Etc.
Compiling multiple reports for a single post/comment into one. Ignoring reports from select users.
That’s all I can think of for now.
But, essentially, a dashboard with live content, with “old” content greyed out and the relevant actions at hand, would be really, really useful.
Edit: additionally, automated actions would be great. Answering posts/comments matching regexes with templates populated with the user’s information; automatically removing, issuing warnings, and banning (outright or after n warnings) people for specific terms, etc.
It would also really help to have automation workflows (e.g. a user comments with the “r-word” or “n-word”: auto-comment a warning, then after X minutes/hours, or Y minutes/hours after the user comments again, remove the comment or ban them).
This automation could come as an additional tool, run under a separate account; roughly like the sketch below.
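Something in this shape, as a Python sketch (the Rule format and the reply/remove/ban helpers are invented placeholders, not real Lemmy API calls):

```python
import re
import time
from dataclasses import dataclass, field

@dataclass
class Rule:
    pattern: re.Pattern      # regex matched against the comment body
    warning_template: str    # reply template, filled with user info
    max_warnings: int = 2    # ban once a user exceeds this many warnings
    grace_minutes: int = 30  # delay before the offending comment is removed

def reply(comment_id: int, text: str) -> None:
    ...  # placeholder: post a reply via the bot account

def remove_comment(comment_id: int) -> None:
    ...  # placeholder: remove the comment via the mod API

def ban_user(user_id: int) -> None:
    ...  # placeholder: ban the user via the mod API

@dataclass
class AutoMod:
    rules: list[Rule]
    warnings: dict[int, int] = field(default_factory=dict)  # user_id -> count

    def on_comment(self, user_id: int, comment_id: int,
                   username: str, body: str) -> None:
        for rule in self.rules:
            if not rule.pattern.search(body):
                continue
            count = self.warnings.get(user_id, 0) + 1
            self.warnings[user_id] = count
            if count > rule.max_warnings:
                remove_comment(comment_id)
                ban_user(user_id)
            else:
                reply(comment_id,
                      rule.warning_template.format(user=username))
                # a real bot would schedule this as a job, not sleep
                time.sleep(rule.grace_minutes * 60)
                remove_comment(comment_id)
            break  # first matching rule wins

# example rule set
RULES = [
    Rule(pattern=re.compile(r"\b(slur1|slur2)\b", re.I),
         warning_template="{user}, that language breaks rule 1."),
]
bot = AutoMod(rules=RULES)
```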
it’s a start. i’m on holiday at the mo. i’ll have a look when i get back
- Report queue. Right now, reports go to a single queue that both instance owners and mods use. This makes it impossible to mod, because the instance owners mark items as completed before mods have even had a chance to look at them.
Now, if it’s a case of user abuse, it’s fine for the instance owner to take care of it.
But if it’s just breaking the rule of a community, the instance owner should never even see it.
Separating the queues would help both mods and instance owners.
- The ability to hide a community from All and/or Local. Some communities just aren’t appealing to the general public. And when All surfers see posts, they just downvote them into oblivion.
these could be a little more difficult. they seem to be instance-level features.
i might be able to do a tool for the first one using filters, if there is a way to insert keywords into a report, e.g. “To Mods” or “To Admins”.
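something like this, assuming reports come through with a free-text reason field (the report shape here is a guess, not the real lemmy schema):

```python
# Route a report to the right queue based on a keyword the reporter
# prepends to the reason, e.g. "To Admins: spam bot" or "To Mods: rule 3".
def route_report(report: dict) -> str:
    reason = report.get("reason", "").strip().lower()
    if reason.startswith("to admins"):
        return "admin_queue"  # user abuse, spam, site-wide issues
    if reason.startswith("to mods"):
        return "mod_queue"    # community rule violations
    return "mod_queue"        # default: let mods triage first
```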
I escaped ads and a dictatorship only to come here and be told how great communism is with an even greater frequency.
Blocking hexbear communities just led to those users going to other instances and making the whack-a-mole more difficult.
deleted by creator
That’s a bit of paranoia, thinking people from Hexbear are out to spread their evil ideas and pollute your mind any way they can…
I don’t think it is because of me, but if they are getting fewer reactions they might spread out.