this post was submitted on 15 Jun 2023

Lemmy Moderation Tools


Welcome

I'm working on a moderation tool for Lemmy.

The project is still in early development and discovery. I'll use this community to post status updates and answer questions during development, testing, release, and post-release.

You're encouraged to create posts describing your needs, and I appreciate feedback on status updates; it helps me stay on the right track.

Join us on Matrix!


Hello! I'd like to present my view on how instance moderation can be handled in the fediverse so that small instances are able to exist. This approach tries its best to preserve federation while also making it possible for a small instance with a limited moderation team to keep up.

Please note that I've been cultivating this idea for a while now; it isn't a reaction to any recent events. It applies primarily to Mastodon, but I'm trying to adapt it to Lemmy.

Basically, it goes like this: focus on moderating content in the following order, where a lower number means a higher priority.

  1. Content sent by your instance's users
  2. Content sent to your instance's users or communities by remote users

...

  3. Content sent between remote users in remote communities

Basically, as a moderator of instance A, I don't need to know right away that a user from instance B said something controversial in a community on instance C. I might not want to care about it at all.
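To make the ordering concrete, here's a minimal sketch of how a tool could triage incoming reports into those tiers. This is purely illustrative: the `Report` fields and the `LOCAL_INSTANCE` constant are my own assumptions, not actual Lemmy API objects.

```python
from dataclasses import dataclass

LOCAL_INSTANCE = "example.social"  # hypothetical: your own instance's domain

@dataclass
class Report:
    author_instance: str     # instance of the user who wrote the content
    community_instance: str  # instance hosting the community
    target_instance: str     # instance of the user or community being addressed

def triage(report: Report) -> int:
    """Return a priority tier; a lower number means act sooner."""
    if report.author_instance == LOCAL_INSTANCE:
        return 1  # priority 1: content sent by our own users
    if LOCAL_INSTANCE in (report.community_instance, report.target_instance):
        return 2  # priority 2: remote users posting to our users or communities
    return 3      # fully remote: rely on the remote mod teams first
```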

While it's true that my users will see this content through my instance and will likely report it because it is controversial / offensive / problematic / etc., I have limited resources and need to be able to rely on the mod teams of instance B and instance C to do their job first and handle that scenario.

As for the users, they should of course report content they believe violates the rules, but they should also learn to reach for the block button more often, whether for remote users, remote communities, or (hopefully, in a future version of Lemmy) entire remote instances.

If I wanted something from an automated moderation tool, it would be the following:

  • Keep track of how often a remote user is reported for remote content in a remote community over time, giving them one strike for every day with one or more such reports (a sketch follows the next paragraph).

That way, if the user collects ten strikes over time, for example, I can take a look and decide whether I believe that user's home instance is enabling toxic behavior; and if that user ever comes to communities on my instance, they'll already be flagged and I'll know exactly why. The benefit here is that I can take things much slower, because it's a remote user in a remote community and I don't need to act immediately.
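Here's a rough sketch of that strike counter, assuming reports arrive as (actor, timestamp) pairs; the threshold of ten and the data layout are placeholders, not a spec.

```python
from collections import defaultdict
from datetime import datetime

STRIKE_THRESHOLD = 10  # example value from the post

class StrikeTracker:
    def __init__(self) -> None:
        # actor -> set of calendar days on which they were reported;
        # multiple reports on the same day still count as one strike
        self._days: defaultdict[str, set] = defaultdict(set)

    def record_report(self, actor: str, when: datetime) -> None:
        self._days[actor].add(when.date())

    def strikes(self, actor: str) -> int:
        return len(self._days[actor])

    def flagged(self, actor: str) -> bool:
        """Is this actor (and their home instance) worth a manual look?"""
        return self.strikes(actor) >= STRIKE_THRESHOLD
```

The per-day deduplication is what implements "one strike for every day with one or more such reports": a single pile-on thread doesn't inflate the count, but a sustained pattern over many days does.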

There are some exceptions, such as illegal content that could harm my instance by being cached on it, but overall most reports I've ever received are about toxic behavior, which my instance's users should learn to block while the remote mods do their job.

Regarding priorities 1 and 2: for content generated by my instance's users, this is where I need to be quick. Just as I want to rely on remote moderators to do their job, remote moderators will want to rely on me to do mine when it involves users of my instance.

Also, if remote users are harassing local users or leaving toxic comments in our communities or posts, as an instance admin I will need to be quick, but I will also have to rely on the moderators of the specific community.

To be honest, the burden of moderating a community should fall on the creator / moderators of that community. As an instance admin this lets me, again, take more time, knowing that the owners of that community are cleaning things up. So even if I receive a report, I should wait and let the community moderators handle it.

Only in this way is it possible for a small instance with few moderation resources to stay federated with a large number of instances.

In summary:

  1. Make sure local users behave when they're in remote communities.
  2. Make sure your local communities follow the instance's rules.
  3. Let community moderators handle conflict and moderate their communities as they see fit (within boundaries). Only step in if things escalate, get out of hand, or turn into a larger "raid" / harassment campaign.
  4. Hold community owners and moderators accountable for moderating their own spaces.
  5. Let remote moderators and admins do their job when something happens on a remote instance between remote users.
  6. Optionally keep track of the scenarios local users report to you, if only to have some data on a bad actor should they ever come across your instance, or to determine whether an instance isn't moderating properly.

This means it's very important for instance admins to give remote instances and remote community moderators time to handle a situation. Smaller instances especially might take a few hours or even a couple of days to deal with one. Unless it's a serious, life-or-death scenario such as doxxing, admins and moderators should tell their users to block, report, and move on; doing things properly can and should take a bit of time.
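If a tool wanted to enforce that patience, one simple option (my assumption, not something described above) is to hold fully remote reports back for a grace period before they surface in the local admin queue, while local-priority reports appear immediately.

```python
from datetime import datetime, timedelta, timezone

GRACE_PERIOD = timedelta(hours=48)  # assumed window; tune to taste

def ready_for_review(received_at: datetime, tier: int, now: datetime) -> bool:
    """Tier 1 and 2 reports surface at once; tier 3 waits out the grace period
    so remote mods and admins get first crack at the problem."""
    if tier < 3:
        return True
    return now - received_at >= GRACE_PERIOD

# e.g. ready_for_review(received_at, triage(report), datetime.now(timezone.utc))
```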

One aspect I didn't mention is toxic remote communities. In that case I might "remove" the community so it isn't accessible from my instance and I'm not giving it a platform. If the whole instance is dedicated to toxic communities, then I might block the instance as a whole.
