rho50

joined 2 years ago
[–] rho50@lemmy.nz 1 points 2 years ago* (last edited 2 years ago) (1 children)

Looks like a very cool project, thanks for building it and sharing!

Based on the formula you mentioned here, it sounds like an instance with one user who has posted at least one comment will have a maximum score of 1. Presumably the threshold would usually be set to greater than 1, to catch instances with lots of accounts that have never commented.
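
For illustration, here's a rough sketch of the scoring I'm assuming (just registered users divided by total comments; the real formula may differ):

```python
# Rough sketch of the score I'm assuming: registered users divided by total
# comments. The actual formula may differ; this just illustrates the edge case.
def suspicion_score(users: int, comments: int) -> float:
    if comments == 0:
        return float("inf")  # accounts exist but nobody has ever commented
    return users / comments

# A single-user instance with at least one comment can never exceed 1.
print(suspicion_score(users=1, comments=1))      # 1.0
print(suspicion_score(users=1, comments=25))     # 0.04
# An instance stuffed with silent accounts blows past any threshold above 1.
print(suspicion_score(users=5000, comments=10))  # 500.0
```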

This has given me another thought, though: could spammers not just create one instance per spam account? If you own something like blah.xyz, you could in theory create ephemeral spam instances on subdomains and blast content out using those (e.g. spamuser@esgdf.blah.xyz, spamuser@ttraf.blah.xyz, etc.).

Spam management on the Fediverse is sure to become an interesting issue. I wonder how practical the instance-blocking approach will be; I think eventually we'll need some kind of portable "user trustedness" score.

[–] rho50@lemmy.nz 11 points 2 years ago (9 children)

Maybe I'm being stupid, but how does this service actually determine the suspiciousness of instances?

If I self-host an instance, what are my chances of getting listed on here and then unilaterally blocked simply because I have a low active user count or something?

[–] rho50@lemmy.nz 1 points 2 years ago

There has been some good commentary about this on Mastodon, but the long and short of it seems to be that federation is actually a pretty terrible way to harvest data.

The entire Fediverse is built heavily on openly accessible APIs: Meta doesn't need to federate with your instance to scrape your data, and there's really not much that can be done about it.
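
To illustrate (a hedged sketch, not anything Meta is confirmed to do): Mastodon's public timeline API is readable without any authentication or federation, unless an instance explicitly restricts it. The instance URL below is just a placeholder.

```python
# Hedged sketch: Mastodon's public timeline endpoint is unauthenticated by
# default, so a crawler can pull public posts without federating at all.
# "mastodon.example" is a placeholder instance.
import requests

resp = requests.get(
    "https://mastodon.example/api/v1/timelines/public",
    params={"limit": 5},
    timeout=10,
)
resp.raise_for_status()
for status in resp.json():
    print(status["account"]["acct"], "-", status["content"][:80])
```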

The real solution to Meta's unethical behaviour is unfortunately going to be legislative, not technical.
