this post was submitted on 19 Aug 2024
405 points (97.9% liked)


We had a really interesting discussion yesterday about voting on Lemmy/PieFed/Mbin: whether votes should be private or not, whether they are already public and to what degree, and whether another way is possible. There was a widely held belief that votes should be private, yet it was repeatedly pointed out that a quick visit to an Mbin instance is enough to see all the upvotes, and that Lemmy admins already have a quick and easy UI for upvotes and downvotes (with predictable results). Some thought that using ActivityPub automatically means any privacy is impossible (spoiler: it doesn't).

As a response, I’m trying this out: PieFed accounts now have two profiles within them - one used for posting content and another (with no name, profile photo or bio, etc) for voting. PieFed federates content using the main profile most of the time but when sending votes to Mbin and Lemmy it uses the anonymous profile. The anonymous profile cannot be associated with its controlling account by anyone other than your PieFed instance admin(s). There is one and only one anonymous profile per account so it will still be possible to analyze voting patterns for abuse or manipulation.

ActivityPub geeks: the anonymous profile is a separate Actor with a different URL. The Activity for the vote has its "actor" field set to the anonymous Actor's URL instead of the main Actor's. PieFed provides all the usual URL endpoints, WebFinger, etc. for both Actors but only provides user-provided PII for the main one.
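To make that concrete, here's a rough sketch of what the swap could look like. The `Account` shape, the id scheme, and the field values are hypothetical, not PieFed's actual code; the point is just the substitution in the "actor" field:

```python
from dataclasses import dataclass

@dataclass
class Account:
    main_actor_url: str       # e.g. https://piefed.social/u/alice (hypothetical)
    anonymous_actor_url: str  # the no-name alt Actor's URL (hypothetical)
    vote_privately: bool      # the 'Vote privately' setting

def build_vote_activity(account: Account, object_url: str, vote_type: str = "Like") -> dict:
    # Votes go out under the anonymous Actor; everything else uses the main one.
    actor = account.anonymous_actor_url if account.vote_privately else account.main_actor_url
    return {
        "@context": "https://www.w3.org/ns/activitystreams",
        "id": f"{actor}#like/1",  # hypothetical id scheme
        "type": vote_type,        # "Like" for an upvote, "Dislike" for a downvote
        "actor": actor,           # the only field that changes
        "object": object_url,     # the post or comment being voted on
    }
```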

That’s all it is. Pretty simple, really.

To enable the anonymous profile, go to https://piefed.social/user/settings and tick the ‘Vote privately’ checkbox. If you make a new account now it will have this ticked already.

This will be a bit controversial for some. I'll be listening to your feedback and am here to answer any questions. Remember, this is just an experiment which could be removed if it turns out to make things worse rather than better. I've done my best to think through the implications and side-effects, but there could be things I missed. Let's see how it goes.

[–] Max_P@lemmy.max-p.me 14 points 2 months ago (2 children)

The problem with this approach is trust. It works for the users, but not for admins. If I run a PieFed instance with this on, how can lemmy.world, for example, trust my tiny instance to be playing by the rules? I went over more details in this other comment.

Sure, right now admins can contact you about your instance. But you can't really do that with dozens or hundreds of instances. There are a ton of instances whose users we tolerate, but would you trust the admin with anonymous votes? Be in constant contact with a dozen instance admins on a daily basis?

It's a good attempt though. Maybe we're all pessimistic and it will work just fine!

[–] rimu@piefed.social 15 points 2 months ago* (last edited 2 months ago) (1 children)

I can only respond in general terms because you didn't name any specific problems.

Firstly, remember that each PieFed account only has one alt account, and it's always the same alt account doing the votes with the same gibberish user name. If the person is always downvoting, or always voting the same as another person, you'll see those patterns in their alt and the alt can be banned. It's an open-source project, so the mechanics can't be kept secret and can be verified by anyone with intermediate Python knowledge.
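One way to picture that one-stable-alt property (a sketch only; how PieFed actually derives the alt is not specified here, and `INSTANCE_SECRET` is assumed):

```python
import hashlib
import hmac

INSTANCE_SECRET = b"known only to this instance"  # assumed server-side secret

def alt_username(account_id: int) -> str:
    # Deterministic: the same account always yields the same "gibberish"
    # name, so voting patterns stay analyzable across posts. Without the
    # instance secret, the mapping back to the account is not recoverable.
    mac = hmac.new(INSTANCE_SECRET, str(account_id).encode(), hashlib.sha256)
    return mac.hexdigest()[:16]
```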

Regardless, at any kind of decent scale we're going to have to use code to detect bots and bad actors. Relying on admins to eyeball individual posts' activity and manually compare them isn't going to scale at all, regardless of whether the user names are easy to read.

[–] Max_P@lemmy.max-p.me 5 points 2 months ago* (last edited 2 months ago) (2 children)

Firstly, remember that each PieFed account only has one alt account, and it's always the same alt account doing the votes with the same gibberish user name. It's an open-source project, so the mechanics can't be kept secret and can be verified by anyone with intermediate Python knowledge.

That implies trust in the person who operates the instance. It's not a problem for piefed.social, because we can trust you; it will work for your instance. But can you trust other people's PieFed instances? It's open-source: I could just install it on my server and change the code to give me 2-3 alt accounts instead. Pick a random instance from lemmy.world's instance list: would you blindly trust them not to fudge votes?

The availability of the source code doesn't help much because you can't prove that it's the exact code that's running with no modifications, and marking people running modified code as suspicious out of the box would be unfair and against open-source culture.

I also see some deanonymization exploits: people commonly vote and comment together, so with enough time you can run correlation attacks and narrow down the accounts. To prevent that, you'd have to break the 1:1 mapping between users and their gibberish alts, at least by letting users rotate alts on demand or rotating them on a schedule. But then we can't correlate votes to patterns anymore, and everyone's database endlessly fills up with generated alt accounts (that you can't delete).
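To illustrate the kind of correlation attack being described, a toy sketch (input shapes are assumed; a real attack would also weight by thread size and timing):

```python
from collections import Counter

def correlation_candidates(votes, comments):
    """Toy co-occurrence analysis. Assumed input shapes:
    votes:    iterable of (alt_actor, thread_id)
    comments: iterable of (account, thread_id)
    Pairs that share many threads are deanonymization candidates."""
    threads_by_account = {}
    for account, thread in comments:
        threads_by_account.setdefault(account, set()).add(thread)
    scores = Counter()
    for alt, thread in votes:
        for account, threads in threads_by_account.items():
            if thread in threads:
                scores[(alt, account)] += 1
    return scores.most_common(10)
```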

If the person is always downvoting, or always voting the same as another person, you'll see those patterns in their alt and the alt can be banned.

Sure, but you lose some visibility into who the user is. Seeing the comments is useful to get a better grasp of who they are: maybe they're just a serial fact-checker, downvoting misinformation and posting links to reputable sources. It also helps to identify whether there's other activity besides just votes; large amounts of votes are less suspicious if you can see the person has also been engaging with comments all day.

And then you circle back to: do you trust the instance admin to investigate, or even respond to your messages? How is it gonna go when a big, politically aligned instance is accused of botting and the admin denies the claims, but the evidence suggests it's likely? What do we do with Threads, or even a hypothetical Twitter going fediverse with Elon still as the boss? Or Truth Social?

The bigger the instance, the easier it is to sneak a few votes in. With millions of user accounts, you can easily borrow a couple hundred of your long-inactive users' alts, and it's essentially undetectable.


I'm sorry for the pessimism, but I've come to expect the worst from people. Anything that can be exploited will be exploited. I do want this problem to be solved, and it's great that some people like you go ahead and at least try to make it work. I'm not trying to discourage anyone from experimenting with this, but I do think those what-ifs are important to discuss before everyone implements it and then, oops, we have a big problem.

The way things are, we don't have to put any trust in an instance admin. It might as well not be there; it's just a gateway and file host. We can independently investigate accounts and ban them individually, without having to resort to banning whole instances, even if the admins are a bit sketchy, because of the inherent transparency of the protocol.

[–] rimu@piefed.social 16 points 2 months ago* (last edited 2 months ago) (1 children)

Yes. You're going to have to trust someone eventually. People can modify the Lemmy source code, too. Well, I can't, because Rust looks like hieroglyphics to me, but you get the idea.

I'd rather this than have to trust Lemmy admins not to abuse their access to voting data - https://lemm.ee/comment/13768482

[–] ericjmorey@discuss.online 5 points 2 months ago

You can even question whether the compiled version running on an instance is the same as the version posted to GitHub. There's no way to check what's running on a server you don't have access to.

Trust is necessary at some level if you're going to participate in any hosted or federated service, as you pointed out.

[–] Socsa@sh.itjust.works 3 points 2 months ago

This is literally already the Lemmy trust model. I can easily spin up my own instance and send out fake ActivityPub actions to brigade. The method of detecting and resolving this is no different.

[–] Socsa@sh.itjust.works 1 points 2 months ago

It will be extremely obvious if you see 300 accounts voting when the instance only has 100 active users.
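A trivial sketch of that check (how the counts are gathered is assumed; Lemmy and PieFed do publish active-user figures, e.g. via NodeInfo):

```python
def vote_counts_plausible(distinct_voting_actors: int, active_users: int) -> bool:
    # Crude plausibility check: an instance shouldn't have more distinct
    # voting actors than it claims active users. Real tooling would use
    # thresholds and time windows rather than a strict comparison.
    return distinct_voting_actors <= active_users
```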