this post was submitted on 09 Jul 2023
2249 points (99.9% liked)

Fediverse

17776 readers
48 users here now

A community dedicated to fediverse news and discussion.

Fediverse is a portmanteau of "federation" and "universe".


founded 5 years ago

The best part of the fediverse is that anyone can run their own server. The downside of this is that anyone can easily create hordes of fake accounts, as I will now demonstrate.

Fighting fake accounts is hard and most implementations do not currently have an effective way of filtering out fake accounts. I'm sure that the developers will step in if this becomes a bigger problem. Until then, remember that votes are just a number.

top 50 comments
[–] PetrichorBias@lemmy.one 387 points 1 year ago* (last edited 1 year ago) (31 children)

This was a problem on reddit too. Anyone could create accounts - heck, I had 8 accounts:

one main, one alt, one "professional" (linked publicly on my website), and five for my bots (accounts I optimistically created but never properly ran). I had all 8 accounts signed in on my third-party app and could easily manipulate votes on my own posts.

I feel like this is what happened when you'd see posts with hundreds / thousands of upvotes but had only 20-ish comments.

There needs to be a better way to solve this, but I'm unsure if we truly can. Botnets are a problem across all social media (my undergrad thesis many years ago was detecting botnets on Reddit using Graph Neural Networks).

Fwiw, I have only one Lemmy account.

[–] impulse@lemmy.world 180 points 1 year ago (2 children)

I see what you mean, but there's also a large number of lurkers, who will only vote but never comment.

I don't think it's implausible for a highly upvoted post to have only a small number of comments.

[–] SGforce@lemmy.ca 77 points 1 year ago

If it's a meme or shitpost there isn't anything to talk about

[–] PetrichorBias@lemmy.one 38 points 1 year ago (1 children)

Maybe you're right, but it just felt uncanny to see thousands of upvotes on a post with only a handful of comments. Maybe someone who's active on the bot-detection subreddits can pitch in.

[–] RedCowboy@lemmy.world 25 points 1 year ago (1 children)

I agree completely. 3k upvotes on the front page with 12 comments just screams vote manipulation

[–] simple@lemmy.world 45 points 1 year ago (8 children)

Reddit had ways to automatically catch people trying to manipulate votes, though, at least the obvious ones. A friend of mine posted a reddit link in our group for everyone to upvote and got temporarily suspended for vote manipulation about an hour later. I don't know if something like that can be implemented in the Fediverse, but some people on GitHub suggested a way for instances to share with other instances how trusted/distrusted a user or instance is.

[–] cynar@lemmy.world 39 points 1 year ago (3 children)

An automated trust rating will be critical for Lemmy, longer term. It's the same arms race email has to fight. There should be a linked trust system of both instances and users. The instance 'vouches' for its users' trust scores. However, if other instances collectively disagree, the trust score of the instance itself also takes a hit. Other instances can then use this information to judge how much to allow from users of that instance.
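The linked trust idea above can be sketched roughly as follows. All the scores, weights, and names here are invented for illustration; this is not a real Lemmy mechanism.

```python
# Hypothetical sketch: an instance "vouches" for its users, so a user's
# effective trust is capped by their home instance's trust. Peer instances
# that collectively disagree drag the instance's score down over time.

def effective_user_trust(user_score: float, instance_score: float) -> float:
    """A user's effective trust is capped by their home instance's trust."""
    return min(user_score, instance_score)

def update_instance_trust(instance_score: float, peer_reports: list[float],
                          weight: float = 0.2) -> float:
    """Nudge an instance's trust toward the average report from its peers."""
    if not peer_reports:
        return instance_score
    consensus = sum(peer_reports) / len(peer_reports)
    return (1 - weight) * instance_score + weight * consensus

# A bot-farm instance that peers collectively distrust drifts downward,
# dragging its users' effective trust with it.
score = 0.9
for _ in range(10):
    score = update_instance_trust(score, peer_reports=[0.1, 0.2, 0.15])
print(round(score, 3))
```

The exponential-moving-average update is just one possible damping choice; the point is that no single peer report can tank an instance, but sustained consensus does.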

[–] 70ms@lemmy.world 27 points 1 year ago* (last edited 1 year ago) (2 children)

I got suspended multiple times because my partner and daughter were also in our city's sub, and sometimes one of them would upvote my comments without realizing it was me. It got really fucking annoying, and of course there's no way to talk to a real person at reddit to prove we're different people. I'd appeal every time and they'd deny it every time. How reddit could have gotten so huge without realizing that multiple people can live in the same household is beyond me. In the end they both just stopped upvoting anything in the sub because it was too risky (for me).

[–] AndrewZabar@beehaw.org 27 points 1 year ago

On Reddit there were literally bot armies that could cast thousands of votes instantly. It will become a problem if votes have any actual effect.

It’s fine if they’re only there as an indicator, but if votes are what determine popularity and prioritize visibility, it will become a total shitshow at some point. And it will be rapid. So yeah, better to have a defense system in place ASAP.

[–] InternetPirate@lemmy.fmhy.ml 27 points 1 year ago* (last edited 1 year ago) (1 children)

I feel like this is what happened when you’d see posts with hundreds / thousands of upvotes but had only 20-ish comments.

Nah it's the same here in Lemmy. It's because the algorithm only accounts for votes and not for user engagement.
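The point about votes versus engagement can be illustrated with a toy ranking function. The formula and weights below are invented for illustration; this is not Lemmy's actual algorithm.

```python
# Sketch: fold comment engagement into a hot-rank score instead of using
# votes alone, with time decay so fresh posts can still surface.
import math

def rank(upvotes: int, downvotes: int, comments: int, age_hours: float) -> float:
    votes = upvotes - downvotes
    engagement = votes + 20 * comments       # weight comments heavily over raw votes
    return math.log(max(engagement, 1), 10) / (age_hours + 2) ** 1.5

# A 3000-upvote post with 12 comments no longer automatically outranks a
# 300-upvote post with 200 comments of the same age.
print(rank(3000, 0, 12, 4) < rank(300, 0, 200, 4))
```

The log keeps a vote-buying spree from scaling linearly, and the comment weight rewards discussion that is much harder to fake convincingly than a bare vote.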

[–] BrianTheeBiscuiteer@lemmy.world 27 points 1 year ago (7 children)

Yes, I feel like this is a moot point. If you want it to be "one human, one vote" then you need to use some form of government login (like id.me, which I've never gotten to work). Otherwise people will make alts and inflate/deflate the "real" count. I'm less concerned about "accurate points" and more concerned about stability, participation, and making this platform as inclusive as possible.

[–] Thorny_Thicket@sopuli.xyz 23 points 1 year ago (1 children)

I always had 3 or 4 reddit accounts in use at once: one for commenting, one for porn, one for discussing drugs, and one for pics that could be linked back to me (of my car, for example). I also made a new commenting account about once a year so that if someone recognized me they wouldn't be able to find every comment I'd ever written.

On lemmy I have just two now (the other is for porn), but I'm probably going to make one or two more at some point.

[–] Boozilla@lemmy.world 141 points 1 year ago (7 children)

The lack of karma helps some. There's no point in trying to rack up the most points for your account(s), which is a good thing. Why waste time on the lamest internet game when you can engage in conversation with folks on lemmy instead?

[–] Protoknuckles@lemmy.world 178 points 1 year ago (3 children)

It can still be used to artificially pump up an idea. Or used to bury one.

[–] danc4498@lemmy.world 57 points 1 year ago (6 children)

This is the problem. All the algorithms are based on the upvote count. Bad actors will abuse this.

[–] Steve@compuverse.uk 53 points 1 year ago

Maybe you want to move public perception of a product or political goal, to push a narrative of some kind. Astroturfing, basically.

[–] muddybulldog@mylemmy.win 39 points 1 year ago* (last edited 1 year ago) (3 children)

Lack of karma is a fallacy. The default Lemmy UI doesn't display it but the karma system appears to be fully built.

[–] bassdrop321@feddit.de 37 points 1 year ago (5 children)

Corporations could use it to push their ads to the top

[–] reallynotnick@lemmy.world 29 points 1 year ago (9 children)

Maybe I'm misunderstanding karma, but Memmy appears to show the total upvotes I've gotten for comments and posts, isn't that basically karma?

[–] Wander@yiffit.net 110 points 1 year ago (3 children)

In case anyone's wondering this is what we instance admins can see in the database. In this case it's an obvious example, but this can be used to detect patterns of vote manipulation.
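The kind of database view described here can be sketched with a toy query. The table and column names below are invented for illustration; Lemmy's real schema differs.

```python
# Hypothetical reconstruction of an admin query: group votes on one post by
# the voter's home instance and earliest account-creation date. A pile of
# same-day accounts from a single instance stands out immediately.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE vote (post_id INT, user TEXT, instance TEXT, user_created TEXT);
INSERT INTO vote VALUES
  (1, 'alice',    'lemmy.world', '2022-01-04'),
  (1, 'shill001', 'bad.example', '2023-07-09'),
  (1, 'shill002', 'bad.example', '2023-07-09'),
  (1, 'shill003', 'bad.example', '2023-07-09');
""")
rows = db.execute("""
  SELECT instance, COUNT(*) AS votes, MIN(user_created) AS oldest_account
  FROM vote WHERE post_id = 1
  GROUP BY instance ORDER BY votes DESC
""").fetchall()
for row in rows:
    print(row)
```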

[–] toish@yiffit.net 53 points 1 year ago (1 children)

“Shill” is a rather on-the-nose choice for a name to iterate with haha

[–] Evergreen5970@beehaw.org 26 points 1 year ago* (last edited 1 year ago)

I appreciate it, good for demonstration and just tickles my funny bone for some reason. I will be delighted if this user gets to 100,000 upvotes—one for every possible iteration of shill#####.

[–] popemichael@lemmy.world 98 points 1 year ago (3 children)

You can buy 700 votes anonymously on reddit for really cheap

I don't see that it's a big deal, really. It's the same as it ever was.

[–] Valmond@lemmy.ml 65 points 1 year ago (2 children)

Over a hundred dollars for 700 upvotes O_o

I wouldn't exactly call that cheap 🤑

On the other hand, ten or twenty quick downvotes on an early answer could swing things I guess ...

[–] popemichael@lemmy.world 52 points 1 year ago (19 children)

For the companies who want a huge advantage over others, $100 is nothing in an advertising budget.

I have a small business and I do $1000 a week in advertising.

[–] OtakuAltair@lemmy.world 30 points 1 year ago* (last edited 1 year ago)

Yeah, 700 upvotes soon after a post is made could easily shoot it up to the top of even a popular sub for a few days (especially with the lack of mod tools rn), with others upvoting it purely because it already has a lot of upvotes.

[–] sparr@lemmy.world 92 points 1 year ago (17 children)

Web of trust is the solution. Show me vote totals that only count people I trust, 90% of people they trust, 81% of people they trust, etc. (0.9 multiplier should be configurable if possible!)
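The decaying-trust scheme described here can be sketched as a breadth-first walk over a trust graph. The graph, names, and 0.9 default below are illustrative, not an existing implementation.

```python
# Sketch of web-of-trust vote weighting: each hop away from "me" multiplies
# trust by a configurable decay factor; voters outside the web count for zero.
from collections import deque

TRUSTS = {                       # who each user directly trusts
    "me":    ["alice", "bob"],
    "alice": ["carol"],
    "carol": ["dave"],
}

def trust_weights(root: str, decay: float = 0.9) -> dict[str, float]:
    """Breadth-first walk: each hop away multiplies trust by `decay`."""
    weights = {root: 1.0}
    queue = deque([root])
    while queue:
        user = queue.popleft()
        for friend in TRUSTS.get(user, []):
            if friend not in weights:        # keep the first (shortest-path) weight
                weights[friend] = weights[user] * decay
                queue.append(friend)
    return weights

def weighted_votes(voters: list[str], decay: float = 0.9) -> float:
    w = trust_weights("me", decay)
    return sum(w.get(v, 0.0) for v in voters)

# alice (0.9) and carol (0.81) count; a horde of unknown shills adds nothing.
print(weighted_votes(["alice", "carol", "shill1", "shill2", "shill3"]))
```

Because BFS reaches each user by a shortest path first, every voter gets the highest trust weight available to them, which matches the 90%/81%/... framing above.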

[–] czarrie@lemmy.world 84 points 1 year ago (6 children)

The nice thing about the federated universe is that, yes, you can bulk-create user accounts on your own instance - and that server can then be defederated by other servers when it becomes obvious that it's going to create problems.

It's not a perfect fix and, as this post demonstrated, is only really effective after a problem has been identified. At least for vote manipulation across servers, a server could act if it, say, detects that 99% of new upvotes are coming from an instance created yesterday with one post; it could at least flag that for a human to review.
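That flagging heuristic can be sketched in a few lines. The thresholds and field names are arbitrary illustrations, not anything Lemmy actually ships.

```python
# Sketch: flag a post for human review when the overwhelming share of its
# recent upvotes comes from instances first seen only days ago.
from datetime import date

def flag_for_review(votes: list[dict], today: date,
                    share_threshold: float = 0.9, min_age_days: int = 7) -> bool:
    if not votes:
        return False
    young = [v for v in votes
             if (today - v["instance_first_seen"]).days < min_age_days]
    return len(young) / len(votes) >= share_threshold

# 99 of 100 votes arrive from an instance first federated yesterday.
votes = [{"instance_first_seen": date(2023, 7, 8)} for _ in range(99)]
votes.append({"instance_first_seen": date(2021, 1, 1)})
print(flag_for_review(votes, today=date(2023, 7, 9)))
```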

[–] two_wheel2@lemm.ee 29 points 1 year ago (1 children)

It actually seems like an interesting problem to solve. Instance runners have the SQL database with all the voting records; finding manipulative instances seems a bit like a machine learning problem to me.

[–] YoBuckStopsHere@lemmy.world 67 points 1 year ago (2 children)

Reddit admins manipulated vote counts all the time.

[–] authed@lemmy.ml 35 points 1 year ago (2 children)

Reddit also created fake users to post fake content... At least in the beginning of reddit.

[–] misterundercoat@lemmy.world 31 points 1 year ago (2 children)

TIL "beginning of Reddit" comprises the time up to and including July 2023.

[–] Flashoflight@lemmy.world 56 points 1 year ago (1 children)

This is really important to call out. The bots have gotten so good, though, that it would be hard to tell the difference. To be honest, I'm pretty sure reddit was teeming with them and it didn't really bother me. lol

[–] nekat_emanresu@lemmy.ml 37 points 1 year ago (1 children)

I have strong feelings about reddit being infested with bots too. And since it happened on reddit, there's no reason lemmy wouldn't have the same issue.

> it didn’t really bother me

Bot armies could have hidden things from you that would bother you deeply, but because it's hidden, you don't have a chance to be bothered.

[–] fermuch@lemmy.ml 54 points 1 year ago (3 children)

Votes were just a number on reddit too... There was no magic behind them, and as Spez showed us multiple times, even reddit modified counts to make some posts say something different.

And remember: reddit used to have a horde of bots just to become popular.

Everything on the internet is or can be fake!

[–] deadsuperhero@lemmy.ml 54 points 1 year ago (3 children)

Honestly, thank you for demonstrating a clear limitation of how things currently work. Lemmy (and Kbin) probably should look into internal rate limiting on posts to avoid this.

I'm a bit naive on the subject, but perhaps there's a way to detect "over x amount of votes from over x amount of users from this instance" and basically invalidate them?
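The "over x votes from one instance" idea amounts to capping per-instance contributions. A minimal sketch, with invented numbers:

```python
# Sketch: count at most `per_instance_cap` votes from any single instance
# toward a post's score, so a bulk-registered horde hits a ceiling.
from collections import Counter

def capped_score(votes: list[tuple[str, str]], per_instance_cap: int = 50) -> int:
    """votes: (user, instance) pairs; each instance contributes at most the cap."""
    per_instance = Counter(instance for _, instance in votes)
    return sum(min(n, per_instance_cap) for n in per_instance.values())

# 500 shills from one instance plus two organic votes -> 50 + 1 + 1.
votes = [(f"shill{i}", "bad.example") for i in range(500)]
votes += [("alice", "lemmy.world"), ("bob", "lemmy.ca")]
print(capped_score(votes))
```

A cap is crude (it also throttles genuinely large instances), but it bounds the damage any one instance can do without requiring trust data at all.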

[–] Andreas@feddit.dk 46 points 1 year ago (2 children)

Federated actions are never truly private, including votes. While it's inevitable that some people will abuse the vote viewing function to harass people who downvoted them, public votes are useful to identify bot swarms manipulating discussions.

[–] mintyfrog@lemmy.ml 36 points 1 year ago

PSA: internet votes are based on a biased sample of users of that site and bots

[–] gthutbwdy@lemmy.sdf.org 35 points 1 year ago (2 children)

I think people often forget federation is not a new thing; it was one of the first designs for internet communication services. Email, which predates the Internet, is also a federated network and the most widely adopted mode of Internet communication of them all. It also had spam issues, and there were many solutions for that problem.

The one I liked the most was hashcash, since it requires no trust. It's the first proof-of-work system and it was an inspiration for blockchains.
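The hashcash idea can be demonstrated in miniature: the sender searches for a nonce whose hash has a given number of leading zero bits, so minting is expensive but verifying is one hash. This is a toy sketch, not the real hashcash stamp format.

```python
# Toy hashcash-style proof of work: spending ~2^bits hash attempts per action
# (e.g. per vote) makes bulk abuse costly while honest use stays cheap.
import hashlib

def mint(resource: str, bits: int = 12) -> int:
    """Find a nonce such that sha256(resource:nonce) has `bits` leading zero bits."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{resource}:{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") >> (256 - bits) == 0:
            return nonce
        nonce += 1

def verify(resource: str, nonce: int, bits: int = 12) -> bool:
    digest = hashlib.sha256(f"{resource}:{nonce}".encode()).digest()
    return int.from_bytes(digest, "big") >> (256 - bits) == 0

# Minting costs thousands of hash attempts; verifying costs exactly one.
stamp = mint("vote:post123")
print(verify("vote:post123", stamp))
```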

[–] MonkCanatella@sh.itjust.works 32 points 1 year ago (1 children)

maybe we can show a breakdown of which servers the votes are coming from so anything sus can be found out right away. Like, it would be easy enough to identify a bot farm I'd think

[–] Apoidea@lemmy.world 30 points 1 year ago (1 children)

Yep, give admins the tools they need to identify this activity so they can defederate accordingly. Seems like the only way.

[–] Mikina@programming.dev 31 points 1 year ago (5 children)

This is something that will be hard to solve. You can't really effectively discern between a large instance with a lot of real users and an instance with a lot of fake users made to look real. Any kind of protection I can think of, for example one based on user activity, can simply be faked by the bot server.

The only solution I see is to just publish the vote% or vote counts per instance, since that's what the local server knows, and let us personally ban instances we don't recognize or care about, so their votes won't count in our feed.
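The per-instance tally plus personal blocklist amounts to a simple filter on the reader's side. A minimal sketch with invented tallies:

```python
# Sketch: each post exposes its vote tally broken down by instance; each
# reader's feed discounts instances that reader has chosen to ignore.
votes_by_instance = {"lemmy.world": 140, "lemmy.ml": 60, "bot.example": 2100}
my_blocked = {"bot.example"}

def personal_score(tallies: dict[str, int], blocked: set[str]) -> int:
    return sum(n for instance, n in tallies.items() if instance not in blocked)

# The suspicious instance's 2100 votes simply vanish from my feed's ranking.
print(personal_score(votes_by_instance, my_blocked))
```

This sidesteps the detection problem entirely: no one has to prove an instance is fake, each reader just decides whose votes they count.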

[–] SkyNTP@lemmy.ml 29 points 1 year ago

So far, the majority of content that approaches spam I've come across on Lemmy has been posts on !fediverse@lemmy.ml which highlight an issue attributed to the fediverse, but which ultimately have a corollary issue on centralised platforms.

Obviously there are challenges to address in running any user-content hosting website, and since Lemmy is a community-driven project, it behooves the community to be aware of these challenges and actively resolve them.

But a lot of posts, intentionally or not, verge on the implication that the fediverse uniquely has the problem, which just feeds into the astroturfing of large, centralized media.

[–] nekat_emanresu@lemmy.ml 24 points 1 year ago (8 children)

Upvotes aren't just a number; they determine placement in the ranking algorithm along with comments. It's easy to censor an unwanted view by mass-downvoting it.

[–] hawkwind@lemmy.management 22 points 1 year ago

IMO, likes need to be handled with supreme prejudice by the Lemmy software. A lot of thought needs to go into this. There are so many cases where the software could reject a likely fake like that would have near zero chance of rejecting valid likes. Putting this policing on instance admins is a recipe for failure.
