Here’s an idea: adjust the weights of votes by how predictable they are.
If account A always upvotes account B, those upvotes don’t count as much—not just because A is potentially a bot, but because A’s upvotes don’t tell us anything new.
If account C upvotes a post by account B, but there was no a priori reason, based on C’s past history, to expect it, that upvote is more significant.
This could take into account not just the direct interactions between two accounts, but how other accounts interact with each of them, whether they’re part of larger groups that tend to vote similarly, etc.
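A rough sketch of one way that could look (Python, with made-up names, a made-up `seen_counts` bookkeeping, and an arbitrary add-one smoothing choice; nothing here is pinned down by the idea itself): discount each upvote by the estimated probability that this voter would upvote this author anyway.

```python
# Minimal sketch (hypothetical names and smoothing choice): discount each
# upvote by how predictable it was, given the voter's history with the author.

def vote_weight(voter, author, upvote_counts, seen_counts):
    """Return a weight in (0, 1]: predictable upvotes count for less."""
    seen = seen_counts.get((voter, author), 0)
    if seen == 0:
        return 1.0  # no history, so the vote carries full information
    upvoted = upvote_counts.get((voter, author), 0)
    # Estimated probability this voter upvotes this author, with add-one smoothing.
    p_upvote = (upvoted + 1) / (seen + 2)
    # A vote that was nearly certain (p close to 1) adds little new information.
    return 1.0 - p_upvote

def score_post(author, voters, upvote_counts, seen_counts):
    """Sum of predictability-adjusted weights for one post's upvotes."""
    return sum(vote_weight(v, author, upvote_counts, seen_counts) for v in voters)

# Example: A has upvoted 19 of the 20 posts it saw from B; C upvoted 1 of 10.
upvotes = {("A", "B"): 19, ("C", "B"): 1}
seen = {("A", "B"): 20, ("C", "B"): 10}
print(vote_weight("A", "B", upvotes, seen))        # ~0.09, heavily discounted
print(vote_weight("C", "B", upvotes, seen))        # ~0.83, counts nearly in full
print(score_post("B", ["A", "C"], upvotes, seen))  # ~0.92 total weighted score
```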
What if account B only ever posts high-quality content? What if everybody upvotes account B because their content is so good? What if they rarely post, so that only a small subset of the population has ever seen their posts?
Your theory assumes a large, constant volume of posts seen by a wide audience, but that's not how these sites work; your ideal would censor and disadvantage many accounts.
If an account is upvoted because it’s posting high-quality content, we’d expect those votes to come from a variety of accounts that don’t otherwise have a tendency to vote for the same things.
Suppose you do regression analysis on voting patterns to identify the unknown parameters determining how accounts vote. These will mostly correlate with things like interests, political views, geography, etc.—and with bot groups—but the biggest parameter affecting votes will presumably correlate with a consensus view of the general quality of the content.
But accounts won’t get penalized if their votes can be predicted by this parameter: precisely because it’s the most common parameter, it can be ignored when identifying voting blocs.
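A rough sketch of what that could look like, using a plain SVD of a small made-up voter-by-post matrix as a stand-in for the regression; the toy data and the reading of the first component as the consensus-quality parameter are illustrative assumptions, not part of the proposal itself:

```python
import numpy as np

# Rough illustration (toy data, SVD as a stand-in for the regression):
# factor the voter-by-post vote matrix, read the dominant component as the
# shared "consensus quality" signal, and look for bloc-like structure only
# in the remaining components.  +1 = upvote, -1 = downvote, 0 = no vote.
votes = np.array([
    [1, 1, 1, 1, 0, 0],   # voter 0
    [1, 1, 1, 0, 0, 0],   # voter 1
    [1, 1, 0, 1, 0, 0],   # voter 2
    [1, 0, 1, 1, 1, 1],   # voter 3  (votes in lockstep with voter 4
    [0, 1, 1, 1, 1, 1],   # voter 4   on posts the others ignore)
], dtype=float)

U, S, Vt = np.linalg.svd(votes, full_matrices=False)

# Component 0 (largest singular value) is taken here as the consensus axis
# and deliberately ignored; components 1+ capture voters who move together
# for reasons the consensus component doesn't explain.
energy = (U * S) ** 2  # per-voter energy in each component
bloc_share = energy[:, 1:].sum(axis=1) / energy.sum(axis=1)

for voter, share in enumerate(bloc_share):
    print(f"voter {voter}: {share:.2f} of voting behaviour lies outside the consensus component")
```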
No, I completely disagree and reject your premise.
Many times, really high-quality content will be voted for by only a small subset of the population.
In general, people will vote for lowest-common-denominator, widely appealing clickbait. That type of content will get varied voters because of its wide appeal. Discerning voters represent a smaller but consistent subset of the population, and this proposed algorithm will penalize them and just lead to more low-quality, widely appealing clickbait.
Sure, the “consensus view of general quality” will depend on the opinions of your user base—but if that’s the source of your objection, your issue is with the user base and not vote manipulation per se.
Your oversimplification makes it sound like this is just my personal preference, and not a natural tendency of humans or social media interactions.
This is not just "I like X more"; it's "humans on a large scale act like probabilistic decision trees and will converge on lowest-common-denominator dopamine fountains without careful checks and considerations."
Those checks and considerations are necessary for high-quality networked media and discussion.
In that situation, what function do the upvotes serve in the first place? If the potential audience already knows they’re going to read and enjoy more content from the same source, do they need to see upvotes to tell them what they already know?
(Remember that without effective permanent karma, upvotes only serve to call attention to particular posts or comments in the short term.)