this post was submitted on 28 Jun 2023
15 points (100.0% liked)

sh.itjust.works Main Community

Hey all, another moderation topic here 🙀

We've all had a lot of discussion about it. Rather than posting my own opinion, I wanted to share this talk about Usenet and what its community learned from moderating at scale.

They talk about the fediverse at the end!

I also want to share these 2 stories:

A case study on how Usenet learned to deal with spam: https://www.techdirt.com/2020/09/18/content-moderation-case-study-usenet-has-to-figure-out-how-to-deal-with-spam-april-1994/

One of the ways Usenet deals with child porn: https://www.cnet.com/tech/services-and-software/clean-news-proposed-as-usenet-censor/

If you watch the video and find any interesting bits, let's discuss them!

[–] Difficult_Bit_1339@sh.itjust.works 3 points 1 year ago (2 children)

It seems inevitable that some kind of ID system will be needed online. Maybe not a real ID linked to your person, but some sort of hard-to-obtain credential. That way, getting banned is costly, and posts without an ID signature can be filtered easily.

It used to be that spam was fairly easy for a human to detect; it may have been hard to automate that detection, but a person could generally tell what was a bot and what wasn't. Large Language Models (like GPT-4) can make spam accounts appear to hold real conversations, just like a person.

The large-scale use of such systems provides the ability to influence people en masse. How do you know you're talking to people and not GPT-4 instances arguing for a specific interest? The only real way to solve this is to attach a cost to posting, similar to how cryptocurrencies use a proof-of-work system to ensure the transaction network isn't spammed.

Having to perform computationally heavy cryptography, using a key registered to your account, prior to posting would massively increase the cost of such spam operations. Imagine if your PC had to solve a problem that took 5+ seconds before your post went through. It wouldn't be terribly inconvenient for you, but for someone trying to post from 1,000 different accounts it would be a crippling limitation that would be expensive to overcome.
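The scheme described above can be sketched in a few lines. This is a minimal illustration, not any specific network's implementation: the client searches for a nonce until the hash of the post plus nonce has a chosen number of leading zero bits, and the server verifies with a single hash. The message string and difficulty value here are made-up examples.

```python
import hashlib
import itertools

def proof_of_work(message: str, difficulty_bits: int) -> int:
    """Search nonces until SHA-256(message:nonce) has `difficulty_bits`
    leading zero bits. Finding a nonce is expensive; checking it is one hash."""
    target = 1 << (256 - difficulty_bits)  # qualifying digests fall below this
    for nonce in itertools.count():
        digest = hashlib.sha256(f"{message}:{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce

def verify(message: str, nonce: int, difficulty_bits: int) -> bool:
    """Server-side check: recompute one hash and compare against the target."""
    digest = hashlib.sha256(f"{message}:{nonce}".encode()).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

# ~2**16 hashes on average: negligible for one post,
# but it compounds fast across thousands of spam accounts.
nonce = proof_of_work("example post body", 16)
assert verify("example post body", nonce, 16)
```

Each extra difficulty bit doubles the expected work for the poster while verification stays a single hash, which is exactly the asymmetry a spam deterrent needs.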

That would fight spam effectively, but it wouldn't do much to filter content.

[–] hawkwind@lemmy.management 3 points 1 year ago (1 children)

Posting requires a personal blood sacrifice. It’s the only way to truly combat AI. USB3 Blood Altar!

I purchased mine from Wish.com and it only works with maple syrup colored with food coloring. :(

[–] MomoTimeToDie@sh.itjust.works 2 points 1 year ago (1 children)

Imagine if your PC had to solve a problem that took 5+ seconds prior to your post being posted. It wouldn’t be terribly inconvenient to you

The problem is: 5+ seconds on what? A low-end smartphone? A Bitcoin mining rig? Your average Joe's laptop? Anything reasonable for the end user is going to be a minor setback for anyone with the resources to run massive spam operations, and anything challenging for them is going to be a massive interruption for regular users.

I'd have to survey the available proof-of-work systems if I were implementing this. I imagine the target would be a mid-range smartphone, which would have dedicated hardware for cryptographic operations.

Upon some skimming, Hashcash seems like a candidate solution. It uses SHA-1 hashing, which is common enough that most smartphones have dedicated hardware for it, so they wouldn't be at as much of a disadvantage relative to a PC as they would be with algorithms that lack hardware implementations.
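For a sense of how Hashcash works, here is a deliberately simplified sketch of the idea: a stamp string is minted by bumping a counter until its SHA-1 digest has a given number of leading zero bits. The real stamp format also carries a random salt and an expiry date, which are omitted here; the resource name and date field below are placeholder examples.

```python
import hashlib
from itertools import count

def mint_stamp(resource: str, bits: int) -> str:
    """Mint a simplified Hashcash-style stamp: increment a counter until
    SHA-1 of the stamp string has `bits` leading zero bits."""
    prefix = f"1:{bits}:280623:{resource}::"  # version:bits:date:resource::
    for counter in count():
        stamp = f"{prefix}{counter}"
        digest = hashlib.sha1(stamp.encode()).digest()
        if int.from_bytes(digest, "big") >> (160 - bits) == 0:
            return stamp

def check_stamp(stamp: str, bits: int) -> bool:
    """Accept a stamp iff its SHA-1 digest has `bits` leading zero bits."""
    digest = hashlib.sha1(stamp.encode()).digest()
    return int.from_bytes(digest, "big") >> (160 - bits) == 0

stamp = mint_stamp("alice@example.com", 16)  # ~65k SHA-1 hashes on average
assert check_stamp(stamp, 16)
```

Since SHA-1 throughput is where hardware acceleration matters, a phone with a crypto coprocessor closes much of the gap with a desktop here, which is the point made above.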