We've all had a lot of discussion about it. Rather than posting my own opinion, I wanted to share this discussion from Usenet about what they've learned from it.
It seems inevitable that some kind of ID system will be needed online. Maybe not a real ID linked to your person, but some sort of hard-to-obtain credential. That way, having it banned is a real inconvenience, and posts without an ID signature can be filtered easily.
It used to be that spam was fairly easy for a human to detect; automating the detection may have been hard, but a person could generally tell what was a bot and what wasn't. Large Language Models (like GPT-4) can make spam accounts appear to hold real conversations, just like a person.
The large-scale use of such systems provides the ability to influence people en masse. How do you know you're talking to people and not GPT-4 instances arguing for a specific interest? The only real way to solve this is to create some sort of system where posting has a cost associated with it, similar to how cryptocurrencies use a proof-of-work system to ensure that the transaction network isn't spammed.
Having to perform computationally heavy cryptography using a key that is registered to your account prior to posting would massively increase the cost of such spamming operations. Imagine if your PC had to solve a problem that took 5+ seconds prior to your post being posted. It wouldn't be terribly inconvenient to you but for a person trying to post on 1000 different accounts it would be a crippling limitation that would be expensive to overcome.
That would fight spam effectively, but it wouldn't do much to filter content.
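The posting gate described above can be sketched as a hash puzzle bound to the account and the post body. This is only a minimal illustration, not anyone's actual implementation; the SHA-256 choice, the header layout, and the difficulty value are assumptions made for the sketch:

```python
import hashlib

def solve(account_id: str, post: str, bits: int = 20) -> int:
    """Brute-force a nonce so that sha256(account:post:nonce) has `bits`
    leading zero bits. Expected cost: about 2**bits hashes per post."""
    target = 1 << (256 - bits)
    nonce = 0
    while True:
        h = hashlib.sha256(f"{account_id}:{post}:{nonce}".encode()).digest()
        if int.from_bytes(h, "big") < target:
            return nonce
        nonce += 1

def verify(account_id: str, post: str, nonce: int, bits: int = 20) -> bool:
    """Checking a stamp costs one hash, so the server side stays cheap."""
    target = 1 << (256 - bits)
    h = hashlib.sha256(f"{account_id}:{post}:{nonce}".encode()).digest()
    return int.from_bytes(h, "big") < target
```

The asymmetry is the point: a single poster pays a few seconds once, while someone cycling through 1000 accounts pays the same cost 1000 times over, since each stamp is bound to one account and one post.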
Imagine if your PC had to solve a problem that took 5+ seconds prior to your post being posted. It wouldn’t be terribly inconvenient to you
The problem is, 5+ seconds on what? A low-end smartphone? A Bitcoin mining rig? Your average Joe's laptop? Anything reasonable for the end user is going to be a minor setback for anyone with the resources to run massive spam operations, and anything challenging for them is going to be a massive interruption for regular users.
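To put rough numbers on that objection: the hash rates below are illustrative assumptions, not benchmarks, but the spread between device classes really is several orders of magnitude.

```python
# Illustrative, assumed hash rates in hashes/second (not measured):
rates = {
    "low-end smartphone": 1e5,
    "average laptop": 1e7,
    "dedicated GPU rig": 1e10,
}
work = 2 ** 20  # expected hashes to solve a 20-bit puzzle

for device, rate in rates.items():
    print(f"{device}: ~{work / rate:.4f} s per post")
```

Under these assumed rates, a puzzle tuned to take around ten seconds on the phone finishes in a fraction of a millisecond on the rig, which is exactly the asymmetry being pointed out.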
If I were implementing such a system, I'd have to look at what proof-of-work options are available. I imagine the target would be a mid-range smartphone, which would have dedicated hardware to handle cryptographic operations.
Upon some skimming, Hashcash seems like a candidate solution. It uses SHA-1 hashing, which is common enough that most smartphones have dedicated hardware for it, so they wouldn't be at as much of a disadvantage against a PC as they would be with algorithms that lack dedicated hardware implementations.
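A minimal hashcash-style minting loop, sketched in Python. Real Hashcash stamps also carry a date and random fields; the header here is simplified to the parts that matter for the cost argument, and SHA-1 is used because that's what Hashcash specifies:

```python
import hashlib
from itertools import count

def leading_zero_bits(digest: bytes) -> int:
    """Number of leading zero bits in a digest."""
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
        else:
            bits += 8 - byte.bit_length()
            break
    return bits

def mint(resource: str, bits: int = 20) -> str:
    """Grind a counter until the stamp's SHA-1 has `bits` leading
    zero bits. Minting is expensive; checking is a single hash."""
    for counter in count():
        stamp = f"1:{bits}:{resource}:{counter}"  # simplified header
        if leading_zero_bits(hashlib.sha1(stamp.encode()).digest()) >= bits:
            return stamp

def check(stamp: str) -> bool:
    bits = int(stamp.split(":")[1])
    return leading_zero_bits(hashlib.sha1(stamp.encode()).digest()) >= bits
```

The receiver only re-hashes the stamp once to check it, so verification stays cheap even on a busy server, while every additional post costs the sender another round of grinding.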
Posting requires a personal blood sacrifice. It’s the only way to truly combat AI. USB3 Blood Altar!
I purchased mine from Wish.com and it only works with maple syrup colored with food coloring. :(