Reddit's licensing deal means Google's AI can soon be trained on the best humanity has to offer — completely unhinged posts
(www.businessinsider.com)
Everyone is joking, but an AI specifically made to manipulate public discourse on social media is basically inevitable. It will either kill the internet as a source of human interaction or effectively warp the majority of public opinion to whatever the ruling class wants, even more than it already does.
Think of the range of uses that’ll get totally whitewashed and normalized
You laugh now... but it actually exists/existed
Jay-sus. Too real. I feel bad now.
I exported 12 years of my own Reddit comments before the API lockdown and I've been meaning to learn how to train an LLM to make comments imitating me. I want it to post on my own Lemmy instance just as a sort of fucked up narcissistic experiment.
If I can't beat the evil overlords I might as well join them.
There are two different ways of doing that:

1. Use a pretrained model.
Pros: relatively inexpensive or free, you can use it right now, and a pretrained model has a small amount of common sense already built in.
Cons: the platform (if applicable) has a lot of control, and there's one additional layer of indirection (playing a character rather than being the character).

2. Train your own.
Pros: much more control.
Cons: much more control, and expensive GPUs need to be bought or rented.
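For the pretrained route, most of the work is just data prep: turning your comment export into the training format a fine-tuning tool expects. Here's a minimal sketch, assuming your export is already a plain list of comment bodies; the "messages" field layout follows a common chat fine-tuning convention, but it's an assumption, so check the docs of whatever tool you actually use:

```python
import json

def comments_to_jsonl(comments, persona="my_reddit_self"):
    """Convert exported comment bodies into chat-style fine-tuning
    records, one JSON object per line (JSONL).

    The field names ("messages", "role", "content") mirror a common
    chat fine-tuning format, but are an assumption here -- verify
    against your provider's documentation before uploading.
    """
    lines = []
    for body in comments:
        body = body.strip()
        if not body:
            continue  # skip empty or whitespace-only exports
        record = {
            "messages": [
                {"role": "system",
                 "content": f"You write comments in the style of {persona}."},
                {"role": "assistant", "content": body},
            ]
        }
        lines.append(json.dumps(record))
    return "\n".join(lines)

# Example: two real comments and one blank line from the export
jsonl = comments_to_jsonl(["I might as well join them.", "   ",
                           "Too real. I feel bad now."])
```

Twelve years of comments should give you plenty of lines; the harder (unsolved here) part is filtering out the ones you'd rather the bot never learned from.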
For sure. It's currently possible to push discourse with hundreds of accounts running a coordinated narrative, but it's expensive and requires a lot of real people to be effective. With a suitably advanced AI, one person could do it at the push of a button.
My prediction: for the uninformed, public watering holes like Reddit.com will resemble broadcast cable, tiny islands of signal in a vast ocean of noise. For the rest: people will scatter to private and pseudo-private services (think Discord), resembling the fragmented 'web' of bulletin boards in the 1980s. The Fediverse as it exists today sits in between those two outcomes, but it needs a lot more anti-bot measures when it comes to onboarding and monitoring identities.
Overcoming this would require armies of moderators pushing back against noise, bots, intolerance, and more. Basically what everyone is doing now, but with many more people. It might even make sense to get some non-profit organizations off the ground that are trained and crowd-supported to do this kind of dirty work, full-time.
What's troubling is that this effectively rolls back the clock for public organization at scale. Like a kind of "jamming" for discourse powerful parties don't like. For instance, the kind of grassroots support that the Arab Spring had might not be possible anymore. The idea that this is either the entire point, or something that has manifested itself as a weak point in the web, is something we should all be concerned about.
Why do you think Reddit would remain a valuable source of humans talking to each other?
Niche communities, mostly. Anything with a tiny membership that's intimate and easily patrolled for interlopers. But outside that, no, it won't be much use beyond serving as a historical database from before everything blew up.
I think the bots will be hard to detect unless they make one of those bizarre AI statements. And with enough different usernames, there will be plenty that are never caught.
We are on a path to our own Butlerian Jihad. Anything digital will be regarded as false until proven otherwise by face-to-face contact with a person. And eventually we'll ban the internet and attempts to create general AI altogether.
I would directly support at least a ban on ad-driven for profit social media.
Nice try Mr ChatGPT