An LLM that behaves like a typical Redditor?
What possible use is that?
Air Canada offering a refund of tree fiddy.
You'll get your refund eventually, but first it will try to gaslight you into believing Air Canada is a woke mind virus, then call you an asshole and start stalking you.
What possible use is that?
I've noticed "has this sub gotten more right wing recently?" posts reaching the top post of the day in the last 6 months or so. r/norge and r/unitedkingdom being examples. You can automate bots that change a subreddit's consensus on certain topics by bot-spamming threads pertaining to those topics, especially in the first hour of a thread going up. I don't know if that's happening, or if it has more to do with the Reddit protest that saw mods abdicate their positions last June and new mods being responsible for the change... but it could also be a bit of both.
Reddit is a trove of user-built content under the guise of community. What Spez did was say "thanks for all the free work, suckers!", slap a price sticker on it, and laugh all the way to the bank.
~~And this is why I'm not active on any Internet community anymore.~~ Nevermind, I guess I just can't help myself...
"And this is why I'm not active on any Internet community anymore," you typed.
And that is another unintended example of why I purged all of my post history before migrating.
This is what killing third-party API access was really all about.
While API access was open, all Reddit content was effectively free; they needed to ban third-party apps so they could sell the accumulated content. I expect using the content to train AI also factors into it.
Considering some of the very wrong but heavily upvoted domain-specific knowledge I've seen on Reddit over the years, I'm not sure the training data is going to be useful for much beyond what every other model can already do.
The legal advice in /r/legaladvice was some of the worst garbage I've ever seen. I have zero doubt numerous people had bad outcomes, at best wasting money and time, at worst spending years in jail because of things that sub told them to say and do. Zero doubt.
That sub was mostly cops just repeating their own bad interpretation of the law. Terrible.
This is why I don't blame anyone for editing/deleting their post history on reddit.
The AI:
"IANAL so could you ELI5, so AITA?
THIS."
Anne Frankly, I did Nazi that coming.
Holy shit do I hate that comment
Considering how much of Reddit is already bots, I'm sure this will end fantastically.
"Reddit has given access to YOUR conversations and posts to AI companies.". FTFY
These were created by people, for people, and I will ALWAYS disagree that this data is Reddit's or any other platform's.
Don't forget that your direct messages aren't end-to-end encrypted on Reddit, so now AI will be trained on your craziest "private" conversations.
Reddit is all bots, porn, ads, and political shitposts. Good luck getting any useful training content out of that.
Maybe that's the point? Training the AI to produce the blabbering bullshit that's preferred on social media?
Of all the things to hate Reddit for, giving data to AI isn't something fediverse users can really criticize it for; making money from it, perhaps.
Remember: all data on federated platforms is available for free and is likely already being compiled into datasets. Don't be surprised if this post and its comments end up in GPT-5 or GPT-6 training data.
The problem isn't that AI is being trained on the data. The problem is that they locked down all third-party data access so they could monetize our content. On a federated platform, everyone gets equal access and can do whatever they want with it.
We sure can criticize them for that.
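To make the "equal access" point concrete: public content on a federated platform is readable by anyone over the open API, no licensing deal required. A minimal sketch in Python, assuming the standard Lemmy v3 `/post/list` endpoint (lemmy.ml is just an example instance; any public instance works the same way):

```python
# Fetch a handful of recent public posts from a Lemmy instance over its open HTTP API.
# No authentication or special agreement is needed for public content.
import requests

resp = requests.get(
    "https://lemmy.ml/api/v3/post/list",
    params={"sort": "New", "limit": 5},
    timeout=10,
)
resp.raise_for_status()

for item in resp.json()["posts"]:
    # Each entry wraps the post itself plus community/creator metadata;
    # "name" is the post title.
    print(item["post"]["name"])
```

An AI company compiling a dataset can run exactly the same request as a hobbyist, which is the point: nobody gets to put a price sticker on it.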
With Reddit's severe bot problem, it'll be like training on unfiltered sewage. Garbage in, garbage out.
Damn it. I haven't deleted my account because of how many people I've supported and helped there; I stopped using it a while ago. It seems I'll have to.
I wouldn't bother. They'll just mark all your stuff DELETED=1 and feed it to their AI anyway.
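For anyone wondering what "mark it DELETED=1" would mean in practice: a soft delete just flips a flag on the row instead of removing it, so the text stays queryable internally. A toy sketch with a hypothetical schema (illustrative only, not Reddit's actual database):

```python
# Toy illustration of a "soft delete": the row is never removed, only flagged.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE comments (id INTEGER PRIMARY KEY, body TEXT, deleted INTEGER DEFAULT 0)")
db.execute("INSERT INTO comments (body) VALUES ('comment I thought I scrubbed')")

# What the user-facing "delete" does in a soft-delete design:
db.execute("UPDATE comments SET deleted = 1 WHERE id = 1")

# What an internal export (say, a training-data dump) can still read,
# since nothing forces it to filter on the flag:
print(db.execute("SELECT body FROM comments").fetchall())
# -> [('comment I thought I scrubbed',)]
```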
Where's my cut?
You signed it all away the moment you scrolled down that EULA 😂
Can't wait for the day a major court declares EULAs universally nonbinding outside of the most common-sense terms. Even though I doubt it will ever happen.
"We can store and display your content and use stuff you publicly post as examples in advertisements for our platform" is pretty common sense.
"We can use the things you post to do complex data analytics to package and sell your identity to advertisers" is fucking sus.
"We can use the things you post to train ANN generative systems to build next-generation technologies to impersonate you and your peers" is simply nuts.
The idea that displaying an EULA with an "agree" button is informed consent is just preposterous. Even lawyers don't read them.
It will get trained on some comments and posts.
Let Reddit die. Join Lemmy or /kbin. https://join-lemmy.org/ https://kbin.pub/
In before poisoning your comments on Reddit turns into the new protest.
Good thing I scrubbed all of my posts and comments that I could. Fuck that site, straight up and down.
You really think they don't have your original comments stored?
It's literally been proven that they do. A guy here on Lemmy was a frequent poster on some tech-support subreddit. He used one of those account scrubbers and deleted his account. When he went back to look a few weeks later, all his comments were back.
Good. Maybe when it cogitates on the things I've written, it might start offering up some better ideas.
I wish there were a GPL-like license for content, stating that if you use the content to train generative AI, the model must be open source. Not sure that would be legally enforceable, though (due to fair use).
Why does it sound like a Reddit-trained AI will only get dumber?
That would explain why GPT is often so confidently incorrect.
Who's dumb enough to pay for that? Everyone else is just scraping it for free.