An AI model trained on posts from Nazis. What can go wrong?
Privacy
A place to discuss privacy and freedom in the digital world.
Privacy has become a very important issue in modern society. With companies and governments constantly abusing their power, more and more people are waking up to the importance of digital privacy.
In this community everyone is welcome to post links and discuss topics related to privacy.
Some Rules
- Posting a link to a website containing tracking isn't great. If the contents of the website are behind a paywall, consider copying them into the post
- Don't promote proprietary software
- Try to keep things on topic
- If you have a question, please try searching for previous discussions; it may have already been answered
- Reposts are fine, but should have at least a couple of weeks in between so that the post can reach a new audience
- Be nice :)
Related communities
Chat rooms
- [Matrix/Element] (dead)
many thanks to @gary_host_laptop for the logo design :)
Headline: Cyber Hitler Bans Jews from the Internet!
that's just the latest Twitter news of the past few months
How useful would an AI model be if it were trained on content from a social media platform full of Nazis, Russian trolls, and bots?
Didn't we kinda see this happen already? I don't remember which product it was. Was it one of Microsoft's trial runs?
it was Tay. She went super fuckin racist.
Oh yeah, that's the one. Who could've seen this coming, right?
haha yeah. I can only imagine what an AI trained on twitter would be like. Way worse than Tay I think.
Worse in the insidiously-pushing-dark-agendas sense for sure, and an altogether much greater threat to humanity. It couldn't match Tay for straight-up insane evil (she was loudly supporting eugenics and more), but that's exactly what makes it more dangerous.
I guess...full of sh*t.
It's gonna be the dumbest AI in human history
Yea, but it would be good at right-wing slogans and racial slurs.
"Just public data, not DMs or anything private," huh?
Obviously Elon Musk treats everything we post on Twitter as his personal wealth... It's true though, since all our personal data is stored on his servers, and he can scrape whatever he wants.
It's time to move to decentralized, peer-to-peer social media, where we're no longer exploited by Twitter, Facebook, or whatever platform, and there are no central servers gathering our data without consent.
True, and anonymity is important as well. I'm supposed to recommend Mastodon, etc., but they do require an email address for identity verification.
Nostr https://www.nostrapps.com/ or WireMin http://wiremin.org/
Both are decentralized social media and don't require personal info to register.
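For what it's worth, the "no personal info" part is structural: a Nostr identity is just a keypair you generate locally, so there's no registration step or email check at all. A rough sketch (Python standard library only; deriving the shareable public key/npub from this would additionally need a secp256k1 library, which I'm leaving out):

```python
import secrets

# A Nostr "account" is nothing but a secp256k1 keypair generated on your own device.
# No email, phone number, or server-side signup is involved.
private_key = secrets.token_bytes(32)  # 32 random bytes used as the private key
print("hex private key:", private_key.hex())

# The public key that others follow is derived from this via secp256k1;
# clients typically show both bech32-encoded as nsec1... / npub1... (NIP-19).
```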
Well not really. You are very welcome to set up your own instance and own your data.
Some measure of accountability for getting an account needs to be kept, in my opinion. Pure anonymity lends itself to things like 4chan emerging, which is an interesting place to be sure, but not exactly conducive to a reasoned discussion. Pretty hard to send some pictures to Aunt Judy if everyone is just anon.
Joke's on them, I never made a single post.
Twitter was shit from the very beginning.
It's more like a non-privacy policy and should be called that.
I'm still calling it twitter
I call it Xitter (pronounced as Shitter)
Go ahead, train on me not writing a single thing and just retweeting exclusively Pokémon drawings from Japanese artists.
This is the best summary I could come up with:
As Ivanovs points out, X owner Elon Musk has ambitions to enter the AI market with another company, xAI.
This leads him to theorize that Musk likely intends to use X as a source of data for xAI — and perhaps Musk’s recent tweet encouraging journalists to write on X was even an attempt to generate more interesting and useful data to feed into the AI models.
In fact, Musk has previously stated that xAI would use “public tweets” to train its AI models, so this is not much of a leap.
Musk also filed suit against unknown entities for scraping Twitter data, which also may have been for the purpose of training artificial intelligence large language models.
Musk essentially confirmed the privacy policy change, responding to a post on X to clarify that the plan is to use “just public data, not DMs or anything private.”
X no longer responds to press requests with a poop emoji as it had following Musk’s takeover of the social network.
The original article contains 399 words, the summary contains 168 words. Saved 58%. I'm a bot and I'm open source!
Wow, they chose to semi-hijack a common acronym for explainable AI (XAI) for a new company that's likely unethical. Why do companies do this, hijacking existing terms with benevolent meanings and then eventually dirtying them?
It's called marketing, and it's cancer
"xAI". The success of this AI will be measured whether it changes its name or not.
Great idea. Training an AI model on Twitter totally didn't go terribly wrong the last time it was tried. /s
If the data is public, can't anyone use it to train anyway? (besides rate limits to get the actual data, of course)
It's going to end up similar to when an AI was trained on 4chan: mega racist and homophobic, but also hypersensitive, because it's Twitter (not gonna call it the new name)
I see this as a challenge to the fediverse? Our platforms are open and amenable to being used for AI training. Mastodon is full of human-made image descriptions, some of them quite detailed.
Does the fediverse want to do anything different? Closed / private / human-only spaces?
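On the "open and amenable" point, it really is a one-request job today. A rough sketch of pulling public posts, including the human-written alt text, through the standard Mastodon API (using the `requests` library; mastodon.social is only an example instance, and admins can restrict the unauthenticated public timeline):

```python
import requests

# Unauthenticated public timeline; any scraper (or AI trainer) can hit this.
resp = requests.get(
    "https://mastodon.social/api/v1/timelines/public",
    params={"limit": 20},
    timeout=10,
)
resp.raise_for_status()

for status in resp.json():
    author = status["account"]["acct"]
    html_body = status["content"]  # post body as HTML
    # Human-written image descriptions (alt text) ride along with each attachment.
    alt_texts = [
        m["description"]
        for m in status.get("media_attachments", [])
        if m.get("description")
    ]
    print(author, "-", len(alt_texts), "image description(s)")
```

So "doing anything different" would mostly come down to policy and access choices (closed registrations, authorized fetch, robots.txt, licensing), not technical obscurity.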