this post was submitted on 21 May 2024
509 points (95.4% liked)
Technology
And the Stable Diffusion team gets no backlash for allowing this in the first place?
Why are they not flagging these users immediately when they put in text prompts to generate this kind of thing?
You can run the SD model offline, so on what service would that user be flagged?
Not everything exists on the cloud (someone else's computer)
Because what prompts people enter on their own computer isn't their responsibility? Should pencil makers flag people writing bad words?
Stable Diffusion has been distancing themselves from this. The model that allows for this was leaked from a different company.
My main question is: how much CSAM was fed into the model for training so that it could recreate more?
I think it'd be worth investigating the training data used for the model.
This did happen a while back, with researchers finding thousands of hashes of CSAM images in LAION-2B. Still, IIRC it was something like a fraction of a fraction of 1%, and they weren't actually available in the dataset because they had already been removed from the internet.
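The matching the researchers did works roughly like this: each dataset entry's image is hashed and checked against a list of known-bad digests supplied by a clearinghouse, without the checker ever needing the images themselves. A minimal sketch (the function names and toy byte strings here are hypothetical, not from any real pipeline):

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Hex SHA-256 digest of raw image bytes."""
    return hashlib.sha256(data).hexdigest()

def filter_dataset(entries, known_bad_hashes):
    """Split dataset entries into (kept, flagged) by hash match.

    entries: iterable of (entry_id, image_bytes) pairs
    known_bad_hashes: set of hex digests from a hash list
    """
    kept, flagged = [], []
    for entry_id, image_bytes in entries:
        if sha256_hex(image_bytes) in known_bad_hashes:
            flagged.append(entry_id)  # match against known-bad list
        else:
            kept.append((entry_id, image_bytes))
    return kept, flagged

# Toy demonstration: made-up bytes stand in for images.
bad = {sha256_hex(b"bad-image")}
kept, flagged = filter_dataset(
    [("a", b"ok-image"), ("b", b"bad-image")], bad
)
```

Note that exact cryptographic hashing like this only catches byte-identical copies; real deployments use perceptual hashes (e.g. PhotoDNA) so that resized or re-encoded copies still match.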
You could still make AI CSAM even if you were 100% sure that none of the training images included it since that's what these models are made for - being able to combine concepts without needing to have seen them before. If you hold the AI's hand enough with prompt engineering, textual inversion and img2img you can get it to generate pretty much anything. That's the power and danger of these things.
That's not how any of this works