this post was submitted on 05 Oct 2023
36 points (69.6% liked)

Unpopular Opinion


It seems crazy to me, but I've seen this concept floated on several different posts. There seem to be a number of users here who think there is some way AI-generated CSAM will reduce the number of real-life child victims.

Like the comments on this post here.

https://sh.itjust.works/post/6220815

I find this argument crazy. I don't even know where to begin describing how many ways this could go wrong.

My views (which are apparently not based in fact) are that AI CSAM is not really that different from "actual" CSAM. It still causes harm when viewed, and it is still based on the further victimization of the children involved.

Further, the (ridiculous) idea that making it legal will somehow reduce the number of predators by giving them an outlet that doesn't involve real, living victims completely ignores the reality of how AI content is created.

Some have compared pedophilia and child sexual assault to a drug addiction, which is dubious at best, and pretty offensive IMO.

Using drugs has no inherent victim. And it is not predatory.

I could go on, but I'm not an expert or a social worker of any kind.

Can anyone link me articles talking about this?

[–] ZILtoid1991@kbin.social 0 points 1 year ago (3 children)

To those, who say "no actual children are involved":

What the fuck was the dataset trained on, then? Even regular art generators had the issue of "lolita porn" (not the drawing kind, but the "very softcore" kind with real kids!) ending up in their training material, and with current technology, it's very difficult to remove it without redoing the whole dataset yet again.

At least with drawings, I can understand the point, as long as no one uses a real model and it's easy to differentiate between real images and drawings (I've heard really bad things about those doing it in a "high art" style). Have I also told you how much of a disaster it would be if the line between real and fake CSAM were muddied? We already have moronic people arguing "what if someone matures faster than the others", like Yandev. We will have "what if someone gets jailed after thinking their stuff was just AI-generated".

[–] Chozo@kbin.social 10 points 1 year ago (1 children)

Even regular art generators had the issue of "lolita porn" ending in their training material

Source? I've never heard of this happening. I feel like it would be pretty difficult for material that's not easily found on the clearnet (where AI scrapers source their training material) to end up in the training dataset without that being very intentional.

[–] ZILtoid1991@kbin.social 1 points 1 year ago

It was posted on Twitter by an anti-AI group. Don't have the link anymore.

[–] xigoi@lemmy.sdf.org 5 points 1 year ago (1 children)

What the fuck the dataset was trained on then?

I'm pretty sure that if you show an AI regular porn and regular pictures of children, it will be able to deduce what child porn looks like without any actual children being harmed.

[–] ZILtoid1991@kbin.social 3 points 1 year ago (1 children)

Even in that scenario, it would be fucking creepy, since actual kids are still involved.

[–] Killing_Spark@feddit.de 4 points 1 year ago

It being creepy and it doing harm are different things, right?

[–] Moira_Mayhem@lemmy.world 1 points 9 months ago* (last edited 9 months ago)

Sorry, no, you are just plain wrong here when it comes to training data.

Zero public AI image generators used CSAM as training material.