
It seems crazy to me, but I've seen this concept floated on several different posts. There seem to be a number of users here who think there is some way AI-generated CSAM will reduce real-life child victims.

Like the comments on this post here.

https://sh.itjust.works/post/6220815

I find this argument crazy. I don't even know where to begin describing how many ways this could go wrong.

My views (which are apparently not based in fact) are that AI CSAM is not really that different from "actual" CSAM. It will still cause harm when viewed, and it is still based on the further victimization of the children involved.

Further, the (ridiculous) idea that making it legal will somehow reduce the number of predators by giving them an outlet that doesn't involve real, living victims completely ignores the reality of how AI content is created.

Some have compared pedophilia and child sexual assault to a drug addiction, which is dubious at best, and pretty offensive imo.

Using drugs has no inherent victim. And it is not predatory.

I could go on, but I'm not an expert or a social worker of any kind.

Can anyone link me articles talking about this?

[-] pixxelkick@lemmy.world 35 points 9 months ago

Boy this sure seems like something that wouldn't be that hard to just... do a study on, publish a paper perhaps? Get peer reviewed?

It's always weird for me when people have super strong opinions on topics that you could just resolve by studying and doing science on.

"In my opinion, I think the square root of 7 outta be 3"

Well I mean, that's nice but you do know there's a way we can find out what the square root of seven is, right? We can just go look and see what the actual answer is and make an informed decision surrounding that. Then you don't need to have an "opinion" on the matter because it's been put to rest and now we can start talking about something more concrete and meaningful... like interpreting the results of our science and figuring out what they mean.

I'd much rather discuss the meaning of the outcomes of a study on, say, AI Generated CSAM's impact on proclivity in child predators, and hashing out if it really indicates an increase or decrease, perhaps flaws in the study, and what to do with the info.

As opposed to just gesturing and hand-waving about whether it would or wouldn't have an impact. It's pointless to argue about what color the sky oughta be if we can just, you know, open the window and go see what color the sky actually is...

[-] Nonameuser678@aussie.zone 16 points 9 months ago

I love your enthusiasm for research, but if only it were that easy. I'm a PhD researcher and my field is sexual violence. It's really not that easy to just go out and interview child sex offenders about their experiences of perpetration.

[-] skullgiver@popplesburger.hilciferous.nl 13 points 9 months ago* (last edited 7 months ago)

[This comment has been deleted by an automated system]

[-] PM_Your_Nudes_Please@lemmy.world 6 points 9 months ago* (last edited 9 months ago)

While I agree that studies would help, actually performing those studies has historically been very difficult. Because the first step to doing a study on pedophilia is actually finding a significant enough number of pedophiles who are willing and able to join the study. And that by itself is a tall order.

Then you ask these pedophiles (who are for some reason okay with admitting to the researchers that they are, in fact, pedophiles) to self-report their crimes. And you expect them to be honest? Any statistician will tell you that self-reported data is consistently the least reliable data, and that’s doubly unreliable when you’re basically asking them to give you a confession that could send them to federal prison.

Or maybe you try going the court records/police FOIA request route? Figure out which court cases deal with pedos, then figure out if AI images were part of the evidence? But that has issues of its own, because you’re specifically excluding all the pedos who haven’t offended or been caught; You’re only selecting the ones who have been taken to court, so your entire sample pool is biased. You’re also missing any pedos who have sealed records or sealed evidence, which is fairly common.

Maybe you go the anonymous route. Let people self report via a QR code or anonymous mail. But a single 4chan post could ruin your entire sample pool, and there’s nothing to stop bad actors from intentionally tainting your study. Because there are plenty of people who would jump at a chance to make pedos look even worse than they already do, to try and get AI CSAM banned.

The harsh reality is that studies haven’t been done because there simply isn’t a reliable way to gather data while controlling for bias. With pedophilia being taboo, any pedophiles will be dissuaded from participating. Because it means potentially outing yourself as a pedophile. And at that point, your best case scenario is having enough money to ghost your entire life.

[-] lwuy9v5@lemmy.world 16 points 9 months ago

That's so fucked up that anyone thinks that enablement is a genuine means of reduction here...

[-] Discoslugs@lemmy.world 9 points 9 months ago

Go check out how many downvotes I got on that post I linked.

[-] Moira_Mayhem@lemmy.world 1 points 6 months ago

It's so fucked up that people like you focus on the morality of the pervert and not the vulnerability of the victims.

The goal is reducing and eliminating the number of children exploited.

CSAM is a profitable industry for the disgusting people that operate it.

If you want them to stop exploiting children, then remove their market share with low-cost, no-harm AI alternatives.

None of you think of actual solutions; you just want a target that is socially acceptable to hate on.

[-] onelikeandidie@lemmy.world 7 points 9 months ago

I agree with you. I saw people on Twitter once talking about this. Pretty disgusting to even consider.

[-] Killing_Spark@feddit.de 7 points 9 months ago* (last edited 9 months ago)

I'm just gonna put this out here and hope not to end up on a list:

Let's do a thought experiment and be empathetic with the human being behind the predator. Ultimately they are sick, and they feel needs that cannot be met without doing something abhorrent. That is a pretty fucked-up situation to be in. Which is no excuse to become a predator! But understanding why people act the way they act is important to creating solutions.

Most theories about humans agree that sexual needs are pretty important for self-realization. For the pedophile this presents two choices: become a monster or never achieve self-realization. We have to accept that this dilemma is the root of the problem.

Before, there was only one option for a somewhat middle-way solution: video and image material which the consumer could rationalize as being not as bad. Note that this isn't my opinion; I agree with the popular view that this still harms children and needs to be illegal.

Now, for the first time, there is a chance to cut through this dilemma by introducing a third option: generated content. This still uses the existing CSAM as a basis. But so does every database that is used to find CSAM for prevention and policing. The actual pictures and videos aren't stored in the AI model and don't need to be stored after the model has been created. With that model, more or less infinite new content can be created, which imo harms the children significantly less directly. This is, imo, different from the actual CSAM material, because no one can tell who is and isn't in the base data.

Another benefit of this approach has to do with the reason why CSAM exists in the first place. AFAIK most of this material comes from situations where the child is already being abused. At some point the abuser recognizes that CSAM can get them monetary benefits and/or access to CSAM of other children. This is where I will draw a comparison to addiction, because it's kind of similar: people doing illegal stuff because they have needs they can't fulfill otherwise. If there were a place to get the "clean" stuff, far fewer people would go to the shady corner dealer.

In the end I think there is a utilitarian argument to be made here. Given the far-removed damage that generating CSAM via AI still deals to the actual victims, we could help people not become predators, help predators not reoffend, and most importantly prevent, or at least lessen, the amount of further real CSAM being created.

[-] Surdon@lemm.ee 10 points 9 months ago

Except there is a good bit of evidence to show that consuming porn actively changes how we behave in relation to sex. By creating CSAM with AI, you create the depiction of a child as a mere object for sexual gratification. That fosters a lack of empathy and an egocentric, self-gratifying viewpoint. I think that can be said of all porn, honestly. The more I learn about what porn does to our brains, the more problematic I find it.

[-] Killing_Spark@feddit.de 4 points 9 months ago* (last edited 9 months ago)

I agree with this.

The more I learn about what porn does to our brains, the more problematic I find it

And I agree with this especially. It turns out a brain that was/is there, at least in part, to get us to procreate isn't meant to have this itch scratched 24/7.

But to answer your concern, I will draw another comparison with addiction: giving addictive drugs out like candy isn't wise, just as it wouldn't be wise to give everyone access to generated CSAM. You'd need a control mechanism so that only people who need access get access. Admittedly this will deter a few people from getting their fix from the controlled instances, compared to completely free access. With drugs this seems to lead to a decrease in the amount of street-sold drugs, though, so I see no reason it wouldn't be true, at least to some extent, for CSAM.

[-] Surdon@lemm.ee 2 points 9 months ago

I'm an advocate of safe injection sites, so I will agree somewhat here. Safe injection sites work because they identify addicts and aggressively supply them with resources to counteract the need for the addiction in the first place, all while encouraging less and less use. This is an approach that could have merit for pedophiles, but some unique issues pop up with it as well: to consume a drug, the drug must enter the body somehow, where it is metabolized.

CSAM, on the other hand, is taken in simply by looking at it. There is no "gloves on" approach to generating or handling the content without absorbing it; the best that can be hoped for is to have it generated by someone completely 'immune' to it, which raises questions about how "sexy" they could make the content. If it doesn't "scratch the itch", the addicts will simply turn back to the real stuff.

There is a slim argument to be made that you could actually create MORE pedophiles through classical conditioning, by exposing non-pedophilic people to erotic content paired with what looks like children. You could of course have it produced and handled by recovering/in-treatment pedophiles, but that sounds like it defeats the point of limited access entirely and is therefore still bad, at least to the ones in charge of distribution.

Additionally, digital content isn't destroyed upon consumption like a drug, and you have a more minor but still real problem of content diversion, where content made for the program is spread to those not getting the help that was meant to be paired with it. This is an issue, of course, but could be rationalized as worth it so long as at least some pedophiles were being treated.

[-] Killing_Spark@feddit.de 3 points 9 months ago* (last edited 9 months ago)

Yes, there are a lot of open questions around this, especially about the who and how of generation, and tbh it makes me a bit uncomfortable to think about a system like this in detail, because it would have to include rating these materials on a "sexiness" scale, which feels revolting.

[-] Moira_Mayhem@lemmy.world 1 points 6 months ago

AI CSAM will not create new pedophiles, but it may keep existing pedophiles from encouraging a disgusting market of child exploiters.

[-] skullgiver@popplesburger.hilciferous.nl 6 points 9 months ago* (last edited 7 months ago)

[This comment has been deleted by an automated system]

[-] Killing_Spark@feddit.de 0 points 9 months ago

You make a very similar argument to @Surdon's, and my answer is the same (in short; my answer to the other comment is longer):

Yes, giving everyone access would be a bad idea. I parallel it to controlled substance access, which reduces black-market drug sales.

You do have some interesting details though:

Training a model on real CSAM is bad, because it adds the likeness of the original victims to the image model. However, you don’t need CSAM in your training set to generate it.

This has been mentioned a few times, mostly with the idea of mixing "normal" photos of children with adult porn to generate CSAM. Is that what you are suggesting too? And do you know if this actually works? I am not familiar with the extent to which generative AI is able to combine these sorts of concepts.

As far as I can tell, we have no good research in favour of or against allowing automated CSAM. I expect it’ll come out in a couple of years. I also expect the research will show that the net result is a reduction in harm. I then expect politicians to ignore that conclusion and try to ban it regardless because of moral outrage.

This is more or less my expectation too, but I wouldn't count on the research coming out in a few years. There isn't much incentive to do actual research on the topic afaik. There isn't much to be gained because of the probable reaction of the regulators, and much to lose with such a hot topic.

[-] skullgiver@popplesburger.hilciferous.nl 2 points 9 months ago* (last edited 7 months ago)

[This comment has been deleted by an automated system]

[-] Killing_Spark@feddit.de 2 points 9 months ago

It’s not even an idea, it’s how you get CSAM out of existing models

I didn't know this was a thing, tbh. I knew that you could get them to generate adult porn or combine faces with adult porn; I didn't know they could already create realistic CSAM. I assumed they used the original material to train one of the open models. Well, that's even more horrifying.

It’s possible the concept is never addressed, but I don’t think there’s any way to stop the spread of CSAM once you no longer need to exchange files through shady hosting services.

I didn't even think about that. Exchanging these models will be significantly less risky than exchanging the actual material. Images are being scanned by cloud storage providers, and apparently archives with weak passwords are too. But no one is going to execute an AI model just to see whether it can produce CSAM.

[-] skullgiver@popplesburger.hilciferous.nl 0 points 9 months ago* (last edited 7 months ago)

[This comment has been deleted by an automated system]

[-] Hanabie@sh.itjust.works 6 points 9 months ago

The way I see it, and I'm pretty sure this will get downvoted, is that pedophiles will always find new material on the net. Just like with actual, normal porn, people will put it out.

With AI-generated content, at least there's no actual child being abused, and it can feed the need for ever new material without causing harm to any real person.

I find the thought of kiddie porn abhorrent, and I think for every offender who actually assaults kids, there are probably a hundred who get off on porn. But this porn has to come from somewhere, and I'd rather it come from an AI.

What's the alternative, hunt down and eradicate every last closeted pedo on the planet? Unrealistic at best.

[-] AffineConnection@lemmy.world 5 points 9 months ago

How is this an unpopular opinion?

[-] Fal@yiffit.net 5 points 9 months ago

AI CSAM is not really that different from "actual" CSAM

How do you not see how fucking offensive this is? A drawing is not really different from a REAL LIFE KID being abused?

It will still cause harm when viewing

The same way killing someone in a video game will cause harm?

And is still based in the further victimization of the children involved.

The made up children? What the hell are you talking about?

Some have compared pedophilia and child sexual assault to a drug addiction

No one sane is saying actually abusing kids is like a drug addiction. But you're conflating pedophilia and assault. When it's said that pedophilia is like a drug addiction, it's non-offending pedophiles who are being discussed. Literally no one thinks assaulting kids is like a drug addiction. That's your own misunderstanding.

Can anyone link me articles talking about this?

About what, exactly? There's zero evidence that drawings or fantasies cause people to assault children.

[-] sour@kbin.social 0 points 9 months ago
[-] skullgiver@popplesburger.hilciferous.nl 6 points 9 months ago* (last edited 7 months ago)

[This comment has been deleted by an automated system]

[-] JackGreenEarth@lemm.ee 0 points 9 months ago

I don't get it. It seems many people want to condemn all forms of child porn, seemingly to avoid downvotes, because for some reason the internet community can't see that AI-generated images don't harm anyone.

[-] Moira_Mayhem@lemmy.world 2 points 6 months ago

Now that the hard right can't morally denounce gay people as abominations, they have moved on to trans people and pedos: people they're sure no one will defend.

Thankfully they're wrong about the trans community, but zero people are going to come forward to try to defend pedos.

So they have a perfect target, and have been hammering the propaganda posts hard for about 3 years now.

Some innocents have been caught in the backlash, like the stupid people who couldn't tell the difference between a pedo and a pediatrician before they set his car on fire.

Someone is going to die publicly because of this growing hatred, and everyone will just claim they deserved it.

[-] JackGreenEarth@lemm.ee 0 points 6 months ago

I'm not sure what you mean by 'defending' pedophiles. They have a right to exist, and to feel validated in their attraction (which they do not control), but no right to have sex with children.

[-] Moira_Mayhem@lemmy.world 1 points 6 months ago

So what happens when a non-offending, celibate pedophile, who has spent their life struggling against their urges, gets outed and killed having never touched a child?

Who will come forward and condemn the killers?

No one.

[-] JackGreenEarth@lemm.ee 2 points 6 months ago

I would certainly condemn the killers. But you're right, I feel a large segment of the online population wouldn't.

[-] Moira_Mayhem@lemmy.world 2 points 6 months ago

Back in uni I had a roommate who was a celibate pedophile: a great kid, a brilliant programmer, and always kind, with a good sense of humor.

And a chronic alcoholic since the age of 14 as a coping mechanism.

None of us ever even knew until, back in 2006, he went to the school therapist to try to learn better coping mechanisms than getting blackout drunk every day at 7pm sharp.

She deemed him a threat and contacted the FBI, because apparently patient confidentiality in the U.S. doesn't protect pedophiles. Since he had a niece he had never met (on purpose) on the other side of the continent, she felt validated in her actions.

They came and took him into custody. It wasn't an arrest, just a remand to mental healthcare for evaluation against his will.

The officers picking him up were pretty loud about the fact they were escorting a pedophile. Made some coarse jokes about it as they walked him out. Took his computer of course.

A week later he came back, broken AF. He called together me and two other people he considered friends, and laid out the whole situation.

He had struggled with his desires his entire life and went to monumental lengths to eliminate even the chance of contact. He never touched a kid, never used CSAM or even hentai (I can confirm that; the fact that his PC was clean of anything even remotely naughty was already a bit of a folklore legend around the dorm), and vowed he would maintain this lifestyle forever.

He tried same-age relationships, and some were okay, but none lasted more than six or seven months.

Of course the psych eval and situational examination cleared him of any suspicion, but the damage had already been done and his parents picked him up that afternoon. If it wasn't for campus security walking him out he would have been mobbed by the dozens of angry students that had heard the worst part of what happened.

The school even tried clearing his name later, but it only made his memory more of a laughing stock.

We kept in touch for a few months, mainly through Steam as we were both avid gamers.

Then, one day, he just stopped logging in. At the time I was too scared to call his family, so I just waited. Sixteen years later, he's still offline.

Peyton, I miss you, man.

P.S. Literally zero consequences for the therapist for ruining a bright kid's life.

[-] JackGreenEarth@lemm.ee 2 points 6 months ago

I'm so sorry, that's such a sad story.

[-] Moira_Mayhem@lemmy.world 2 points 6 months ago

I appreciate it. It makes me want to advocate more, but then I become another target. It's fucked up all around, and the only thing I can say is we need better and more secure mental healthcare in this country.

[-] sour@kbin.social -1 points 9 months ago

because it doesn’t happen if there isn’t evidence

._.

[-] Fal@yiffit.net 2 points 9 months ago

Yeah. People are way too hung up on there being evidence of stuff. It just FEELS right to you, right?

[-] pinkdrunkenelephants@lemmy.cafe 3 points 9 months ago

People like that are pedo apologists and the fact that they're not being banned from the major Lemmy instances tells us all we need to know about those worthless shitholes.

[-] Moira_Mayhem@lemmy.world 1 points 6 months ago* (last edited 6 months ago)

that AI CSAM is not really that different from "actual" CSAM

Ok, so I understand: instead of focusing on the important part, you know, the children being harmed and exploited, you instead focus on the morality of viewing pornography.

Very telling.

[-] ZILtoid1991@kbin.social 0 points 9 months ago

To those, who say "no actual children are involved":

What the fuck was the dataset trained on, then? Even regular art generators had the issue of "lolita porn" (not the drawing kind, but the "very softcore" kind with real kids!) ending up in their training material, and with current technology it's very difficult to remove it without redoing the whole dataset yet again.

At least with drawings, I can understand the point, as long as no one uses a model and it's easy to differentiate between real images and drawings (I've heard really bad things about those doing it in a "high art" style). Have I also told you how much of a disaster it would be if the line between real and fake CSAM were muddied? We already have moronic people arguing "what if someone matures faster than the others", like Yandev. We will have "what if someone gets jailed after thinking their stuff was just AI-generated".

[-] Chozo@kbin.social 10 points 9 months ago

Even regular art generators had the issue of "lolita porn" ending up in their training material

Source? I've never heard of this happening. I feel like it would be pretty difficult for material that's not easily found on clearnet (where AI scrapers are sourcing their training material from) to end up in the training dataset without being very intentional.

[-] ZILtoid1991@kbin.social 1 points 9 months ago

It was on Twitter, posted by an anti-AI group. Don't have the link anymore.

[-] xigoi@lemmy.sdf.org 5 points 9 months ago

What the fuck was the dataset trained on, then?

I'm pretty sure if you show an AI regular porn and regular pictures of children, it will be able to deduce what child porn looks like without any actual children being harmed.

[-] ZILtoid1991@kbin.social 3 points 9 months ago

Even in that scenario it would be fucking creepy, since actual kids are still involved.

[-] Killing_Spark@feddit.de 4 points 9 months ago

It being creepy and it doing harm are different things, right?

[-] Moira_Mayhem@lemmy.world 1 points 6 months ago* (last edited 6 months ago)

Sorry no, you are just plain wrong here when it comes to training data.

Zero public AI image generators used CSAM as training material.

[-] Meowoem@sh.itjust.works -2 points 9 months ago

Your statement amounts to "I don't know what I'm talking about, but I have strong opinions". That's understandable, but if we really care about harm reduction then it has to be evidence-based, science-backed policy.

I have no idea what the right thing to do is but I want whatever helps mitigate risk and harm.
