News
Welcome to the News community!
Rules:
1. Be civil
Attack the argument, not the person. No racism/sexism/bigotry. Argue in good faith only; this includes not accusing another user of being a bot or paid actor. Trolling is uncivil and is grounds for removal and/or a community ban. Do not respond to rule-breaking content; report it and move on.
2. All posts should contain a source (url) that is as reliable and unbiased as possible and must only contain one link.
Obvious right- or left-wing sources will be removed at the mods' discretion. We have an actively updated blocklist, which you can see here: https://lemmy.world/post/2246130. If you feel any website is missing, contact the mods. Supporting links can be added in comments or posted separately, but not in the post body.
3. No bots, spam or self-promotion.
Only approved bots, which follow the guidelines for bots set by the instance, are allowed.
4. Post titles should match the title of the source article.
Posts whose titles don't match the source won't be removed, but the autoMod will notify you; if your title misrepresents the original article, the post will be deleted. If the site changed its headline, the bot might still contact you. Just ignore it; we won't delete your post.
5. Only recent news is allowed.
Posts must be news from the most recent 30 days.
6. All posts must be news articles.
Opinion pieces, listicles, editorials, and celebrity gossip are not allowed. All posts will be judged on a case-by-case basis.
7. No duplicate posts.
If a source you used was already posted by someone else, the autoMod will leave a message. Please remove your post if the autoMod is correct. If the post that matches yours is very old, we refer you to rule 5.
8. Misinformation is prohibited.
Misinformation / propaganda is strictly prohibited. Any comment or post containing or linking to misinformation will be removed. If you feel that your post has been removed in error, credible sources must be provided.
9. No link shorteners.
The autoMod will contact you if a link shortener is detected; please delete your post if it is right.
10. Don't copy an entire article into your post body
For copyright reasons, you are not allowed to copy an entire article into your post body. This is an instance-wide rule that is strictly enforced in this community.
Well, the image generator had to be trained on something first in order to spit out child porn. While it may be that the training set was solely drawn/rendered images, we don't know that, and even if the output were in that style, it might very well be photorealistic images generated from real child porn and run through a filter.
How many corn dogs do you think were in the training data?
Wild corn dogs are an outright plague where I live. When I was younger, me and my buddies would lay snares to catch corn dogs. When we caught one, we'd roast it over a fire to make popcorn. Corn dog cutlets served with popcorn from the same corn dog is a popular meal, especially among the less fortunate, even though some of the affluent consider it the equivalent of eating rat meat. When me pa got me first rifle when I turned 14, I spent a few days just shooting corn dogs.
It didn't generate what we expect and know a corn dog to be.
Hence it missed, because it doesn't know what a "corn dog" is.
You have proven the point that it couldn't generate CSAM without some being present in the training data.
I hope you didn't seriously think the prompt for that image was "corn dog" because if your understanding of generative AI is on that level you probably should refrain from commenting on it.
Then if your question is "how many 'Photograph of a hybrid creature that is a cross between corn and a dog' images were in the training data?", I'd honestly say: I don't know.
And if you're honest, you'll say the same.
But you do know, because corn dogs as depicted in the picture do not exist, so there couldn't have been photos of them in the training data, yet it was still able to create one when asked.
This is because it doesn't need to have seen one before. It knows what corn looks like and it knows what a dog looks like, so when you ask it to combine the two, it will gladly do so.
Yeah, except photoshop and artists exist. And a quick google image search will find them. 🙄
And this proves that AI can't generate simulated CSAM without first having seen actual CSAM how, exactly?
To me, the takeaway here is that you can take a shitty two-minute Photoshop doodle and, by feeding it through AI, improve its quality by orders of magnitude.
I wasn't the one attempting to prove that. Though I think it's definitive.
You were attempting to prove it could generate things not in its data set, and I have disproved your theory.
To me, the takeaway is that you know less about AI than you claim. Much less. Because we have actual instances, and many of them, where CSAM is in the training data. Don't believe me?
Here's a link to it
I don't understand how you could possibly imagine that pic somehow proves your claim. You've made no effort to explain yourself. You just keep dodging my questions when I ask you to. A shitty photoshop of a "corn dog" has nothing to do with how the image I posted was created. It's a composite of corn and a dog.
Generative AI, just like a human, doesn't rely on having seen an exact example of every possible image or concept. During its training, it was exposed to huge amounts of data, learning patterns, styles, and the relationships between them. When asked to generate something new, it draws on this learned knowledge to create a new image that fits the request, even if that exact combination wasn't in its training data.
If the AI has been trained on actual CSAM, and especially if the output simulates real people, then that's a whole other discussion to be had. This is, however, not what we're talking about here.
If a human has never seen a dog before, they don't know what it is or what it looks like.
If it's the same as a human, it won't be able to draw one.
And you continue to evade the questions challenging your argument.
How was the first ever painting of a dragon created? You couldn't possibly draw something you've never seen before, right?
Once again you're showing the limits of AI. A dragon exists in fiction; it exists in the mind of someone drawing it. In AI there is no mind, so the concept cannot independently exist.
AI is not creating images in a vacuum. There is a person using it, and that person does have a mind. You could come up with a brand new mythical creature right now; let's call it AI-saurus. If you ask the AI to create a picture of AI-saurus, it wouldn't be able to do so because it has no idea what it looks like. However, what you could do is describe it to the AI, and it'll output something that more or less resembles what you had in mind. Whatever flaws you see in it, you can correct with a new, modified prompt, and you keep doing this until it produces something that matches the idea you had in mind. AI is like a police sketch artist: the outcome depends on how well you manage to describe the subject. The artist doesn't need to know what the subject looks like; they have a basic understanding of human facial anatomy, and you're filling in the blanks. This is what generative AI does as well.
The people creating pictures of underage kids with AI are not asking it to produce CSAM. It would most likely refuse to do so and may even report you. Instead, they're describing what they want the output to look like and arriving at the same end result by a different route.
You're right, it's not. It needs to know what things look like, which, once again, it's not going to without knowing what those things look like. Sorry dude, either CSAM is in the training data and it can do this, or it's not. But I'm pretty tired of this. Later, fool.
An AI that is trained on children and nude adults can infer what a nude child looks like without ever being trained specifically with those images.
Your argument is hypothetical. Real-world AI was trained on images of abused children.
https://cyber.fsi.stanford.edu/news/investigation-finds-ai-image-generation-models-trained-child-abuse
Only because real world AI was trained on the dataset of ALL PUBLIC IMAGES, dumbass
So you're admitting they are correct?
No, I'm admitting they're stupid for even bringing it up.
Unless their argument is that all AI should be illegal, in which case they're stupid in a different way.
Do you think regular child porn should be illegal? If so, why?
Generally it's because kids were harmed in the making of those images. Since we know that AI is using images of children being harmed to make these images, as the other posters have repeatedly sourced (but also, if you've looked up deepfakes, most deepfakes are an existing porn video with the face changed over top. They do this with CP as well, and must use CP videos to seed it, because the adult model would be too large)... why does AI get a pass for using children's bodies in this way? Why isn't it immoral when AI is used as a middleman to abuse kids?
As I keep saying, if this is your reasoning then all AI should be illegal. It only has CP in its training set incidentally, because the entire dataset of images on the internet contains some CP. It's not being specifically trained on CP images.
You failed to answer my questions in my previous comment.
Ok, if you insist... yes, CP should be illegal, since a child was harmed in its making. It can get a bit nuanced (for example, I don't like that it can be illegal for underage people to take pictures of their own bodies), but that's the gist of it.
That's not all of the questions I asked
What other questions? Sorry, I missed them.
I already addressed that. If you think all AI should be banned, that's fine. If you think only AI models creating fake CP should be banned, that's logically inconsistent.
You didn't answer that, no
That IS my answer, what the fuck kind of gaslighting shit is this
How does that answer
If you're arguing for all AI to be outlawed, I accept your argument as sound and logical.
Okay, I'll wait for you to answer them.
Fuck your gaslighting.
Lol, asking you to answer two questions (which you haven't done, and which we can clearly see in the comment chain have not been directly answered) is hard, I see. Probably because you know the answer to those questions will make you lose this argument.
Nice try attempting to project your own use of gaslighting onto me though.
You claim you answered with:
But that's not what I asked. You failed yet again to answer my questions. Why even type on here if you want to ignore and gaslight everyone? That kinda makes it seem like you personally care about AI CP not being labeled as immoral. Why is that? Gee, so many more questions you won't answer, which kinda shows what your answer and intent are anyway.
Blah blah
Thank you, I accept your loss and that you were disproven.
Yes, exactly. People who excuse this with "well, it was trained on all public images" are just admitting you're right and that there is a level of harm here, since real materials are used. Even if they weren't being used, or if it was just a cartoon, the morality is still shaky because of the role porn plays in advertising. We already have laws about advertising because it's so effective, including around cigarettes and prescriptions. Most porn, ESPECIALLY FREE PORN, is an ad to get you to buy other services. CP is not excluded from this rule; no one gets a free lunch, so to speak. These materials are made and hosted for a reason.
The role that CP plays in most countries is difficult. It is used for blackmail. It is also used to generate money for countries (intelligence groups around the world host illegal porn ostensibly "to catch a predator," but then why is it morally okay for them to distribute these images but no one else?). And it's used as advertising for actual human trafficking organizations. And similar organizations exist for snuff and gore btw. And ofc animals. And any combination of those 3. Or did you all forget about those monkey torture videos, or the orangutan who was being sex trafficked? Or Daisy's Destruction and Peter Scully?
So it's important not to allow these advertisers to combine their most famous monkey torture video with enough AI that they can say it's AI-generated, when it's really just an ad for their monkey torture productions. And even if NONE of the footage was from illegal or similar events and was 100% thought of by AI, it can still be used as an ad for these groups if they host it. Cartoons can be ads, of course.
Jesus Christ take your meds
Sweaty, I think you're the one who needs meds if you have to ad hominem valid points instead of actually refuting them.
Just say you don't get how it works.
Unless you're operating under "guilty until proven innocent", those are not reasons to accuse someone.