this post was submitted on 16 Jan 2024
88 points (100.0% liked)

Technology

[–] GiuseppeAndTheYeti@midwest.social 62 points 9 months ago (2 children)

Because we have been pornifying Asian women on the internet for decades. Does the question posed in the title even need asking?

[–] Gaywallet@beehaw.org 31 points 9 months ago* (last edited 9 months ago) (2 children)

You're absolutely correct, yet ask someone who's very pro-AI and they might dismiss such claims as "needing better prompts". Also, many people may not be as tech-informed as you are, and bringing algorithmic bias to light can help them understand and navigate the world we now live in. Dismissing the article just because you already know the answer doesn't really encourage people to participate in the discussion.

[–] Even_Adder@lemmy.dbzer0.com 8 points 9 months ago (1 children)

It's really hard to get dark skin sometimes. A lot of the time it's not even just the model; LoRAs and Textual Inversions make the skin lighter again, so you have to try even harder. It's going to take conscious effort from people to tune models that are inclusive, and with the way media is biased right now, I feel like it's going to take a lot of effort.

[–] jarfil@beehaw.org 3 points 9 months ago (20 children)

"Inclusive models" would need to be larger.

Right now people seem to prefer smaller quantized models, with whatever set of even smaller LoRAs on top, that make them output what they want... and only include more generic elements in the base model.
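
As a rough illustration of that workflow, here is a minimal sketch using the Hugging Face diffusers library; the base checkpoint is a common public one, and the LoRA repo ID is a hypothetical placeholder, not a specific model:

```python
import torch
from diffusers import StableDiffusionPipeline

# Load a comparatively small base checkpoint in half precision.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Stack a style/character LoRA on top of the generic base.
# "some-user/some-style-lora" is a hypothetical placeholder repo ID.
pipe.load_lora_weights("some-user/some-style-lora")

image = pipe("portrait of a knight in full plate armor").images[0]
image.save("knight.png")
```

The division of labor is the point: generic concepts live in the shared base model, while the LoRA carries the specific style or subject, so inclusivity has to be trained into one layer or the other.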

[–] Admetus@sopuli.xyz 10 points 9 months ago

And every single Asian game and anime tends to go for skimpy or virtual softcore with its female characters. Rarely do you see a female character in full armor.

[–] jarfil@beehaw.org 38 points 9 months ago* (last edited 9 months ago) (1 children)

Wrong question. The right question would be:

Why is AI as used in Lensa's Magic Avatars App Pornifying Asian Women?

Ask Lensa to remove the "ugly" and similar negative prompts from their avatar-generating app, and let's see what comes out.

https://stable-diffusion-art.com/how-to-use-negative-prompts/#Universal_negative_prompt

For reference, check out how that same negative prompt turns a chubby-ish, poorly shaved average guy into a male porn star, or a valet into a rich daddy's boy.
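
For anyone who hasn't used negative prompts: here is a minimal sketch of how an app wires one in, using the Hugging Face diffusers library. The model ID and prompt strings are illustrative assumptions, not Lensa's actual setup:

```python
import torch
from diffusers import StableDiffusionPipeline

# Illustrative base model; Lensa's actual checkpoint is not public.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "portrait photo of a woman, detailed face"
# A "universal" beautifying negative prompt of the kind the linked
# article describes; dropping terms like "ugly" changes the output a lot.
negative = "ugly, deformed, disfigured, poorly drawn face, mutation, blurry"

image = pipe(prompt=prompt, negative_prompt=negative).images[0]
image.save("avatar.png")
```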

[–] smeg@feddit.uk 10 points 9 months ago

Can we please collectively get into the habit of editing these borderline-clickbait titles, or at least adding subtitles explaining the real article? This isn't Reddit, where you can't edit anything or add explanatory text!

[–] megopie@beehaw.org 35 points 9 months ago* (last edited 9 months ago) (1 children)

If I had to guess, they probably did a shit job labeling the training data or used pre-labeled images. Now, where in the world could they have found huge amounts of pictures of women on the internet with the specific label of "Asian"?

Almost like most of what determines the quality of the output is not "prompt engineering" but the back-end work of labeling the training data properly, and you're not actually saving much labor over more traditional methods; you're just making the labor more anonymous, easier to hide, and thus easier to exploit and devalue.

Almost like this shit is a massive farce, just like the "metaverse" and crypto, that will fail to be market viable and waste a shit ton of money that could have been spent on actually useful things.
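
To make the labeling point concrete: web-scale sets like LAION pair each image with whatever alt-text was scraped next to it, so the "labels" inherit the web's own biases. A made-up sketch of what such rows look like (the URLs and captions are invented for illustration):

```python
# Hypothetical LAION-style rows: the caption is just scraped alt-text,
# so biased or SEO-spam phrasing becomes the training signal.
rows = [
    {"url": "https://example.com/a.jpg",
     "caption": "sexy asian girl wallpaper hd free download"},
    {"url": "https://example.com/b.jpg",
     "caption": "professional headshot of a businesswoman"},
]

# A model trained on pairs like these absorbs the association baked into
# the first caption; no inference-time prompt fully undoes it.
for row in rows:
    print(row["url"], "->", row["caption"])
```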

[–] webghost0101@sopuli.xyz 8 points 9 months ago (1 children)

They did literally nothing and seem to use the default Stable Diffusion model, which is supposed to be a tech demo. It would have been easy to put "(((nude, nudity, naked, sexual, violence, gore)))" in the negative prompt.

[–] megopie@beehaw.org 7 points 9 months ago

The problem is that negative prompts can help, but when the training data is so heavily poisoned in one direction, stuff gets through.

[–] Buelldozer 32 points 9 months ago

Because the Internet is for porn. Always has been, always will be.

[–] Muffi@programming.dev 27 points 9 months ago (2 children)

Scroll through the trained models on civit.ai and you'll quickly get a feeling of the dystopian level of "prettifying" everything in the AI-generation world.

I also once searched for "brown" just to see if any models were trained to create non-white-skinned people, and was shocked when the results were filled with models trained on Millie Bobby Brown from Stranger Things. I don't even want to know what those models are used for.

[–] ExLisper@linux.community 20 points 9 months ago

dystopian level of “prettifying” everything in the AI-generation world.

So like all the ad campaigns, TV shows and movies in the real world?

[–] belated_frog_pants@beehaw.org 26 points 9 months ago (1 children)

Because white dudes fetishizing Asian women wrote the LLMs and pointed them at the training data.

[–] anachronist@midwest.social 12 points 9 months ago

I work in tech, and Asian guys tend to outnumber white guys in it, especially if you combine East Asian and South Asian.

[–] GilgameshCatBeard@lemmy.ca 26 points 9 months ago

Because simps.

Saved you a click.

[–] intensely_human@lemm.ee 20 points 9 months ago (1 children)

Are the images above supposed to depict “porn”? I’ve never seen porn like that.

[–] 1984 13 points 9 months ago* (last edited 9 months ago)

In 2024, the brainwashing of people is almost complete.

Sensuality is now porn. :)

[–] Omega_Haxors@lemmy.ml 20 points 9 months ago (2 children)

Stable Diffusion is little more than content laundering. It cannot create anything more than what you put in.

[–] lloram239@feddit.de 10 points 9 months ago (1 children)

Yawn, are we still blindly repeating this utter nonsense from a year ago?

[–] darkphotonstudio@beehaw.org 4 points 9 months ago (1 children)

You're so confidently incorrect about something you clearly don't know much about.

[–] anachronist@midwest.social 8 points 9 months ago

How is he wrong?

[–] millie@beehaw.org 16 points 9 months ago

I'm not exposed to a huge amount of media coming out of Asia, outside of a handful of Korean shows that Netflix has picked up, and anime. But if anime is any indicator, I'm not really surprised that the training data for Asian women leans toward overt sexualization. Even setting aside the whole misogynistic "fan service" thing, I don't feel like I see as much representation of women who defy traditional gender roles as in the last twenty or so years of Western media.

It certainly could be that anime is actually a huge outlier here, but if the training data is primarily from the English speaking web, it might be overrepresented anyway. But like, when it comes to weird AI image behaviors, it pays to think about the probable training data.

Like, Stable Diffusion seems to do a better job of rendering jewelry if you tell it to surround it with berries. Given the output, this seems to be due to Christmas-themed jewelry ads. They also tend to add a lot of bokeh for the same reason.

[–] onlinepersona@programming.dev 12 points 9 months ago (2 children)
[–] IHeartBadCode@kbin.social 9 points 9 months ago

Absolutely this. The reason AI defaults female characters into "female armor mode" is the same reason Excel autocompletes January, February, Maruary. Our spicy autocorrect overlords cannot extrapolate data in a direction their training has no knowledge of.

[–] scrubbles@poptalk.scrubbles.tech 5 points 9 months ago

You train on a bunch of Reddit crap, you're going to get neckbeard Reddit crap out. It'd look different if they'd only used art history books.

[–] Nacktmull@lemm.ee 12 points 9 months ago* (last edited 9 months ago)

Does AI not generally pornify women and girls independent of ethnicity?

[–] sculd@beehaw.org 12 points 9 months ago

Some of the replies that try to dismiss the issue, and the general lack of concern from moderators about aggressive replies from AI apologists (in this thread but also in other AI-related threads), are disheartening.

[–] RobotToaster@mander.xyz 10 points 9 months ago* (last edited 9 months ago) (1 children)

Because it's trained on the internet, and we all know what that's for.

https://www.youtube.com/watch?v=LTJvdGcb7Fs

[–] webghost0101@sopuli.xyz 5 points 9 months ago* (last edited 9 months ago)

While I agree there is a big issue with the biased and sexist training data, this entire article is about the Lensa app, which uses (I assume) the default Stable Diffusion model trained on the LAION-5B dataset.

Intentionally creating sexualized pictures is banned in their guidelines. And yet no one thought of writing a good negative prompt that negates any kind of nudity or eroticism? It still doesn't properly fix the training data, but at least people aren't unwillingly presented with porn of their own images.

Also, anyone can create a dataset and build a Stable Diffusion model, so why is Lensa relying on the default model, which is more like a quick-and-dirty tech demo? They had all the tools to do this right but decided not to use even the easy, lazy ones.
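
A sketch of that easy, lazy fix using diffusers defaults; the blocklist and flow are assumptions about what an app like Lensa could do, not what it actually does:

```python
import torch
from diffusers import StableDiffusionPipeline

# SD 1.5 ships with a safety checker enabled by default; an app has to
# actively pass safety_checker=None to turn it off.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# App-enforced negative prompt, per the suggestion above.
BLOCKLIST = "nude, nudity, naked, sexual, violence, gore"

def safe_avatar(user_prompt: str):
    out = pipe(prompt=user_prompt, negative_prompt=BLOCKLIST)
    # The safety checker flags (and blacks out) images it deems NSFW.
    if out.nsfw_content_detected and out.nsfw_content_detected[0]:
        raise ValueError("Flagged as NSFW; refuse or regenerate")
    return out.images[0]
```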

[–] Even_Adder@lemmy.dbzer0.com 5 points 9 months ago* (last edited 9 months ago)

If we're talking open-source models, it's because a lot of the people fine-tuning them are Asian and have that bias.

[–] intensely_human@lemm.ee 4 points 9 months ago

Because people are telling it to, I’d wager

[–] shellsharks@infosec.pub 2 points 9 months ago

Because AI is the literal worst.
