this post was submitted on 01 Aug 2023
677 points (99.6% liked)

Technology

58096 readers
2943 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related content.
  3. Be excellent to each another!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, to ask if your bot can be added please contact us.
  9. Check for duplicates before posting, duplicates may be removed

Approved Bots


founded 1 year ago
MODERATORS
 

An Asian MIT student asked AI to turn an image of her into a professional headshot. It made her white, with lighter skin and blue eyes. Rona Wang, a 24-year-old MIT student, was experimenting with the AI image creator Playground AI to create a professional LinkedIn photo.

[–] notacat@lemmynsfw.com 17 points 1 year ago (2 children)

You said yourself that you hate inherent bias, yet you attempt to justify the result by saying that if it's used again, it's just going to produce another white face.

That's the problem.

It's a racial bias baked into these AIs by their training data.

[–] thepineapplejumped@lemm.ee 23 points 1 year ago (1 children)

I doubt it's conscious racial bias; it's most likely that the training data is made up of mostly white people and is labeled poorly.
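The point about skewed training data can be shown with a toy sketch. This is not how any real image model works internally (a diffusion model is vastly more complex), but it illustrates the statistical effect: a model whose outputs track its training distribution will reproduce whatever imbalance that distribution has. All numbers here are made up for illustration.

```python
import random

# Hypothetical, deliberately skewed "training set": 90% of the
# labeled examples are white faces. Real datasets have exhibited
# similar (if less extreme) imbalances.
random.seed(0)
training_labels = ["white"] * 90 + ["asian"] * 5 + ["black"] * 5

def sample_face():
    # A stand-in for a generator that simply reproduces the
    # statistics of its training data.
    return random.choice(training_labels)

samples = [sample_face() for _ in range(1000)]
print(samples.count("white") / len(samples))  # roughly 0.9
```

No one had to consciously encode a preference here; the skew in the data alone is enough to bias the outputs.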

[–] notacat@lemmynsfw.com 9 points 1 year ago (2 children)

I wouldn’t say it was conscious bias either. I don’t think it was intentionally developed that way.

The fact remains, though, that whether conscious or unconscious, it’s potentially harmful to people of other races. Sure, it’s only an issue with image generation now. But what about when it’s used to identify criminals? Or to filter between potential job candidates?

The possibilities are virtually endless, but if we don’t start pointing out and addressing any type of bias, it’s only going to get worse.
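The hiring concern is not hypothetical in principle: in US employment law, selection tools are commonly audited with the "four-fifths rule" from the EEOC's Uniform Guidelines, which flags possible disparate impact when one group's selection rate falls below 80% of another's. A minimal sketch of that check, with made-up group names and counts:

```python
# Hypothetical audit of an automated screening tool.
# All counts below are invented for illustration.
selected = {"group_a": 50, "group_b": 20}
applicants = {"group_a": 100, "group_b": 100}

# Selection rate per group, then the ratio of worst to best.
rates = {g: selected[g] / applicants[g] for g in applicants}
ratio = min(rates.values()) / max(rates.values())

# Under the four-fifths rule, a ratio below 0.8 flags possible
# disparate impact. Here 0.2 / 0.5 = 0.4, well below the bar.
print(ratio)  # 0.4
```

A biased model feeding a pipeline like this is exactly how an upstream data problem becomes a downstream fairness problem.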

[–] wmassingham@lemmy.world 16 points 1 year ago (1 children)

What about when it’s used to identify criminals? When it’s used to filter between potential job candidates?

Simple. It should not fucking be used for those things.

[–] notacat@lemmynsfw.com 7 points 1 year ago

I absolutely agree, but neither you nor I can stop people from using it for those purposes.

[–] altima_neo@lemmy.zip 5 points 1 year ago

I feel like you're overestimating the capabilities of current AI image generation, and presenting problems that don't exist.

[–] Blaidd@lemmy.world 14 points 1 year ago

They aren't justifying anything; they literally said it was about the training data.