this post was submitted on 01 Aug 2023
677 points (99.6% liked)

Technology


An Asian MIT student asked AI to turn an image of her into a professional headshot. It made her white with lighter skin and blue eyes.

Rona Wang, a 24-year-old MIT student, was experimenting with the AI image creator Playground AI to create a professional LinkedIn photo.

[–] ExclamatoryProdundity@lemmy.world 257 points 1 year ago (8 children)

Look, I hate racism and inherent bias toward white people, but this is just ignorance of the tech. Willfully or otherwise, it’s still misleading clickbait. Upload a picture of an anonymous white chick and ask the same thing; it’s going to make a similar image of another white chick. To get it to reliably recreate your facial features, it needs to be trained on your face. It works for celebrities for this reason, not for a random “Asian MIT student”. This kind of shit sets us back and makes us look reactionary.
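The technical claim here can be illustrated with a toy sketch (the attribute names and frequencies below are made up for illustration, not real model internals): a generative model with no conditioning on the subject's identity effectively samples from whatever priors its training set baked in, so the input photo barely constrains the output.

```python
import random

# Hypothetical training-set attribute frequencies (illustrative numbers only).
TRAINING_PRIOR = {"white": 0.70, "asian": 0.10, "black": 0.10, "other": 0.10}

def generate_headshot(seed: int) -> str:
    """Sample an output attribute from the training prior.

    Without being trained (fine-tuned) on the subject's face, the model
    has nothing to lock onto their identity with, so outputs follow the
    training distribution rather than the input photo.
    """
    rng = random.Random(seed)
    attrs = list(TRAINING_PRIOR)
    weights = list(TRAINING_PRIOR.values())
    return rng.choices(attrs, weights=weights, k=1)[0]

# Generate 1000 "headshots": most resemble the majority of the training data,
# regardless of who was in the uploaded picture.
outputs = [generate_headshot(s) for s in range(1000)]
print(outputs.count("white") / len(outputs))
```

The point of the sketch is only that the output distribution mirrors the training distribution; it says nothing about any particular product's dataset.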

[–] AbouBenAdhem@lemmy.world 165 points 1 year ago* (last edited 1 year ago) (4 children)

It’s less a reflection on the tech, and more a reflection on the culture that generated the content that trained the tech.

Wang told The Globe that she was worried about the consequences in a more serious situation, like if a company used AI to select the most "professional" candidate for the job and it picked white-looking people.

This is a real potential issue, not just “clickbait”.

[–] HumbertTetere@feddit.de 36 points 1 year ago (2 children)

If companies pick the most “professional” applicant by their photo, that is a reason for concern, but it has little to do with the AI’s image training data.

[–] altima_neo@lemmy.zip 5 points 1 year ago

Especially ones that are still heavily in development

[–] luthis@lemmy.nz 30 points 1 year ago (2 children)

A company using a photo to choose a candidate is really concerning, regardless of whether they use AI to do it.

[–] AbouBenAdhem@lemmy.world 21 points 1 year ago* (last edited 1 year ago) (2 children)

Some people (especially in business) seem to think that adding AI to a workflow will make obviously bad ideas somehow magically work. Dispelling that notion is why articles like this are important.

(Actually, I suspect they know they’re still bad ideas, but delegating the decisions to an AI lets the humans involved avoid personal blame.)

[–] squaresinger@feddit.de 6 points 1 year ago

It's a massive issue that many people (especially in business) have this "the AI has spoken" bias.

It's similar to how they implement whatever the consultant says, whether or not it actually makes sense: they just blindly follow what the AI says.

[–] Water1053@lemmy.world 5 points 1 year ago

Businesses will continue to use bandages rather than fix their root issue. This will always be the case.

I work in factory automation and almost every camera/vision system we've installed has been a bandage of some sort because they think it will magically fix their production issues.

We've had a sales rep ask if our cameras use AI, too. 😵‍💫

[–] SuddenDownpour@lemmy.world 3 points 1 year ago

Hiring practices are broken at their very basics. The vast majority of businesses consistently discriminate against people who deviate from the norm in presentation, even if the candidate meets the technical requirements or would otherwise be productive, which results in millions of people who are capable of contributing to society being pushed aside.

[–] JeffCraig@citizensgaming.com 9 points 1 year ago* (last edited 1 year ago)

Again, that's not really the case.

I have Asian friends who have used these tools and generated headshots that were fine. Just because this one Asian student used a model that wasn't trained for her demographic doesn't make it a reflection of anything other than the fact that she doesn't understand how ML models work.

The worst thing that happened when my friends used it were results with too many fingers or multiple sets of teeth 🤣

[–] drz@lemmy.ca 4 points 1 year ago (1 children)

No company would use ML to classify who's the most professional-looking candidate.

  1. Anyone with any ML experience at all knows how ridiculous this concept is. Who's going to go out there and create a dataset matching "professional-looking scores" to headshots?
  2. The amount of bad press and ridicule this would attract isn't worth it to any company.
[–] kbotc@lemmy.world 7 points 1 year ago

Companies already use resume scanners that have been found to bias against black sounding names. They’re designed to feedback loop successful candidates, and guess what shit the ML learned real quick?
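The feedback-loop point can be shown with a minimal simulation (all numbers and group labels here are invented for illustration): if past hiring decisions were biased, a scorer trained to imitate them inherits the bias even when the underlying skill distributions are identical.

```python
import random

# Toy simulation of a biased-screening feedback loop (hypothetical data).
random.seed(0)

def make_resume(group: str) -> dict:
    # Skill is drawn from the same distribution for both groups.
    return {"group": group, "skill": random.random()}

# Historical decisions: group "b" candidates were hired at half the rate
# of group "a" at equal skill -- the bias the model will learn.
history = []
for _ in range(5000):
    r = make_resume(random.choice(["a", "b"]))
    penalty = 0.5 if r["group"] == "b" else 1.0
    hired = random.random() < r["skill"] * penalty
    history.append((r, hired))

# "Training": estimate P(hired | group) from past outcomes. A naive scorer
# built on this signal reproduces the historical bias automatically.
def hire_rate(group: str) -> float:
    rows = [h for r, h in history if r["group"] == group]
    return sum(rows) / len(rows)

print(hire_rate("a"), hire_rate("b"))  # group "b" scores lower despite equal skill
```

Nothing in the "model" looks at skill directly, yet the learned scores still penalize one group, which is exactly what a feedback loop on past successful candidates does.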

[–] hardypart@feddit.de 23 points 1 year ago

It still perfectly and visibly demonstrates the big point of criticism in AI: the tendencies the training material exhibits.

[–] notacat@lemmynsfw.com 17 points 1 year ago (2 children)

You said yourself you hate inherent bias, yet you attempt to justify the result by saying that, if used again, it's just going to produce another white face.

that’s the problem

It’s a racial bias baked into these AIs based on their training models.

[–] thepineapplejumped@lemm.ee 23 points 1 year ago (1 children)

I doubt it is conscious racial bias; it's most likely that the training data is made up of mostly white people and labeled poorly.

[–] notacat@lemmynsfw.com 9 points 1 year ago (2 children)

I also wouldn’t say it was conscious bias either. I don’t think it’s intentionally developed in that way.

The fact still remains, though: whether conscious or unconscious, it's potentially harmful to people of other races. Sure, right now it's only an issue with image generation. What about when it's used to identify criminals? When it's used to filter between potential job candidates?

The possibilities are virtually endless, but if we don’t start pointing out and addressing any type of bias, it’s only going to get worse.

[–] wmassingham@lemmy.world 16 points 1 year ago (1 children)

What about when it’s used to identify criminals? When it’s used to filter between potential job candidates?

Simple. It should not fucking be used for those things.

[–] notacat@lemmynsfw.com 7 points 1 year ago

I absolutely agree but neither you nor I can stop people from using it for those purposes.

[–] altima_neo@lemmy.zip 5 points 1 year ago

I feel like you're overestimating the capabilities of current ai image generation. And also presenting problems that don't exist.

[–] Blaidd@lemmy.world 14 points 1 year ago

They aren't justifying anything, they literally said it was about the training data.

[–] heartlessevil@lemmy.one 11 points 1 year ago

This is like a demonstration of a lack of self-awareness.

[–] Buddahriffic@lemmy.world 10 points 1 year ago

The AI might associate lighter skin with white facial structure. That kind of correlation would need to be specifically accounted for, I'd think, because even with some examples of lighter-skinned Asians, the majority of photos of people with light skin will show white facial structure.

Plus, it's becoming more and more apparent that AIs just aren't that good at what they do in general at this point. Yes, they can produce some pretty interesting things, but those seem to be the exception rather than the norm. In hindsight, a lot of what impressed me about the results I've seen so far is simply that an algorithm produced them at all, when the algorithm itself isn't directly tied to the output but sits a few steps back from it.

I bet that in the instances where it does produce good results, it's still actually doing something simpler than what it looks like it's doing.

[–] Thorny_Thicket@sopuli.xyz 2 points 1 year ago

Almost like we're looking for things to get mad about.

Also, what are these 50 people downvoting you for? Too much nuance, I suppose.