this post was submitted on 03 Aug 2023
235 points (98.4% liked)

Technology


Content moderators who worked on ChatGPT say they were traumatized by reviewing graphic content: 'It has destroyed me completely.'::Moderators told The Guardian that the content they reviewed depicted graphic scenes of violence, child abuse, bestiality, murder, and sexual abuse.

top 37 comments
[–] AlmightySnoo@lemmy.world 53 points 1 year ago* (last edited 1 year ago) (3 children)

He said that many of these passages centered on sexual violence and that the work caused him to grow paranoid about those around him. He said this damaged his mental state and his relationship with his family.

Another former moderator, Alex Kairu, told the news outlet that what he saw on the job "destroyed me completely." He said that he became introverted and that his physical relationship with his wife deteriorated.

The moderators told The Guardian that the content up for review often depicted graphic scenes of violence, child abuse, bestiality, murder, and sexual abuse.

A Sama spokesperson told the news outlet that workers were paid from $1.46 to $3.74 an hour. Time previously reported that the data labelers were paid less than $2 an hour to review content for OpenAI.

Sam deserves to be sued to bankruptcy at this point.

[–] fluxion@lemmy.world 30 points 1 year ago

Meanwhile Elon Musk is playing with his nipples at the thought of acquiring ChatGPT and then firing all these moderators

[–] nxfsi@lemmy.world 2 points 1 year ago

They should be grateful; they're paid infinitely better than Reddit mods.

[–] Thorny_Thicket@sopuli.xyz -4 points 1 year ago* (last edited 1 year ago) (1 children)

How's this Sam's fault again?

[–] spiderman@ani.social 2 points 1 year ago (1 children)

Less pay, no mental health care for the moderators, and no better automated detection of NSFW and NSFL content (by bots, AI, or any other automated system) that would take most of it down automatically, so that only a very small amount of stuff reaches the moderators for checking.
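To be clear about that last point, here's a minimal sketch of the triage idea; the function, the thresholds, and the classifier are all hypothetical, just to illustrate the routing:

```python
# Hypothetical triage: auto-remove confident NSFL hits, auto-approve clearly
# safe content, and send only the uncertain middle band to human moderators.

def triage(item, classifier, remove_at=0.98, approve_at=0.02):
    """Route one piece of content based on an NSFL probability score.

    `classifier` is any callable returning a probability in [0, 1] that
    `item` is NSFL; the thresholds are illustrative, not tuned values.
    """
    score = classifier(item)
    if score >= remove_at:
        return "auto_remove"    # confident hit: act without a human
    if score <= approve_at:
        return "auto_approve"   # confident miss: skip human review
    return "human_review"       # only this uncertain slice reaches people

# Example: a dummy classifier that scores everything 0.0 routes to auto-approve.
assert triage("some post", lambda _: 0.0) == "auto_approve"
```

With decent calibration, the human_review band is a small slice of the total volume, which is the whole point: far fewer people exposed, and only to the genuinely ambiguous cases.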

[–] Thorny_Thicket@sopuli.xyz -2 points 1 year ago (1 children)

It's the job of a content moderator to look at this kind of stuff though. That's literally what you're being paid for.

[–] spiderman@ani.social 3 points 1 year ago (1 children)

Doesn't mean they should be paid so little and given no mental healthcare. They're humans too, and just as sensitive to disturbing stuff as the rest of us.

[–] Thorny_Thicket@sopuli.xyz -3 points 1 year ago* (last edited 1 year ago)

Sure, but these aren't random people grabbed off the street and forced to do content moderation, so I don't quite get why Sam needs to be sued into bankruptcy over this.

[–] QubaXR@lemmy.world 50 points 1 year ago* (last edited 1 year ago) (1 children)

It's the same story with pretty much any platform. While you may know YouTube as the tool where AI/software randomly bans innocent channels and demonetizes videos based on false positives, it actually has a small army of human moderators. These poor folks are constantly exposed to some of the most vile content humankind produces >! ::: beheadings, rape, torture, child abuse::: !< etc.

I once worked on a project aiming to help their mental well-being, but honestly, while a step in the right direction, I don't think it made much difference.

Edit: attempting to nest multiple spoiler formats

[–] fubo@lemmy.world 16 points 1 year ago* (last edited 1 year ago) (3 children)

>!beheadings, rape, torture, child abuse!<

That's not how spoiler markup works here.

It works this way instead:

::: spoiler yo yo
dirty stuff here
:::


Edited to add: Apparently there's not exactly consensus across different interfaces for what spoiler markup is supposed to be. Aaaargh!!
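For reference, here are both syntaxes written out (Reddit-style inline first, then the lemmy-ui container form); which one actually renders seems to depend entirely on the client:

```
>!spoiler text!<

::: spoiler tap to reveal
spoiler text
:::
```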

[–] steeev@midwest.social 11 points 1 year ago* (last edited 1 year ago) (1 children)

Neither of those appears formatted as a spoiler in Voyager (formerly WefWef).

Edit: But it appears properly in Memmy.

[–] derin@lemmy.beru.co 4 points 1 year ago (2 children)

I'm also on Voyager; I don't think it has spoiler support.

[–] steeev@midwest.social 7 points 1 year ago (1 children)
[–] spiderman@ani.social 1 points 1 year ago

It seems like the dev might not be focusing on that, and he might actually need help with it.

[–] azura@lemmy.world 2 points 1 year ago

Yup, also doesn't work on Mlem for me.

[–] Psythik@lemm.ee 10 points 1 year ago (1 children)

Your markup didn't work, either, lol

[–] terminhell@lemmy.dbzer0.com 8 points 1 year ago (1 children)

Does for me, but I'm using Jerboa.

[–] loom_in_essence@lemmy.world 0 points 1 year ago

Also worked for me on Connect

[–] QubaXR@lemmy.world 3 points 1 year ago (1 children)

Oh interesting. I just used the spoiler button while posting via Sync.

[–] 9point6@lemmy.world 5 points 1 year ago

Hmm, also on Sync and nothing got spoiler-censored.

[–] FlyingSquid@lemmy.world 45 points 1 year ago (1 children)

These would be the Kenyan moderators getting paid $2 an hour to go through that.

But Sam Altman will save the world for sure.

[–] TheBat@lemmy.world 3 points 1 year ago (1 children)

Why does he look like a ghoul?

[–] uriel238@lemmy.blahaj.zone 39 points 1 year ago* (last edited 1 year ago) (2 children)

This came up years ago when Facebook had its hearings about the difficulties of content moderation. Across hundreds of billions of pieces of content you're going to end up with millions of NSFL BLITs. Even if only 0.1% of those require a human to determine whether they're unsafe, that's still thousands of pieces (0.001 × a few million) and thousands of moderators who've had their day ruined.

So apparently the response so far has been to outsource the work to developing countries and ruin lives there.

[–] hglman@lemmy.ml 9 points 1 year ago

Like factory work before it.

[–] Pyr_Pressure@lemmy.ca 6 points 1 year ago (2 children)

Wonder if this would be something AI could eventually be trained to filter.

Hopefully it wouldn't make the AI hate humanity enough to evolve and destroy us though, thinking we are all perverts and sadists.

[–] inso@lemmy.sdf.org 2 points 1 year ago

There's actually a very good sci-fi story idea here.

[–] uriel238@lemmy.blahaj.zone 2 points 1 year ago (1 children)

I doubt it. Our own sense of disgust is protective, to keep us from getting poisoned or infected with a contagious pathogen, or in the case of seeing violence, to keep us safe from that very same threat.

Even if we instil AI with survival objectives it'll learn to avoid things that are dangerous to it, while still being able to operate, say, our sewage and waste disposal systems without having a visceral response to the material being processed.

That doesn't fully make us safe from AI deciding we're too degenerate to live. An interesting notion comes up in recent news of an Air Force general saying USAF AI is trained on Judeo-Christian values. And while that doesn't mean anything, I could see an AI-driven autonomous weapon (or a commander-AI that controlled and organized an immense army of murder-drones) being trained to assess humans based on their history of behaviors, destroying sinners or degenerates or perverts or whatever.

Given we humans are a vengeful lot (Hammurabi's code of an eye for an eye was meant to set an upper limit on retribution: claiming an eye rather than killing the offender's family over the incident), it would be very easy to set judge bots to err on the side of assuming guilt, necessitating punitive action.

AIs going renegade can always be attributed to poor programming.

I reckon we should work out how to feed the content into a brain in a jar, and then measure the disgust - might need a few brains to ensure there is a consensus.

[–] Default_Defect@midwest.social 37 points 1 year ago (1 children)

I survived a lot of this during my 4chan phase, can I have a job?

[–] atticus88th@lemmy.world 11 points 1 year ago

Yall youngins shoulda been there for the days of Stile.

[–] Prater@lemmy.world 29 points 1 year ago* (last edited 1 year ago) (1 children)

Aside from the pay in this case (which is obviously unacceptable), content moderation is just a terrible job in general because, people being people, horrible stuff will inevitably be uploaded to the internet. This is one job where I think it would actually be good to have AI take over, once it reaches that point.

[–] nxfsi@lemmy.world 2 points 1 year ago

But they do it for free!

[–] Zoldyck@lemmy.world 3 points 1 year ago (1 children)

And what are we doing about it?

[–] hglman@lemmy.ml 2 points 1 year ago
[–] nevemsenki@lemmy.world 0 points 1 year ago (1 children)

What, did OpenAI train their model on furry forums?

[–] moistclump@lemmy.world 4 points 1 year ago

I don't think it's about the AI; it's about the human inputs.