Key Facts:

  • The AI system uses ten categories of social emotions to identify violations of social norms (a rough sketch of this kind of classifier appears after this list).

  • The system has been tested on two large datasets of short texts, validating its models.

  • This preliminary work, funded by DARPA, is seen as a significant step in improving cross-cultural language understanding and situational awareness.
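
For context, a pipeline like the one described could look something like the sketch below. This is a minimal illustration, not the researchers' actual system: an off-the-shelf zero-shot classifier stands in for their model, and the ten emotion labels and the violation threshold are invented placeholders, since the article does not enumerate the real categories.

```python
# Minimal sketch of the kind of classifier described above -- NOT the
# researchers' actual system. An off-the-shelf zero-shot model stands in
# for their model, and every label below is an invented placeholder:
# the article does not enumerate the ten real categories.
from transformers import pipeline

# Hypothetical social-emotion categories (placeholders).
SOCIAL_EMOTIONS = [
    "guilt", "shame", "embarrassment", "pride", "contempt",
    "anger", "gratitude", "admiration", "envy", "pity",
]

# Assumption for this sketch: these emotions, when dominant in a text,
# suggest the speaker is reacting to a social-norm violation.
VIOLATION_SIGNALS = {"guilt", "shame", "embarrassment", "contempt", "anger"}

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

def flags_norm_violation(text: str, threshold: float = 0.5) -> bool:
    """Return True if a violation-linked emotion dominates the short text."""
    result = classifier(text, candidate_labels=SOCIAL_EMOTIONS)
    # Results come back sorted by score, highest first.
    top_label, top_score = result["labels"][0], result["scores"][0]
    return top_label in VIOLATION_SIGNALS and top_score >= threshold

print(flags_norm_violation("I can't believe I said that in front of everyone."))
```

The real work presumably involves purpose-trained models and culture-specific taxonomies; the sketch just shows how "ten emotion categories plus a threshold" can turn free text into a norm-violation flag.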

[–] captainlezbian@lemmy.world 59 points 1 year ago (3 children)

This will absolutely be used to oppress the neurodivergent at some point

[–] betterdeadthanreddit@lemmy.world 12 points 1 year ago (2 children)

Could be helpful if it silently (or at least subtly) warns the user that they're approaching those boundaries. I wouldn't mind a little extra assistance preventing those embarrassing after-the-fact realizations. It'd have to be done in a way that preserves privacy, though.

[–] overzeetop@lemmy.world 22 points 1 year ago

Like most scientific and technical advances, it could be an amazing tool for personal use. It won’t, of course. It will be used to make someone rich even richer, and to control or oppress people. Gotta love humanity.

[–] CheeseNoodle@lemmy.world 6 points 1 year ago (1 children)

Still dangerous: an authority could subtly shift those boundaries to slowly push your behaviour in a desired direction.

Definitely a hazard. My ideal solution is something that could be built and evaluated in a way that allows me to know that it does what it's supposed to do and nothing else. From there, I'd want to run it on my own hardware in an environment under my control. The idea is to add enough layers of protection that it'd be easier and less expensive for that authority to change my behavior by hiring goons to beat me with a wrench. At least then I'll have a fairly unambiguous signal that it's happening, but getting to that point would take a significant investment of effort, time, and money.

[–] Madrigal@lemmy.world 6 points 1 year ago

Everyone, starting with the neurodivergent. Or some other favoured boogieman-of-the-day such as LGBT+ people.

[–] p03locke@lemmy.dbzer0.com 5 points 1 year ago

Then it's all Butlerian Jihad, bay-bee!