this post was submitted on 26 Jul 2023
146 points (100.0% liked)

Technology

top 22 comments
[–] fearout@kbin.social 34 points 1 year ago* (last edited 1 year ago)

Locking social norms at some predetermined stage is a great way to curb all progress. Like, slavery was a social norm at some point.

[–] Motavader@lemmy.world 19 points 1 year ago (1 children)

Yeah, I suggest using Signal to communicate with friends....

[–] moistclump@lemmy.world 1 point 1 year ago (1 children)

I haven’t read the article. Are you being sarcastic? Or is it a more secure option?

[–] ramble81@lemmy.world 15 points 1 year ago

It's probably the most secure, commonly available messaging platform right now. They keep a bare minimum of metadata on their servers: basically enough to link you on the platform. After that, everything is e2e encrypted and they can't tell authorities anything.

Other platforms sit on a sliding scale, from retaining to/from/timestamp metadata all the way up to storing full messages.
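To illustrate the server-side visibility difference this comment describes, here's a toy sketch. This is not Signal's actual protocol (which uses the Double Ratchet and X3DH key agreement); the XOR "cipher", names, and record layout are purely illustrative of the idea that an e2e server stores only opaque ciphertext plus minimal routing metadata:

```python
import os
from dataclasses import dataclass

def xor_stream(key: bytes, data: bytes) -> bytes:
    # Toy XOR "cipher" for illustration only -- NOT real cryptography.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

@dataclass
class ServerRecord:
    # Everything the server ever holds under an e2e design:
    # opaque bytes plus the minimum metadata needed for delivery.
    sender_id: str
    recipient_id: str
    ciphertext: bytes

def send(sender: str, recipient: str, plaintext: str, shared_key: bytes) -> ServerRecord:
    return ServerRecord(sender, recipient, xor_stream(shared_key, plaintext.encode()))

def receive(record: ServerRecord, shared_key: bytes) -> str:
    return xor_stream(shared_key, record.ciphertext).decode()

# In a real system the shared key comes from a key-agreement protocol,
# so the server never sees it and can't decrypt the ciphertext.
key = os.urandom(32)
rec = send("alice", "bob", "meet at noon", key)
print(receive(rec, key))  # only a key holder can recover the plaintext
```

The "sliding scale" in the comment is then a question of how much ends up in `ServerRecord`: a less private platform would also log timestamps, contact graphs, or the plaintext itself.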

[–] anewbeginning@lemmy.world 16 points 1 year ago (1 children)

How many advances were considered against social norms…?

[–] Pons_Aelius@kbin.social 14 points 1 year ago

I'm pretty sure it was all of them.

[–] bionicjoey@lemmy.ca 16 points 1 year ago (2 children)
[–] FaceDeer@kbin.social 4 points 1 year ago

Alternately, you could be getting a personal AI buddy who can whisper a warning in your ear when you're about to misread the room and do something that'll cause you a lot of trouble.

[–] Pons_Aelius@kbin.social 15 points 1 year ago (2 children)

If they used social media as training data, it will say everything is normal human behaviour...

[–] FlyingSquid@lemmy.world 13 points 1 year ago (1 children)

They also used ChatGPT-3 to decide what's normal.

[–] HikingVet@lemmy.sdf.org 12 points 1 year ago (1 children)
[–] Sabata11792@kbin.social 4 points 1 year ago

Sorry, as a large language model...

[–] PineapplePartisan@lemmy.world 4 points 1 year ago (1 children)

Wait until they train it on porn sites. The AI may decide seppuku is the only option.

[–] 0XiDE@lemmy.world 1 points 1 year ago

Is that where they all stand in a circle and jizz on the girl's face?

[–] jeena@jemmy.jeena.net 9 points 1 year ago (1 children)
[–] FaceDeer@kbin.social 3 points 1 year ago* (last edited 1 year ago)

But there was nothing wrong with the basic idea of the tech in Minority Report. It worked. They saved many lives by preventing imminent murders with it. The main problem in the movie was that they leapt straight from "your name came out of this machine" to "ten years dungeon. No trial."

Movies are designed to sell as many tickets as possible by presenting scenarios that provoke endorphins. They're not serious scenarios you should be making real-world decisions based off of.

[–] whenigrowup356@lemmy.world 6 points 1 year ago (1 children)

Don't worry guys, I'm sure there's a very good reason for this.

[–] Audbol@lemmy.world 3 points 1 year ago

It appears there is. They are using it to gauge an area's general feelings toward a US military presence (whether the local population feels they need help from the US or not), as a means of determining the best locations for setting up garrisons and bases during a conflict. Which makes sense: you don't want to choose an area that really doesn't want you there, as the locals would likely become an asset to the enemy and put your soldiers at risk.

I, for one, welcome our new robot overlords.

[–] krzschlss@kbin.social 4 points 1 year ago* (last edited 1 year ago)

We should violate anything the Pentagon considers to be a study. Especially when it wants to control social norms.

[–] hawkwind@lemmy.management 3 points 1 year ago

DAE feel like they woke up one day recently and “AI” suddenly has the answer to EVERY SINGLE PROBLEM EVER? Yet, nothing is getting noticeably better?

“AI” doesn’t have to work a dead-end job to feed its family, or turn to alcohol because it’s lonely and scared of being forgotten. Its training data is a curated version of the human experience based on the Internet!

It’s playing human instead of being human and ALL of its solutions will assume that’s “normal.”

Imagine a five-star general googling “should I attack this country?” That’s silly, right? Well, that’s what’s happening. It’s just being wrapped in a way that makes it look novel.

These are algorithms designed to mimic humans. When faced with any actual controversy they must be persuaded to answer in an “acceptable” and predetermined manner.

The golden rule.
