this post was submitted on 08 Apr 2024
72 points (85.3% liked)
Technology
you are viewing a single comment's thread
I don't foresee it becoming "sentient" so much as being given a stupid amount of access and resources to figure out a problem by itself, and stupidly pursuing the maximization of that goal with zero context.
There's that darkly humorous hypothetical that an AI tasked with maximizing paperclip production would continue to do so, using every resource it could get hold of, and destroying any threat to further paperclip production!
So that, with data center expansion and water. Lol
See "paperclip maximizer" under "hypothetical examples" Here: https://en.m.wikipedia.org/wiki/Instrumental_convergence
Oh, this is happening today. The ultra-addictive social media thing is mostly machine learning algos being tuned to do exactly this, regardless of anything else.
EXACTLY. High-five!
That's what I worry about. Right now we can somewhat ignore social media, but if AI gets wedged into contracts with government/infrastructure and other unavoidable parts of daily life, I imagine that's where a plausible threat could come from.
I've no doubt such things are already in the works: AI-controlled traffic lights, for instance. Obviously the military and law enforcement are already giddy about it, of course.
Giving a stupid machine a seemingly simple goal to pursue and the wrong set of keys could lead to disastrous consequences, I think. We also have the whole "do AI cars protect the driver, or all human life even if that risks the driver?" debate.
"But it's trendy, it's the future! And there's so much venture capital involved, how lucrative!" Seems to be how major decisions are made these days.
I don't see it someday "waking up" and thinking "I feel like humans are unnecessary." It's scarier than that... it will see us as just another variable to control, and "maximize" us out of the picture.