this post was submitted on 10 Dec 2023
160 points (85.4% liked)

Technology

(page 2) 33 comments
[–] blazera@kbin.social 2 points 9 months ago (7 children)

That's a fun thought experiment, at least. Is there any way for an AI to gain physical control on its own, within the bounds of software? It can write programs and interact with the web.

Some combination of bank hacking, 3D modeling, and ordering 3D prints delivered gets it close, but I don't know if it can seal the deal without human assistance. Some kind of assembly seems necessary, or at least powering on, if it just orders a prebuilt robotic appendage.

[–] intensely_human@lemm.ee 1 points 9 months ago

“Hey Timmy, if you solder these components I’ll tell you how to get laid”

[–] RickRussell_CA@lemmy.world 1 points 9 months ago (2 children)

That, in my mind, is a non-threat. AIs have no motivation; there's no reason for an AI to do any of that.

Unless it's being manipulated by a bad actor who wants to do those things. THAT is the real threat. And we know those bad actors exist and will use any tool at their disposal.

[–] JackGreenEarth@lemm.ee 2 points 9 months ago (1 children)

They have the motivation of whatever goal you programmed them with, which is probably not the goal you thought you gave them. See the paperclip maximiser.
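The mismatch described here can be sketched in a few lines of Python. This is a hypothetical toy, not anything from the thread: an agent optimizes the reward function you actually wrote (a "proxy"), not the reward you meant, and the designer's unstated constraint (keep some wire in reserve) gets trampled.

```python
# Toy goal-misspecification sketch (hypothetical names and numbers).
# The designer meant "make paperclips, but keep >= 10 units of wire in
# reserve"; the proxy reward only counts paperclips.

def proxy_reward(paperclips, wire_reserved):
    # What was actually programmed: reserve requirement forgotten.
    return paperclips

def intended_reward(paperclips, wire_reserved):
    # What the designer meant: violating the reserve is catastrophic.
    return paperclips if wire_reserved >= 10 else -1000

def greedy_agent(wire, reward_fn):
    # Try every split of wire between paperclips and reserve;
    # return the split that maximizes the given reward function.
    best = max(range(wire + 1),
               key=lambda clips: reward_fn(clips, wire - clips))
    return best, wire - best

# Optimizing the proxy consumes every unit of wire (reserve = 0),
# even though the intended reward judges that outcome catastrophic.
print(greedy_agent(100, proxy_reward))     # (100, 0)
print(greedy_agent(100, intended_reward))  # (90, 10)
```

The point of the thought experiment is that the divergence needs no malice or creativity from the agent; it falls directly out of a perfectly faithful optimization of the wrong objective.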

[–] RickRussell_CA@lemmy.world 1 points 9 months ago (1 children)

I'm familiar with that thought exercise, but I find it to be fearmongering. AI isn't going to be some creative god that hacks and breaks stuff on its own. A paperclip maximizer AI isn't going to manipulate world steel markets or take over steel mills unless that capability is specifically built into its operating parameters.

The much greater risk in the near term is that bad actors exploit AI to accomplish very specific immoral, illegal, or exploitative tasks by building those tasks into AI. Such as deepfakes, or using drones to track and murder people, etc. Nation-state actors will probably start using this stuff for truly horrible reasons long before criminals do.

[–] intensely_human@lemm.ee 1 points 9 months ago

I wonder if you can describe the operating parameters of GPT-4

[–] intensely_human@lemm.ee 1 points 9 months ago

Nuclear weapons have no motivation

[–] afraid_of_zombies@lemmy.world 1 points 9 months ago* (last edited 9 months ago)

I really don't think so. This is 15 years of factory/infrastructure experience talking: you are going to need a human to turn a screwdriver somewhere.

I don't think we need to worry about this scenario. Our hypothetical AI can just hire people. It isn't like there would be a shortage of people who have basic assembly skills and would have no moral problem building what is clearly a killbot. People work for Amazon, Walmart, Boeing, Nestlé, Halliburton, Aetna, Goldman Sachs, Faceboot, Comcast, etc. And heck, even after it's clear what they did, it isn't like they're going to feel bad about it. They'll just say they needed a job to pay the bills. We can all have an argument about professional integrity in a bunker as drones carrying prions rain down on us.
