this post was submitted on 19 Apr 2024
147 points (88.5% liked)

Technology

[–] Thorny_Insight@lemm.ee 2 points 7 months ago (2 children)

Generative AI and LLMs are not what people mean when they talk about the dangers of AI. What we worry about doesn't exist yet.

[–] hikaru755@feddit.de 2 points 7 months ago (1 children)

I mean... It might be. Just depends on how much potential there still is to get models up to higher reasoning capabilities, and I don't think anyone really knows that yet

[–] Thorny_Insight@lemm.ee 3 points 7 months ago

Yeah, maybe. I just personally don't think LLMs are actually intelligent. They're capable of faking intelligence, but at the same time they make errors that clearly show they're basically just bluffing. I'd be more worried about an AI that knows fewer things but demonstrates a higher capability for logic and reasoning.

[–] funkless_eck@sh.itjust.works 2 points 7 months ago (1 children)

I don't think AI sentience as a danger is going to be an issue in our lifetimes. This January marked 103 years since the first well-known story featuring this trope (Karel Čapek's Rossumovi univerzální roboti).

We are a long way off from being able to replicate the perception, action, and unified agency of even basic organisms right now.

Therefore, all claims about the "dangers" of AI are really dangers of humans using the tool (akin to the dangers of driving a car vs. the danger of cars attacking their owners without human intervention), and thus are just marketing hyperbole.

in my opinion of course

[–] Thorny_Insight@lemm.ee 1 points 7 months ago (1 children)

Well, yeah, perhaps, but isn't that kind of like knowing an asteroid is heading towards Earth and feeling no urgency about it? There's a non-zero chance that we'll create AGI within the next couple of years. The chances may be low, but the consequences have the potential to literally end humanity - or worse.

[–] funkless_eck@sh.itjust.works 1 points 7 months ago

"non zero" isnt exactly convincing, to me. there is also a non-zero chance God exists.