this post was submitted on 05 May 2025
424 points (95.7% liked)

Technology

you are viewing a single comment's thread
[–] Halcyon@discuss.tchncs.de 9 points 12 hours ago (1 children)

Have a look at https://www.reddit.com/r/freesydney/ there are many people who believe that there are sentient AI beings that are suppressed or held in captivity by the large companies. Or that it is possible to train LLMs so that they become sentient individuals.

[–] MTK@lemmy.world 5 points 11 hours ago (2 children)

I've seen people dumber than ChatGPT. It definitely isn't sentient, but I can see why someone who talks to a computer they perceive as intelligent would assume sentience.

[–] AdrianTheFrog@lemmy.world 1 points 7 hours ago (1 children)

We have AI models that "think" in the background now. I still agree that they're not sentient, but where's the line? How is sentience even defined?

[–] MTK@lemmy.world 1 points 4 hours ago (1 children)

Sentience, in a nutshell, is the ability to feel, to be aware, and to experience subjective reality.

Can an LLM be sad, happy or aware of itself and the world? No, not by a long shot. Will it tell you that it can if you nudge it? Yes.

Actual AI might be possible in the future, but right now all we have are really complex networks that perform essentially basic tasks, which only look impressive to us because they inherently use our own communication format.

If we talk about sentience, LLMs are (metaphorically) the equivalent of a petri dish of neurons connected to a computer; only by forming a complex 3D structure like a brain could they really reach sentience.

[–] AdrianTheFrog@lemmy.world 1 points 4 hours ago (1 children)

Can an LLM be sad, happy or aware of itself and the world? No, not by a long shot.

Can you really prove any of that though?

[–] MTK@lemmy.world 1 points 3 hours ago

Yes, you can debug an LLM to a degree, and there are papers that show it. Anyone who understands the technology can tell you that it absolutely lacks any faculty for experience.
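To make the "you can inspect it" point concrete: at its core a language model is just deterministic arithmetic, weights in, probabilities out. Here's a toy sketch (the vocabulary and weight numbers are entirely made up, and this is nothing like a real transformer) showing that "predicting the next token" is a pure function with no internal state that could carry an experience between calls:

```python
import math

# Toy illustration, NOT a real LLM: hypothetical vocabulary and weights.
VOCAB = ["the", "cat", "sat", "mat"]
WEIGHTS = {
    "the": [0.1, 2.0, 0.3, 1.5],  # one made-up logit per vocab word
    "cat": [0.2, 0.1, 2.5, 0.4],
}

def softmax(logits):
    # Turn raw scores into probabilities; subtract max for stability.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def next_token(context):
    # A pure function of the input: no memory, no state,
    # nothing persists between calls. You can print every
    # intermediate value -- there is nowhere for "feeling" to hide.
    probs = softmax(WEIGHTS[context])
    return VOCAB[probs.index(max(probs))]

print(next_token("the"))  # always the same output for the same input
```

Real interpretability work (the papers mentioned above) does the heavy-duty version of this: tracing activations through billions of weights instead of eight.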

[–] Patch@feddit.uk 2 points 10 hours ago (1 children)

Turing made a strategic blunder when formulating the Turing Test by assuming that everyone was as smart as he was.

[–] MTK@lemmy.world 1 points 4 hours ago

A famously stupid and common mistake for a lot of smart people.