this post was submitted on 05 May 2025
434 points (95.8% liked)

Technology

[–] perestroika@lemm.ee 21 points 1 day ago* (last edited 1 day ago) (3 children)

From the article (emphasis mine):

Having read his chat logs, she only found that the AI was “talking to him as if he is the next messiah.” The replies to her story were full of similar anecdotes about loved ones suddenly falling down rabbit holes of spiritual mania, supernatural delusion, and arcane prophecy — all of it fueled by AI. Some came to believe they had been chosen for a sacred mission of revelation, others that they had conjured true sentience from the software.

/.../

“It would tell him everything he said was beautiful, cosmic, groundbreaking,” she says.

From elsewhere:

Sycophancy in GPT-4o: What happened and what we’re doing about it

We have rolled back last week’s GPT‑4o update in ChatGPT so people are now using an earlier version with more balanced behavior. The update we removed was overly flattering or agreeable—often described as sycophantic.

I don't know which large language model these people used, but some language models exhibiting response patterns that people interpret as sycophantic (needlessly praising or encouraging the user) is nothing new. Neither is hallucinatory behaviour.

Apparently, people who are susceptible and already close to the edge may end up pushing themselves over it with AI assistance.

What I suspect: someone has trained their LLM on something like religious literature, fiction about religious experiences, or descriptions of religious experiences. If suitably prompted, the AI can re-enact such scenarios in text while adapting the experience, at least somewhat, to the user. To a person susceptible to religious illusions (and let's not deny it, people are susceptible to finding deep meaning and purpose on shallow evidence), an LLM can apparently play the role of an indoctrinating co-believer, indoctrinating prophet, or supportive follower.

[–] AdrianTheFrog@lemmy.world 3 points 10 hours ago

They train it on basically the whole internet. They try to filter it a bit, but apparently not well enough. It's not that they intentionally trained it on religious texts; they just didn't think to remove them from the training data.
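To illustrate why coarse filtering misses so much: a minimal, purely hypothetical sketch of keyword-based corpus filtering. The term list, threshold, and function names here are invented for illustration; real training-data pipelines use learned classifiers, deduplication, and quality scoring rather than a handful of keywords, which is exactly why topical content slips through.

```python
# Toy sketch of keyword-based training-data filtering.
# FLAGGED_TERMS and max_hits are hypothetical, not any lab's actual filter.

FLAGGED_TERMS = {"prophecy", "messiah", "revelation"}

def keep_document(text: str, max_hits: int = 2) -> bool:
    """Keep a document unless it contains more than `max_hits`
    flagged terms (case-insensitive substring match)."""
    lowered = text.lower()
    hits = sum(term in lowered for term in FLAGGED_TERMS)
    return hits <= max_hits

corpus = [
    "A review of new GPU architectures.",
    "The prophecy foretold a messiah whose revelation would come soon.",
    "An essay that mentions prophecy exactly once.",
]
filtered = [doc for doc in corpus if keep_document(doc)]
```

Note that the third document survives the filter despite mentioning a flagged term: threshold-based filters trade recall for precision, so plenty of on-topic text stays in the corpus.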

[–] morrowind@lemmy.ml 10 points 1 day ago

If you find yourself in weird corners of the internet, schizo-posters and "spiritual" people generate staggering amounts of text

[–] nomecks@lemmy.wtf 7 points 1 day ago (1 children)
[–] perestroika@lemm.ee 5 points 1 day ago

I think Elon was having the opposite kind of problem, with Grok not validating its users nearly enough, despite Elon instructing employees to make it so. :)