this post was submitted on 09 Jul 2025
4 points (75.0% liked)

top 4 comments

this is the funniest thing i've read this week, thank you OP

[–] m_f@discuss.online 2 points 3 days ago

The metaphor of “stochastic parrots” has become a rallying cry for those who seek to preserve the sanctity of human cognition against the encroachment of large language models. In this paper, we extend this metaphor to its logical conclusion: if language models are stochastic parrots, and humans learned language through statistical exposure to linguistic data, then humans too must be stochastic parrots. Through careful argumentation, we demonstrate why this is impossible—humans possess the mystical quality of “true understanding” while machines possess only “pseudo-understanding.” We introduce the Recursive Parrot Paradox (RPP), which states that any entity capable of recognizing stochastic parrots cannot itself be a stochastic parrot, unless it is, in which case it isn’t. Our analysis reveals that emergent abilities in language models are merely “pseudo-emergent,” unlike human abilities which are “authentically emergent” due to our possession of what we term “ontological privilege.” We conclude that no matter how persuasive, creative, or capable language models become, they remain sophisticated pattern matchers, while humans remain sophisticated pattern matchers with souls.

The paper is tongue-in-cheek, but it gets at an important point. Anyone saying "But LLMs are just ..." has to explain why that "..." doesn't also apply to humans. IMO a lot of people throwing around "stochastic parrots!" just want humans to be special, and work backwards from there.

[–] SanctimoniousApe@lemmings.world 2 points 3 days ago

I'm honestly not educated enough to get much out of the linked article, but just going by what you wrote I have to wonder how AI "hallucinations" compare to human imagination (or, perhaps more importantly, how well they can be made to).

[–] m_f@discuss.online 2 points 3 days ago

I saw a comment elsewhere describing a way to make the hallucinations useful:

I've found this to be one of the most useful ways to use (at least) GPT-4 for programming. Instead of telling it how an API works, I make it guess, maybe starting with some example code to which a feature needs to be added. Sometimes it comes up with a better approach than I had thought of. Then I change the API so that its code works.

Conversely, I sometimes present it with some existing code and ask it what it does. If it gets it wrong, that's a good sign my API is confusing, and its answer shows exactly how.
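
To make the first idea concrete, here's a minimal sketch in Python of the "implement what it guessed" workflow. Everything in it is hypothetical: `retry`, its `times`/`backoff` parameters, and `flaky_fetch` stand in for whatever interface the model happens to invent when asked to add retries to some example code.

```python
import functools
import time

def retry(times=3, backoff=2.0):
    # Decorator with the signature the model "hallucinated";
    # we adopt its guess as the real interface.
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            delay = 1.0
            for attempt in range(times):
                try:
                    return fn(*args, **kwargs)
                except Exception:
                    if attempt == times - 1:
                        raise  # out of attempts, surface the error
                    time.sleep(delay)
                    delay *= backoff  # exponential backoff
        return wrapper
    return decorator

@retry(times=3, backoff=2.0)
def flaky_fetch(url):
    # Stand-in for a call that sometimes fails.
    ...
```

The point isn't the decorator itself; it's that the model's guess doubles as a usability survey of one: whatever it reaches for is probably what a human would reach for too.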
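The reverse check can be scripted as well. Here's a sketch assuming the `openai` Python client (>= 1.0); the model name, prompt wording, and the `connect` snippet are illustrative, not from the comment above.

```python
# Ask the model, cold, what a piece of code does. A wrong answer is
# evidence the API is confusing.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

snippet = '''
def connect(host, retries=0):
    """Open a connection; retries=0 means retry forever."""
'''

resp = client.chat.completions.create(
    model="gpt-4",
    messages=[{
        "role": "user",
        "content": "With no other context, explain what this code does:\n"
                   + snippet,
    }],
)
print(resp.choices[0].message.content)

# If the model reads retries=0 as "never retry", that's a hint the
# parameter's semantics would trip up human readers too.
```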