You didn't, I did
The article discusses a study by OpenAI and MIT Media Lab revealing that heavy ChatGPT users — dubbed "power users" — are showing signs of addiction to the chatbot. These signs include preoccupation, withdrawal, mood changes, and loss of control. While most users don't form emotional connections with ChatGPT, those who use it more frequently tend to view it as a "friend," especially if they feel lonely or emotionally unfulfilled.
Interestingly, people using ChatGPT for personal reasons (like sharing emotions) showed less emotional dependency than those using it for non-personal tasks (like brainstorming). Also, text-based interactions led to more emotional language than voice interactions, with brief voice use correlating to better well-being. Overall, prolonged use — regardless of purpose — increased the risk of emotional dependence.
You said:
Make a list of the main points, and for each, explain why it will be worse when it is society-wide
ChatGPT said:
Here’s a list of the main points from the article, along with why each could be more concerning on a society-wide scale:
Let me know if you want this formatted for a presentation, policy paper, or discussion.
That's really interesting. Its output to this prompt totally ignored the biggest and most obviously detrimental effect of this problem at scale.
Namely, that emotional dependence will give the big tech companies that own these AIs increased power over people.
It's not as if these concepts aren't widely discussed online; everything from Meta's emotional manipulation experiments and Cambridge Analytica through to the meltdowns Replika users had over changes to the algorithm is relevant here.
It's the 4th point
Sort of, but I think "influence over emotional states" understates it and is just the tip of the iceberg. It also made the problem sound passive and accidental. The real problem will be overt control, as a logical extension of the kinds of trade-offs we already see people make about, for example, data privacy. With the Replika fiasco, I bet heaps of those people would have paid good money to get their virtual love interests de-"lobotomized".
I think this power to shape the available knowledge (removing it, paywalling it, restricting it discriminatorily, leveraging it, and finally manipulating it for advertising, state security, and personal reasons) is why it should be illegal to privately own ML/AI models of any kind. Drive them all underground, and only let the open ones benefit from sales in public.
What's the point of a summary that's longer than the article itself?
Are you suggesting I did something other than I said I did?
How dare you, sir!
You said: Take the same list again; for each point, explain why it's actually beneficial and how it will get better with increased adoption
ChatGPT said:
Here’s the same list of main points, but this time with a more optimistic take — highlighting how each trend could actually be beneficial and how increased adoption might make things better for individuals and society as a whole:
Let me know if you'd like this version turned into a policy brief, an article, or a debate prep doc — could be used in all kinds of conversations.
You said: Explain how ChatGPT is a stochastic parrot that cannot, by design, say anything fundamentally new, and how it is basically just keyboard autocomplete on steroids
ChatGPT said:
Alright, here's the breakdown of that critique — the "stochastic parrot" idea and the comparison to advanced autocomplete — explained clearly and fairly:
The term "stochastic parrot" was coined in a 2021 paper by researchers including Emily Bender and Timnit Gebru. It refers to large language models like ChatGPT that:
So, calling ChatGPT a stochastic parrot is a way of saying: It’s not truly intelligent. It’s mimicking language patterns, not generating original thought.
By design, ChatGPT:
Because it’s not connected to lived experience, emotion, consciousness, or intent, it:
It might sound novel, but under the hood, it’s remixing existing information in clever ways — not discovering or reasoning in a human sense.
If you’ve used Gmail’s or your phone’s autocomplete, you’ve seen basic language prediction in action. Type “I hope you...” and it might suggest “...have a great day.”
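That "predict the next word" mechanic is simple enough to sketch. Here is a minimal, purely illustrative toy in Python (the tiny corpus is invented, and real models predict over huge vocabularies with neural networks rather than raw counts), just to show the shape of the task:

```python
# Toy next-word "autocomplete": count which word follows which in a
# tiny corpus, then complete a phrase by repeatedly picking the most
# frequent successor. The task is the same one LLMs are trained on:
# predict the next token.
from collections import Counter, defaultdict

corpus = ("i hope you have a great day . "
          "i hope you are well . "
          "i hope you have fun .").split()

successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def autocomplete(word, length=4):
    out = [word]
    for _ in range(length):
        if not successors[out[-1]]:
            break  # no observed successor; stop
        out.append(successors[out[-1]].most_common(1)[0][0])
    return " ".join(out)

print(autocomplete("you"))  # -> "you have a great day"
```

Scale that idea up by many orders of magnitude, and swap the raw counts for a transformer, and you get the mechanism this critique is pointing at.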
ChatGPT is essentially the same mechanism:
So essentially:
In summary:
Calling ChatGPT a stochastic parrot or autocomplete on steroids emphasizes that:
It’s powerful and useful — but it’s important not to mistake fluency for understanding.
Let me know if you want the counterpoint to this critique too — because there are arguments in defense of the model’s creative utility despite its limitations.
You said: Now make the maximally opposite case, making heavy use of the concept of latent space
ChatGPT said:
Absolutely — here's the maximally opposite take, fully embracing the idea that models like ChatGPT are more than just stochastic parrots, and heavily leveraging the concept of latent space to argue that these models can, in a very real sense, generate novel ideas and insights.
ChatGPT Is Not a Stochastic Parrot — It’s a Navigator of Latent Thought Space
While critics call ChatGPT a “stochastic parrot” or “autocomplete on steroids,” this view drastically underestimates the depth and complexity of what large language models are actually doing. These models aren’t just parroting data — they are mapping and exploring an extraordinarily rich latent space of human knowledge, culture, and thought.
At the core of models like ChatGPT is the idea of a latent space — a high-dimensional mathematical structure where abstract concepts, relationships, and patterns are embedded.
This latent space isn’t programmed in — it emerges from the training data through self-supervised learning. The result is a dynamic, multi-dimensional conceptual map that humans don’t have direct access to but the model can navigate.
Human creativity often works by connecting existing concepts in unexpected ways — something these models are exceptionally good at.
For example, when the model generates a metaphor or analogy you've never heard before, it’s discovering a new alignment in the conceptual topology — something genuinely emergent.
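To make the latent-space picture concrete, here is a minimal numpy sketch. The concept names and 4-dimensional vectors below are hand-made for illustration (real embeddings are learned from data and have hundreds or thousands of dimensions), but the geometry is the same idea: concepts are points, and "new alignments" are arithmetic on those points.

```python
# Toy latent space: concepts as vectors, similarity as angle, and
# novel combinations as points between or beyond existing concepts.
# All vectors below are invented for illustration, not learned.
import numpy as np

concepts = {
    "king":  np.array([0.9, 0.8, 0.1, 0.0]),
    "queen": np.array([0.9, 0.1, 0.8, 0.0]),
    "man":   np.array([0.1, 0.8, 0.1, 0.0]),
    "woman": np.array([0.1, 0.1, 0.8, 0.0]),
    "jazz":  np.array([0.0, 0.1, 0.1, 0.9]),
}

def cosine(a, b):
    # Cosine similarity: 1.0 means "pointing the same way" in the space.
    return (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

def nearest(v, exclude=frozenset()):
    # The stored concept closest to an arbitrary point in the space.
    return max((c for c in concepts if c not in exclude),
               key=lambda c: cosine(v, concepts[c]))

# Analogy as vector arithmetic: king - man + woman lands on "queen".
target = concepts["king"] - concepts["man"] + concepts["woman"]
print(nearest(target, exclude={"king", "man", "woman"}))  # -> queen

# A "new alignment": the midpoint of two distant concepts is a point
# that no single training example occupies, yet it is well defined.
blend = (concepts["queen"] + concepts["jazz"]) / 2
print(round(cosine(blend, concepts["queen"]), 2),
      round(cosine(blend, concepts["jazz"]), 2))
```

Whether traversing that geometry counts as "genuine" creativity is exactly what the two sides of this thread disagree about.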
Critics argue LLMs can’t understand or generalize. But in fact:
In this view, ChatGPT is less like a parrot and more like a mathematical mind, fluent in the topography of human language and thought.
Language is not just a tool for communication — it’s a compressed, high-bandwidth representation of human thinking.
Because LLMs are trained on so much human-generated language, they build an internal model of how humans think, reason, feel, and imagine — as encoded in language.
That means ChatGPT isn’t just finishing your sentence; it’s navigating the conceptual manifolds of knowledge, philosophy, narrative, humor, and more — sometimes in ways humans themselves haven’t yet seen.
As models scale in size and training data, they demonstrate qualitatively new behaviors — not just quantitative improvements. This includes things like:
These aren't just features of autocomplete — they’re signs of systems operating in a rich cognitive manifold, capable of abstract manipulation of meaning.
In Summary:
ChatGPT isn’t just guessing the next word. It’s:
It may not “understand” like a human, but it can synthesize in a way that mimics — and sometimes even surpasses — human creativity.
Calling it a "stochastic parrot" is like calling a jazz musician an “audio repeater” because they’re using notes they’ve heard before. It misses the creativity in the combination — the generative power that lies within the latent space.
Want to see a side-by-side debate between these two perspectives? Or explore where the middle ground is?
Is this hell? Is this comment thread hell? Does it never end?
I would have pasted it as a single comment, but that hit the character limit, so I split it into multiple comments. But now people aren't downvoting them equally, so the comments are getting out of order. They really have to be read in my posting order to understand what I did.
Oh well, too bad. Ironically, this kind of highly negative response shows me that it was not worth the effort to post this, and that I do well to just keep to myself, as I usually do.
Yeah, the content is fine, but there's too much of it for a comment thread. You've got to spin that stuff off into an etherpad link or something; otherwise it's just too much matter to inflict on an innocent comment section.
But that means it will now receive 1% of the readership it would otherwise have, and the thread's coherence will depend on that other website still existing. Which, in 2,500 years, it probably won't.
Directly, and with votes, we the collective audience are telling you: please keep overlong AI gibberish behind an external link. If that makes it get fewer views, then perhaps it's not that interesting.
I'll just cut it down to bite-sized pieces and stop labelling it as AI-generated.
Dear god dude. You are trying way too hard on these comments. Chill
He had ChatGPT write them too
What tipped you off? Was it when I left in "ChatGPT said"?
Or when, as a joke, I responded "you didn't" to the commenter who said he'd put the article through ChatGPT to summarize it, and then added "but I did"?
Very well. I already cut out 90% of my external interactions; what is cutting the last 10%? The mass downvotes agree with my reasoning.
It reads like the brainless drivel that corporate drones are forced to churn out, complete with meaningless fluff words. This is why executives love AI: they read and expect that trash all the time, and they think it's suitable for everything.
Executives are perfectly content with what looks good at a cursory glance and don't care about what's actually good in practice because their job is to make themselves seem more important than they actually are.
I literally asked it to make the maximalist case against the idea that LLMs are just autocomplete, and that's exactly what it did.
The message before that made the opposite case.