It’s a system designed to generate natural-sounding language, not to provide factual information. Complaining that it sometimes gets facts wrong is like saying a calculator is “stupid” because it can’t write text. How could it? That was never what it was built for. You’re expecting general intelligence from a narrowly intelligent system. That’s not a failure on the LLM’s part - it’s a failure of your expectations.
Perspectivist
I don’t think you even know what you’re talking about.
You can define intelligence however you like, but if you come into a discussion using your own private definitions, all you get is people talking past each other and thinking they’re disagreeing when they’re not. Terms like this have a technical meaning for a reason. Sure, you can simplify things in a one-on-one conversation with someone who doesn’t know the jargon - but dragging those made-up definitions into an online discussion just muddies the water.
The correct term here is “AI,” and it doesn’t somehow skip over the word “artificial.” What exactly do you think AI stands for? The fact that normies don’t understand what AI actually means and assume it implies general intelligence doesn’t suddenly make LLMs “not AI” - it just means normies don’t know what they’re talking about either.
And for the record, the term is Artificial General Intelligence (AGI), not GAI.
There are plenty of similarities between the output of the human brain and that of LLMs, but overall they're very different. Unlike LLMs, the human brain is generally intelligent - it can adapt to a huge variety of cognitive tasks. LLMs, on the other hand, can do only one thing: generate language. It's tempting to anthropomorphize systems like ChatGPT because of how competent they seem, but there's no actual thinking going on. It's just generating language based on patterns and probabilities.
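To make the "patterns and probabilities" point concrete, here's a deliberately simplified sketch - a toy model, nothing like a real LLM's internals, and the tokens and scores are made up for illustration. At every step the system just samples the next token from a probability distribution over its vocabulary; nothing in the loop checks whether the output is true.

```python
# Toy sketch of next-token sampling (illustrative only, not a real model).
# A language model scores every candidate token, turns the scores into
# probabilities, and samples one. Repeat, and you get fluent-sounding text
# with no notion of "true" or "false" anywhere in the process.
import math
import random

def sample_next_token(scores: dict[str, float]) -> str:
    # Softmax: convert raw scores into a probability distribution.
    exps = {tok: math.exp(s) for tok, s in scores.items()}
    total = sum(exps.values())
    probs = {tok: e / total for tok, e in exps.items()}
    # Sample proportionally to probability - likely continuations win most
    # of the time, but nothing here verifies factual correctness.
    return random.choices(list(probs), weights=list(probs.values()))[0]

# Hypothetical scores for continuing "The capital of Australia is":
print(sample_next_token({"Canberra": 2.0, "Sydney": 1.5, "Melbourne": 0.5}))
```

Most of the time it says "Canberra", but sometimes it says "Sydney" - not because it's lying or broken, but because plausible-sounding continuations are the only thing it's selecting for.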
I have next to zero urge to “keep up with the news.” I’m under no obligation to know what’s going on in the world at all times. If something is important, I’ll hear about it from somewhere anyway - and if I don’t hear about it, it probably wasn’t that important to begin with.
I’d argue the “optimal” amount of news is whatever’s left after you actively take steps to avoid most of it. Unfiltered news consumption in today’s environment is almost certainly way, way too much.
Large language models aren’t designed to be knowledge machines - they’re designed to generate natural-sounding language, nothing more. The fact that they ever get things right is just a byproduct of their training data containing a lot of correct information. These systems aren’t generally intelligent, and people need to stop treating them as if they are. Complaining that an LLM gives out wrong information isn’t a failure of the model itself - it’s a mismatch of expectations.
This isn’t a failure of the model - it’s a misunderstanding of what the model is. ChatGPT is a tool, not a licensed practitioner. It has one capability: generating language. That sometimes produces correct information as a side effect of the data it was trained on, but there is no understanding, no professional qualification, and no judgment behind it.
If you ask it whether it's qualified to act as a therapist, it will tell you no. If you instruct it to role-play as one, however, it will do that - because following instructions is the thing it's designed to do. Complaining that a language model behaves like a language model, and then demanding more guardrails to stop people from using it badly, is just outsourcing common sense.
There’s also this odd fixation on Sam Altman as if he’s hand-crafting the bot’s behavior in real time. It’s much closer to an open-ended, organic system that reacts to input than to a curated service. What you get out of it depends entirely on what you put in.
Trust what? I’m simply pointing out that we don’t know whether he’s actually done anything illegal or not. A lot of people seem convinced that he did - which they couldn’t possibly be certain of - or they’re hoping he did, which is a pretty awful thing to hope for when you actually stop and think about the implications. And then there are those who don’t even care whether he did anything or not, they just want him convicted anyway - which is equally insane.
Also, being “on the list” is not the same thing as being a child rapist. We don’t even know what this list really is or why certain people are on it. Anyone connected to Epstein in any capacity would dread having that list released, regardless of the reason they’re on it, because the result would be total destruction of their reputation.
If I meant social media, I would've said so.
Your chatbot shouldn’t be pretending to offer professional services that require a license, Sam.
It generates natural-sounding language. That's all it's designed to do. The rest is up to the user - if a therapy session is what they ask for, then a therapy session is what they get. I don't think it should refuse this request either.
No one should expect anything they write online to stay private. It's the number one rule of the internet: don't say anything you wouldn't be willing to defend having said in front of a court.
It’s actually the opposite of a very specific definition - it’s an extremely broad one. “AI” is the parent category that contains all the different subcategories, from the chess opponent on an old Atari console all the way up to a hypothetical Artificial Superintelligence, even though those systems couldn’t be more different from one another.