Perspectivist

joined 1 week ago
[–] Perspectivist@feddit.uk 0 points 2 hours ago

It’s actually the opposite of a very specific definition - it’s an extremely broad one. “AI” is the parent category that contains all the different subcategories, from the chess opponent on an old Atari console all the way up to a hypothetical Artificial Superintelligence, even though those systems couldn’t be more different from one another.

[–] Perspectivist@feddit.uk 0 points 2 hours ago

It’s a system designed to generate natural-sounding language, not to provide factual information. Complaining that it sometimes gets facts wrong is like saying a calculator is “stupid” because it can’t write text. How could it? That was never what it was built for. You’re expecting general intelligence from a narrowly intelligent system. That’s not a failure on the LLM’s part - it’s a failure of your expectations.

[–] Perspectivist@feddit.uk -2 points 2 hours ago

I don’t think you even know what you’re talking about.

You can define intelligence however you like, but if you come into a discussion using your own private definitions, all you get is people talking past each other and thinking they’re disagreeing when they’re not. Terms like this have a technical meaning for a reason. Sure, you can simplify things in a one-on-one conversation with someone who doesn’t know the jargon - but dragging those made-up definitions into an online discussion just muddies the water.

The correct term here is “AI,” and it doesn’t somehow skip over the word “artificial.” What exactly do you think AI stands for? The fact that normies don’t understand what AI actually means and assume it implies general intelligence doesn’t suddenly make LLMs “not AI” - it just means normies don’t know what they’re talking about either.

And for the record, the term is Artificial General Intelligence (AGI), not GAI.

[–] Perspectivist@feddit.uk 8 points 5 hours ago

There are plenty of similarities in the output of both the human brain and LLMs, but overall they’re very different. Unlike LLMs, the human brain is generally intelligent - it can adapt to a huge variety of cognitive tasks. LLMs, on the other hand, can only do one thing: generate language. It’s tempting to anthropomorphize systems like ChatGPT because of how competent they seem, but there’s no actual thinking going on. It’s just generating language based on patterns and probabilities.

[–] Perspectivist@feddit.uk 1 points 6 hours ago

I have next to zero urge to “keep up with the news.” I’m under no obligation to know what’s going on in the world at all times. If something is important, I’ll hear about it from somewhere anyway - and if I don’t hear about it, it probably wasn’t that important to begin with.

I’d argue the “optimal” amount of news is whatever’s left after you actively take steps to avoid most of it. Unfiltered news consumption in today’s environment is almost certainly way, way too much.

[–] Perspectivist@feddit.uk 29 points 6 hours ago (3 children)

Large language models aren’t designed to be knowledge machines - they’re designed to generate natural-sounding language, nothing more. The fact that they ever get things right is just a byproduct of their training data containing a lot of correct information. These systems aren’t generally intelligent, and people need to stop treating them as if they are. Complaining that an LLM gives out wrong information isn’t a failure of the model itself - it’s a mismatch of expectations.

[–] Perspectivist@feddit.uk 1 points 13 hours ago* (last edited 13 hours ago)

This isn’t a failure of the model - it’s a misunderstanding of what the model is. ChatGPT is a tool, not a licensed practitioner. It has one capability: generating language. That sometimes produces correct information as a side effect of the data it was trained on, but there is no understanding, no professional qualification, and no judgment behind it.

If you ask it whether it’s qualified to act as a therapist, it will tell you no. If you instruct it to role-play as one, however, it will do that - because following instructions is exactly what it’s designed to do. Complaining that a language model behaves like a language model, and then demanding more guardrails to stop people from using it badly, is just outsourcing common sense.

There’s also this odd fixation on Sam Altman as if he’s hand-crafting the bot’s behavior in real time. It’s much closer to an open-ended, organic system that reacts to input than a curated service. What you get out of it depends entirely on what you put in.

[–] Perspectivist@feddit.uk 0 points 21 hours ago (2 children)

Trust what? I’m simply pointing out that we don’t know whether he’s actually done anything illegal or not. A lot of people seem convinced that he did - which they couldn’t possibly be certain of - or they’re hoping he did, which is a pretty awful thing to hope for when you actually stop and think about the implications. And then there are those who don’t even care whether he did anything or not, they just want him convicted anyway - which is equally insane.

Also, being “on the list” is not the same thing as being a child rapist. We don’t even know what this list really is or why certain people are on it. Anyone connected to Epstein in any capacity would dread having that list released, regardless of the reason they’re on it, because the result would be total destruction of their reputation.

[–] Perspectivist@feddit.uk 7 points 1 day ago

If I meant social media I would've said so.

[–] Perspectivist@feddit.uk 3 points 1 day ago* (last edited 1 day ago) (3 children)

> your chatbot shouldn’t be pretending to offer professional services that require a license, Sam.

It generates natural-sounding language. That’s all it’s designed to do. The rest is up to the user - if a therapy session is what they ask for, then a therapy session is what they get. I don’t think it should refuse that request either.

[–] Perspectivist@feddit.uk 15 points 1 day ago (3 children)

No one should expect anything they write online to stay private. It's the number 1 rule of the internet. Don't say anything you wouldn't be willing to defend having said in front of a court.

 

I see a huge amount of confusion around terminology in discussions about Artificial Intelligence, so here’s my quick attempt to clear some of it up.

Artificial Intelligence is the broadest possible category. It includes everything from the chess opponent on the Atari to hypothetical superintelligent systems piloting spaceships in sci-fi. Both are forms of artificial intelligence - but drastically different.

That chess engine is an example of narrow AI: it may even be superhuman at chess, but it can’t do anything else. In contrast, the sci-fi systems like HAL 9000, JARVIS, Ava, Mother, Samantha, Skynet, or GERTY are imagined as generally intelligent - that is, capable of performing a wide range of cognitive tasks across domains. This is called Artificial General Intelligence (AGI).

One common misconception I keep running into is the claim that Large Language Models (LLMs) like ChatGPT are “not AI” or “not intelligent.” That’s simply false. The issue here is mostly about mismatched expectations. LLMs are not generally intelligent - but they are a form of narrow AI. They’re trained to do one thing very well: generate natural-sounding text based on patterns in language. And they do that with remarkable fluency.

What they’re not designed to do is give factual answers. That it often seems like they do is a side effect - a reflection of how much factual information was present in their training data. But fundamentally, they’re not knowledge databases - they’re statistical pattern machines trained to continue a given prompt with plausible text.
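To make that “continue a prompt with plausible text” idea concrete, here’s a toy sketch. This is not how any real LLM is implemented - actual models use neural networks trained on enormous corpora - and the tiny corpus and function names here are made up purely for illustration. It just shows the principle: the system learns what tends to follow what, and continues your prompt accordingly, with no notion of whether the result is true.

```python
import random
from collections import defaultdict

# Toy "training data" - real models see vastly more text, but the idea is
# the same: learn which words tend to follow which.
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog . the dog chased the ball ."
).split()

# Count how often each word follows each other word (a bigram table).
follow_counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    follow_counts[prev][nxt] += 1

def continue_prompt(prompt: str, length: int = 8) -> str:
    """Extend a prompt by repeatedly sampling a statistically plausible next word."""
    words = prompt.split()
    for _ in range(length):
        options = follow_counts.get(words[-1])
        if not options:
            break  # nothing in the training data ever followed this word
        candidates, weights = zip(*options.items())
        words.append(random.choices(candidates, weights=weights)[0])
    return " ".join(words)

print(continue_prompt("the cat"))
# e.g. "the cat sat on the rug . the dog chased" - fluent-looking output,
# produced without any check on whether it's factually true.
```

The output reads like language because it mirrors the patterns in the data, not because anything was looked up or understood - which is exactly the expectation mismatch described above.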

 

I was delivering an order for a customer and saw some guy messing with the bikes on a bike rack using a screwdriver. Then another guy showed up, so the first one stopped, slipped the screwdriver into his pocket, and started smoking a cigarette like nothing was going on. I was debating whether to report it or not - but then I noticed his jacket said "Russia" in big letters on the back, and that settled it for me.

That was only the second time in my life I’ve called the emergency number.
