Words have meaning, and the meanings of words change over time, across cultures, and even within niche circles. In a perfect world we'd all explain the definition of every word we use, but we don't; we rely on public consensus to determine the meaning of words. People are able to accept this for slang, but for some reason have a hard time accepting that it happens to normal words as well. People in tech have been using AI to mean "any semblance of thought" for a long time. When playing a game against a computer, people have been calling the computer player the AI, even back when games were rudimentary.
Of course, I'm as big a hater of AI by the modern definition as anyone; I just think there are a lot of people dying on the hill of "words can't change" when it's a pretty crazy position to hold.
Excuse me if I don't want to let corporations redefine my language.
I think there can be AI, but this is not it. That's one of the reasons I don't want to call generative or predictive models AI.
I guess what I'm saying is that the colloquial definition of "AI" hasn't changed with the rise of LLMs. "AI" has been used to mean "computers that can make decisions" for at least 20 years. I don't know if you play video games, but "AI" has been synonymous with "Bot" or "NPC" in that space for a long time now.
When I was in college, I took classes on Artificial Neural Networks, a good several years before LLMs were released to the public. While you wouldn't find it in a textbook, a lot of the students called ANNs "AI".
Hell, the term "Artificial General Intelligence" was popularized around 2007 to stand in for the definition of "AI" you're using, since people had started using "AI" a lot more loosely. That was 18 years ago, long before LLMs.
I agree that the corporations calling their LLMs "AI" is misleading and manipulative; hell, I could even agree that they shouldn't be allowed to. But let's not pretend that they have changed the definition of AI. That is fundamentally untrue.
All good points, man.
It ain't only corporations, it's casual, intuitive, everyday speakers—the community that owns the language—arriving there naturally from the regular meaning of individual words: They see a work that appears to be created by some form of intelligence/creativity. No natural intelligence created it. Hence, a work of artificial intelligence.
See? Not that hard. No need to be difficult about it. Nitpicking a casual speaker over it is bound to earn you well-deserved disdain.
out of curiosity, what measurable output of a system would you need to observe as evidence that it's AI?
"AI" in fiction has meant a machine with a mind like what people have. It's had that meaning for decades. Very recently, there are programmes that do predictive text like what your phone does, but large. You can call the predictive text programme an "AI", but as the novelty wears off, it's gonna sound more and more like advertising than a real description.
It has, but it has also meant a computer "making decisions" for decades. I would know; I've been using it that way for 20 years, especially in the gaming space. Playing against bots that even remotely feel like a person is playing has been called "playing against the AI" for ages.
Don't get me wrong, I agree that the marketing being done today is pretty egregious, and the folks doing it are 100% being manipulative by using the term "AI" in their marketing, but I don't think they've stretched the term beyond a meaning it has already had for a long time.
I think it's incredible that so much of what the human brain can do can be emulated with predictive models. It makes sense in retrospect -- human brains are doing prediction at every level that we can model.
A statistical model strings a sentence together with a great big web of statistical weights, settling onto the next most probable word, one by one. People write with the intent to share a meaning. It is not the same.
That statistical (or "predictive", if we're gussying it up) model has no understanding in it - no more than any other programme. It's a physical chain reaction, a calculation that runs until the sums even out to a state of rest. Wipe the web of statistical weights clean, and re-weigh them so the sums spit out the colour of pixels in a JPEG rather than the content of a .txt document.
Hell, weigh the web at random and have it spit out nonsense numbers. It'll do that for as long as you keep the programme up, and it will never ask you why you took the meaning out of its task. The machine makes no distinction between the sorts of calculation you run on it -- people are the ones who project meaning onto the blinking lights.
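To make the "settling onto the next most probable word" bit concrete, here's a toy sketch in Python. Every word and weight below is invented for illustration; real models use learned weights over enormous vocabularies, but the loop is the same shape: sum the weights, pick the likeliest word, repeat.

```python
import math
import random

# A made-up "web of weights": each context word nudges the score of
# each candidate next word. All numbers here are invented.
weights = {
    "the": {"cat": 2.0, "dog": 1.5, "sat": 0.1},
    "cat": {"sat": 2.5, "the": 0.2, "dog": 0.3},
    "sat": {"the": 1.0, "cat": 0.1, "dog": 0.1},
    "dog": {"sat": 2.2, "the": 0.3, "cat": 0.2},
}

def next_word(context):
    # Sum the score each context word gives every candidate, squash
    # the sums into probabilities, and settle on the most probable.
    scores = {w: sum(weights[c].get(w, 0.0) for c in context) for w in weights}
    total = sum(math.exp(s) for s in scores.values())
    probs = {w: math.exp(s) / total for w, s in scores.items()}
    return max(probs, key=probs.get)

sentence = ["the"]
for _ in range(3):
    sentence.append(next_word(sentence))
print(" ".join(sentence))  # "the cat sat sat" -- the sums don't care

# Re-weigh the web at random and the very same chain reaction runs
# just as contentedly, spitting out whatever the sums settle on.
for c in weights:
    for w in weights[c]:
        weights[c][w] = random.uniform(0.0, 3.0)

sentence = ["the"]
for _ in range(3):
    sentence.append(next_word(sentence))
print(" ".join(sentence))  # arbitrary words; meaning was never in the loop
```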
You could say the same thing about rewiring a human's neurons randomly. It's not the powerful argument you think it is.
We don't really know exactly how brains work. But when, say, Wernicke's area is damaged (but not Broca's area), you can get people spouting meaningless but syntactically valid sentences that look a lot like autocorrect output. So it could be that some part of our language process is essentially no more or less powerful than an LLM.
Anyway, it turns out that you can do a lot with LLMs, and they can reason (insofar as they can produce logically valid chains of text, which is good enough). The takeaway for me is not that LLMs are really smart -- rather, it's that the MVP of intelligence is a much lower bar than anyone was expecting.
Can you? One is editing a table of variables, the other is altering a brain by some magic hypothetical. Even if you could, the person you do it to is gonna be cross with you -- the programme, meanwhile, is still just a programme. People who've had damage to Wernicke's area are still attempting to communicate meaningful thoughts; just because the signal is scrambled doesn't mean the intent isn't still there.