this post was submitted on 29 Jul 2023
195 points (100.0% liked)

Technology

[–] Serdan@lemm.ee 3 points 1 year ago (1 children)

https://thegradient.pub/othello/

LLMs are neural networks and are absolutely capable of understanding.

[–] conciselyverbose@kbin.social 7 points 1 year ago (1 children)

LLMs are criminally simplified neural networks, at minimum thousands of orders of magnitude less complex than a brain. Nothing we do with current neural networks resembles intelligence.

Nothing they do is close to understanding. The fact that you can train one exclusively on the rules of a simple game and get it to eventually infer a basic rule set doesn't imply anything like comprehension. It's simplistic pattern matching.

[–] Serdan@lemm.ee 1 points 1 year ago (1 children)

Does AlphaGo understand Go? How about AlphaStar?

When I say LLMs can understand things, what I mean is that there's semantic information encoded in the network. That's a demonstrable fact.

You can disagree with that definition, but the point is that it's absolutely not just autocomplete.
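(The "semantic information encoded in the network" claim refers to probing results like the Othello paper linked above. A minimal sketch of the idea, using synthetic data of my own invention rather than real model activations: if a network encodes a feature, a simple linear probe trained on its hidden states can recover it.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical hidden states (n_samples x hidden_dim) that secretly carry a
# binary feature (say, "this board square is occupied") along one direction.
n, d = 1000, 64
direction = rng.normal(size=d)
labels = rng.integers(0, 2, size=n)          # 0 = empty, 1 = occupied
hidden = rng.normal(size=(n, d)) + np.outer(labels * 2 - 1, direction)

# Fit a linear probe (plain least squares) on half, test on the other half.
w, *_ = np.linalg.lstsq(hidden[:500], labels[:500] * 2 - 1, rcond=None)
preds = (hidden[500:] @ w > 0).astype(int)
accuracy = (preds == labels[500:]).mean()
print(accuracy)  # near-perfect accuracy -> the feature is linearly decodable
```

If the probe reads the feature out with high accuracy, the information is demonstrably in the activations, which is what the Othello-GPT work showed for board state.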

[–] conciselyverbose@kbin.social 1 points 1 year ago (1 children)

No, and that definition has nothing in common with what the word means.

Autocorrect has plenty of information encoded as artifacts of how it works. ChatGPT isn't like autocorrect. It is autocorrect, and doesn't do anything more.

[–] Serdan@lemm.ee 1 points 1 year ago (1 children)

It's fine if you think so, but then it's a pointless argument over definitions.

You can't have a conversation with autocomplete. It's qualitatively different. There's a reason we didn't have this kind of code generation before LLMs.

Adversus solem ne loquitor. ("Don't argue against the sun.")

[–] conciselyverbose@kbin.social 1 points 1 year ago (1 children)

If you just keep taking the guessed next word from autocomplete you also get a bunch of words shaped like a conversation.
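(A toy sketch of that claim, with a made-up corpus of my own: greedily chaining "most likely next word" predictions from a bigram table produces conversation-shaped words, but quickly falls into a repeating loop.)

```python
from collections import Counter, defaultdict

# Tiny toy corpus; any text works.
text = ("the cravings of the oppressed classes and with the object "
        "of duping the latter the cravings").split()

# Bigram table: for each word, its most frequent follower.
follows = defaultdict(Counter)
for a, b in zip(text, text[1:]):
    follows[a][b] += 1
best_next = {w: c.most_common(1)[0][0] for w, c in follows.items()}

# Greedy decoding: always take the single most likely next word.
word, out = "the", ["the"]
for _ in range(20):
    word = best_next[word]
    out.append(word)
print(" ".join(out))  # loops: "the cravings of the cravings of ..."
```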

[–] Serdan@lemm.ee 1 points 1 year ago

> I am not sure of the relevance of the oppressed classes and with the object of duping the latter is the cravings of the oppressed classes and with the object of duping the latter

Yeah, totally. Repeating the same nonsensical sentence over and over is also how I converse. 🙄