this post was submitted on 27 Mar 2025
650 points (93.7% liked)
Funny
you are viewing a single comment's thread
You could say the same thing about rewiring a human's neurons randomly. It's not the powerful argument you think it is.
We don't really know exactly how brains work. But when, say, Wernicke's area is damaged (but not Broca's area), you can get people spouting meaningless but syntactically valid sentences that look a lot like autocorrect. So it could be that there's some part of our language process which is essentially no more or less powerful than an LLM.
Anyway, it turns out that you can do a lot with LLMs, and they can reason (insofar as they can produce logically valid chains of text, which is good enough). The takeaway for me is not that LLMs are really smart -- rather, it's that the MVP of intelligence is a much lower bar than anyone was expecting.
Can you? One is editing a table of variables; the other is altering a brain by some magic hypothetical. Even if you could, the person you do it to is gonna be cross with you -- the program, meanwhile, is still just a program. People who've had damage to Wernicke's area are still attempting to communicate meaningful thoughts; just because the signal is scrambled doesn't mean the intent isn't still there.