this post was submitted on 17 May 2024
503 points (94.8% liked)

[–] dustyData@lemmy.world 1 points 6 months ago* (last edited 6 months ago) (1 children)

Yet we have the same fundamental problem with the human brain

And LLMs aren't human brains; they don't even work remotely similarly. An LLM has more in common with an Excel spreadsheet than with a neuron. Read up on the learning models and pattern-recognition theories behind LLMs: they are explicitly designed not to function like humans. So we cannot assume that the same emergent properties exist in an LLM.

[–] UnpluggedFridge@lemmy.world 0 points 6 months ago (1 children)

Nor can we assume that they cannot have the same emergent properties.

[–] dustyData@lemmy.world 1 points 5 months ago (1 children)

That's not how science works. You are the one claiming they do, so you bear the burden of proof. Until then, assuming they don't, since LLMs aren't human, is the sensible, rational route.

[–] UnpluggedFridge@lemmy.world 0 points 5 months ago* (last edited 5 months ago)

Read again. I have made no such claim. I simply scrutinized your assertion that LLMs lack any internal representations and challenged it with alternative hypotheses. You are the one who made a claim. I am perfectly comfortable with the conclusion that we simply do not know what is going on in LLMs with respect to human-like capabilities of the mind.