this post was submitted on 05 Feb 2024
666 points (87.8% liked)

I think AI is neat.

[–] AlijahTheMediocre@lemmy.world -2 points 9 months ago (1 children)

People learn the same way: we do things that bring us satisfaction and get us approval.

[–] yoshi 16 points 9 months ago* (last edited 9 months ago) (1 children)

We use words to describe our thoughts and understanding. An LLM orders words by following algorithms that predict what the user wants to hear; it doesn't understand the meaning or implications of the words it's returning.

It can tell you the definition of an apple, or how many people eat apples, or whatever apple data it was trained on, but it has no thoughts of its own about apples.

That's the point OOP was making: people confuse ordering words with understanding. An LLM has no understanding of anything. It's a large language model - it's not capable of independent thought.

[–] jumping_redditor@sh.itjust.works 5 points 9 months ago

I think that the question of what "understanding" actually is will become important soon, if it isn't already. Most people don't really understand as much as you might think we do. An apple, for example, has properties like flavor, texture, appearance, weight and firmness; it's also related to other things, like trees, and it sits in categories like food or fruit. A model can store apple's relationships to other things and its properties, and the model could probably be given "personal preferences" like a preferred flavor profile and texture profile, use those to estimate whether apples would suit the preferences, and give reasons for it.
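
As a toy illustration of what I mean by storing properties and matching them against a preference profile (none of this is how a real model represents anything; the data, names and scoring below are all made up):

```python
# Toy sketch only: a hand-made property/relation store for "apple"
# plus an invented preference profile. It just illustrates the idea
# of relations + properties + a scoring rule, nothing more.

concept = {
    "apple": {
        "properties": {"flavor": "sweet-tart", "texture": "crisp", "firmness": "firm"},
        "related_to": ["tree", "orchard"],
        "categories": ["food", "fruit"],
    }
}

# Hypothetical "personal preferences" the model could be given.
preferences = {"flavor": "sweet-tart", "texture": "crisp"}

def estimate_liking(name: str) -> tuple[float, list[str]]:
    """Score how well a concept's properties match the preferences, with reasons."""
    props = concept[name]["properties"]
    reasons = []
    hits = 0
    for key, wanted in preferences.items():
        if props.get(key) == wanted:
            hits += 1
            reasons.append(f"{name} is {wanted} ({key}), which matches the preference")
        else:
            reasons.append(f"{name} is {props.get(key)} ({key}), but the preference was {wanted}")
    return hits / len(preferences), reasons

score, why = estimate_liking("apple")
print(f"estimated preference for apple: {score:.0%}")
for line in why:
    print("-", line)
```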

Unique thought is hard to define, and there is probably a way to have a computer do something similar enough to be indistinguishable, though probably not through simple LLMs. Maybe an LLM could be used as a way to convert internal "ideas" into external words and external words into internal "ideas", with the ideas then processed logically, probably using massive amounts of reference material, simulation, computer algebra, music theory, internal hypervisors, or some combination of other models.
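
A very loose sketch of that "LLM as a translation layer around a logical core" shape (every function here is an imaginary stub I made up, not a real API):

```python
# Rough sketch only: parse_to_idea / reason_over / render_words are
# placeholders standing in for an LLM front end and a separate
# logical core (reference lookup, simulation, computer algebra, ...).

def parse_to_idea(words: str) -> dict:
    # An LLM (or parser) would turn text into some internal representation.
    return {"question": words}

def reason_over(idea: dict) -> dict:
    # The logical core would work on the internal representation, not raw text.
    return {"answer": f"reasoned about: {idea['question']}"}

def render_words(idea: dict) -> str:
    # An LLM would turn the internal result back into natural language.
    return idea["answer"]

print(render_words(reason_over(parse_to_idea("would I like apples?"))))
```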