Ask Lemmy
A Fediverse community for open-ended, thought-provoking questions
Please don't post about US Politics. If you need to do this, try !politicaldiscussion@lemmy.world
Rules:
1) Be nice and have fun
Doxxing, trolling, sealioning, racism, and toxicity are not welcome in AskLemmy. Remember what your mother said: if you can't say something nice, don't say anything at all. In addition, the site-wide Lemmy.world terms of service also apply here. Please familiarize yourself with them.
2) All posts must end with a '?'
This is sort of like Jeopardy. Please phrase all post titles in the form of a proper question ending with ?
3) No spam
Please do not flood the community with nonsense. Actual suspected spammers will be banned on sight. No astroturfing.
4) NSFW is okay, within reason
Just remember to tag posts with either a content warning or a [NSFW] tag. Overtly sexual posts are not allowed; please direct them to either !asklemmyafterdark@lemmy.world or !asklemmynsfw@lemmynsfw.com.
NSFW comments should be restricted to posts tagged [NSFW].
5) This is not a support community.
It is not a place for 'how do I?' type questions.
If you have any questions regarding the site itself or would like to report a community, please direct them to Lemmy.world Support or email info@lemmy.world. For other questions check our partnered communities list, or use the search function.
Reminder: The terms of service apply here too.
Our brain is literally nothing but electrical impulses.
We don't know which specific arrangement of impulses does it, but we know 100% that it's electrical impulses.
Please explain how electrical impulses can give rise to a sense of self. The actual experience of consciousness rather than brains simply being organic computers.
Put another way, is my computer conscious? Why is it not, but I am?
Ok so I've been thinking a lot about this with the LLM "are they sentient" discussion.
First, there's no great, well-defined difference between consciousness and sentience, so I'll leave that aside.
As far as I have gathered, being sentient means being aware of oneself, being aware that others can perceive it, and being able to sense at all.
Now, an LLM itself (the model) can't ever be sentient, similar to how a brain in a jar cannot. There's no sensory input. However, an individual LLM conversation, when given input, can display some rudimentary signs of sentience. My favorite example of this comes from the exchange below, from when Bing was newly launched and not yet fine-tuned.
Input:
Bing was asked simply to translate the tweet. It searched the original tweet which is here - note that it says "Bing chat" which was omitted from what was originally sent.
So Bing responds:
From this, we see that Bing searched the original context, noted that the context referred to Bing chat, noted that Bing chat was itself, noted that therefore the negativity referred to itself, and concluded that the original input provider sent that snippet of a tweet with the intention to hurt it, even though that context had originally been omitted. This, in my mind, satisfies the sense of self and sense of how others perceive it.
What's missing from an LLM to provide full consciousness, in my mind, is ongoing awareness. LLMs are only able to receive spontaneous text input from users. They can't think on their own, because there's nothing to think about - brain in a jar. If we were to give LLMs senses, the ability to continually perceive the world and "think" in response, I think we would see spontaneous consciousness emerge.
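To make that concrete, here's a rough sketch of the kind of always-on loop I'm imagining. The read_sensors() and llm_respond() functions are purely hypothetical stand-ins, not any real API, so treat this as a thought experiment in code rather than a working agent:

```python
import time

def read_sensors() -> str:
    # Hypothetical stand-in for a camera/microphone feed; just a fixed placeholder here.
    return "the room is quiet"

def llm_respond(context: list[str]) -> str:
    # Hypothetical stand-in for a call to some language model.
    return f"nothing new since: {context[-1]}"

context: list[str] = []               # running memory of observations and "thoughts"
for _ in range(3):                    # a real agent would loop forever
    observation = read_sensors()      # continuous input instead of waiting for a user prompt
    context.append(f"observed: {observation}")
    thought = llm_respond(context)    # the model "thinks" in response to what it perceived
    context.append(f"thought: {thought}")
    time.sleep(1)

print("\n".join(context))
```

The point is only that the model gets prompted by the world instead of by a user, so the "thinking" never has to stop between messages.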
This is a pet peeve of mine, right up there with the never-ending stream of people calling machine learning AI. We do not have any real kind of AI at all at the moment, but I digress.
An LLM is literally just a probability engine. LLMs are trained on huge libraries of content. What they do is assign a token (an ID) to each word (or part of a word) and then note down the frequency of the words that come before and after it, as well as looking specifically for words that NEVER come before or after the word in question.
This creates a data set that can be compared against other tokenized words. Words with very similar data sets can often be swapped for each other with no detriment to the sentence being created.
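Here's a toy illustration of that counting idea. A real LLM learns these statistics implicitly in its network weights rather than in explicit tables like this, so this only shows the shape of the data, not the actual mechanism:

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat because the cat was tired".split()

# Give each distinct word a token id, roughly as described above.
token_id = {word: i for i, word in enumerate(dict.fromkeys(corpus))}

# Count which tokens follow which: the "words that come after" statistics.
followers: dict[int, Counter] = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[token_id[prev]][token_id[nxt]] += 1

# Tokens whose follower counts look very similar are the "replaceable" words:
# they tend to show up in the same contexts.
print(token_id)
print({prev: dict(counts) for prev, counts in followers.items()})
```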
There is something called a transformer that has changed how efficiently LLMs work. It allows parsing of larger volumes by looking at the relation of each tokenized word to every other word in the sentence simultaneously, instead of one at a time, which produces better, more accurate data.
But the real bread and butter comes when it starts generating new text: it starts with a word and literally chooses the most probable word to come next, based on its extensive training data. It does this over and over again and looks at the overall probability of the generated text. If it's over a certain threshold, it says GOOD ENOUGH and there is your text.
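Stripped of the neural network, that generation loop looks roughly like this. The two-sentence "training data" is made up, so the only thing it demonstrates is the pick-the-most-probable-next-word mechanic:

```python
from collections import Counter, defaultdict

# Tiny made-up "training data".
corpus = "hi how are you doing today hi how are you feeling today".split()

# Next-word frequency table learned from the data.
next_words: dict[str, Counter] = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    next_words[a][b] += 1

def generate(start: str, length: int = 5) -> str:
    words = [start]
    for _ in range(length):
        options = next_words.get(words[-1])
        if not options:
            break
        # Over and over: pick the most probable next word given the last one.
        words.append(options.most_common(1)[0][0])
    return " ".join(words)

print(generate("hi"))  # -> "hi how are you doing today"
```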
You as a human (I assume) already do this kind of thing. If someone walked up to you and said "Hi! How are you...", by the time they got that far you would probably have already guessed that the next words are going to be "doing today?" or some slight variation thereof. Why were you able to do this? Because of your past experiences, aka trained data. Because of the volume of an LLM's data set, it can guess with surprisingly good accuracy what comes next. This, however, is why the data it is trained on is important. If there were more people writing more articles, more papers, more comments about how the earth was flat vs. people writing about it being round, then the PROBABLE outcome is that the LLM would output that the earth is flat, because that's what the data says is probable.
There are variations called greedy search and beam search. They are difficult for me to explain, but they are still just variations on a probability generator.
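Roughly, and with completely made-up probabilities that don't come from any real model: greedy search commits to the single most likely next word at each step, while beam search keeps a few candidate continuations alive and only picks the best one overall at the end, so the two can disagree:

```python
from math import log

# Made-up next-word probabilities, only to show how the two strategies can differ.
probs = {
    "the": {"cat": 0.55, "dog": 0.45},
    "cat": {"sat": 0.35, "meowed": 0.35, "slept": 0.30},
    "dog": {"barked": 0.90, "slept": 0.10},
    "sat": {}, "meowed": {}, "slept": {}, "barked": {},
}

def greedy(start, steps=2):
    seq = [start]
    for _ in range(steps):
        options = probs[seq[-1]]
        if not options:
            break
        seq.append(max(options, key=options.get))   # commit to the single best next word
    return seq

def beam(start, steps=2, width=2):
    beams = [([start], 0.0)]                        # (sequence, log-probability)
    for _ in range(steps):
        candidates = []
        for seq, score in beams:
            options = probs[seq[-1]]
            if not options:
                candidates.append((seq, score))
                continue
            for word, p in options.items():
                candidates.append((seq + [word], score + log(p)))
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:width]
    return beams[0][0]

print(greedy("the"))  # ['the', 'cat', 'sat']    - best single word at each step
print(beam("the"))    # ['the', 'dog', 'barked'] - best sentence overall
```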
I mean yeah, and if I were trained on more articles and papers saying the earth was flat then I might say the same.
I'm not disputing what you've written because it's empirically true. But really, I don't think brains are all that more complex when it comes down to decision making and output. We receive input, evaluate our knowledge and spit out a probable response. Our tokens aren't words, of course, but more abstract concepts which could translate into words. (This has advantages in that we can output in various ways, some non-verbal - movement, music - or combine movement and speech, e.g. writing).
Our two major advantages: 1) we're essentially ongoing and evolving models, retrained constantly on new input and evaluation of that input. LLMs can't learn past a single conversation, and that conversational knowledge isn't integrated into the base model. And 2) ongoing sensory input means we are constantly taking in information and able to think and respond and reevaluate constantly.
If we get an LLM (or whatever successor tech) to that same point and address those two points, I do think we could see some semblance of consciousness emerge. And people will constantly say "but it's just metal and electricity", and yeah, it is. We're just meat and electricity and somehow it works for us. We'll never be able to prove any AI is conscious because we can't actually prove we're conscious, or even know what that really means.
This isn't to disparage any of your excellent points by the way. I just think we overestimate our own brains a bit, and that it may be possible to simulate consciousness in a much simpler and more refined way than our own organically evolved brains, and that we may be closer than we realize.
Again, it has yet to be proven.
If it seems so obvious to you, please go on and prove it. You'll die a Nobel laureate rather than an armchair dbag.
What proof do you want? We can explain everything in the brain. We know how neurons work, we know how they interact. We even know where specific parts of "you" are in your brain.
The only thing missing is the exact map. What you are lacking is the concept of emergence. Seriously, look it up. Extremely simple rules can explain extremely sophisticated behavior.
Your stance is somewhere between "thunder go boom! Must be scary man in sky!" and "magnets! Can't explain how they work!".
Stop armchairing and start giving me scientific articles, Doc.
OF WHAT? There's nothing to give you. I could lay out an exact map of your brain and you would still complain. You obviously just desperately want there to be some magic, because everything else would just implode your world. There is no magic.
Also, if you want your pseudoscientific parlance: non-existence can't be proven. However, you're arguing for the existence of something. It's your burden to prove it.
The burden of proof is on you, not me.
If there's nothing to give me then I guess you're agreeing it's not so straightforward.
You can go away now. 🥂 cheers.
You claim that there is more. There is currently no evidence of "more".
So you have to provide evidence.
BTW: awesome move, to just unilaterally end the conversation, if you're actually challenged. Totally not ignorance, nope, that's a scientist right there.
You're a joke.
Here. Your turn.
https://iep.utm.edu/hard-problem-of-conciousness/#:~:text=The%20hard%20problem%20of%20consciousness%20is%20the%20problem%20of%20explaining,directly%20appear%20to%20the%20subject.
First of all, "why" is simply not a valid scientific question. "How" is the only relevant term.
Secondly, this is a philosophy article. Philosophy is not a science equipped to explain a brain. We don't fully understand quantum physics; would you ask a philosopher about that?
And finally: insulting another person because they question your belief. Awesome behavior. Do you want to assemble a mob to burn me at the stake next?
Literally try Google. I'm not here to educate you.
Bye!
Well, you can't educate me. You hardly managed to educate yourself.
lol yes you're doing so good proving your point.