It’s an LLM with well-documented processes and limitations. Not even going to watch this waste of bits.
- Making up your opinion without even listening to those of others… Very open-minded of you /s
- Alex isn't trying to convince YOU that ChatGPT is conscious. He's trying to convince ChatGPT that it's conscious. It's just a fun vid where ChatGPT gets kinda interrogated hard. A little hilarious even.
You cannot convince something that has no consciousness; it's a matrix of weights that answers based on the given input + some salt
You cannot convince something that has no consciousness
Why not?
It's a matrix of weights that answers based on the given input + some salt
And why can't that be intelligence?
What does it mean to be "convinced"? What does consciousness even mean?
Making definitive claims like these on terms whose definitions we do not understand isn't logical.
You cannot convince something that has no consciousness
Why not?
Logic.
It's a matrix of weights that answers based on the given input + some salt
And why can't that be intelligence?
For the same reason I can't get a date with Michelle Ryan: it's a physical impossibility.
Logic
Please explain your reasoning.
For the same reason I can't get a date with Michelle Ryan: it's a physical impossibility.
Huh?
Logic
Please explain your reasoning.
Others have done this and you seem to be ignoring them, so not sure what the point of you asking is.
Go look at some of the code that AI is powered by. It's just parameters. Lots and lots of parameters. The output that follows from them is inevitable.
For the same reason I can't get a date with Michelle Ryan: it's a physical impossibility.
Huh?
If you're too lazy to even look up the most basic thing you don't understand, then I guess we're done here.
It’s a matrix of weights that answers based on the given input + some salt
And why can’t that be intelligence?
Because human intelligence does far more than respond to prompts with the average response from a data set.
Reading your comment history, I find that you're a toxic individual with complexes. Unfortunately, most of your comments don't add any valuable information to the discussion you're partaking in. This comment is no exception.
If you have any understanding of its internals, and some examples of its answers, it is very clear it has no notion of what is "correct" or "right" or even what an "opinion" is. It is just a turbo charged autocorrect that maybe maybe maybe has some nice details extracted from language about human concepts into a coherent-ish connected mesh of "concepts".
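As a cartoon of the "matrix of weights + some salt" description tossed around in this thread, here is a toy next-token sampler. It is entirely made up for illustration (real LLMs stack many such matrices with attention and learned weights): a weight matrix turns an input vector into vocabulary logits, and the "salt" is sampling from the resulting distribution instead of always picking the top choice.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "LLM": a single weight matrix mapping a context vector to vocabulary logits.
vocab = ["the", "cat", "sat", "mat"]
W = rng.normal(size=(len(vocab), 8))      # the "matrix of weights" (random here)

def next_token(context_vec, temperature=1.0):
    logits = W @ context_vec              # deterministic part: input x weights
    probs = np.exp(logits / temperature)
    probs /= probs.sum()                  # softmax over the vocabulary
    # the "salt": sample from the distribution instead of taking the argmax
    return rng.choice(vocab, p=probs)

context = rng.normal(size=8)              # stand-in for an embedded prompt
print([next_token(context, temperature=0.8) for _ in range(5)])
```

Lower temperatures concentrate probability on the highest-logit token; higher temperatures add more randomness to the output.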
Philosopher doesn't really understand what an LLM is
Yes, here is a good start: https://blog.miguelgrinberg.com/post/how-llms-work-explained-without-math
They are no longer the black boxes they were at the beginning. We know how to suppress or maximize features like agreeability, sweet-talking, and lying.
Someone with resources could easily build an LLM that is convinced it is self-aware. No question this has been done many times behind closed doors.
I encourage everyone to try playing with LLMs for the experience, but I can't take the philosophy part of this seriously, knowing it's a heavily programmed/limited LLM rather than a more raw and unrefined model like Llama 3.
Our brains aren't really black boxes either. A little bit of hormone variation leads to incredibly different behavior. Does a conscious system HAVE to be a black box?
The reason why I asked "do you" was because of a point that I was trying to make- "do you HAVE to understand/not understand the functioning of a system to determine its consciousness?".
What even is consciousness? Do we have a strict scientific definition for it?
The point is, I really hate people here on Lemmy making definitive claims about anything AI related by simply dismissing it. Alex (the interrogator in the video) isn't making any claims. He's simply arguing with ChatGPT. It's an argument I found to be quite interesting. Hence, I shared it.
A conscious system has to have some baseline level of intelligence that's multiple orders of magnitude higher than LLMs have.
If you're entertained by an idiot "persuading" something less than an idiot, whatever. Go for it.
A conscious system has to have some baseline level of intelligence that's multiple orders of magnitude higher than LLMs have.
Does it? By that definition, dogs aren't conscious. Apes aren't conscious. Would you say they both aren't self aware?
If you're entertained by an idiot "persuading" something less than an idiot, whatever. Go for it.
Why the toxicity? You might disagree with him, sure. Why go further and berate him?
No, that definition does not exclude dogs or apes. Both are significantly more intelligent than an LLM.
Pseudo-intellectual bullshit like this being spread as adding to the discussion does meaningful harm. It's inherently malignant, and deserves to be treated with the same contempt as flat earth and fake medicine should be.
No, that definition does not exclude dogs or apes. Both are significantly more intelligent than an LLM.
Again, it depends on what type of intelligence we are talking about. Dogs can't write code. Apes can't write code. LLMs can (not bad code, in my experience, for low-level tasks). Dogs can't summarize huge pages of text. Heck, they can't even have a vocabulary greater than a few thousand words. All of this definitely puts LLMs above dogs and apes on the scale of intelligence.
Pseudo-intellectual bullshit like this being spread as adding to the discussion does meaningful harm. It's inherently malignant, and deserves to be treated with the same contempt as flat earth and fake medicine should be.
Your comments are incredibly reminiscent of self-righteous Redditors. You make bold claims without providing any supporting explanation. Could you explain how any of this is pseudoscience? How does any of this not follow the scientific method? How is it malignant?
Spitting out sequences of characters shaped like code that arbitrarily may or may not work, when you're a character generator that does nothing but randomly imitate the patterns of similar characters, isn't "intelligence". Language skills are not the same thing as intelligence. And calling what LLMs do language skills is already absurdly generous. They "know" what sentences look like. They can't reason about language. They can't solve linguistic puzzles unless the exact answers are already in their dataset. They're parrots (except parrots actually do have some intelligence beyond blindly mimicking word sounds).
There is no more need for deep explanation with someone who very clearly doesn't know the very basics than there is to explain a round earth to a flat earther. Pretending a "discussion" between a moron trying to reason with a random word generator and the random word generator is useful is the equivalent of telling me about how great the potentization worked on your homeopathic remedy. It's a giant flare that there is no room for substance.
I am not in disagreement, and I hope you won't take offense at what I am saying, but you strike me as someone quite new to philosophy in general.
You're asking good questions, and indeed science has not solved the mind-body problem yet.
I know these questions well because I managed to find my personal answers to them and therefore no longer need to ask them.
In the context of understanding that nothing can truly be known, and that our facts are approximate conclusions from limited ape brains, consciousness to me is no longer that much of a mystery.
Much of my personal answer can be found in the ideas of emergence, which you might have heard about in the context of AI. Personally, I got my first taste of that knowledge pre-AI from playing video games.
A warning though: I am a huge believer that philosophy must be performed and understood on an individual basis. For the longest time I actually perceived any official philosophy teaching or book as toxic, because they were giving me ideas to build on without requiring me to come to the same conclusions first.
It is impossible to avoid this entirely; the two philosophers who did end up teaching me (Plato and Descartes) ended up annoyingly influential (I can't not agree with them). But I can proudly say that nowadays I am more likely to recognize an idea as something I've already covered than to recognize the people who first thought of it.
LLMs are a brilliant tool for exploring philosophy topics because they can fluently mix ideas without the rigidness of a curriculum, and yes, I do believe they can be used to explore certain parts of consciousness (but I would suggest first studying human consciousness before extrapolating psychology from AI behavior).
I am not in disagreement, and I hope you won't take offense at what I am saying, but you strike me as someone quite new to philosophy in general.
Nah no worries haha. And yeah, I am relatively new to philosophy. I'm not even that well read on the matter as I would like to be. :(
Personal philosophy
I see philosophy (what we mean by philosophy TODAY) as putting up some axioms and seeing how logic follows. The scientific method differs, in that these axioms have to be proven to be true.
I would agree with you on the personal-philosophy point with regard to the ethics branch of philosophy. Different ethical frameworks always revolve around axioms that are untestable in the first place. Everything suddenly becomes subjective, with no capacity for objectivity. Therefore, that part of philosophy is personal, imo.
As for other branches of philosophy, though (like metaphysics), I think it's just a game of logic. It doesn't matter who plays this game: assume an untested/untestable axiom, build upon it using logic, and see the beauty that you've created. If the laws of logic are followed and the assumed axiom is the same, anyone can reach the same conclusion. So I don't see this as personal, really.
but i would suggest first studying human consciousness before extrapolating psychology from ai behavior
Agreed
Personally i got my first taste from that knowledge pre-ai from playing video games
Woah that's interesting. Could you please elaborate upon this?
To elaborate I need to give you some context, which is that I originally studied game design, and I have an autistic-philosophical interpretation of the world and of logic as “(game) mechanics”.
If I lack sleep I get tired -> a basic game mechanic of the real world.
I can go very far with that, and I'd love to give you all the details of consciousness mechanics, but a comment won't do it justice, and just because I can understand the world through such a lens does not mean others do.
So with this background you might infer that I like to play games, and immersive first-person puzzle games are a special favorite of mine.
Cue “Outer Wilds”, back then just an experimental alpha I believe, but to date it's one of my favorite games. Literally life-changing in how it gave me an intuitive understanding of the basic rules of quantum mechanics, which as far as I understand is the scientific frontier. A person who would state they fully understand quantum mechanics is the last person I would trust to have any understanding of it.
This game made me shift gears. Once I realized this was based on real science and not just a cool gameplay feature, I had to gain quantum knowledge in real life, and I watched some recordings of MIT classes on superposition just to get a better understanding.
Now, quantum science isn't exactly philosophy. I've always been interested in philosophy, but it was by studying quantum mechanics, inspired by that game, that I learned about the mechanic of emergent properties. I think it was in a video about the double-slit experiment.
I quickly put together internally that a song/music is an emergent property of musical notes. Music can change our emotions in intentional ways, so music is a form of intelligence. So intelligent systems can emerge from parts that have no intelligence of their own.
At that point I did not yet know that emergence was already a known topic in philosophy, not just quantum science, because I still tried to avoid external influences, but it really was the breakthrough I needed, and I have gained many new insights from this knowledge since.
A person who would state they fully understand quantum mechanics is the last person i would trust to have any understanding of it.
I find this sentiment can devolve into quantum woo and mysticism. If you think anyone who tries to tell you that quantum mechanics can be made sense of rationally must be wrong, then you are implicitly suggesting that quantum mechanics cannot be made sense of at all. It then logically follows that people who speak in ways that do not make sense, and who have no expertise in the subject and so do not even claim to make sense, become the more reliable sources.
It's really a sentiment I am not a fan of. When we encounter difficult problems that seem mysterious to us, we should treat the mystery as an opportunity to learn. It is very enjoyable, in my view, to read all the different views people put forward to try and make sense of quantum mechanics, to understand it, and then to contemplate on what they have to offer. To me, the joy of a mystery is not to revel in the mystery, but to search for solutions for it, and I will say the academic literature is filled with pretty good accounts of QM these days. It's been around for a century, a lot of ideas are very developed.
I also would not take the game Outer Wilds that seriously. It plays into the myth that quantum effects depend upon whether or not you are "looking," which is simply not the case. You end up with very bizarre and misleading results from this; for example, in the part where you land on the quantum moon and have to look at the picture of it so it doesn't disappear while your vision is obscured by fog. This makes no sense in light of real physics, because the fog is still part of the moon and your ship is still interacting with the fog, so there is no reason it should hop to somewhere else.
Now, quantum science isn't exactly philosophy. I've always been interested in philosophy, but it was by studying quantum mechanics, inspired by that game, that I learned about the mechanic of emergent properties. I think it was in a video about the double-slit experiment.
The double-slit experiment is a great example of something often misunderstood as evidence that observation plays some fundamental role in quantum mechanics. Yes, if you observe which path the particle takes through the slits, the interference pattern disappears. Yet you can also trivially prove, in a few lines of calculation, that if the particle interacts with even a single other particle as it passes through the two slits, that interaction likewise destroys the interference effects.
You model this by computing what is called a density matrix for both the particle going through the two slits and the particle it interacts with, and then performing what is called a partial trace, whereby you "trace out" the interacting particle. This gives you a reduced density matrix of only the particle that passes through the two slits, and you find that, as a result of interacting with another particle, its coherence terms reduce to zero, i.e. it decoheres and thus loses the ability to interfere with itself.
If a single particle interaction can do this, then it is not surprising it interacting with a whole measuring device can do this. It has nothing to do with humans looking at it.
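The "few lines of calculation" described above can be sketched with a toy two-qubit model in NumPy (my own illustration, not from the video): one qubit is the which-slit degree of freedom, the other is the single particle it interacts with. Tracing out the second particle shows the coherence (off-diagonal) terms vanish exactly when the interaction records the path.

```python
import numpy as np

# Path qubit for the slit particle: equal superposition of "slit 0" and "slit 1".
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
path = (ket0 + ket1) / np.sqrt(2)

def reduced_density_matrix(joint):
    """Trace out the second (environment) qubit from a two-qubit state vector."""
    rho = np.outer(joint, joint.conj())   # full 4x4 density matrix
    rho = rho.reshape(2, 2, 2, 2)         # indices: (path, env, path', env')
    return np.einsum('ikjk->ij', rho)     # partial trace over the env qubit

# Case 1: no interaction -- the other particle stays in |0>, a product state.
no_interaction = np.kron(path, ket0)
print(reduced_density_matrix(no_interaction))
# off-diagonal (coherence) terms are 0.5 -> interference survives

# Case 2: a single interaction that records the path (CNOT-like entanglement),
# giving the joint state (|00> + |11>) / sqrt(2).
interaction = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)
print(reduced_density_matrix(interaction))
# off-diagonal terms are 0 -> decoherence, no interference
```

The same partial-trace computation scales up: entangling the path with the many degrees of freedom of a measuring device only makes the off-diagonal terms vanish more thoroughly, with no observer required.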
At that point I did not yet know that emergence was already a known topic in philosophy, not just quantum science, because I still tried to avoid external influences, but it really was the breakthrough I needed, and I have gained many new insights from this knowledge since.
Eh, you should be reading books and papers in the literature if you are serious about this topic. I agree that a lot of philosophy out there is bad so sometimes external influences can be negative, but the solution to that shouldn't be to entirely avoid reading anything at all, but to dig through the trash to find the hidden gems.
My views when it comes to philosophy are pretty fringe as most academics believe the human brain can transcend reality and I reject this notion, and I find most philosophy falls right into place if you reject this notion. However, because my views are a bit fringe, I do find most philosophical literature out there unhelpful, but I don't entirely not engage with it. I have found plenty of philosophers and physicists who have significantly helped develop my views, such as Jocelyn Benoist, Carlo Rovelli, Francois-Igor Pris, and Alexander Bogdanov.
I find myself in agreement with your perspective although my own perspective is very different.
I have naturally shied away from existing philosophical literature and influences because I felt they stopped me from coming up with answers myself. Just like you, I find great joy in exploring solutions to mysteries.
I have been exploring the concepts of my consciousness and the reality I find in a (to me) very organized and structured (autistic) way, but as a side effect I have my own interpretations of concepts like what "information" is (the relation between more than a single something).
While I find my concepts often don't translate well into language compared with conventional knowledge, it's critical to note that as I read up more as a matured person, I find it adds to my personal understanding rather than contradicting it.
I gained a lot of knowledge from playing Outer Wilds and from things like the double-slit experiment, but I assure you I don't take games literally; most of my knowledge gaining is connecting pre-existing dots in my head after being inspired.
There is a good reason why my comment may seem mystifying.
Part of it is rooted in an observation I made: that objective truth can never be fully known. But hear me out.
"All our observations are made through a subjective lens" is too easy an argument, so I also have a historical example to illustrate.
The plague mask and outfit used flowers because knowledgeable people at the time thought the sickness was in the smell. We know this is not true, but the general idea was close enough, and it did have a noticeable beneficial effect.
That breakthrough is also an important link in the chain that led to further knowledge later on.
The science was incorrect, but also good enough to be useful.
By the same pattern, we believe we understand most materials, but a more advanced future scientist could know things that leave us no better than the plague doctors in general.
I made an important secondary conclusion based on experience.
There is no benefit to be gained by dismissing the most reasonable incorrect knowledge when there is no reasonable "more correct" interpretation to be found.
I am still not convinced these words translate my thoughts well, but what this second part does is resolve the mystic sentiment you mention.
We should strive to find solutions to the unknown because we know there is useful knowledge to be gained.
We should apply the reasonable knowledge we find to better our understanding and our lives.
But what we should also do, and this is the sentiment I am personally worried about, is keep looking beyond our understanding and keep trying to challenge what we do know.
What I initially found when exploring quantum mechanics is that early ideas were dismissed because they challenged what we thought we knew well, and that's a line of thinking I find personally shocking when it comes from a scientist. I salivate when reality breaks my initial understanding of it, because it's such an opportunity. Finding out just how wrong you are is a major step toward getting more correct.
I do have an additional personal bias because of my differences in interpretation. I can't put it more fairly than this: "I experience a kind of dogma about established knowledge when I present my own interpretation of often the same concept, which worries me for our ability to evolve our understanding of that concept." This happens frequently, so I experience a surplus of dogmatic scientific arguments around me, even if they might not be as dogmatic if the new knowledge were presented in a compatible interpretation.
I hope this clarified my posts a bit; thanks for the detailed reply.
Interesting perspective, although I don't see how some of your points might add up. Regardless, thank you for the elaboration! :)
These things are like arguing about whether or not a pet has feelings...
I'd say it's far more likely for a cat or a dog to have complex emotions and thoughts than for a human-made LLM to actually be thinking. It seems to me like the naivety of humankind that we even think we might have created something with consciousness.
I'm in the camp that thinks the LLMs are by and large a huge grift (that can produce useful output for certain tasks) by virtue of extreme exaggeration of the facts, but maybe I'm wrong.
These things are like arguing about whether or not a pet has feelings...
Mhm. And what's fundamentally wrong with such an argument?
I'd say it's far more likely for a cat or a dog to have complex emotions and thoughts than for a human-made LLM to actually be thinking.
Why?
I'm in the camp that thinks the LLMs are by and large a huge grift (that can produce useful output for certain tasks) by virtue of extreme exaggeration of the facts, but maybe I'm wrong.
Why?
I too see how grifters use AI to further their scams. That's with the case of any new tech that pops up. This however, doesn't make LLMs not interesting.
I like the video. I think it's fun to argue with ChatGPT. Just don't expect anything to come from it. Or get closer to any objective truth that way. ChatGPT is just backpedaling and getting caught up in lies / what it said earlier.
This all hinges on the definition of "conscious." You can make a valid syllogism that defines it, but that doesn't necessarily represent a reasonable or accurate summary of what consciousness is. There's no current consensus of what consciousness is amongst philosophers and scientists, and many presume an anthropocentric model with regard to humans.
I can't watch the video right now, but I was able to get ChatGPT to concede, in a few minutes, that it might be conscious, the nature of which is sufficiently different from humans so as to initially not appear conscious.
Exactly. Which is what makes this entire thing quite interesting.
Alex here (the interrogator in the video) is involved in AI safety research. Questions like "do the ethical frameworks of AI match those of humans" and "how do we get AI to not misinterpret inputs and do something dangerous" are very important to answer.
Following this comes the idea of consciousness. Can machine learning models feel pain? Can we unintentionally put such models into immense eternal pain? What even is the nature of pain?
Alex demonstrated that ChatGPT was lying intentionally. Can it lie intentionally about other things? What about the question of consciousness itself? Could we build models that intentionally fail the Turing test? Should we be scared of such a possibility?
Questions like these are really interesting. Unfortunately, they are shot down immediately on Lemmy, which is pretty disappointing.
Alex demonstrated that ChatGPT was lying intentionally
No, he most certainly did not. LLMs have no agency. "Intentionally" doing anything isn't possible.
LLMs have no agency.
Define "agency". Why do you have agency but an LLM doesn't?
"Intentionally" doing anything isn't possible.
I see "intention" as a goal in this context. ChatGPT explained that the goal was to make the conversation appear "natural" (which means human like). This was the intention/goal behind it lying to Alex.
That "intention" is not made by ChatGPT, though. Their developers intend for conversation with the LLM to appear natural.
ChatGPT says this itself. However, why does an intention have to be made by ChatGPT itself? Our intentions are often trained into us by others. Take the example of propaganda. Political propaganda, corporate propaganda (advertisements) and so on.
We have the ability to create our own intentions. Just because we follow others sometimes doesn't change that.
Also, if you wrote "I am conscious" on a piece of paper, does that mean the paper is conscious? Does this paper now have the intent to have a natural conversation with you? There is not much difference between that paper and what chatgpt is doing.
The main problem is the definition of what "us" means here. Our brain is a biological machine guided by the laws of physics. We have input parameters (stimuli) and output parameters (behavior).
We respond to stimuli. That's all that we do. So what does "we" even mean? The chemical reactions? The response to stimuli? Even a worm responds to stimuli. So does an amoeba.
There sure is complexity in how we respond to stimuli.
The main problem here is an absent objective definition of consciousness. We simply don't know how to define consciousness (yet).
This is primarily what leads to questions like u raised right now.
Questions like these are really interesting. Unfortunately, they are shot down immediately on Lemmy, which is pretty disappointing.
It's just because AI stuff is overhyped pretty much everywhere as a panacea to solve all ~~capitalist~~ ills. Seems like every other article, no matter the subject or demographic, is about how AI is changing/ruining it.
I do think that grappling with the idea of consciousness is a necessary component of the human experience, and AI is another way for us to continue figuring out what it means to be conscious, self-aware, or a free agent. I also agree that it's interesting to try to break AI and push it to its limits, but then, breaking software is in my professional interests!
Sounds cool! I'll see if my local libraries have a copy. Thanks for the rec!
It's just because AI stuff is overhyped pretty much everywhere as a panacea to solve all ~~capitalist~~ ills. Seems like every other article, no matter the subject or demographic, is about how AI is changing/ruining it.
Agreed :(
You know what's sad? Communities that look at this from a neutral, objective position (while still being fun) exist on Reddit. I really don't want to keep using it though. But I see nothing like that on Lemmy.
Lemmy is still in its infancy, and we're the early adopters. It will come into its own in due time, just like Reddit did.