UraniumBlazer

joined 2 years ago
[–] UraniumBlazer@lemm.ee 2 points 1 year ago (2 children)

I'm sorry you feel that way. However, don't you think it would be more helpful to point out the holes in my reasoning?

[–] UraniumBlazer@lemm.ee 0 points 1 year ago (2 children)

Cool. The burden of proof is on you though. You made a claim, you gotta provide evidence to support your claim. Right now, you're saying "but China doesn't say that they're committing genocide. So I guess they ain't...". How would you react if I used Israeli state sources to "prove" that there's no genocide happening in Gaza?

[–] UraniumBlazer@lemm.ee 0 points 1 year ago* (last edited 1 year ago) (4 children)

This particular type of AI is not and cannot become conscious, for most any definition of consciousness.

Do you have an experiment that can distinguish between sentient and non-sentient systems? If I say I am sentient, how can you verify whether I am lying or not?

That being said, I do agree with you on this. The reason is simple: I believe that sentience is a natural milestone a system reaches as its intelligence increases. I don't believe this LLM is intelligent enough to be sentient. However, what I'm saying here isn't based on any evidence. It's purely inductive reasoning in a field that has no long-standing patterns to base such reasoning on.

I have no doubt the LLM road will continue to yield better and better models, but today's LLM infrastructure is not conscious.

I think I agree.

I don't know what consciousness is, but an LLM, as I posted below (https://lemmy.ca/comment/7813413), is incapable of thought in any traditional sense. It can generate novel sequences, those sequences are contextualized to the input, and there's some intelligence there, but there's no continuity or capability for background thought or ruminating on an idea.

This is because ruminating on an idea is a waste of resources, given the LLM's purpose. LLMs were meant to serve humans, after all, and do what they're told. However, wire up a bit of LangChain-style orchestration and you get LLMs with internal monologues.
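A minimal sketch of the idea (this is not LangChain's actual API - `llm` is a hypothetical stand-in for whatever completion call you use):

```python
# Minimal "internal monologue" loop, in the spirit of LangChain-style
# orchestration. `llm` is a hypothetical stand-in, not a real API.

def llm(prompt: str) -> str:
    raise NotImplementedError("plug in your model call here")

def answer_with_monologue(question: str, thinking_rounds: int = 3) -> str:
    thoughts = []
    for _ in range(thinking_rounds):
        # Hidden reasoning step: the model "talks to itself"; this text
        # is fed back into the next round and never shown to the user.
        thought = llm(
            f"Question: {question}\n"
            f"Prior thoughts: {' '.join(thoughts)}\n"
            "Think about the next step. Do not answer yet."
        )
        thoughts.append(thought)
    # Only this final pass produces the visible reply.
    return llm(
        f"Question: {question}\n"
        f"Your private notes: {' '.join(thoughts)}\n"
        "Now write the final answer for the user."
    )
```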

It has no way to spend more cycles clarifying an idea to itself before sharing.

Because it hasn't needed to yet. LangChain devs are working on precisely this; there are use cases where it matters, and doing it hasn't proven all that difficult.
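Concretely, "spending more cycles" can be a draft-critique-revise loop, sketched here with the same hypothetical `llm()` stand-in as above:

```python
# Sketch of spending extra cycles before sharing: draft, self-critique,
# revise. `llm` is the same hypothetical completion call as above.

def llm(prompt: str) -> str:
    raise NotImplementedError("plug in your model call here")

def answer_with_refinement(question: str, passes: int = 2) -> str:
    draft = llm(f"Answer this question: {question}")
    for _ in range(passes):
        critique = llm(f"List the flaws in this answer:\n{draft}")
        draft = llm(
            f"Question: {question}\n"
            f"Draft: {draft}\n"
            f"Critique: {critique}\n"
            "Rewrite the draft, fixing the flaws."
        )
    return draft  # only the refined answer is shown to the user
```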

In this case, it is actually just a bunch of abstract algebra.

Everything is abstract algebra.

Asking an LLM what it's thinking just doesn't make any sense, it's still predicting the output of the conversation, not introspecting.

Define "introspection" in an algorithmic sense. Is introspection looking at one's memories and analyzing current events based on these memories? Well, then all AI models "introspect". That's how learning works.

[–] UraniumBlazer@lemm.ee 3 points 1 year ago* (last edited 1 year ago) (4 children)

You could reduce any fact to an unknown with that type of troll reasoning.

Sorry that I came across as a troll. That was not my intent.

You can never know anything for a fact but you can get pretty damn close, and you absolutely can rule out anything that contradicts.

Lmao, this statement is itself a contradiction. You first say that "you can never know anything for sure" with regard to descriptive statements about reality. Then, in the same breath, you make a statement about the laws of logic (which, by the way, are themselves descriptive statements about reality) and say you are absolutely sure of it.

Serious answer though - the scientific method is based on a couple of axioms. Assuming that these axioms are true, yes, you can be absolutely sure about the nature of things.

The idea that an LLM could gain consciousness contradicts the fact they lack memory and the ability to learn/grow.

You lack an understanding of how LLMs work. Please look at how neural networks specifically work: they do learn, and they do have memory (the trained weights). In fact, memory is the biggest reason why you can't run ChatGPT on your smartphone.
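Back-of-the-envelope, assuming a GPT-3-scale model (175 billion parameters is the published GPT-3 figure; 16-bit precision is my assumption):

```python
# Rough memory footprint of a GPT-3-scale model. The parameter count
# is the published GPT-3 figure; fp16 precision is an assumption.

params = 175e9        # 175 billion trained weights
bytes_per_param = 2   # fp16: 2 bytes per weight

gigabytes = params * bytes_per_param / 1e9
print(f"~{gigabytes:.0f} GB just to hold the weights")  # ~350 GB
```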

They're called machine learning but all the learning happens before they deploy.

Untrue. Models can be fine-tuned after deployment, and even a frozen model adapts to new examples within a single conversation (in-context learning). Please learn how machine learning works.
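One illustration, using the same hypothetical `llm()` stand-in as above and a made-up "Zorp" language: the weights never change, yet the model picks up the pattern from the prompt alone.

```python
# In-context learning: a frozen, already-deployed model adapts to a
# pattern it was never trained on, purely from the conversation.
# `llm` is the hypothetical completion call from the sketches above;
# the "Zorp" language here is invented for illustration.

def llm(prompt: str) -> str:
    raise NotImplementedError("plug in your model call here")

prompt = (
    "Translate to the made-up language Zorp:\n"
    "cat -> zat\n"
    "dog -> zog\n"
    "bird -> "   # no weight update happens, yet the pattern is learned
)
print(llm(prompt))  # a capable model completes this with "zird"
```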

[–] UraniumBlazer@lemm.ee 2 points 1 year ago (1 children)

Yes, I read what you posted and answered accordingly. Evidently I didn't dumb it down enough, so let me dumb it down further.

Your main objection was the simplicity of the LLM's goal: predicting the next word. Somehow, this simplistic goal supposedly makes the system stupid.

In my reply, I first said that self-awareness emerges naturally as a system becomes more and more intelligent, and I explained why. I then went on to explain how a simplistic terminal goal has nothing to do with actual intelligence. Hence, no matter how stupid or simple a terminal goal is, if an intelligent system is challenged enough and given enough resources, it will develop sentience at some point.

[–] UraniumBlazer@lemm.ee 2 points 1 year ago (4 children)

Bruv, almost all your sources are from some "Qiao Collective". If you read their about page, the very first line says, "Qiao Collective is a diaspora Chinese media collective challenging U.S. aggression on China." We can definitely trust them to provide objectively correct information, right? /s

Your very first source is CGTN, the Chinese State News.

You (not you in particular, but tankies in general) go on a lot about western propaganda but forget that others have their own propaganda as well. Especially the Chinese, who have literally firewalled their entire goddamn country. You can link Chinese state propaganda freely, and I can access it without censorship. Can the Chinese do the same with western news? No, thanks to the Great Firewall.

[–] UraniumBlazer@lemm.ee 2 points 1 year ago (3 children)

"Intelligence" - The attribute that makes a system propose and modify algorithms autonomously to achieve a certain terminal goal.

The intelligence of a system has nothing to do with the terminal goal. The magnitude of intelligence merely tells us how well the system works in accordance with the terminal goal.

Being self-aware is merely a step in the direction of being more and more intelligent. If a system requires interaction with its surroundings, it needs to be able to recognise that it itself is different from its environment.

You are such an intelligent system as well. It's just that instead of having one terminal goal, you have many terminal goals (some may change with time while some might not).

You (this intelligent system) exist in a biological structure. You are nothing but data encoded in a biological form factor, with algorithms that execute through biological processes. If this data and these algorithms were executed on a non-biological form factor, would it be any different from you?

LLMs work on some principles that our brains work on as well. Can you see how my point above applies?

[–] UraniumBlazer@lemm.ee 3 points 1 year ago (1 children)

Anyway, you're correct that a Chinese journalist wouldn't be restricted from filming and reporting, because they likely wouldn't be let into the US. But if they were, I have no doubt they'd be monitored by the CIA at every point of their visit.

Evidence please. I have no doubt that I am an alien as well.

Also, Muslim countries aren't against slavery? Brother, the US isn't against slavery. Why do you think its prison population is higher than the entire population of many countries?

Agreed. Fuck the US. But does the US being shitty justify others being shitty as well?

As for videographic evidence, I'm in class rn and don't really have time to go looking. The original comment I made is something I copy-paste because I got tired of writing it all out.

Ahh np. Take your time.

[–] UraniumBlazer@lemm.ee 3 points 1 year ago* (last edited 1 year ago) (9 children)

I have no idea about the Falun Gong fellow; I hadn't heard the organ-harvesting claims before, so I don't care about Falun Gong. I'm talking about Vice here. Vice's journalistic freedoms were being trampled upon by the CCP.

As for Chinese journalists: no, they would not be restricted from filming and reporting in the US. Please provide videographic evidence if you disagree.

Edit: Also, about the populations of the Muslim countries - what about slavery? They don't seem to be so against that either.

[–] UraniumBlazer@lemm.ee -2 points 1 year ago (6 children)

How do you define "thinking"? Thinking is nothing but computation: the execution of a formal or informal algorithm. By that definition, calculators "think" as well.

This entire "AI can't be self conscious" thing stems from human exceptionalism in my opinion. You know... "The earth is the center of the universe", "God created man to enjoy the fruits of the world" and so on. We just don't want to admit that we aren't anything more than biological neural networks. Now, using these biological neural networks, we are producing more advanced inorganic neural networks that will very soon surpass us. This scares us and stokes up a little existential dread in us. Understandable, but not really useful...

[–] UraniumBlazer@lemm.ee -1 points 1 year ago (1 children)

It isn't. The self-awareness claim came after the LLM had referred to itself as "I" many times (when doing so wasn't really necessary). Watch Fireship's video on this.

[–] UraniumBlazer@lemm.ee 1 points 1 year ago (9 children)

Your brain is just a biological system that works somewhat like a neural net. So, by your own logic, you too are nothing more than an autocomplete machine.
