UraniumBlazer

joined 1 year ago
[–] UraniumBlazer@lemm.ee 2 points 10 months ago

Oooo yummyyy

[–] UraniumBlazer@lemm.ee 3 points 10 months ago

Oooo amazing!!!

[–] UraniumBlazer@lemm.ee 4 points 10 months ago (17 children)

Nice, but the presentation needs work.

[–] UraniumBlazer@lemm.ee 52 points 10 months ago (7 children)

TLDR: Let's say you want to teach an LLM a new skill. You give it training data pertaining to that skill. Currently, many researchers believe that this skill shows up suddenly, in a breakthrough fashion. They think so because of the metrics they use to measure the skill: the measured skill level stays very low as models scale up, then unpredictably jumps up like crazy. That jump is the "breakthrough".

BUT, the paper that this article references points at flaws in those measurement methods. It suggests that the breakthrough behavior doesn't really exist: measured differently, skill development is actually quite smooth and predictable.
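To make the measurement point concrete, here's a toy illustration (my own sketch with made-up numbers, not something taken from the paper): if the underlying per-token accuracy improves smoothly but you score the skill with an all-or-nothing metric like exact match, the curve you plot looks like a sudden breakthrough.

```python
import numpy as np

# Pretend per-token accuracy improves smoothly as models scale up
# (made-up numbers, purely for illustration).
per_token_acc = np.linspace(0.50, 0.98, 11)

# Harsh metric: the skill only "counts" if all 20 tokens of the answer are right.
exact_match = per_token_acc ** 20

for step, (p, em) in enumerate(zip(per_token_acc, exact_match)):
    print(f"scale step {step:2d}: per-token acc = {p:.2f}, exact match = {em:.4f}")

# The per-token column climbs steadily, but the exact-match column sits near
# zero for most of the run and then shoots up at the end: an apparent
# "breakthrough" created entirely by the choice of metric.
```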

Also, uhhh I'm not AI (I see that TLDR bot lurking everywhere, which is what made me specify this).

[–] UraniumBlazer@lemm.ee 1 points 10 months ago* (last edited 10 months ago)

Fair. I'm a little tired right now, but this article says exactly everything that I want/have to say.

This article is by the Communist Party of India (Marxist-Leninist). So just for context, there are two communist parties in India (CPI and CPIM). The CPIM split from the CPI during the Sino-Soviet split; the CPIM was the pro-China one. In my opinion, in the modern day, both of them are absolute chads.

I love this article because it does not approach the topic in a vacuum. It compares the Uighur genocide to what the Indian State has been doing in Kashmir since independence. The similarities are uncanny. Kashmir, Xinjiang, the Rohingyas, Kurdistan... the story of the oppressors here is the same throughout.

[–] UraniumBlazer@lemm.ee 2 points 10 months ago

This just shows that we have different definitions of sentience. I define sentience as the ability to be self-aware and the ability to link senses of external stimuli to the self. Your definition involves short-term memory and weight adjustment as well.

However, there is no consensus on the definition of sentience yet, for a variety of reasons. Hence, none of our definitions are "wrong". At least not yet.

[–] UraniumBlazer@lemm.ee 1 points 10 months ago

I don't know what a "debate pervert" is. What I do know is that I wasn't engaging with the intent of "winning" anything.

[–] UraniumBlazer@lemm.ee 2 points 10 months ago* (last edited 10 months ago) (2 children)

Correct. So basically, you are talking about it adjusting its own weights while talking to you. It does this in training but not in deployment. The reason it doesn't do this in deployment is to prevent bad training data from degrading the quality of the model; all data needs to be vetted before training.

However, if you look at the training phase, it does adjust its weights, as you said. So in short: it doesn't adjust its weights in production not because it can't, but because WE have prevented it from doing so.
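To make that concrete, here's a minimal PyTorch-style sketch (a toy of my own, not how any real LLM is actually served) of the difference: in training, gradients flow and the weights move; in deployment, the same model runs with gradients disabled and its weights frozen.

```python
import torch
import torch.nn as nn

# Stand-in for an LLM: a tiny toy model (a real LLM is just a much bigger
# stack of the same kind of parameterized layers).
model = nn.Linear(16, 16)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
x, target = torch.randn(4, 16), torch.randn(4, 16)

# Training: weights are adjusted after every batch of vetted data.
model.train()
loss = nn.functional.mse_loss(model(x), target)
loss.backward()
optimizer.step()          # <- the weights change here

# Deployment: same model, deliberately frozen. Nothing the user types can
# move the weights, because we never compute or apply gradients.
model.eval()
with torch.no_grad():     # <- gradient tracking (and thus learning) is off
    output = model(x)
```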

Now, about needing to learn and "mutate" in deployment to be sentient: I don't think this is necessary for sentience. Take Alzheimer's patients. They remember shit from decades ago while forgetting recent stuff. Are they not sentient? An Alzheimer's patient wouldn't be able to pick up a new skill (which requires adjusting neural weights). That still doesn't make them non-sentient, does it?

[–] UraniumBlazer@lemm.ee 2 points 10 months ago (2 children)

I'm sorry you feel that way. However, don't you think it would be more helpful to point out the holes in my reasoning?

[–] UraniumBlazer@lemm.ee 0 points 10 months ago (2 children)

Cool. The burden of proof is on you though. You made a claim; you gotta provide evidence to support it. Right now, you're saying "but China doesn't say that they're committing genocide, so I guess they ain't...". How would you react if I used Israeli state sources to "prove" that there's no genocide happening in Gaza?

[–] UraniumBlazer@lemm.ee 0 points 10 months ago* (last edited 10 months ago) (4 children)

> This particular type of AI is not and cannot become conscious, for most any definition of consciousness.

Do you have an experiment that can distinguish between sentient and non-sentient systems? If I say I am sentient, how can you verify whether I am lying or not?

That being said, I do agree with you on this. The reason is simple: I believe that sentience is a natural milestone that a system reaches as its intelligence increases, and I don't believe this LLM is intelligent enough to be sentient. However, what I'm saying here isn't based on any evidence. It's purely inductive reasoning in a field that has no long-standing patterns to base that reasoning on.

> I have no doubt the LLM road will continue to yield better and better models, but today's LLM infrastructure is not conscious.

I think I agree.

> I don't know what consciousness is, but an LLM, as I posted below (https://lemmy.ca/comment/7813413), is incapable of thought in any traditional sense. It can generate novel new sequences, those sequences are contextualized to the input, and there's some intelligence there, but there's no continuity or capability for background thought or ruminating on an idea.

This is because ruminating on an idea is a waste of resources given the purpose of the LLM. LLMs were meant to serve humans, after all, and to do what they're told. However, with a little bit of LangChain wiring you can get LLMs that have internal monologues.

> It has no way to spend more cycles clarifying an idea to itself before sharing.

Because it hasn't needed to yet. LangChain devs are working on precisely this, and there are use cases where it's important. Doing it hasn't proven to be all that difficult.
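As a rough sketch of what that pattern looks like (plain Python, with a hypothetical llm_complete() placeholder standing in for whatever LangChain chain or API client you'd actually wire up): the model first "ruminates" privately on a scratchpad, and only the second pass is shown to the user.

```python
def llm_complete(prompt: str) -> str:
    """Placeholder for a real model call (swap in your LangChain chain or API client)."""
    raise NotImplementedError("wire this up to an actual LLM")

def answer_with_monologue(question: str) -> str:
    # Pass 1: private "rumination". The model drafts its reasoning to itself,
    # and this scratchpad text is never shown to the user.
    scratchpad = llm_complete(
        "Think step by step about how to answer the question below. "
        "This is a private scratchpad; the user will not see it.\n\n" + question
    )

    # Pass 2: the model reads its own scratchpad and writes the reply the user
    # actually sees, i.e. it spends extra cycles clarifying the idea first.
    return llm_complete(
        "Question: " + question
        + "\n\nYour private notes:\n" + scratchpad
        + "\n\nNow write the final answer for the user."
    )
```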

> In this case, it is actually just a bunch of abstract algebra.

Everything is abstract algebra.

> Asking an LLM what it's thinking just doesn't make any sense, it's still predicting the output of the conversation, not introspecting.

Define "introspection" in an algorithmic sense. Is introspection looking at one's memories and analyzing current events based on these memories? Well, then all AI models "introspect". That's how learning works.
