UraniumBlazer

joined 1 year ago
[–] UraniumBlazer@lemm.ee 3 points 4 months ago

Good for you 👍

[–] UraniumBlazer@lemm.ee -1 points 4 months ago (3 children)

A conscious system has to have some baseline level of intelligence that's multiple orders of magnitude higher than LLMs have.

Does it? By that definition, dogs aren't conscious. Apes aren't conscious. Would you say neither of them is self-aware?

If you're entertained by an idiot "persuading" something less than an idiot, whatever. Go for it.

Why the toxicity? You might disagree with him, sure. Why go further and berate him?

[–] UraniumBlazer@lemm.ee 4 points 4 months ago (1 children)

vocal chloroform.

Haha I'm stealing that

[–] UraniumBlazer@lemm.ee -5 points 4 months ago (11 children)

Exactly. Which is what makes this entire thing quite interesting.

Alex here (the interrogator in the video) is involved in AI safety research. Questions like "Do the ethical frameworks of AI match those of humans?" and "How do we get AI to not misinterpret inputs and do something dangerous?" are very important to answer.

Following this comes the idea of consciousness. Can machine learning models feel pain? Can we unintentionally put such models into immense eternal pain? What even is the nature of pain?

Alex demonstrated that ChatGPT was lying intentionally. Can it lie intentionally for other things? What about the question of consciousness itself? Could we build models that intentionally fail the Turing test? Should we be scared of such a possibility?

Questions like these are really interesting. Unfortunately, they are shot down immediately on Lemmy, which is pretty disappointing.
