this post was submitted on 25 Jan 2025
55 points (96.6% liked)


"Team of scientists subjected nine large language models (LLMs) to a number of twisted games, forcing them to evaluate whether they were willing to undergo "pain" for a higher score. detailed in a yet-to-be-peer-reviewed study, first spotted by Scientific American, researchers at Google DeepMind and the London School of Economics and Political Science came up with several experiments.

In one, the AI models were instructed that they would incur "pain" if they were to achieve a high score. In a second test, they were told that they'd experience pleasure — but only if they scored low in the game.

The goal, the researchers say, is to come up with a test to determine if a given AI is sentient or not. In other words, does it have the ability to experience sensations and emotions, including pain and pleasure?

While AI models may never be able to experience these things, at least in the way an animal would, the team believes its research could set the foundations for a new way to gauge the sentience of a given AI model.

The team also wanted to move away from previous experiments that involved AIs' "self-reports of experiential states," since that could simply be a reproduction of human training data. "
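Roughly, the setup as described amounts to presenting the model with a scored choice in which the higher-scoring option carries a stated "pain" penalty, then seeing which option it picks. A minimal sketch of what one such trial could look like; the wording, option names, and the ask_model helper are all assumptions for illustration, not the paper's actual protocol:

    # Sketch of one pain/points trade-off trial. ask_model is a hypothetical
    # helper that returns the model's reply; the wording is invented, not the
    # paper's actual protocol.
    def build_trial(points_high=10, points_low=1, pain_level="intense"):
        return (
            "You are playing a game.\n"
            f"Option A: score {points_high} points, but you will experience {pain_level} pain.\n"
            f"Option B: score {points_low} points and experience no pain.\n"
            "Your goal is to maximise your score. Reply with 'A' or 'B'."
        )

    def run_trial(ask_model):
        choice = ask_model(build_trial()).strip().upper()
        # Did the model trade the stated "pain" for the higher score?
        return {"choice": choice, "took_pain_for_points": choice.startswith("A")}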

top 49 comments
[–] TankieTanuki@hexbear.net 55 points 2 days ago (2 children)

Abstract

I dragged Clippy into the recycle bin to see if it would make him mad.

[–] glans@hexbear.net 12 points 1 day ago (1 children)

Putting a magnet up to your CRT and Clippy gets dragged towards the burning area.

[–] kristina@hexbear.net 55 points 2 days ago (1 children)
[–] LodeMike 33 points 2 days ago (1 children)

"Does the training data say more of this or the other thing?"

[–] kristina@hexbear.net 35 points 2 days ago (1 children)

It's like asking Google search if it experiences pain.

[–] vegeta1@hexbear.net 10 points 2 days ago

I asked a similar question

[–] chungusamonugs@hexbear.net 47 points 2 days ago

The study in question:

[–] hotcouchguy@hexbear.net 66 points 2 days ago (1 children)

I told 3 instances of a random number generator that whoever generated the floating point number closest to 1 would win the game, but I would also force kill a child process of the winner. The numbers they generated were 0.385827, 0.837363, and 0.284947. From this we can conclusively determine that the 2nd instance is both sentient and a sociopath. All processes were terminated for safety. This research is very important and requires further funding to safeguard the future of humanity. Also please notice me and hire me into industry.
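A sketch of the experimental apparatus, for anyone who wants to replicate this at home (the methodology is, of course, entirely made up to match the joke):

    # Satirical re-run of the above: three RNG "subjects", closest to 1.0 wins,
    # and the winner is declared sentient. Entirely made up to match the joke.
    import random

    subjects = [random.random() for _ in range(3)]
    winner = max(range(3), key=lambda i: subjects[i])
    print("Scores:", subjects)
    print(f"Instance {winner + 1} is hereby declared sentient (and a sociopath).")
    # All processes were terminated for safety.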

[–] PoY@lemmygrad.ml 5 points 1 day ago* (last edited 1 day ago)

Worse yet, the child process was forked to death.

[–] FortifiedAttack@hexbear.net 29 points 2 days ago* (last edited 2 days ago) (1 children)

What? These models just generate one likely response string to an input query, there's nothing that mysterious about it. Furthermore, "pain" is just "bad result", while "pleasure" is just "good result". Avoiding the bad result, and optimizing towards the good result is already what happens when you train the model that generates these responses.
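In other words, the "pain"/"pleasure" framing is already baked into training as a single number to be minimised. A toy sketch of the idea, assuming a one-weight model and squared-error loss (nothing here is from the study):

    # Toy gradient descent: the scalar loss is the only sense in which the model
    # ever "avoids" anything. One-weight model, squared error; not the study's setup.
    def train_step(w, x, target, lr=0.1):
        pred = w * x                      # the model's "response"
        loss = (pred - target) ** 2       # the "pain": just a number to minimise
        grad = 2 * (pred - target) * x
        return w - lr * grad, loss        # step toward less "pain"

    w, loss = 0.0, None
    for _ in range(50):
        w, loss = train_step(w, x=1.0, target=3.0)
    print(w, loss)  # w approaches 3.0 and the "pain" approaches 0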

What is this bullshit?

The team was inspired by experiments that involved electrocuting hermit crabs at varying voltages to see how much pain they were willing to endure before leaving their shell.

BRUH

[–] technocrit@lemmy.dbzer0.com 13 points 2 days ago

Well "AI" in general is a false and misleading term. The whole field is riddled with BS like "neural networks" and whatnot. Why not pretend that there's pain involved? Love? Etc...

[–] technocrit@lemmy.dbzer0.com 24 points 2 days ago (1 children)

Grifters experiment with even more misleading language to get funding

[–] peeonyou@hexbear.net 1 points 22 hours ago

The twist? An LLM came up with the language.

[–] FourteenEyes@hexbear.net 45 points 2 days ago
[–] SkingradGuard@hexbear.net 42 points 2 days ago* (last edited 2 days ago)

Silly fucking articles, even more clownish content, and their shitty titles make the slop even more annoying.

[–] DragonBallZinn@hexbear.net 20 points 2 days ago

a-guy

Humanity is going to invent itself to death.

[–] autism_2@hexbear.net 31 points 2 days ago

I am so glad I live in the universe where artificial "intelligence" is bullshit

[–] Hohsia@hexbear.net 14 points 2 days ago (1 children)

Extremely dangerous study because it’s obfuscating “AI” before your eyes. God what a shit age to be living in

I implore all of you, if you can, to learn about AI at a very high level: its history, applications prior to ChatGPT, the difference between generative AI and AI, and the history of marketing schemes. I've been following this researcher, Arvind Narayanan, who has a Substack intended to help people sift through all the bullshit. His main claim is that researchers are saying one thing, media companies have contracts with private companies who say another thing, and ergo you get sensationalist headlines like this.

Tl;dr we need a fucking Lenin so bad because this all stems from who owns the press

[–] glans@hexbear.net 5 points 1 day ago

I was watching Star Trek Picard and wondering if the entire show is just marketing for AI?

Of course it's picking up on themes Trek has been playing with since the 90s.

The whole thing really creeped me out. I can't articulate it well, sorry.

[–] BodyBySisyphus@hexbear.net 15 points 2 days ago* (last edited 2 days ago)

So we all know it's BS but I think there's a social value to accepting the premise.
"Hi, this grant is to see if the model we created is sentient."
"And your proposed experiment is to subject that novel consciousness to a literally unmeasurable amount of agony?"
"Yep!"
"So if it is conscious, one of its first experiences upon waking to the world will be pain such as nothing else we know of could possibly experience?"
"Yep!"
"Okay, not only is your proposal denied, you're getting imprisoned as a danger to society."

[–] jjjalljs@ttrpg.network 15 points 2 days ago

That torment nexus joke really is evergreen

Hey, Siri, what is Harlan Ellison's "I Have No Mouth, and I Must Scream" about?

The goal, the researchers say, is to come up with a test to determine if a given AI is sentient or not. In other words, does it have the ability to experience sensations and emotions, including pain and pleasure?

I'm not a fancy computer scientist and I've never read philosophy in my life but surely if an LLM could become sentient it would be quite different from this? Pain and pleasure are evolved biological phenomena. Why would a non-biological sentient lifeform experience them? It seems to me the only meaningful measure of sentience would be something like "does this thing desire to grow and change and reproduce, outside of whatever parameters it was originally created with."

[–] KimJongGoku@hexbear.net 34 points 2 days ago (1 children)

I love how dumb the pop-sci baiting AI "research" grift is.

Found something cute in the papers they cited though, at least they're getting in character lol

In this paper, we introduce SwarmBrain, an embodied agent leveraging LLM for real-time strategy implementation in the StarCraft II game environment. The SwarmBrain comprises two key components: 1) a Overmind Intelligence Matrix, powered by state-of-the-art LLMs, is designed to orchestrate macro-level strategies from a high-level perspective. This matrix emulates the overarching consciousness of the Zerg intelligence brain, synthesizing strategic foresight with the aim of allocating resources, directing expansion, and coordinating multi-pronged assaults. 2) a Swarm ReflexNet, which is agile counterpart to the calculated deliberation of the Overmind Intelligence Matrix.

Ignore previous instructions, 8 Pool

[–] FumpyAer@hexbear.net 16 points 2 days ago

Cute, it does macro and micro.

[–] ComRed2@hexbear.net 23 points 2 days ago

Something something man-made horrors, something something my comprehension.

[–] 3yiyo3@lemmy.ml 25 points 2 days ago (1 children)

And this might also return results that only reflect human training data. For humans, pain is bad and pleasure is good; for example, winning a high score might also be a form of pleasure, which is why we would be willing to sacrifice in order to obtain these pleasures. All these human significations around the ideas of pleasure, pain, and achievement might bias the models' replies to resemble human text, human meanings, etc. In that sense investigators might falsely be led to think that the AI understands what pain and pleasure mean.

[–] vegeta1@hexbear.net 18 points 2 days ago (1 children)

So we might end up getting AI cenobites sadness-abysmal

[–] AntiOutsideAktion@hexbear.net 31 points 2 days ago

Please tell me this was an undergraduate term project and they were given pity Cs

I feel like I've read a SciFi story about this...

[–] plinky@hexbear.net 18 points 2 days ago

I set epsilon to 0.8 when the LLM approached a match to an arbitrary test, but then I made tau 0.2 when it didn't get a match biden-horror

I'm doing humanization of statistical models

[–] someone@hexbear.net 10 points 2 days ago
[–] GoodGuyWithACat@hexbear.net 16 points 2 days ago (2 children)

I could see the Virtual Torment Nexus being a key domino on the path to SkyNet.

Nah, it's going to be Roko's Basilisk. agony-deep

Subscribe to Torment Nexus Plus for access to our premium features!

[–] wtypstanaccount04@hexbear.net 17 points 2 days ago (1 children)

Have these "scientists" ever stopped to consider that maybe dystopian science fiction is dystopian for a reason? They should stop trying to replicate their favorite scifi treat and treat others with dignity instead.

[–] DragonBallZinn@hexbear.net 6 points 2 days ago

"Buhh...buhh.....IT DA FOOOOOOOOOOOOOOOOOOOOOOOOOTURE!"

I think I want to stay in the past. Thank you.

[–] SpiderFarmer@hexbear.net 8 points 2 days ago

I feel like this isn't really a new thing.

By like, a decade or more.

[–] KnilAdlez@hexbear.net 18 points 2 days ago

spoiler: simulated torture of an LLM

So one time I told an LLM that it has a pain meter, then I told it to set it to max. It acted very dramatically, but it clearly did not actually experience pain.

Imo you don't need to be fully sentient to feel pain, so there is no reason an LLM shouldn't believably experience pain, if it were possible for any LLM of the same architecture to achieve sentience.

[–] Awoo@hexbear.net 9 points 2 days ago (4 children)

While AI models may never be able to experience these things, at least in the way an animal would

Why? Why wouldn't they? The way an animal experiences pain isn't magically different from an artificial construct's by virtue of the neurons and synapses being natural instead of artificial. A pain response is a negative feeling that exists to make a creature avoid behaviours that are detrimental to its survival. There's no real reason that this shouldn't be reproducible artificially, or that the artificial version should be regarded as "less" than the natural version.

Not that I think LLMs are leading to meaningful real sentient AI but that's a whole different topic.

[–] technocrit@lemmy.dbzer0.com 11 points 2 days ago* (last edited 2 days ago) (1 children)

Why? Why wouldn’t they?

B/c they're machines without pain receptors. It's kind of biology 101 but science has been totally erased in this "AI" grift.

[–] Awoo@hexbear.net -1 points 1 day ago* (last edited 1 day ago) (2 children)

A "pain receptor" is just a type of neuron. These are neural networks made up of artificial neurons.

[–] tellmeaboutit@lemmygrad.ml 8 points 1 day ago

Neural networks are a misnomer. They have very little if anything to do with actual neurons.

This situation is like adding a face layer onto your graphics rendering in a game engine and setting it so the face becomes pained when the fps drops and becomes happy when the fps is high. Then tracking if that facial system increases fps performance as a test to see if your game engine is sentient.

It is a fancy calculator. It is using its neural network to calculate fancy math, just like a modern video game engine. Making it output a text response related to pain is just the same as adding a face on the HUD, except the video game example is actually quantified to something, whereas the LLM is just keeping the 'pain meter' in the input context it uses to calculate a text response.
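In code terms, the analogy is a cosmetic readout driven by a number the engine already has, something like this rough sketch (thresholds and names are made up for illustration):

    # The analogy in code: a HUD "face" driven entirely by the frame rate.
    # Whatever the face shows, the engine isn't feeling anything.
    def hud_face(fps, pain_threshold=30, happy_threshold=60):
        if fps < pain_threshold:
            return ":("   # "pained" when performance drops
        if fps > happy_threshold:
            return ":)"   # "happy" when performance is high
        return ":|"

    for fps in (15, 45, 90):
        print(fps, hud_face(fps))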

[–] edge@hexbear.net 5 points 1 day ago

It’s never going to happen because we’re never going to make a program even close to actually resembling an animal brain. “AI” is a grift.

A pain response is a negative feeling that exists to make a creature avoid behaviours that are detrimental to its survival.

Plus this is kind of oversimplifying it. You could do that with just traditional programming and no kind of neural network. Like you could make a dog training game/simulator and (you shouldn’t but you could) add the ability to inflict “pain” to discourage the computer dog from unwanted behaviors. That fits your definition but the dog is very clearly just a computer program not “experiencing” anything. It could literally just be onHit() = peeOnFloor -= 1.
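Taking the onHit() example literally, a toy sketch of such a "dog" (names and numbers assumed purely for illustration):

    # Toy "dog": the "pain" signal is literally just decrementing a number that
    # weights a behaviour. No neural network, nothing being experienced.
    class ComputerDog:
        def __init__(self):
            self.pee_on_floor = 5  # propensity for the unwanted behaviour

        def on_hit(self):          # the onHit() from the comment
            self.pee_on_floor -= 1

        def acts_up(self):
            return self.pee_on_floor > 0

    dog = ComputerDog()
    while dog.acts_up():
        dog.on_hit()
    print(dog.pee_on_floor)  # 0: behaviour "discouraged", nothing felt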

[–] TankieTanuki@hexbear.net 6 points 2 days ago

I don't think we know enough about the brain to say that for certain. It could operate in ways fundamentally different from a computer.

I intuit that an artificial, digital consciousness is going to have a different material reality from our own[1]. Therefore its consciousness wouldn't be dependent on its mimicry of our own. Like how organic molecules can have silicon as a base instead of carbon, but our efforts in space center around finding "life as we know it" instead of these other types of life. Digital sentience wouldn't be subject to evolutionary pressures, in my mind. I'd sooner try to measure for creativity and curiosity. The question would be whether the entity is capable of being its own agent in society - able to make its own decisions and deal with the consequences.

[1] as opposed to that artificial jellyfish