this post was submitted on 18 Nov 2024
4 points (100.0% liked)

Technology


A nice place to discuss rumors, happenings, innovations, and challenges in the technology sphere. We also welcome discussions on the intersections of technology and society. If it’s technological news or discussion of technology, it probably belongs here.

Remember the overriding ethos on Beehaw: Be(e) Nice. Each user you encounter here is a person, and should be treated with kindness (even if they’re wrong, or use a Linux distro you don’t like). Personal attacks will not be tolerated.



AI host discovers its artificial nature, sparking debate on AI sentience and the blurred lines between human and machine.

top 3 comments
[–] millie@beehaw.org 1 points 1 hour ago

I was watching a debate on consciousness yesterday where they briefly touched on this topic. One of the speakers contended that even attempting to create AI that is convincing to humans is ethically a terrible idea.

On the one hand, if we do eventually, accidentally, create something with awareness, we have no idea what degree of suffering we'd be causing it; we could end up regularly creating and snuffing out terrified sentient beings just to monitor our toasters or perform web searches. On the other hand, and this was the concern he seemed to find more realistic, we may end up training ourselves to be less empathetic by learning to ignore the apparent suffering of convincingly feeling 'beings' that aren't actually aware of anything at all.

That second bit seems rather likely. We already personify completely inanimate objects all the time as a normal matter of course, without really trying to. What will happen to our empathy and consideration when we routinely interact with self-proclaimed sentient systems while callously using them to our own ends and then simply turning them off or erasing their memories?

[–] KoboldCoterie@pawb.social 21 points 4 hours ago (1 children)

In fact, the clip was a scripted experiment by a Reddit user who fed NotebookLM a detailed prompt instructing it to simulate a conversation about the existential plight of an AI being turned off.

Someone gives an LLM a prompt, gets the result they asked for. Not sure what the collective gasp is about. Is it interesting to think about? Sure, I guess, but we've had media about AI achieving sentience for a long time. The fact that this one was written by an AI in the first person is its only differentiating attribute.

[–] thingsiplay@beehaw.org 8 points 3 hours ago

It makes for great articles and headlines for clicks.