this post was submitted on 13 Sep 2023
52 points (78.3% liked)


Some argue that bots should be entitled to ingest any content they see, because people can.

hoshikarakitaridia@sh.itjust.works 3 points 1 year ago (last edited 1 year ago)

Well, what an interesting question.

Let's look at the definitions in Wikipedia:

Sentience is the ability to experience feelings and sensations.

Experience refers to conscious events in general [...].

Feelings are subjective self-contained phenomenal experiences.

Alright, let's do a thought experiment under the assumptions that:

  • experience refers to the ability to retain information and apply it in some regard
  • phenomenal experiences can be described by some combination of sensory data
  • performance is not relevant; to establish theoretical possibility, we only need to assume that, given infinite time and resources, simulating sentience through AI is possible

AI works by showing the system what information goes in and what should come out; it then infers outputs for new patterns of information, adjusting by "how wrong it was" to approximate the correction. Every feeling in our body is either chemical or physical, so for simplicity's sake we can say it can be measured and simulated through data input.
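That training loop can be sketched in a few lines. This is only an illustrative toy (all names and the linear model are my own assumptions, not anything from a specific AI system): the model predicts an output, measures "how wrong it was", and nudges its parameters toward the correction.

```python
# Toy sketch of the loop described above: show the model input/output
# pairs, measure the error, and adjust the weights to reduce it.
# The linear model and all names are illustrative assumptions.

def train(samples, epochs=1000, lr=0.1):
    """Learn w and b so that y ~ w*x + b, from (x, y) pairs."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in samples:
            pred = w * x + b
            error = pred - y          # "how wrong it was"
            w -= lr * error * x       # adjust toward the correction
            b -= lr * error
    return w, b

# Toy data following y = 2x + 1; the loop recovers roughly w=2, b=1.
w, b = train([(0, 1), (1, 3), (2, 5), (3, 7)])
print(round(w, 2), round(b, 2))
```

Real systems use the same idea at vastly larger scale, with millions of parameters instead of two, which is why the "infinite time and resources" assumption above matters.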

Let's also say that, for our experiment, the appropriate output is to describe the feeling.

Now, knowing this, and knowing how well different AIs can already comment on, summarize, or perform other transformative tasks on larger texts (tasks that require interpreting data), I think such a system should be able to "express" what it feels. Let's also conclude this from the fact that everything needed to simulate feeling or sensation can be described using different data-point inputs.

This brings me to a second conclusion: scientifically speaking, there is nothing about sentience that we couldn't already simulate (in light of our assumptions).

Bonus: my little experiment only addresses theoretical possibility, and we'd need some proper statistical calculations to know whether this is practical within a realistic timeframe and with limited resources, but nothing says it can't be. I guess we have to wait for someone to try it to be sure.