[–] Poggervania@kbin.social 82 points 11 months ago (2 children)

Cyberpunk 2077 sorta explores this a bit.

There’s a vending machine with a personality that talks to people walking by. The quest chain basically has you chatting with the vending machine, even giving him advice about a person he has a crush on, and you eventually become friends with him.

Just when it seems increasingly apparent that it’s an AI developing sentience, it turns out the vending machine just has a really well-coded socializing program. He even admits as much when he’s about to be deactivated.

So, to reiterate what you said: predictive text and LLMs are neither alive nor minds.
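(If it helps to see what “predictive text” means mechanically, here’s a toy sketch; the corpus and names are made up for illustration. A real LLM replaces the lookup table with a huge neural network over tokens, but generation is the same loop: pick a statistically likely next word and repeat, no inner life required.)

```python
import random
from collections import defaultdict

# Toy next-word predictor: a first-order Markov chain over a tiny corpus.
# Real LLMs swap the lookup table for a neural network, but the generation
# loop is the same idea: sample a plausible next word, append, repeat.
corpus = "the machine talks and the machine listens and the machine talks".split()

transitions = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word].append(next_word)

word = "the"
output = [word]
for _ in range(8):
    candidates = transitions.get(word)
    if not candidates:
        break
    word = random.choice(candidates)  # statistically plausible, not "thought out"
    output.append(word)

print(" ".join(output))
```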

[–] dlpkl@lemmy.world 47 points 11 months ago* (last edited 11 months ago)

I don't care, Brendan was real to me okay 😭

[–] billwashere@lemmy.world 21 points 11 months ago (1 children)

Which is why the Turing Test needs to be updated. These text models are getting really good at fooling people.

[–] bionicjoey@lemmy.ca 17 points 11 months ago (1 children)

The Turing test isn't just that there exists some conversation you can have with a machine where you wouldn't know it's a machine. The Turing test is that you could spend an arbitrary amount of time talking to a machine and never be able to tell. ChatGPT doesn't come anywhere close to this, since there are many subjects where it quickly becomes clear that the model doesn't understand the meaning of the text it generates.
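(To make that concrete, here’s a self-contained toy in Python; the bot, its canned lines, and the probe questions are all hypothetical. A scripted responder survives a short exchange, but an interrogator who keeps going quickly hits questions where the deflection gives it away.)

```python
# Toy sketch: a canned-response bot vs. a persistent interrogator.
# All names and dialogue lines here are made up for illustration.
CANNED = {
    "hello": "Hey! Nice to meet you.",
    "how are you": "Doing great, thanks for asking!",
    "what's your favorite game": "Cyberpunk 2077, obviously.",
}

def bot_reply(question: str) -> str:
    # Off-script questions fall back on a generic deflection.
    return CANNED.get(question.lower().strip("?!. "), "Interesting, tell me more!")

probes = [
    "hello",                                  # short exchange: looks convincing
    "how are you",
    "what's your favorite game",
    "why do you like that game",              # longer interrogation drifts off-script...
    "what did I ask you two questions ago",   # ...and the deflection becomes obvious
]

for n, question in enumerate(probes, start=1):
    print(f"Q{n}: {question} -> {bot_reply(question)}")
```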

[–] Corgana@startrek.website 7 points 11 months ago* (last edited 11 months ago)

Exactly, thank you for pointing this out. It also assumes the tester has knowledge of the wider context in which the test exists. GPT could probably fool someone from the Middle Ages, but that person wouldn't know what exactly they were testing for.