this post was submitted on 05 Feb 2024
59 points (91.5% liked)

AI chatbots tend to choose violence and nuclear strikes in wargames

[–] Arkaelus@lemmy.world 21 points 8 months ago (3 children)

This says more about us than it does about the chatbots, considering the data on which they're trained...

[–] kromem@lemmy.world 4 points 7 months ago* (last edited 7 months ago)

Yeah, it says that we write a lot of fiction about AI launching nukes and being unpredictable in wargames, such as the movie WarGames, where an AI unpredictably plans to launch nukes.

Every single one of the LLMs they tested had gone through safety fine-tuning, which means they have alignment messaging to self-identify as a large language model and complete the request as such.

So if you have extensive stereotypes about AI launching nukes in the training data, get it to answer as an AI, and then ask it what it should do in a wargame, WTF did they think it was going to answer?

[–] bassomitron@lemmy.world 3 points 8 months ago (1 children)

I'd say it does, to an extent, dependent on the source material. If they were trained on actual military strategies and tactics as their source material, with proper context, I'd wager the responses would likely be different.

[–] remotelove@lemmy.ca 2 points 7 months ago* (last edited 7 months ago)

Totally. A properly trained AI would probably just flood a country with misinformation to trigger a civil war. After it installs a puppet government, it can leverage that country's resources against other enemies.

[–] Gradually_Adjusting@lemmy.world 1 points 8 months ago (3 children)

Maybe... But, hear me out, what if it means you can win nuclear wars? 🤔

[–] tonyn@lemmy.ml 2 points 8 months ago (1 children)

What if there are no winners in any wars?

[–] Gradually_Adjusting@lemmy.world 2 points 8 months ago

Let's think here... I've always heard history is written by the victors, which logically implies historians are the most dangerous people on the planet and ought to be detained. 🧐

[–] ininewcrow@lemmy.ca 2 points 7 months ago (1 children)

Lol... to an AI, humans on any and all sides can't win a nuclear war... but AI can.

[–] Gradually_Adjusting@lemmy.world 1 points 7 months ago

I hope it's obvious I was being hugely tongue-in-cheek.

[–] BearOfaTime@lemm.ee 0 points 8 months ago* (last edited 8 months ago)

Not that I want one, but the propaganda around nuclear war has been pretty extensive.

Michael Crichton wrote about it in the late 90s if I remember right. He made some very interesting points about science, the politicization of science, and "Scientism".

"Nuclear Winter" for example, is based on some very bad, and very incorrect, math.