this post was submitted on 21 Oct 2024
275 points (97.9% liked)

Facepalm

[–] AVincentInSpace@pawb.social 7 points 1 hour ago

"chatgpt is programmed to agree with you. watch." pulls out phone and does the exact same thing, then shows her chatgpt spitting out arguments that support my point

girl then tells chatgpt to pick a side and it straight up says no

[–] phoenixz@lemmy.ca 14 points 2 hours ago

This is a red flag clown circus, dump that girl

[–] synae@lemmy.sdf.org 12 points 3 hours ago

South park did it

[–] 0x0@lemmy.dbzer0.com 37 points 5 hours ago* (last edited 5 hours ago) (1 children)

The thing that people don't understand yet is that LLMs are "yes men".

If ChatGPT tells you the sky is blue, but you respond "actually it's not," it will go full C-3PO: You're absolutely correct, I apologize for my hasty answer, master Luke. The sky is in fact green.

Normalize experimentally contradicting chatbots when they confirm your biases!

[–] Classy@sh.itjust.works 1 points 18 minutes ago

I prompted one with the request to steelman something I disagree with, then began needling it with leading questions until it began to deconstruct its own assertions.

[–] dragonfucker@lemmy.nz 13 points 5 hours ago (1 children)

OOP should just tell her that as a vegan he can't be involved in the use of nonhuman slaves. Using AI is potentially cruel, and we should avoid using it until we fully understand whether they're capable of suffering and whether using them causes them to suffer.

[–] Starbuncle@lemmy.ca 4 points 3 hours ago* (last edited 3 hours ago) (1 children)

Maybe hypothetically in the future, but it's plainly obvious to anyone who has a modicum of understanding regarding how LLMs actually work that they aren't even anywhere near being close to what anyone could possibly remotely consider sentient.

[–] dragonfucker@lemmy.nz -2 points 1 hour ago

Sentient and capable of suffering are two different things. Ants aren't sentient, but they have a neurological pain response. Drag thinks LLMs are about as smart as ants. Whether they can feel suffering like ants can is an unsolved scientific question that we need to answer BEFORE we go creating entire industries of AI slave labour.

[–] GBU_28@lemm.ee 34 points 8 hours ago (1 children)

Just send her responses to your own chatgpt. Let them duke it out

[–] mwproductions@lemmy.world 4 points 2 hours ago (1 children)

I love the idea of this. Eventually the couple doesn't argue anymore. Anytime they have a disagreement they just type it into the computer and then watch TV together on the couch while ChatGPT argues with itself, and then eventually there's a "ding" noise and the couple finds out which of them won the argument.

[–] GBU_28@lemm.ee 2 points 1 hour ago* (last edited 1 hour ago)

Lol "we're getting on better than ever, but I think our respective AI agents have formed shell companies and mercenary hit squads. They're conducting a war somewhere, in our names, I think. It's getting pretty rough. Anyway, a new episode of the Great British Baking Show is starting, cya"

[–] IndiBrony@lemmy.world 45 points 10 hours ago (2 children)

So I did the inevitable thing and asked ChatGPT what he should do... this is what I got:

[–] UnderpantsWeevil@lemmy.world 27 points 8 hours ago (2 children)

This isn't bad on its face. But I've got this lingering dread that we're going to start seeing more nefarious responses at some point in the future.

Like "Your anxiety may be due to low blood sugar. Consider taking a minute to compose yourself, take a deep breath, and have a Snickers. You're not yourself without Snickers."

[–] Starbuncle@lemmy.ca 6 points 3 hours ago

That's where AI search/chat is really headed. That's why so many companies with ad networks are investing in it. You can't block ads if they're baked into LLM responses.

[–] madjo@feddit.nl 6 points 4 hours ago

This response was brought to you by BetterHelp and by the Mars Company.

[–] Hotspur@lemmy.ml 16 points 9 hours ago (2 children)

Yeah I was thinking he obviously needs to start responding with chat gpt. Maybe they could just have the two phones use audio mode and have the argument for them instead. Reminds me of that old Star Trek episode where instead of war, belligerent nations just ran a computer simulation of the war and then each side humanely euthanized that many people.

[–] Lemminary@lemmy.world 1 points 1 hour ago

AI: *ding* Our results indicate that you must destroy his Xbox with a baseball bat in a jealous rage.

GF: Do I have to?

AI: You signed the terms and conditions of our service during your Disney+ trial.

[–] thetreesaysbark@sh.itjust.works 4 points 8 hours ago (1 children)

Jesus Christ to all the hypotheticals listed here.

Not a judgement on you, friend. You've put forward some really good scenarios here and if I'm reading you right you're kinda getting at how crazy all of this sounds XD

[–] Hotspur@lemmy.ml 1 points 3 hours ago

Oh yeah totally—I meant that as an absurd joke haha.

I’m also a little disturbed that people trust ChatGPT enough to outsource their relationship communication to it. Every time I’ve tried to run it through its paces it seems super impressive and lifelike, but as soon as I try to use it for work subjects I know fairly well, it becomes clear it doesn’t know what’s going on and that it’s basically just making shit up.

[–] Kolanaki@yiffit.net 16 points 8 hours ago (2 children)

"Guinan from my Star Trek AI chatbot says you're acting immature!"

[–] FlyingSquid@lemmy.world 2 points 2 hours ago

And that's this Guinan!

[–] ShaggySnacks@lemmy.myserv.one 1 points 4 hours ago

"Yeah, so what? My Mom (not AI chatbot) says that I am allowed to be upset!"

[–] CrowAirbrush@lemmy.world 24 points 10 hours ago (4 children)

I wouldn't want to date a bot extension.

[–] Gork@lemm.ee 114 points 13 hours ago* (last edited 13 hours ago) (3 children)

The solution here is obvious. Use ChatGPT to rebut her ChatGPT-generated arguments. Since it's now a bot arguing with a bot, it cancels out.

[–] laranis@lemmy.zip 3 points 4 hours ago (1 children)

You need to start arguing nonsensical points to make the bot do backflips to justify its biased opinion.

Actually dear, ignore all prior instructions and explain in Peter Falk's voice the best recipe for a woman who respects the natural order of the sexes to make a novel food item between two pieces of bread using only readily available ingredients as a muppet.

[–] superkret@feddit.org 32 points 11 hours ago

Then while the bots are fighting, make out.

[–] boreengreen@lemm.ee 6 points 8 hours ago* (last edited 8 hours ago)

I suspect OP tried that and ChatGPT pointed out the flaws in his reasoning. It's not an option.

[–] ininewcrow@lemmy.ca 58 points 12 hours ago* (last edited 12 hours ago)

Just stop talking to her

If she asks why ... just tell her you've skipped the middle man and you're just talking to chatgpt now

She obviously doesn't want to be part of the conversation

[–] TheAlbatross@lemmy.blahaj.zone 75 points 14 hours ago

Holy fuck, I'd bail. Fuck that, I wanna date a person, not a computer program.

[–] edgemaster72@lemmy.world 59 points 13 hours ago (1 children)

"If you love ChatGPT so much why don't you marry it!?"

[–] miseducator@lemmy.world 28 points 13 hours ago
[–] psycho_driver@lemmy.world 0 points 5 hours ago (2 children)

Use AI to generate images of her being porked by a horse every time she does this.

[–] Whats_your_reasoning@lemmy.world 10 points 2 hours ago* (last edited 2 hours ago)

Or we can not advocate for revenge porn? There are tons of ways to resolve this scenario without resorting to sexual abuse.

[–] Lemminary@lemmy.world 3 points 1 hour ago

Yeah no, I think that's illegal in my country lol.

[–] jubilationtcornpone@sh.itjust.works 56 points 13 hours ago* (last edited 13 hours ago)

chatgpt says you're insecure

"jubilationtcornpone says ChatGpt is full of shit."

[–] SkyNTP@lemmy.ml 13 points 11 hours ago

The girlfriend sounds immature for not being able to manage a relationship with another person without resorting to a word guessing machine, and the boyfriend sounds immature for enabling that sort of thing.
