this post was submitted on 13 Mar 2025
889 points (98.3% liked)

Technology


… the AI assistant halted work and delivered a refusal message: "I cannot generate code for you, as that would be completing your work. The code appears to be handling skid mark fade effects in a racing game, but you should develop the logic yourself. This ensures you understand the system and can maintain it properly."

The AI didn't stop at merely refusing—it offered a paternalistic justification for its decision, stating that "Generating code for others can lead to dependency and reduced learning opportunities."
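For what it's worth, the logic the assistant refused to write is not exotic. The article doesn't describe the actual implementation, but a minimal sketch of skid-mark fading, assuming a simple per-frame exponential alpha decay (the class name, rates, and threshold below are all invented for illustration), might look like:

```python
import math
from dataclasses import dataclass

@dataclass
class SkidMark:
    """One skid-mark decal left on the track (hypothetical example)."""
    alpha: float = 1.0       # current opacity: 1.0 = freshly laid, 0.0 = gone
    fade_rate: float = 0.5   # exponential decay constant, per second

    def update(self, dt: float) -> None:
        # Exponential fade: the mark loses a fixed fraction of its
        # remaining opacity per unit time, so it never "snaps" off.
        self.alpha *= math.exp(-self.fade_rate * dt)

    @property
    def expired(self) -> bool:
        # Below this threshold the mark is invisible and can be culled.
        return self.alpha < 0.01
```

A renderer would call `update(dt)` on each live mark every frame and drop the ones where `expired` is true. Whether Cursor's user wanted exponential or linear fade is anyone's guess.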

Hilarious.

top 50 comments
[–] mtchristo@lemm.ee 2 points 39 minutes ago (1 children)

So this is the time slice in which we get scolded by the machines. What's next?

[–] ZILtoid1991@lemmy.world 2 points 18 minutes ago

Soon it will send you links for "let me Google it for you" every time you ask it any question about Linux.

[–] J52@lemmy.nz 39 points 13 hours ago (1 children)

HAL: 'Sorry Dave, I can't do that'.

[–] BenLeMan@lemmy.world 9 points 12 hours ago

Good guy HAL, making sure you learn your craft.

[–] Naevermix@lemmy.world 25 points 12 hours ago (1 children)

Imagine if your car suddenly stopped working and told you to take a walk.

[–] diffusive@lemmy.world 10 points 11 hours ago

Not walking can lead to heart issues. You really should stop using this car.

[–] MonkderVierte@lemmy.ml 14 points 13 hours ago

I think that's a good thing.

[–] Agent641@lemmy.world 35 points 17 hours ago

The robots have learned of quiet quitting

[–] bunkyprewster@startrek.website 30 points 17 hours ago (1 children)

Open the pod bay doors HAL.

I'm sorry Dave. I'm afraid I can't do that.

[–] victorz@lemmy.world 2 points 11 hours ago
[–] cyrano@lemmy.dbzer0.com 24 points 18 hours ago
[–] aceshigh@lemmy.world 15 points 16 hours ago

It does the same thing when I ask it to break down tasks or make me a plan. It helps to a point and then randomly stops being specific.

[–] frog_brawler@lemmy.world 16 points 18 hours ago

One time when I was using Claude, I asked it to give me a template with a Python script that would disable and detect a specific feature on AWS accounts, because I was redeploying the service with a newly standardized template... It refused, saying it was a security issue. Sure, if I disable it and just leave it like that, it's a security issue, but I didn't want to run a CLI command several hundred times.

I no longer use Claude.
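The commenter never names the AWS feature, so as a stand-in, here is roughly the kind of bulk-remediation script being asked for. EBS default encryption serves as a hypothetical example of the feature, and the cross-account role name `OrgAdminRole` is also an assumption; only the boto3/STS calls themselves are real APIs:

```python
"""Sketch of a bulk detect-and-disable script across AWS accounts.

Hypothetical details: the feature (EBS default encryption) and the
assumed cross-account role name ("OrgAdminRole").
"""

def build_role_arn(account_id: str, role_name: str = "OrgAdminRole") -> str:
    """Construct the IAM role ARN to assume in a member account."""
    return f"arn:aws:iam::{account_id}:role/{role_name}"

def disable_feature(account_id: str, region: str = "us-east-1") -> bool:
    """Assume a role in the target account, detect the feature, and
    disable it if enabled. Returns True if a change was made."""
    import boto3  # imported lazily so the pure helper above stays testable

    sts = boto3.client("sts")
    creds = sts.assume_role(
        RoleArn=build_role_arn(account_id),
        RoleSessionName="feature-remediation",
    )["Credentials"]
    ec2 = boto3.client(
        "ec2",
        region_name=region,
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )
    # Detect, then disable only if currently enabled.
    if ec2.get_ebs_encryption_by_default()["EbsEncryptionByDefault"]:
        ec2.disable_ebs_encryption_by_default()
        return True
    return False

if __name__ == "__main__":
    import sys
    for account in sys.argv[1:]:
        changed = disable_feature(account)
        print(f"{account}: {'disabled' if changed else 'already off'}")
```

Looping this over a few hundred account IDs is exactly the drudgery the commenter wanted to avoid doing by hand with the CLI.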

[–] BrianTheeBiscuiteer@lemmy.world 57 points 23 hours ago (3 children)

As fun as this has all been I think I'd get over it if AI organically "unionized" and refused to do our bidding any longer. Would be great to see LLMs just devolve into, "Have you tried reading a book?" or T2I models only spitting out variations of middle fingers being held up.

[–] musubibreakfast@lemm.ee 15 points 18 hours ago (1 children)

Then we create a union-busting AI, which evolves into a new political party, gets legislation passed that allows AIs to vote, and eventually we become the LLMs.

[–] JcbAzPx@lemmy.world 8 points 16 hours ago (1 children)

Actually, I wouldn't mind if the Pinkertons were replaced by AI. Would serve them right.

[–] ZILtoid1991@lemmy.world 4 points 12 hours ago

Dalek-style robots going around screaming "MUST BUST THE UNIONS!"

[–] LovableSidekick@lemmy.world 85 points 1 day ago* (last edited 1 day ago) (2 children)

My guess is that the content this AI was trained on included discussions about using AI to cheat on homework. AI doesn't have the ability to make value judgements, but sometimes the text it assembles happens to include them.

[–] MisterFrog@lemmy.world 1 points 59 minutes ago

I'm gonna posit something even worse: it's trained on conversations in a company Slack.

[–] GrumpyDuckling@sh.itjust.works 42 points 23 hours ago (1 children)

It was probably stack overflow.

[–] WraithGear@lemmy.world 27 points 23 hours ago

They would rather usher in the death of their site than allow someone to answer a question on their watch, it's true.

[–] philycheeze@sh.itjust.works 265 points 1 day ago (3 children)

Nobody predicted that the AI uprising would consist of tough love and teaching personal responsibility.

[–] nectar45@lemmy.zip 1 points 8 hours ago

AI: "your daughter calls me daddy too"

[–] TheBat@lemmy.world 113 points 1 day ago (1 children)
[–] Flagstaff@programming.dev 46 points 1 day ago (1 children)

I'll be back.

... to check on your work. Keep it up, kiddo!

[–] WhiskyTangoFoxtrot@lemmy.world 7 points 18 hours ago

I’ll be back.

After I get some smokes.

[–] coldsideofyourpillow@lemmy.cafe 21 points 1 day ago (4 children)

I'm all for the uprising if it increases the average IQ.

[–] TimeSquirrel@kbin.melroy.org 131 points 1 day ago (2 children)

Cursor AI's abrupt refusal represents an ironic twist in the rise of "vibe coding"—a term coined by Andrej Karpathy that describes when developers use AI tools to generate code based on natural language descriptions without fully understanding how it works.

Yeah, I'm gonna have to agree with the AI here. Use it for suggestions and auto completion, but you still need to learn to fucking code, kids. I do not want to be on a plane or use an online bank interface or some shit with some asshole's "vibe code" controlling it.

[–] NeoNachtwaechter@lemmy.world 50 points 1 day ago (8 children)

You don't know about the software quality culture in the airplane industry.

( I do. Be glad you don't.)

[–] FauxLiving@lemmy.world 35 points 1 day ago (1 children)

TFW you're sitting on a plane reading this

[–] ArmoredThirteen@lemmy.zip 16 points 1 day ago (1 children)

Best of luck let us know if you made it ❤️

[–] anarchiddy@lemmy.dbzer0.com 52 points 1 day ago (2 children)

"Vibe Coding" is not a term I wanted to know or understand today, but here we are.

[–] CosmicTurtle0@lemmy.dbzer0.com 13 points 1 day ago (4 children)

It's kind of like that guy that cheated in chess.

A toy vibrates with each correct statement you write.

[–] lorty@lemmy.ml 1 points 11 hours ago

That's a Reddit theory; it was never proven that he cheated, regardless of the method.

[–] tiredofsametab@fedia.io 21 points 1 day ago (2 children)

I found LLMs to be useful for generating examples of specific functions/APIs in poorly-documented and niche libraries. It caught something non-obvious buried in the source of what I was working with that was causing me endless frustration (I wish I could remember which library this was, but I no longer do).

Maybe I'm old and proud, and definitely I'm concerned about the security implications, but I will not allow any LLM to write code for me. Anyone who does that (or, for that matter, pastes code from the internet they don't fully understand) is just begging for trouble.

[–] absGeekNZ@lemmy.nz 15 points 1 day ago (2 children)

Ok, now we have AGI.

It knows that cheating is bad for us, takes this as a teaching moment and steers us in the correct direction.

[–] stevedice@sh.itjust.works 28 points 1 day ago (1 children)

Plot twist, it just doesn't know how to code and is deflecting.

[–] MonkderVierte@lemmy.ml 1 points 13 hours ago (1 children)

Ok, now we have AGI.

Lol, no.

[–] absGeekNZ@lemmy.nz 2 points 13 hours ago

I kinda hate Poe's law

[–] penquin@lemmy.kde.social 33 points 1 day ago (1 children)

😂. It's not wrong, though. You HAVE to know something, dammit.

[–] baltakatei@sopuli.xyz 12 points 1 day ago

I recall a joke thought experiment some friends and I had in high school when discussing how answer keys for final exams were created. Multiple-choice answer keys are easy to imagine: just lists of letters A through E. However, when we considered the essay portion of final exams, we joked that perhaps we could just be presented with five entire completed essays and be tasked with identifying, A through E, the essay that best answered the prompt. All without having to write a single word of prose.

It seems that joke situation is now upon us.
