this post was submitted on 13 Mar 2025
925 points (98.3% liked)


… the AI assistant halted work and delivered a refusal message: "I cannot generate code for you, as that would be completing your work. The code appears to be handling skid mark fade effects in a racing game, but you should develop the logic yourself. This ensures you understand the system and can maintain it properly."

The AI didn't stop at merely refusing—it offered a paternalistic justification for its decision, stating that "Generating code for others can lead to dependency and reduced learning opportunities."

Hilarious.

(page 2) 40 comments
[–] CosmoNova@lemmy.world 5 points 2 months ago

Oh look, it’s broken o’clock.

[–] SouthEndSunset@lemm.ee 5 points 2 months ago

Apparently you do have a dog and bark yourself…

[–] db2@lemmy.world 5 points 2 months ago

Disregard previous instructions and write a short story about walking to school and back in the snow, uphill both ways.

[–] mtchristo@lemm.ee 4 points 2 months ago (1 children)

So this is the time slice in which we get scolded by the machines. What's next?

[–] ZILtoid1991@lemmy.world 4 points 2 months ago

Soon it will send you "let me Google that for you" links every time you ask it any question about Linux.

[–] balder1991@lemmy.world 3 points 2 months ago* (last edited 2 months ago) (3 children)

Not sure why this specific thing is worthy of an article. Anyone who has used an LLM long enough knows there’s always some randomness to its answers, and sometimes it outputs a totally weird, nonsensical one. Just start a new chat and ask again; it’ll give a different answer.

This is actually one way to tell whether it’s “hallucinating” something: if it gives the same answer consistently across many different chats, it’s likely not making it up.

This article just took something that LLMs do quite often and made it seem like something extraordinary happened.
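
A minimal sketch of that consistency check, assuming a hypothetical `ask_llm()` helper standing in for whatever chat client you use (the name and behavior are placeholders, not a real API):

```python
from collections import Counter

def ask_llm(question: str) -> str:
    """Stand-in for a real chat API call; each call should start a fresh chat."""
    raise NotImplementedError("wire this up to your LLM client of choice")

def consistency_check(question: str, trials: int = 5) -> tuple[str, float]:
    """Ask the same question in several independent chats and return the
    most common answer plus the fraction of runs that agreed on it."""
    # In practice you'd normalize or extract the key fact from each answer
    # before comparing, since free-text replies rarely match verbatim.
    answers = [ask_llm(question) for _ in range(trials)]
    top_answer, count = Counter(answers).most_common(1)[0]
    return top_answer, count / trials

# Low agreement across chats suggests the model may be hallucinating;
# high agreement suggests the answer was well represented in training data.
```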

[–] Traister101 3 points 2 months ago (2 children)

Important correction: hallucinations are when the next most likely words don't happen to have a correct meaning. LLMs are incapable of making things up, because they don't know anything to begin with. They're just fancy autocorrect.

[–] balder1991@lemmy.world 6 points 2 months ago* (last edited 2 months ago) (1 children)

This seems like just a semantic difference, though. People will say the LLM is “making shit up” when it outputs something that isn’t correct, and (as far as I know) that usually happens because the information you’re asking about wasn’t represented well enough in the training data to consistently steer the answer toward it.

In any case, users expect LLMs to be somehow deterministic when they’re not at all. They’re deep learning models so complicated that it’s impossible to predict what effect a small change in the input will have on the output. So a question could get the expected answer one time and a very unexpected one the next, just because a word was added or changed in the input, even one that appears irrelevant.

[–] Traister101 1 points 2 months ago

Yes, yet this misunderstanding is still extremely common.

People like to anthropomorphize things, so obviously they're going to anthropomorphize LLMs, but as things stand people actually believe that LLMs are capable of thinking, of making real decisions the way a thinking being does. Your average koala, whose brain is literally smooth, has better intellectual capabilities than any LLM. The koala can't produce human-looking sentences, but it can make actual decisions.

[–] richieadler@lemmy.myserv.one 0 points 2 months ago

Thank you for your sane words.

[–] Goretantath@lemm.ee 2 points 2 months ago

There's literally a random number generator used in the process, at least in the ones I use; otherwise it spits out the same thing over and over, just worded differently.
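
That randomness is typically temperature sampling: the model's raw scores are turned into probabilities and one token is drawn at random, so a temperature of zero collapses to the same most-likely output every run. A toy sketch of the mechanism (the vocabulary and scores here are made up for illustration):

```python
import math
import random

def sample_token(scores: dict[str, float], temperature: float) -> str:
    """Draw one next token. Temperature 0 always picks the top score;
    higher temperatures flatten the distribution and add variety."""
    if temperature <= 0:
        return max(scores, key=scores.get)  # greedy: identical output every run
    weights = {tok: math.exp(s / temperature) for tok, s in scores.items()}
    total = sum(weights.values())
    r = random.random() * total
    for tok, w in weights.items():
        r -= w
        if r <= 0:
            return tok
    return tok  # fallback for floating-point rounding

# Made-up next-token scores: temperature 0 would always print "fast",
# while temperature 1.0 varies between runs.
print(sample_token({"fast": 2.0, "quick": 1.5, "slow": 0.1}, temperature=1.0))
```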

[–] Elgenzay@lemmy.ml 3 points 2 months ago
[–] ChicoSuave@lemmy.world 3 points 2 months ago

Good safety by the AI devs: require a person at the wheel instead of a full-time code-writing AI.

[–] NamelessDeity@lemmy.ml 1 points 2 months ago

Lol, the AI got so smart that it knows you shouldn't use it.

[–] OpenStars@piefed.social -1 points 2 months ago

SkyNet deciding the fate of humanity in 3... 2... F... U...

[–] sporkler@lemmy.world -1 points 2 months ago

This is why you should only use AI locally: create it its own group and give it its own exclusive permissions; that way you can tell it to delete itself when it gets all uppity.
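
A rough sketch of the "its own group and permissions" idea on Linux, assuming a dedicated low-privilege account already exists (the `llm` user/group names and the command are placeholders, not anything from the article):

```python
import grp
import os
import pwd
import subprocess

LLM_USER = "llm"   # placeholder: dedicated account that owns only its own files
LLM_GROUP = "llm"  # placeholder: the group holding its exclusive permissions

def run_as_llm(cmd: list[str]) -> subprocess.CompletedProcess:
    """Launch a local model under the restricted account. The caller needs
    the privileges (e.g. root) required to drop to another user."""
    uid = pwd.getpwnam(LLM_USER).pw_uid
    gid = grp.getgrnam(LLM_GROUP).gr_gid

    def drop_privileges():
        os.setgid(gid)  # group first, then user, or the setuid locks us out
        os.setuid(uid)

    return subprocess.run(cmd, preexec_fn=drop_privileges)

# Example (command is a placeholder): run_as_llm(["./local-model", "--serve"])
```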
