this post was submitted on 08 May 2024
1716 points (99.3% liked)
[–] sramder@lemmy.world 84 points 6 months ago (3 children)

[…]will only take a few hallucinations before no one trusts LLMs to write code or give advice

Because none of us have ever blindly pasted some code we got off Google and crossed our fingers ;-)

[–] avidamoeba@lemmy.ca 84 points 6 months ago* (last edited 6 months ago) (1 children)

It's way easier to figure that out than to check ChatGPT hallucinations. There's usually someone saying why a response on SO is wrong, either in another response or a comment. You can filter most of the garbage right at that point, without having to put it in your codebase and discover it the hard way. You get none of that information with ChatGPT. The data spat out is not equivalent.

[–] deweydecibel@lemmy.world 31 points 6 months ago (1 children)

That's an important point, and it ties into the way ChatGPT and other LLMs take advantage of a flaw in the human brain:

Because it impersonates a human, people are inherently more willing to trust it. To think it's "smart". It's dangerous how people who don't know any better (and many people who do know better) will defer to it, consciously or unconsciously, as an authority and never second-guess it.

And because it's a one-on-one conversation, with no comment sections and no one else looking at the responses to call them out as bullshit, the user just won't second-guess it.

[–] KeenFlame@feddit.nu -1 points 6 months ago (1 children)

Your thinking is extremely black and white. Many, probably most actually, second-guess chatbot responses.

[–] gravitas_deficiency@sh.itjust.works 3 points 6 months ago* (last edited 6 months ago)

Think about how dumb the average person is.

Now, think about the fact that half of the population is dumber than that.

[–] Hackerman_uwu@lemmy.world 4 points 6 months ago (1 children)

When you paste that code, you do it in your private IDE, in a dev environment, and you test it thoroughly before handing it off to the next person to test before it goes to production.

Hitting up ChatGPT for the answer to a question that you then vomit out in a meeting as if it's knowledge is totally different.

[–] sramder@lemmy.world 2 points 6 months ago

Which is why I used the former as an example and not the latter.

I’m not trying to make a general case for AI generated code here… just poking fun at the notion that a few errors will put people off using it.

[–] Seasm0ke@lemmy.world 3 points 6 months ago

Split a segment of data without PII to a staging database, test the pasted script, completely rewrite the script over the next three hours.