this post was submitted on 19 Oct 2023
540 points (96.6% liked)

Technology


Black Mirror creator unafraid of AI because it’s “boring”::Charlie Brooker doesn’t think AI is taking his job any time soon because it only produces trash

[–] state_electrician@discuss.tchncs.de 24 points 1 year ago (2 children)

LLMs are awful with facts because they don't understand what a fact is. You should never rely on them when you need factual correctness.

They are OK for summarization, formatting, and just making shit up. Even for summarization, an experienced human still produces nicer output, because they understand the content instead of just looking at the words. As for making shit up, you get the statistically most likely output, so it's usually trite and boring. I think the progress is amazing, but there are still so many problems to be solved.

Right now I use them for boilerplate stuff, like writing a text from some parameters and then polishing it. For code I find them fairly useless: with an IDE I can write boilerplate just as fast as I can polish prompts until the LLM delivers something useful. And the IDE doesn't hand me references to methods or entire libraries that simply don't exist.

[–] banneryear1868@lemmy.world 2 points 1 year ago

Right now I use them for boilerplate stuff, like writing a text from some parameters and then polishing it

It's actually great for D&D, producing NPC dialogue or names on the fly. We also tried using it to calculate area-of-effect spells, e.g. "how many average-sized humans in armor with swords could fit in a circle with a diameter of 30 ft?" We were rolling with its answer until someone pointed out that it hadn't calculated the area of the circle correctly, even though it got the rest more or less right. So we don't use it for that anymore. It's funny how the simplest-looking component of a question is often the thing it gets wrong.
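For what it's worth, the arithmetic the model fumbled is a one-liner. A minimal sketch in Python, assuming (as an illustration, not something stated in the comment) the usual D&D 5e convention that a Medium creature occupies a 5 ft × 5 ft square, i.e. 25 sq ft:

```python
import math

def humans_in_circle(diameter_ft: float, space_per_human_sqft: float = 25.0) -> int:
    """Estimate how many people fit in a circular area.

    Assumes each armored human needs a 5 ft x 5 ft square (25 sq ft),
    the typical D&D 5e space for a Medium creature -- an assumed
    convention, adjust to taste.
    """
    radius = diameter_ft / 2
    area = math.pi * radius ** 2              # area of a circle: pi * r^2
    return int(area // space_per_human_sqft)  # whole people only

print(humans_in_circle(30))  # pi * 15^2 ~= 706.9 sq ft -> 28 people
```

A 30 ft diameter gives π · 15² ≈ 706.9 sq ft, so roughly 28 people on a grid; real circle packing would change the count somewhat, which is exactly the kind of nuance worth checking by hand rather than trusting the model's arithmetic.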

[–] darth_helmet@sh.itjust.works 1 points 1 year ago

People are also kind of shit at facts. There are so many facts that it isn't practical for every person who needs to assess one to verify it themselves. But it isn't structurally impossible to mimic how humans learn to gauge truthfulness; we just have to accept that any such system will be bound by the limitations of language, and by the risk inherent in trusting data it has not independently verified.