this post was submitted on 15 Dec 2024
136 points (100.0% liked)

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

[–] TootSweet@lemmy.world 59 points 1 week ago (4 children)

These types of errors happen even after including prompts like “Do not hallucinate.”

Genius! Why didn't I think of that!

[–] Architeuthis@awful.systems 40 points 1 week ago* (last edited 1 week ago) (1 children)

In every RAG guide I've seen, the suggested system prompts always tended to include some more dignified variation of "Please for the love of god only and exclusively use the contents of the retrieved text to answer the user's question, I am literally on my knees begging you."
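For anyone who hasn't skimmed one of those guides: the "grounding" boilerplate usually amounts to something like the sketch below. The wording, the helper function, and the message layout here are made up for illustration, not lifted from any particular guide; the point is that the safeguard is just more text prepended to the model's input.

```python
# Illustrative sketch only: prompt wording and names are assumptions,
# not taken from any specific RAG guide or library.

def build_rag_prompt(retrieved_chunks: list[str], question: str) -> list[dict]:
    """Assemble chat-style messages with a 'please only use the context' system prompt."""
    context = "\n\n".join(retrieved_chunks)
    system_prompt = (
        "Answer using ONLY the information in the context below. "
        "If the answer is not in the context, say you do not know. "
        "Do not invent facts.\n\n"
        f"Context:\n{context}"
    )
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": question},
    ]

if __name__ == "__main__":
    messages = build_rag_prompt(
        ["Retrieved passage one.", "Retrieved passage two."],
        "What does the document say about X?",
    )
    print(messages[0]["content"])
```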

Also, if reddit is any indication, a lot of people actually think that's all it takes and that the hallucination stuff is just people using LLMs wrong. I mean, it would be insane to pour so much money into something so obviously fundamentally flawed, right?

[–] Soyweiser@awful.systems 8 points 1 week ago

Yeah, that method is clearly flawed. Not enough incense and prayers to the Machine God; no wonder the Machine Spirit is displeased. All praise the machine god of Mars! Praise the Omnissiah!
