this post was submitted on 25 May 2024
Technology
LLMs do sometimes hallucinate even when giving summaries, i.e. they put things in the summaries that were not in the source material. Bing did this often the last time I tried it. In my experience, LLMs seem to do very poorly when their context is large (e.g. when "reading" large or multiple articles). With ChatGPT, its output seems more likely to be factually correct when it just generates "facts" from its model instead of "browsing" and adding articles to its context.
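For what it's worth, here's roughly how you could check that yourself: put the whole article in the prompt, ask for a summary, then ask which claims in the summary the article doesn't actually support. This is just a sketch; the model name and the OpenAI Python client calls are my own assumptions, not anything tied to Bing or ChatGPT's browsing mode.

```python
# Sketch of the check described above: summarize an article that sits fully
# in the prompt context, then flag summary claims the article doesn't support.
# Model name and client usage are assumptions, not the actual Bing/ChatGPT setup.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def summarize(article_text: str, model: str = "gpt-4o-mini") -> str:
    """Ask for a summary with the full article included in the context."""
    resp = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system",
             "content": "Summarize the article using only facts stated in it."},
            {"role": "user", "content": article_text},
        ],
    )
    return resp.choices[0].message.content


def flag_unsupported_claims(article_text: str, summary: str,
                            model: str = "gpt-4o-mini") -> str:
    """Crude faithfulness check: list summary claims absent from the article."""
    resp = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "user",
             "content": (
                 "Article:\n" + article_text +
                 "\n\nSummary:\n" + summary +
                 "\n\nList any claims in the summary that the article does not support."
             )},
        ],
    )
    return resp.choices[0].message.content


if __name__ == "__main__":
    article = open("article.txt").read()  # hypothetical local file
    summary = summarize(article)
    print(summary)
    print(flag_unsupported_claims(article, summary))
```

It's crude, since you're asking the same kind of model to grade itself, but it's enough to watch made-up details creep in once the context gets long.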
I asked ChatGPT who I was not too long ago. I have a unique name, and there are many sources on the internet with my name on them (I'm not famous, but I've done a lot of stuff), yet it made up a multi-paragraph biography of me that was entirely false.
I would sure as hell call that a hallucination, because if it was trained on the internet in general, there is no question it was trained on my name, and it still got everything wrong.
Curiously, now it says it doesn't recognize my name at all.