AtmosphericRiversCuomo@hexbear.net:

As you say, hallucination could be solved by adding meta-awareness. It seems likely to me that we'll be able to patch the problem eventually; we're just starting to understand why these models hallucinate in the first place.

semioticbreakdown@hexbear.net:

I don't think hallucination is that poorly understood, tbh. It's related to the grounding problem to an extent, but it's also a consequence of the model being a non-cognitive generative model: you're just sampling from a distribution. The output is considered "wrong" because to us, truth is obvious. But if you look at generative models beyond language models, the universality of this behavior is obvious. You cannot have the ability to make a picture of minion JD Vance without LLMs hallucinating (or the ability to have creative writing, for a same-domain analogy). You can see it in humans too, in things like wrong-word errors, word salad/verbal diarrhea, and certain aphasias; language function is preserved in some cases even when logical ability is damaged.

With no way to re-examine and make judgements about its own output, and no relation to reality (or some version of it), the unconstrained output of the generative process is inherently untrustworthy. That is to say: all LLM output is hallucination, and it only gains a relation to the real when interpreted by the user. Massive amounts of training data are used to "bake" the model so that the likelihood of producing text we would consider "true" is better than random (or pretty high in some cases). This extends to the math realm too, and is likely why chain-of-thought (CoT) prompting improves apparent reasoning so dramatically (and also likely why CoT only works once a model is of sufficient size). The outputs are just dreams, and they only gain meaning through our own interpretation. They do not reflect reality.
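To make the "just sampling from a distribution" point concrete, here's a toy sketch of autoregressive decoding. The vocabulary, the scores, and the `fake_logits` stand-in are all made up for illustration (no real model or API); the point is that the decoding step is the same draw whether the continuation happens to be true or false, so truth never enters the procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a language model's vocabulary and next-token scores.
# A real LLM differs in scale, not in mechanism.
VOCAB = ["the", "capital", "of", "France", "is", "Paris", "Lyon", "."]

def fake_logits(context):
    # Illustrative scores; a real model computes these from learned weights.
    scores = rng.normal(size=len(VOCAB))
    if context and context[-1] == "is":
        scores[VOCAB.index("Paris")] += 2.0  # likely, and happens to be true
        scores[VOCAB.index("Lyon")] += 1.5   # nearly as likely, and false
    return scores

def sample_next(context, temperature=1.0):
    logits = fake_logits(context) / temperature
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    # The model never checks truth: it just draws from this distribution.
    return VOCAB[rng.choice(len(VOCAB), p=probs)]

context = ["the", "capital", "of", "France", "is"]
for _ in range(3):
    context.append(sample_next(context))
print(" ".join(context))  # may continue "Paris" or "Lyon"; same process either way
```

Whether the sample lands on "Paris" or "Lyon", nothing in the loop distinguishes them; only a reader judging the output does.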

REALLY crank thoughts: It's more than just a patch tbh, it's a radical difference in structure compared to LLMs; the language model becomes a language module. Also, it doesn't solve the grounding problem - the only thing that can do that is multi-modality: the network needs a coherent representational system that is also related to the real in some way. Further question: if such a system of meta-awareness is responsible for p-consciousness in humans, would incorporating it into an AI system also provide p-consciousness? Is it inherently impossible to create the desired artificial intelligence systems without imbuing them with subjective experience? I'm beginning to suspect that might be the case.
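For what "the language model becomes a language module" might look like structurally, here's a minimal sketch of a generate-then-verify loop. Everything in it is hypothetical: `generate`, `verify`, and the toy fact table are placeholders standing in for the generative module and for some grounded system that re-examines its output, not any real design.

```python
# Hypothetical components, purely for illustration: a real system would put
# an actual LLM behind generate() and a grounded knowledge source (retrieval,
# a world model, a human) behind verify().

FACTS = {"capital of France": "Paris"}  # stand-in for a grounded store

def generate(prompt, attempt):
    # Toy generator that sometimes emits a fluent but false answer.
    return ["Lyon", "Paris"][attempt % 2]

def verify(prompt, draft):
    # Meta-level re-examination against something outside the generator.
    expected = FACTS.get(prompt)
    return expected is not None and draft == expected

def answer(prompt, max_attempts=3):
    for attempt in range(max_attempts):
        draft = generate(prompt, attempt)
        if verify(prompt, draft):
            return draft  # accepted only after the external check
    return None  # refuse rather than emit an unvetted draft

print(answer("capital of France"))  # -> "Paris", accepted on the second attempt
```

The point of the structure is that acceptance happens outside the generator; the bare model has no such judgement step, which is the difference in kind being described above.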