this post was submitted on 24 May 2024
askchapo
you are viewing a single comment's thread
My experience using Gemini for generating filler text leads me to believe it has a far worse grip on reality than ChatGPT does. ChatGPT generates conversational, somewhat convincing language. Gemini generates barely grammatical language that you can't even skim without noticing glaring mistakes and nonsense sentences.
I think Google's AI is particularly bad, but there's a larger point to be made about how irresponsible it is to deploy an LLM to do what Google is doing, regardless of how good the model is.
Fundamentally, all an LLM knows how to do is put words together to form sentences that are close to what it has seen in its training data. Attention, the mechanism LLMs use to analyze the relationships between words before generating a response to a prompt, is supposed to model the real-world, context-dependent meaning of each word. While there's a lot of potential in attention as a mechanism for bringing AI models closer to actual intelligence, the fundamental problem is that no fancy word-embedding process can give the AI a model of meaning that elevates words from symbols to concepts, because you can't "conceptualize" anything by multiplying a bunch of matrices. Attention isn't all you need: even if it's all you need for sequence processing, it's not enough to make an AI anywhere close to intelligent.
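To make the "multiplying a bunch of matrices" point concrete, here's a minimal NumPy sketch of scaled dot-product attention, the core operation inside a transformer. The variable names and toy dimensions are mine, and it leaves out everything a real layer has (multiple heads, masking, positional information, trained weights), but the shape of the computation is accurate: projections, a softmax over similarity scores, and a weighted average.

```python
import numpy as np

def softmax(x, axis=-1):
    # subtract the max for numerical stability
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(X, W_q, W_k, W_v):
    """Single-head attention over a sequence of token embeddings X.

    X: (seq_len, d_model) token embeddings
    W_q, W_k, W_v: (d_model, d_head) projection matrices
    (random stand-ins here for what would be learned weights)
    """
    Q = X @ W_q  # queries: what each token is "looking for"
    K = X @ W_k  # keys: what each token "offers"
    V = X @ W_v  # values: the content that actually gets mixed
    d_head = Q.shape[-1]
    # every token scores every other token; softmax turns scores into weights
    weights = softmax(Q @ K.T / np.sqrt(d_head))
    # each output is a context-weighted average of the values, i.e.
    # "meaning" here is nothing more than arithmetic over vectors
    return weights @ V

# toy example: 4 tokens, 8-dim embeddings, 8-dim head
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))  # stand-ins for word embeddings
W_q, W_k, W_v = (rng.normal(size=(8, 8)) for _ in range(3))
print(scaled_dot_product_attention(X, W_q, W_k, W_v).shape)  # (4, 8)
```

Stack a few dozen of these layers with bigger matrices and you have, structurally, an LLM: nowhere in that pipeline is there a step where a symbol gets grounded in a concept.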
You can't expect a language model to function as a search engine or an assistant, because those are tasks that require the AI to understand the world, not just how words work, and I think it's ultimately gonna take a lot of duds and weird failures like this Google product before tech companies figure out where the current generation of LLMs actually fits. It's like crypto: it blew up and got everywhere before people quickly realized how much of a scam it was, and now it still exists but it's niche. LLMs aren't gonna be niche at all, even once the VC money dries up, but I highly doubt we'll see much more of this overgeneralization of AI. Maybe once the next generation of carbon-guzzling machine learning models, the ones that take several gigawatts to operate in a datacenter the size of my hometown, is finally ready to go, they'll figure out a search engine assistant.