You're wrong, and silly. The post you're linking to is from the personal blog of a data "manager" whose focus is how decisions are made. The post is about what an "AI" might interpret as meaning... but it completely overlooks that ALL OUTPUTS are learned from training data.
The poster's method? Talking to ChatGPT. So you may as well be invoking Blake Lemoine (the spiritualist at Google who believed language models had souls because when he talked to them, they seemed to have them) - the post you're linking makes that same mistake.
It's trained on humans who write as if they have conceptual models. THAT'S THE ENTIRE TRICK. That's why its responses "seem to have intelligence": it's mimicking the intelligence that went into writing all that training data - OUR HUMAN INTELLIGENCE. We wrote the data it trains on. WE have intelligence; it has a fancy probabilistic form of regurgitation.
The probabilities are computed from the "shape" of language, but that's not understanding. It has no internal sense of the world or of what's being said. It's "locked in": frozen at training time, and limited to its training data plus whatever text is on the screen.
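To make that concrete, here's a deliberately tiny sketch of the principle (a bigram chain in Python - nothing like a transformer internally, and the training text is just a made-up example): the output distribution comes entirely from patterns in the text it was fed, with no model of what any of the words refer to.

```python
import random
from collections import defaultdict

# Toy bigram "language model": it knows nothing about the world, only
# which word tends to follow which in the text it was trained on.
training_text = (
    "the model writes as if it understands because the humans "
    "who wrote the training data actually understood what they wrote"
)

# Record, for each word, every successor word that follows it.
successors = defaultdict(list)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    successors[current].append(nxt)

def generate(seed, length=12):
    """Sample successors in proportion to how often they appeared in
    training: pure regurgitation of the shape of the training text."""
    out = [seed]
    for _ in range(length):
        choices = successors.get(out[-1])
        if not choices:
            break
        out.append(random.choice(choices))
    return " ".join(out)

print(generate("the"))
```

Scale that up by billions of parameters and the output gets fluent, but the source of the fluency is still the humans who wrote the text.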
But yeah dude, posting that link as "proof" of intelligence is silly. Just because something can pretend to, or "seem to", reason, dream, or make decisions doesn't mean those things are actually being done. LLMs only respond when prompted - they're not sitting there thinking while they're silent. Likewise, they're not learning from anything outside of the text on the screen and their training data. They won't "think" about any conversations they've had in the past, and they won't think about anything after they've produced their output.
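To be precise about that: a deployed model is, functionally, a pure function of its frozen weights and whatever text currently sits in its context window. A hedged sketch (FROZEN_WEIGHTS and generate_reply are made-up illustrative names, not any real inference API):

```python
# Illustrative sketch only: FROZEN_WEIGHTS and generate_reply are
# stand-ins for a real model's parameters and forward pass.

FROZEN_WEIGHTS = {"param": 0.42}  # fixed at training time, never updated

def generate_reply(weights, context_window):
    """Output depends only on the frozen weights and the visible text.
    Nothing runs between calls; nothing is remembered across them."""
    # A real model would run a forward pass here; this placeholder just
    # makes the functional shape visible.
    return f"reply derived from {len(context_window)} chars of context"

a = generate_reply(FROZEN_WEIGHTS, "Do you remember our last chat?")
b = generate_reply(FROZEN_WEIGHTS, "Do you remember our last chat?")
assert a == b  # same inputs, same output: no hidden state, no ongoing thought
```

Real systems add sampling randomness on top, but the weights never change between calls, and nothing carries over once the output is produced.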
It's an echo of the training data... some of which is discussion about meaning, or discussion that appears to show a conceptual framework, or talk about the experience of reasoning, or dreaming, or having a sense of meaning. So the LLM can write about those things as if it has them, or has done them... but it hasn't. Those outputs came from the HUMANS who had the EXPERIENCES. The LLM doesn't do any of that; it just writes as if it does. It writes as if it has intelligence because intelligent data went into it, and so some people mistake that for intelligence.
People who mistake an image for substance may as well be claiming that paintings of food are food, that maps of places ARE the places, or that there's a "mirror world" in your bathroom mirror. It's cute the way a child's fantasy is cute. But to suggest such a thing in this domain - as an adult - shows either idiocy in its highest form, or simply a complete lack of understanding of the technology. Of its nature. Of what's going on.
You're being tricked into seeing intelligence where there isn't any, because intelligence is reflected in the training data WE (intelligent beings) wrote. You've adopted the intended illusion rather than questioning it. An LLM has told you something, and you've believed it - much like that blog post, much like Blake Lemoine. Go try to walk into a mirror: you won't get in. It's flat; the image isn't really there. It's just a piece of glass with a black backing, reflecting the world outside of it as if it were inside of it.