Hoo boy. The original person being reposted continues, on their original post, to insist that we cannot be certain that genAI does not have feelings.
They are literally predictive algorithms. If you have even a basic understanding of how LLMs work (not something a lot of pro-AI people have), you’d know this is completely untrue. They do not have genuine thoughts; they just say whatever the model predicts the response would be, based on previous sources.
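To make the “predictive algorithm” point concrete, here is a minimal sketch of the generation loop such a model runs; `predict_next_distribution` is a hypothetical stand-in for a trained model, not any real API:

```python
# Minimal sketch of autoregressive text generation (illustrative only).
# predict_next_distribution is a hypothetical stand-in for a trained model:
# it maps the tokens so far to a probability distribution over the next token.
import random

def predict_next_distribution(tokens):
    # A real LLM computes this with a neural network fitted to the statistics
    # of its training corpus; here we just return a placeholder distribution.
    vocab = ["the", "cat", "sat", "on", "mat", "."]
    return {w: 1.0 for w in vocab}

def generate(prompt_tokens, max_new_tokens=8):
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        dist = predict_next_distribution(tokens)
        words = list(dist)
        weights = [dist[w] for w in words]
        tokens.append(random.choices(words, weights=weights)[0])
    return " ".join(tokens)

print(generate(["the", "cat"]))
```

The only point illustrated is the shape of the loop: each output token is a prediction conditioned on the previous ones, nothing more.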
Just complete the delusional circuit and tell them you can’t be sure they aren’t an AI, and ask them how they would prove they aren’t.
How do we have people wasting their time arguing about software having feelings when we haven’t even managed to convince the majority of people that fish and crabs and stuff can feel pain, even though they don’t make a frowny face when you hurt them?
That’s easy: it’s because LLM output is a reasonable simulation of sounding like a person. Fooling people’s consciousness detector is just about their whole thing at this point.
Crabs should look into learning to recite the Pledge of Allegiance in the style of Lady Gaga.
Removed by mod
The odds are nonexistent.
Removed by mod
Except LLMs are just that: large language models. All they have is words. They don’t even know what the words mean. I hate so much that it even started to get called AI in the first place, as if it had any intelligence whatsoever, let alone an artificial one.
Removed by mod
I guess I’m the local bertologist today; look up Dr. Bender for a similar take.
When we say that LLMs only have words, we mean that they only manipulate syntax with first-order rules; the LLM doesn’t have a sense of meaning, only an autoregressive mapping which associates some syntax (“context”, “prompt”) to other syntax (“completion”). We’ve previously examined the path-based view and bag-of-words view. Bender or a category theorist might say that syntax and semantics are different categories of objects and that a mapping from syntax to semantics isn’t present in an LLM; I’d personally say that an LLM only operates with System 3 — associative memetic concepts — and is lacking not only a body but also any kind of deliberation. (Going further in that direction, the “T” in “GPT-4” is for Transformers; unlike e.g. Mamba, a Transformer doesn’t have System 2 deliberation or rumination, and Hofstadter suggests that this alone disqualifies Transformers from being conscious.)
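As an illustration of what a purely syntactic, autoregressive mapping looks like, here is a toy bigram sketch; it is nothing like a real transformer in scale or architecture, but it shows a system that associates contexts with completions using only co-occurrence counts, with no representation of meaning anywhere:

```python
# Toy illustration of a purely syntactic autoregressive mapping:
# a bigram model that "completes" text using only co-occurrence counts.
# This is a deliberately tiny stand-in, not a real transformer.
from collections import defaultdict
import random

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count which token follows which (syntax -> syntax, no meaning involved).
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def complete(context_word, length=8):
    out = [context_word]
    for _ in range(length):
        candidates = following.get(out[-1])
        if not candidates:
            break
        out.append(random.choice(candidates))  # pick a next token it has "seen"
    return " ".join(out)

print(complete("the"))  # e.g. "the cat sat on the rug . the dog"
```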
If you made a perfect copy of me, a ‘model’, I think it would have consciousness. I would want the clone treated well even if some of the copied traits weren’t perfect.

I think that this collection of misunderstandings is the heart of the issue. A model isn’t a perfect copy. Indeed, the reason that LLMs must hallucinate is that they are relatively small compared to their training data and therefore must be lossy compressions, or blurry JPEGs as Ted Chiang puts it. Additionally, no humans are cloned in the training of a model, even at the conceptual level; a model doesn’t learn to be a human, but to simulate what humans might write. So when you say:
Spinal injuries are terrible. I don’t think ‘text-only-human’ should fail the consciousness test.

I completely agree! LLMs aren’t text-only humans, though. An LLM corresponds to a portion of the left hemisphere, particularly Broca’s area, except that it drives a tokenizer instead; chain-of-thought “thinking” corresponds to rationalizations produced by the left-brain interpreter. Humans are clearly much more than that! For example, an LLM cannot feel hungry because it does not have a stomach which emits a specific hormone that is interpreted by a nervous system; in this sense, LLMs don’t have feelings. Rather, what should be surprising to you is the ELIZA effect: a bag of words that can only communicate by mechanically associating memes to inputs is capable of passing a Turing test.
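To put rough numbers on the “blurry JPEG” point above, here is a back-of-the-envelope sketch; the figures are illustrative assumptions, not measurements of any particular model:

```python
# Back-of-the-envelope comparison of model size vs. training-text size.
# All figures below are illustrative assumptions, not measurements.

params = 7e9                 # assume a 7-billion-parameter model
bytes_per_param = 2          # assume 16-bit weights
model_bytes = params * bytes_per_param

training_tokens = 2e12       # assume ~2 trillion training tokens
bytes_per_token = 4          # assume ~4 bytes of text per token on average
corpus_bytes = training_tokens * bytes_per_token

print(f"model:  ~{model_bytes / 1e9:.0f} GB of weights")
print(f"corpus: ~{corpus_bytes / 1e12:.0f} TB of text")
print(f"ratio:  ~{corpus_bytes / model_bytes:.0f}x more text than weights")
```

Under assumptions anywhere in this ballpark, the weights hold far less information than the training text, so lossy summarization, and with it the occasional confabulated detail, is built in.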
Also, from one philosopher to another: try not to get hung up on questions of consciousness. What we care about is whether we’re allowed to mistreat robots, not whether robots are conscious; the only reason to ask the latter question is to have presumed that we may not mistreat the conscious, a hypocrisy that doesn’t withstand scrutiny. Can matrix multiplication be conscious? Probably not, but the shape of the question (“chat is this abstractum aware of itself, me, or anything in its environment”) is kind of suspicious! For another fun example, IIT (integrated information theory) is probably bogus not because thermostats are likely not conscious but because “chat is this thermostat aware of itself” is not a lucid line of thought.
no fucking thanks
“Consciousness requires semantic understanding” - I don’t see a way to operationalize this that GPT-4 fails. It misunderstands some words, and so does every human I know. You give GPT a definition, and it can use it about as well as a schoolchild.

I would interrogate this, plus “intelligence”, a little more. LLMs don’t “understand” in the way that we do, and personally I don’t think that they really understand at all. A dictionary containing usage examples also basically passes this test, for example. LLMs also don’t really have “intelligence”.
Anyway, we’re very far from figuring out consciousness. Claims about LLMs being conscious are meaningless marketing.