Reminds me of this:

I think atproto is a good protocol, but god bluesky-the-company is dogshit.

    • athatet@lemmy.zip · 20 hours ago

      Except LLMs are just that: large language models. All they have is words; they don’t even know what the words mean. I hate that they ever started being called AI in the first place, as if they had any intelligence whatsoever, let alone an artificial one.

        • corbin@awful.systems · 18 hours ago

          I guess I’m the local bertologist today; look up Dr. Bender for a similar take.

          When we say that LLMs only have words, we mean that they only manipulate syntax with first-order rules; the LLM doesn’t have a sense of meaning, only an autoregressive mapping which associates some syntax (“context”, “prompt”) to other syntax (“completion”). We’ve previously examined the path-based view and bag-of-words view. Bender or a category theorist might say that syntax and semantics are different categories of objects and that a mapping from syntax to semantics isn’t present in an LLM; I’d personally say that an LLM only operates with System 3 — associative memetic concepts — and is lacking not only a body but also any kind of deliberation. (Going further in that direction, the “T” in “GPT-4” is for Transformers; unlike e.g. Mamba, a Transformer doesn’t have System 2 deliberation or rumination, and Hofstadter suggests that this alone disqualifies Transformers from being conscious.)
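          That syntax-to-syntax mapping can be sketched with a toy first-order (bigram) table standing in for a real transformer. This is purely illustrative, not how any production LLM is implemented — the point is that nothing here touches meaning, only observed token adjacency:

```python
# Toy sketch (not any real LLM): an autoregressive "model" that maps
# syntax to syntax by first-order association alone.  It never sees
# meaning -- only which token followed which in its training text.
import random
from collections import defaultdict

corpus = "the cat sat on the mat and the dog sat on the rug".split()

# First-order table: token -> list of successors observed in training.
successors = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev].append(nxt)

def complete(prompt, n_tokens, seed=0):
    """Autoregressively extend the prompt: each step conditions only on
    the previous token and samples a successor seen in training."""
    rng = random.Random(seed)
    out = prompt.split()
    for _ in range(n_tokens):
        choices = successors.get(out[-1])
        if not choices:   # token never seen mid-corpus: nothing to say
            break
        out.append(rng.choice(choices))
    return " ".join(out)

print(complete("the cat", 5))
```

          Every completion is locally plausible syntax, yet the table has no representation of what a cat or a mat is.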

          If you made a perfect copy of me, a ‘model’, I think it would have consciousness. I would want the clone treated well even if some of the copied traits weren’t perfect.

          I think that this collection of misunderstandings is the heart of the issue. A model isn’t a perfect copy. Indeed, the reason that LLMs must hallucinate is that they are relatively small compared to their training data and therefore must be lossy compressions, or blurry JPEGs as Ted Chiang puts it. Additionally, no humans are cloned in the training of a model, even at the conceptual level; a model doesn’t learn to be a human, but to simulate what humans might write. So when you say:

          Spinal injuries are terrible. I don’t think ‘text-only-human’ should fail the consciousness test.

          I completely agree! LLMs aren’t text-only humans, though. An LLM corresponds to a portion of the left hemisphere, particularly Broca’s area, except that it drives a tokenizer instead; chain-of-thought “thinking” corresponds to rationalizations produced by the left-brain interpreter. Humans are clearly much more than that! For example, an LLM cannot feel hungry because it does not have a stomach which emits a specific hormone that is interpreted by a nervous system; in this sense, LLMs don’t have feelings. Rather, what should be surprising to you is the ELIZA effect: a bag of words that can only communicate by mechanically associating memes to inputs is capable of passing a Turing test.
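          The ELIZA effect is easy to reproduce. A minimal sketch of mechanical input→response association follows; these rules are illustrative, not Weizenbaum’s original DOCTOR script:

```python
# Hedged toy of the ELIZA effect: purely mechanical pattern -> response
# rules, with no model of meaning, still read as conversational.
import re

rules = [
    (re.compile(r"\bi am (.+)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"\bi feel (.+)", re.I), "How long have you felt {0}?"),
    (re.compile(r"\bmy (\w+)", re.I), "Tell me more about your {0}."),
]

def reply(utterance):
    """Return the first matching rule's template, reflecting the user's
    own words back; no rule ever inspects what the words mean."""
    for pattern, template in rules:
        m = pattern.search(utterance)
        if m:
            return template.format(*m.groups())
    return "Please go on."   # default when nothing matches

print(reply("I feel hungry"))
```

          Three regexes and a default already elicit the feeling of being understood; scale the association table up far enough and you get something that can pass a Turing test.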

          Also, from one philosopher to another: try not to get hung up on questions of consciousness. What we care about is whether we’re allowed to mistreat robots, not whether robots are conscious; the only reason to ask the latter question is to have presumed that we may not mistreat the conscious, a hypocrisy that doesn’t withstand scrutiny. Can matrix multiplication be conscious? Probably not, but the shape of the question (“chat is this abstractum aware of itself, me, or anything in its environment”) is kind of suspicious! For another fun example, IIT is probably bogus not because thermostats are likely not conscious but because “chat is this thermostat aware of itself” is not a lucid line of thought.
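          Returning to the blurry-JPEG point above, here is a minimal sketch (illustrative corpus, not any real model) of why a table smaller than its training data must be lossy, and how that loss licenses “hallucinated” output that was never in the data:

```python
# Hedged sketch: a "model" stored as bigram pairs is a lossy compression
# of its corpus.  It forgets *who* likes *what*, keeping only local
# adjacency -- so it licenses a sentence that never appeared in training,
# the toy analogue of a hallucination.
from collections import defaultdict

corpus = "alice likes tea and bob likes coffee".split()
table = defaultdict(set)
for prev, nxt in zip(corpus, corpus[1:]):
    table[prev].add(nxt)

# "alice likes coffee" was never in the corpus, yet every one of its
# bigrams is in the table, so the model happily generates it.
novel = ["alice", "likes", "coffee"]
assert all(b in table[a] for a, b in zip(novel, novel[1:]))
print("the model happily generates:", " ".join(novel))
```

          The compression threw away exactly the global structure that distinguishes true sentences from plausible ones, which is why scaling the table down relative to the data guarantees confabulation.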

        • swlabr@awful.systems · 19 hours ago (edited)

          “Consciousness requires semantic understanding” - I don’t see a way to operationalize this that GPT-4 fails. It misunderstands some words, and so does every human I know. You give GPT a definition, it can use it about as well as a school child.

          I would interrogate this, plus “intelligence”, a little more. LLMs don’t “understand” in the way that we do; personally, I don’t think they really understand at all. A dictionary with usage examples would basically pass this test too. LLMs also don’t really have “intelligence”.

          Anyway, we’re very far from figuring out consciousness. Claims about LLMs being conscious are meaningless marketing.