Reminds me of this:

I think atproto is a good protocol, but god bluesky-the-company is dogshit.

  • self@awful.systems · 13 points · 15 hours ago

    this thread has broken containment, and the median quality of the discussion has dropped to the point where some rando decided to start a subthread about how it’s not ok to celebrate hitler’s death and also two regulars had an extremely heated fight about who was the most not-mad about the word chat as a noun/pronoun/whatever in English of all fucking things so uhhhh that’s all folks

  • ZILtoid1991@lemmy.world · 7 points · 16 hours ago

    Meanwhile I’ve seen people justifying the power use of genAI with “but people also consume as much if not more energy through their lives”.

  • 🇰 🌀 🇱 🇦 🇳 🇦 🇰 🇮 @pawb.social · 7 points · 16 hours ago

    Fuckin’ clanker lovers.

    It’s only bigotry if you believe machines incapable of thought or feeling deserve human rights. In which case, you have bigger problems than people being “racist” against bullshit-ass generative AI.

  • Juice@midwest.social · 16 points · 1 day ago

    This is called rationalizing because any relationship with reality it has is strictly rationed

  • kadaverin0@lemmy.dbzer0.com · 24 points · 1 day ago

    How online do you have to be where “people dunking on AI “artists” is like Kristallnacht” doesn’t sound completely fucking deranged?

  • Chloé 🥕@lemmy.blahaj.zone · 4 points · 20 hours ago

    idk, I’ve seen enough people call ai users

    CW: racist bullshit but with words swapped

    “clanker-loving species traitors”

    to know that some anti-ai stuff is steeped in bigotry (and yes, it’s still bigotry if it’s ironic btw)

    i don’t think most anti-ai people are like this. but some absolutely are and denying it helps no one, and it harms marginalized people

    • self@awful.systems · 9 points · 15 hours ago

      in every serious (ie not TikTok or any other right-wing or unmoderated hellhole) anti-AI community I know, bigotry gets you banned even if you’re trying to hide it behind nonsense words like a 12 year old

      meanwhile the people who seem to have dreamt up the idea that AI critical spaces are full of masked bigotry appear to be mostly Neil Gaiman Warren Ellis (see replies), who has several credible sexual assault allegations leveled against him, and Jay Graber, bluesky’s CEO who deserves no introduction (search mastodon or take a look at bluesky right now if the name’s unfamiliar). I don’t trust either of those people’s judgement as to what harms marginalized people.

        • self@awful.systems · 3 points · 15 hours ago

          you’re right, I even had Ellis’ Wikipedia page open to re-confirm the allegations but my fingers wanted it to be Gaiman for whatever reason

      • Chloé 🥕@lemmy.blahaj.zone · 3 points · 18 hours ago

        oh absolutely, fuck graber and fuck, fuuuuuuck gaiman to hell. i don’t have an inch of trust for either of them.

        tho I will say that even here on lemmy, even if it didn’t reach the awfulness of what i quoted, i’ve seen a bunch of clanker memes that were seriously iffy… I wouldn’t qualify those as “serious discussions” but they still matter in the broader ai discourse

        and I’d like to clarify my stance: fuck ai. it can have its uses sometimes but the dominant (and promoted) uses are awful for all the reasons everyone knows about. just wanted to make it clear that I am not an ai supporter

        • self@awful.systems · 8 points · 17 hours ago

          oh absolutely, fuck graber and fuck, fuuuuuuck gaiman to hell. i don’t have an inch of trust for either of them.

          tho I will say that even here on lemmy, even if it didn’t reach the awfulness of what i quoted, i’ve seen a bunch of clanker memes that were seriously iffy… I wouldn’t qualify those as “serious discussions”

          I agree with all of this

          but they still matter in the broader ai discourse

          and disagree strongly with this. part of the mission of TechTakes and SneerClub is that they must remain a space where marginalized people are welcome, and we know from prior experience that the only way to guarantee that is to ensure that bigots and boosters (and sometimes they’re absolutely the same person — LLMs are fashtech after all) can’t ever control the discourse. I know through association that a lot of moderated AI-critical spaces, writers, and researchers follow a similar mission.

          now, unmoderated and ineffectively moderated spaces are absolutely vulnerable to being turned into fascist pipelines, and inventing slurs is one way they do it (see also “waffles” quickly being picked up as an anti-trans slur on bluesky, which has moderation that’s hostile to its marginalized userbase). if that’s something that’s happening in a popular community and there are enough examples to show a pattern, then I’d love to have it as a post in TechTakes or as a blog link we can share around the AI-critical community as a warning.

    • fnix@awful.systems · 7 points · 18 hours ago

      Any minimally competent critique of AI would make such bigotry ipso facto meaningless. Note that the cited phrase implicitly accepts the premise of “AIs” as being in the same category of sentient beings as humans by virtue of it being possible to betray the latter for the former, and hence for any genuinely AI-critical person, it makes about as much sense as talking about ‘anti-table bigotry’; it’s just a meaningless configuration of words if one understands what they mean.

    • ebu@awful.systems · 3 points · 17 hours ago

      from what i see, white people simply clamor for a context in which they’re “allowed” to finally call someone the n-word, and are willing to accept substitute targets for their racism

      add in a protective cloak of “it’s ironic and a joke and YOU’RE the real racist for pointing this out” and you get a whole lot of people who are extremely okay slinging around barely modified racial slurs

        • ebu@awful.systems · 4 points · 16 hours ago

          and people get very defensive about this one too. like i’m pretty confident that coolboy004 on reddit is not giving a nuanced delivery on the ethics of a company running an ai-powered call center when he types “screws will not replace us” in all caps on /r/fuckai, and yet

          i think it sucks that we’re stuck with, say, bluesky engineers genuinely trying to pull the most moronic variant of “but what if the stochastic text generator might have feelings in the future too”, but we still need to be able to talk about why people feel the need to make “clanka with the hard r” jokes (answer: it’s racism)

      • self@awful.systems · 6 points · 18 hours ago

        you’ve never posted on our instance before as far as I can tell and I’m pretty sure I didn’t ask you to fucking gatekeep one of our threads and start a shitty little fight that I have to clean up

  • sleepundertheleaves@infosec.pub · 81 points · 2 days ago

    This shows why it’s so easy for conservatives to reverse Uno the language of social justice, painting themselves as the victims of oppression and liberals / women / minorities / immigrants / LGBTQ+ people / anyone else who exists without their consent as oppressors. They refuse to admit that words mean things, and that things are more important than words.

    It’s not a lack of reading comprehension. It’s a lack of reality comprehension.

    • JcbAzPx@lemmy.world · 7 points · 1 day ago

      The ‘change the subject’ thing can be useful if you’re changing like for like. Equating AI algorithms to the Jewish people is very far from that. To a disturbing degree.

    • shalafi@lemmy.world · 3 points · 1 day ago

      “Whatever exists, he said. Whatever in creation exists without my knowledge exists without my consent.” The Judge

      ― Cormac McCarthy, Blood Meridian, or, the Evening Redness in the West

      • sleepundertheleaves@infosec.pub · 20 points · 2 days ago

        I think it’s the other way around. The right is incredibly good at memes - because memes presume underlying facts without having to prove those facts, and, by portraying them humorously, imply that anyone asking for proof of the underlying facts is taking the meme too seriously.

        Remember last summer when the internet was flooded with memes about Haitians eating people’s pets? That whole vicious racist slander based on a single false report that in any other context would have been absolutely unacceptable, but anybody who pointed out “hey, this is vicious racist slander based on a single false report and is absolutely unacceptable” got accused of being humorless wokescolds taking a joke too seriously?

        It’s why the Trump White House posts so many viciously racist and contemptuous memes on Twitter. It’s why the fucking Department of Homeland Security likes to hide the numbers 14 and 88 in its social media posts. Because it puts the left in a Catch-22: if they call out the memes, they look like humorless enemies of free speech, and if they don’t, it normalizes racism even further.

        The right has mastered the art of the meme. The left may be winning the meme Olympics, but the right are fucking professionals.

        • corbin@awful.systems · 10 points · 1 day ago

          I think it’s the other way around. The memes are incredibly good at left vs right because left- and right-leaning people presume underlying facts and the memes reassure people that those facts are true and good (or false and bad, etc.) without doing any fact-finding.

          When we say “the right can’t meme” what we mean is that the right’s memes are about projecting bigotry. It’s like saying that the right has no comedians; of course they have people that stand up in front of an audience and emit words according to memes, tropes, and narremes, such that the audience laughs. Indeed, stand-up was invented by Frank Fay, an open fascist. (His Behind the Bastards episodes are quite interesting.) What we’re saying is that the stand-up routine is bigoted. If this seems unrelated, please consider: the Haitians-eating-pets joke is part of a stand-up routine that a clown tells in order to get his circus elected.

          • swlabr@awful.systems · 7 points · 1 day ago

            In my understanding: they aren’t making jokes with the expectation that their audience laughs at the joke itself. The audience is laughing at the target of the joke. In this sense, you might say the right doesn’t meme, and further speculate that they can’t meme.

            So yeah they post and repost a lot of “memes,” but it’s never really to be like: “look at this clever meme I made,” it’s just “look at this meme that makes fun of x people”. Their accusation of humorlessness is just a confession.

    • sp3ctr4l@lemmy.dbzer0.com · 7 points · 2 days ago

      Yep.

      Oh, your strategy is… invent a new vocabulary to describe yourselves and your struggles?

      … and then do nothing other than ‘promote discussion’ and ‘raise awareness’?

      Well, what are fascists, historically, really good at?

      Oh, right, it’s, uh, perverting language and also pretending to be something they actually aren’t, so as to be more soundbite-palatable, basically: more broadly appealing, more difficult to counter-argue / “debunk” without exhausting yourself.

      Sure would be neat if anyone learned anything from history, ever, but nope, thus the tragicomedy goes on.

      • sleepundertheleaves@infosec.pub · 6 points · 2 days ago

        Wait. Hold on. Are you blaming marginalized groups for inventing language to describe their marginalization? And then talking about it?

        You know, like how democracy is supposed to work, where you ‘promote discussion’ about a problem until you’ve convinced a critical mass of voters that there is a problem and they need to vote for policies that fix it?

        Is the implication here that fascists, these experts at manipulating language in dishonest ways, would be helpless if they didn’t have new words from marginalized groups to pervert?

        Because I doubt fascists have any problems with attacking the people they hate, whether those people make up new terms or not, or, for that matter, whether those people talk about their marginalization or stay quiet to try and avoid fascist attacks.

        • sp3ctr4l@lemmy.dbzer0.com · 6 points · 2 days ago

          Wait. Hold on. Are you blaming marginalized groups for inventing language to describe their marginalization? And then talking about it?

          Nope.

          I am blaming any of them, and/or their allies, who seemingly think / thought that that alone would be sufficient to stop fascism.

          This is an immensely naive way of thinking.

          You know, like how democracy is supposed to work, where you ‘promote discussion’ about a problem until you’ve convinced a critical mass of voters that there is a problem and they need to vote for policies that fix it?

          The entire strategy of fascists is to pervert how democracy is “supposed to work”, thus revealing the state in its true nature: a monopoly on ‘legitimate’ violent force, one that can be made to do nearly anything with that force once it is fully perverted.

          Is the implication here that fascists, these experts at manipulating language in dishonest ways, would be helpless if they didn’t have new words from marginalized groups to pervert?

          No, the implication is that you can’t fight fascists with words alone and win; you have to be able to credibly match the power and force they wield, by means more clever than just talking at or about them.

          You have to cut off their funding, you have to jail them for their crimes, you have to actually present a workable solution to the economic plight of people who are likely to become fascists (conservatives), you have to address that the root cause of fascism is the decay of a corrupt capitalist democracy, and by ‘address’, I again mean with actual actions, actual policy changes, or extragovernmental means like a mutual aid group.

          Because I doubt fascists have any problems with attacking the people they hate, whether those people make up new terms or not, or, for that matter, whether those people talk about their marginalization or stay quiet to try and avoid fascist attacks.

          They don’t, but fascism is largely a cancer that grows, much more so than it is some kind of innate, unchangeable aspect of… well at least most people.

          So, the cure is to start at the root and treat the causes of the problem, comprehensively.

          Don’t do that?

          Sorry, but historically the fascists then win, until some later war or mass armed rebellion or resistance basically kills or jails them all, and then also literally sends them to reeducation camps.

          (EDIT: Well, maybe not literally ‘reeducation camps’ in the sense of physically isolated camps, but at least some kind of comprehensive, compulsory, de-fascizing reeducation system)


          I am not trying to say the burden of stopping fascism lies squarely on the shoulders of those most likely to be persecuted by fascists.

          I am saying that any such people (myself included) who believe that … just raising awareness and promoting discussion alone, for its own sake, as an end in itself rather than a means to a more actually useful end…

          Anyone who believes that alone will work is a fool.

          Again, because this is what history shows us.

  • WatDabney@sopuli.xyz · 62 points · 2 days ago

    If we’re going to focus on form instead of content, it’s amusing that “if you say mean things about ai then you’re a bigot” is the exact same form as “if you say mean things about Trump then you’re a terrorist.”

    • unwarlikeExtortion@lemmy.ml · 1 point · 2 days ago

      If you say the same bigoted arguments, you’re like a bigot.

      If you say the same terrorist arguments then you’re like a terrorist.

      Except the people saying mean stuff about Trump are much less terroristic than Trump supporters.

      The forms are the same (it’s the most basic syllogism, in fact). The content isn’t, and the merits of the antecedents matter.

  • iAmTheTot@sh.itjust.works · 29 points · 2 days ago

    Hoo boy. The original person being reposted goes on, in their original post, to say that they believe we cannot be certain that genAI does not have feelings.

    • Pieplup@lemmy.ml · 1 point · 17 hours ago

      They are literally predictive algorithms. If you have even a basic understanding of how LLMs work (not something a lot of pro-AI people have), you’d know this is completely untrue. They do not have genuine thoughts; they just say whatever the model predicts the response would be, based on previous sources.

    • sp3ctr4l@lemmy.dbzer0.com · 17 points · 2 days ago

      Just complete the delusional circuit and tell them you can’t be sure they aren’t an AI, ask them how they would prove they aren’t.

    • ayyy@sh.itjust.works · 8 points · 2 days ago

      How do we have people wasting their time arguing about software having feelings when we haven’t even managed to convince the majority of people that fish and crabs and stuff can feel pain, even though they don’t make a frowny face when you hurt them?

      • Architeuthis@awful.systems · 6 points · 2 days ago

        That’s easy, it’s because LLM output is a reasonable simulation of sounding like a person. Fooling people’s consciousness detector is just about their whole thing at this point.

        Crabs should look into learning to recite the pledge of allegiance in the style of Lady GaGa.

          • athatet@lemmy.zip · 4 points · 18 hours ago

            Except llms are just that. Large language models. All they have is words. They don’t even know what the words mean. I hate so much that it even started to get called ai in the first place. As if it had any intelligence whatsoever, let alone an artificial one.

              • corbin@awful.systems · 7 points · 16 hours ago

                I guess I’m the local bertologist today; look up Dr. Bender for a similar take.

                When we say that LLMs only have words, we mean that they only manipulate syntax with first-order rules; the LLM doesn’t have a sense of meaning, only an autoregressive mapping which associates some syntax (“context”, “prompt”) to other syntax (“completion”). We’ve previously examined the path-based view and bag-of-words view. Bender or a category theorist might say that syntax and semantics are different categories of objects and that a mapping from syntax to semantics isn’t present in an LLM; I’d personally say that an LLM only operates with System 3 — associative memetic concepts — and is lacking not only a body but also any kind of deliberation. (Going further in that direction, the “T” in “GPT-4” is for Transformers; unlike e.g. Mamba, a Transformer doesn’t have System 2 deliberation or rumination, and Hofstadter suggests that this alone disqualifies Transformers from being conscious.)

                If you made a perfect copy of me, a ‘model’, I think it would have consciousness. I would want the clone treated well even if some of the copied traits weren’t perfect.

                I think that this collection of misunderstandings is the heart of the issue. A model isn’t a perfect copy. Indeed, the reason that LLMs must hallucinate is that they are relatively small compared to their training data and therefore must be lossy compressions, or blurry JPEGs as Ted Chiang puts it. Additionally, no humans are cloned in the training of a model, even at the conceptual level; a model doesn’t learn to be a human, but to simulate what humans might write. So when you say:

                Spinal injuries are terrible. I don’t think ‘text-only-human’ should fail the consciousness test.

                I completely agree! LLMs aren’t text-only humans, though. An LLM corresponds to a portion of the left hemisphere, particularly Broca’s area, except that it drives a tokenizer instead; chain-of-thought “thinking” corresponds to rationalizations produced by the left-brain interpreter. Humans are clearly much more than that! For example, an LLM cannot feel hungry because it does not have a stomach which emits a specific hormone that is interpreted by a nervous system; in this sense, LLMs don’t have feelings. Rather, what should be surprising to you is the ELIZA effect: a bag of words that can only communicate by mechanically associating memes to inputs is capable of passing a Turing test.

                Also, from one philosopher to another: try not to get hung up on questions of consciousness. What we care about is whether we’re allowed to mistreat robots, not whether robots are conscious; the only reason to ask the latter question is to have presumed that we may not mistreat the conscious, a hypocrisy that doesn’t withstand scrutiny. Can matrix multiplication be conscious? Probably not, but the shape of the question (“chat is this abstractum aware of itself, me, or anything in its environment”) is kind of suspicious! For another fun example, IIT is probably bogus not because thermostats are likely not conscious but because “chat is this thermostat aware of itself” is not a lucid line of thought.
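
                If the “syntax in, syntax out” point feels abstract, here is a deliberately tiny sketch that makes it concrete: a toy bigram generator in Python. It is purely illustrative; the function names and the little corpus are invented for this comment, it resembles no real Transformer or library, and its only job is to show that an autoregressive loop can map context to a plausible continuation while containing no representation of meaning at all.

                ```python
                import random
                from collections import defaultdict

                def train_bigrams(corpus: str) -> dict:
                    """Record which token tends to follow which: pure syntax statistics."""
                    tokens = corpus.split()
                    following = defaultdict(list)
                    for current, nxt in zip(tokens, tokens[1:]):
                        following[current].append(nxt)
                    return following

                def generate(following: dict, prompt: str, length: int = 12) -> str:
                    """Autoregressively emit whatever the counts say is a likely next token.
                    Nothing in this loop knows what any of the words mean."""
                    out = prompt.split()
                    for _ in range(length):
                        candidates = following.get(out[-1])
                        if not candidates:
                            break
                        out.append(random.choice(candidates))
                    return " ".join(out)

                if __name__ == "__main__":
                    corpus = ("the model predicts the next word and the next word follows "
                              "the previous word because the counts say so")
                    table = train_bigrams(corpus)
                    print(generate(table, "the model"))
                ```

                A real LLM swaps the lookup table for a learned conditional distribution over tokens, vastly larger and far better at fooling people’s consciousness detector, but the shape of the loop is the same: context in, continuation out, semantics nowhere.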

              • swlabr@awful.systems · 3 points · 17 hours ago

                “Consciousness requires semantic understanding” - I don’t see a way to operationalize this that GPT-4 fails. It misunderstands some words, and so does every human I know. You give GPT a definition, it can use it about as well as a school child.

                i would interrogate this plus “intelligence” a little more. LLMs don’t “understand” in the way that we do, and personally I don’t think that they really understand at all. A dictionary containing examples also basically passes this test, for example. LLMs also don’t really have “intelligence”.

                Anyway, we’re very far away from figuring out consciousness. Claims about LLMs being conscious are meaningless marketing.

    • irelephant [he/him]@lemmy.dbzer0.com (OP) · 30 points · 2 days ago

      Their argument is something like this:

      People might say something like “ai is incapable of thinking” or “ai is stupid”, but if you replace the word “ai” with something like “women”, you’re saying something unacceptable.

      • V0ldek@awful.systems · 19 points · 1 day ago

        “If you said something different you would’ve said something different” what brilliant rhetoric, your mom must be proud

      • shawn1122@sh.itjust.works · 25 points · 2 days ago

        So they’re attributing personhood to AI.

        Before it has come anywhere close to meaningfully mimicking consciousness.

        Are they stupid?

        • shalafi@lemmy.world · 2 points · 1 day ago

          I always assume people like this are suburban white boys with nothing better to bitch about. I would love to see them explain how clanker is a racist term to people that actually experience racism.

          I’m a middle-aged white guy and one of my only friends is the young black guy down the street. Having a giggle because I can see the look on his face while I explain all this. “I’m listening and trying to take you seriously while not laughing my ass off in your face.”

          • AppleTea@lemmy.zip · 7 points · 18 hours ago

            There are some examples out there of people making racist skits, with the dialog barely shifted to give it the veneer of being about robots. A month back there was an infamous TikTok telling “Rosa Sparks” to “get to the back of the bus”.

            It’s pretty awful. It also doesn’t really have anything to do with AI criticisms. Racists telling the same “jokes” they’ve always told one another. But people who’ve bought into the AI hype don’t really have a response to the AI criticisms, so it’s in their interest to build up what is a pretty tenuous connection, lest the cognitive dissonance set in.

            • shalafi@lemmy.world · 3 points · 18 hours ago

              Racists can dog whistle about anything. There’s no stamping that out, can’t win against that. Trying to go after anything and everything they “code” about is playing whack-a-mole, no point.

              In any case, they’re immune to having their hypocrisy called out. Can’t win that game.

        • Trainguyrom@reddthat.com · 4 points · 2 days ago

          Yeah, my biggest takeaway is that these posts seem to assume sentience in what’s little more than a sophisticated “most likely next word” generator. There are tons of cool things that can be done with these new machine learning tools, but they are not sentient, they are not close to sentience, and we may never invent artificial sentience.

          The one thing we now know for sure is we can damn well convince people of sentience artificially far more easily than I ever suspected

          • shalafi@lemmy.world · 2 points · 1 day ago

            We’ll get true AI one day, but the timeline and methods are up in the air. I’m not even sure LLMs will be a piece of the puzzle. Guess we’ll learn something from the exercise.

  • it_wasnt_arson@awful.systems · 13 points · 2 days ago

    I love how this is so close to a cogent critique of people literally just repeating racist jokes but using a word swap to make them acceptable, and then the “(whatever that means)” hits and it all falls into place.

  • NigelFrobisher@aussie.zone · 5 points · 2 days ago

    I feel certain this person could come up with even one example of someone attacking an LLM for having the wrong “bits”.