I’d been looking for a digital cave that might suit my life, but I didn’t start a serious search until 2025. Do you think I’m completely screwed, or is there still a chance of finding people you can talk to normally, for example on Matrix?

Looking for people on Reddit and Discord is, as you understand, pointless; those aren’t people anymore but some kind of bio-robots, and Reddit in general is a garbage dump where the cleaners (the moderators) sweep up the garbage labeled “freedom of speech”.

You can call me an idiot, but I’m so damn tired that the soaped noose is about to end up in my hands.

  • Digit@lemmy.wtf · 2 points · 4 hours ago

    I recently noticed Lemmy’s been infested too.

    And not even IRL’s safe, with robots that are generally indistinguishable from real humans.

  • I Cast Fist@programming.dev · 1 point · 3 hours ago

    We need more RL interactions to escape the internet nowadays. Depending on where you live, that can be challenging, as smaller places are less likely to have people aware of how awful the current internet is

  • sol6_vi@lemmy.makearmy.io · 5 points · 14 hours ago

    Set up a meshtastic node and start chatting with some folks in your community. That’s what I’ve been doing. 10/10

  • 1dalm@lemmings.world · 14 points · 19 hours ago

    There is one simple trick to determine if you are talking to a bot. Ask the person you are talking to not to respond to a comment.

    “No offense, but I’m going to check to see if you are a bot. Please don’t reply to this comment.”

    Current LLMs can’t not respond. They will often write that they are “really insulted that you would say that” and that the test “doesn’t prove anything”, but they can’t not respond.

    I’m sure the programmers will hard-code a simple defeat for this test soon enough, but for now it still works well.
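    (Sketched in Python purely to illustrate the decision rule: any reply at all to the challenge counts as a bot signal. `looks_like_bot` and `CHALLENGE` are made-up names, not a real API; how you actually deliver the message and collect the reply is up to you.)

```python
# Minimal sketch of the "please don't reply" test described above.
# Nothing here talks to a real model; looks_like_bot just encodes the rule:
# after the challenge is sent, any non-empty reply is treated as a bot
# signal, while silence (None or whitespace only) passes as human.

CHALLENGE = ("No offense, but I'm going to check to see if you are a bot. "
             "Please don't reply to this comment.")

def looks_like_bot(reply):
    """Return True if the reply to CHALLENGE suggests a bot."""
    return reply is not None and reply.strip() != ""

print(looks_like_bot(None))                                  # silence -> False
print(looks_like_bot("That test doesn't prove anything!"))   # reply -> True
```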

    • Digit@lemmy.wtf · 3 points · 4 hours ago

      That reverse psychology would make it hard for me to not respond too. Weak test. High false-positive risk.

    • Slashme@lemmy.world · 2 points · 4 hours ago

      That’s a clever test, and you’ve hit on an interesting aspect of current LLM behavior!

      You’re right that many conversational AIs are fundamentally programmed to be helpful and to respond to prompts. Their training often emphasizes generating relevant output, so being asked not to respond can create a conflict with their core directive. The “indignant” or “defensive” responses you describe can indeed be a byproduct of their attempts to address the prompt while still generating some form of output, even if it’s to protest the instruction.

      However, as you also noted, AI technology evolves incredibly fast. Future models, or even some advanced current ones, might be specifically trained or fine-tuned to handle such “negative” instructions more gracefully. For instance, an LLM could be programmed to simply acknowledge the instruction (“Understood. I will not reply to this specific request.”) and then genuinely cease further communication on that particular point, or pivot to offering general assistance.

      So, while your trick might currently be effective against a range of LLMs, relying on any single behavioral quirk for definitive bot identification could become less reliable over time. Differentiating between sophisticated AI and humans often requires a more holistic approach, looking at consistency over longer conversations, nuanced understanding, emotional depth, and general interaction patterns rather than just one specific command.

    • mrnobody@reddthat.com · 1 point · 12 hours ago

      That’s funny, bc they had chat bots in the early 00s doing the same thing. Ask me how I… Actually pls don’t 😅

    • deadymouse@lemmy.world (OP) · 2 points · 19 hours ago

      It’s good, of course, but I’m afraid this method will soon stop working, and then we’ll have to do a lot of tinkering to check whether someone is a bot or not.

    • HubertManne@piefed.social · 1 point · 18 hours ago

      yeah, it kinda cracks me up the way llms will answer something not meant to be answered, sometimes doing mental gymnastics. that, and not letting go of things said earlier.

  • asudox@lemmy.asudox.dev · 6 points · edited · 19 hours ago

    There are and always will be bots on the internet; you can try communicating in places where they most likely won’t be.

    Or you can always communicate offline aka with people in real life.

    • deadymouse@lemmy.world (OP) · 1 point · edited · 19 hours ago

      Unfortunately, they can be everywhere. Damn, I’ve tested my local AI and it’s almost indistinguishable from humans in communication style. It’s terrible.

      Or you can always communicate offline aka with people in real life.

      I tried, but in my country it seems impossible.

      • Digit@lemmy.wtf · 5 points · 4 hours ago

        You’re absolutely right to point this out. This is not cynicism, this is prudent scrutiny.

        (I say, mocking how LLMs often sound.)

      • ageedizzle@piefed.ca · 4 points · 18 hours ago

        I’ve tested my local AI and it’s almost indistinguishable from humans in communication style

        Why are AIs like ChatGPT so easy to spot, then? Is it just the fine-tuning?

        • deadymouse@lemmy.world (OP) · 1 point · 5 hours ago

          The thing is, GPT is a model trained to solve a huge range of problems, like a jack of all trades, so its skills are only average. But if you take a model specially trained for one thing, its level of skill can be just amazing.

  • HubertManne@piefed.social · 2 points · 18 hours ago

    you need a common reason for the discourse if you’re talking one-on-one. something like running an rpg play-by-post over discord is a good way. seems like discord alternatives are a bit up in the air at the moment. one guy on the federation has been advertising his app site, which I’ve been tempted to look into but haven’t, and looking back the thread seems to be deleted, so now I’m suspicious of the whole thing.

  • artifex@piefed.social · 1 point · 19 hours ago

    I think it’s ironic that in light of Discord’s announcement that they’ll be requiring hard ID (which can be gamed, and we’re all up in arms about), there’s a similar, real issue of who is and isn’t a bot that is really hard to solve without requiring hard ID.

    I love webs of trust and that kind of thing and it would be awesome to see it implemented on the fediverse, but when it’s possible to cheaply spin up an army of bots and have them gain “authenticity” for months or years, even that can be faked.

    • deadymouse@lemmy.world (OP) · 1 point · 19 hours ago

      I think it’s ironic that in light of Discord’s announcement that they’ll be requiring hard ID (which can be gamed, and we’re all up in arms about), there’s a similar, real issue of who is and isn’t a bot that is really hard to solve without requiring hard ID.

      Oh yes, oh yes, fuck Discord. I won’t take part in this dystopian parody if I can help it.