• 1 Post
  • 1.07K Comments
Joined 6 months ago
Cake day: February 10th, 2025



  • The same way they prevent you from transmitting any other illegal content: they fine you and/or throw you in jail if they know you’re doing it.

    It’s trivially easy to detect encrypted messages just by measuring the entropy of each one: ciphertext is statistically indistinguishable from random noise, so its entropy per byte sits near the maximum, while ordinary text sits far below it. A messaging provider would just turn you in if it detected that.

    You could probably get away with peer-to-peer messaging, but your ISP would be able to detect that you’re using unapproved encryption and then turn you in to the government.
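    The entropy check described above can be sketched in a few lines of Python. This is a minimal illustration of the idea, not anything a real provider is known to run; the threshold and the sample data are made up for demonstration:

    ```python
    import math
    from collections import Counter

    def shannon_entropy(data: bytes) -> float:
        """Shannon entropy in bits per byte (0.0 to 8.0)."""
        if not data:
            return 0.0
        n = len(data)
        return -sum((c / n) * math.log2(c / n) for c in Counter(data).values())

    # Ordinary English text clusters around 4-5 bits/byte;
    # ciphertext looks uniform and approaches the 8-bit maximum.
    plain = b"meet me at the usual place at noon tomorrow " * 10
    random_like = bytes(range(256)) * 2  # stand-in for ciphertext: every byte value equally often

    SUSPICIOUS = 7.5  # illustrative cutoff, not a real provider's value
    for label, msg in (("plain", plain), ("cipher-like", random_like)):
        flag = "FLAG" if shannon_entropy(msg) > SUSPICIOUS else "ok"
        print(f"{label}: {shannon_entropy(msg):.2f} bits/byte -> {flag}")
    ```

    The obvious caveat: compressed attachments and short messages also score high, so a heuristic like this would throw plenty of false positives.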





  • Terminal commands, maybe some Python, I don’t remember all of my comments.

    I’m a system administrator. I write technical articles for non-technical people and am primarily paid to sit around and keep things from exploding.

    I have plenty of free time to get into slap fights on social media and I can probably type faster than I speak, so it isn’t a huge time investment to write a paragraph or two.

    The insinuation that I’m a bot is a nice pivot; at least it’s more direct than a vague reference to my comment count. Though, you probably should have gone with implying that I have no life or some other personal failing.

    Implying that a bot can produce a large volume of coherent text will get you kicked out of the Luddite club…



  • This is how I do it. I’ll see something, think ‘hmm, interesting’, and completely forget the details, but I’ll vaguely remember that something exists so I can search for it later.

    Language models are pretty good at solving the ‘I think I remember something that does this specific thing but don’t know where to look’ kinds of problems (don’t just blindly run LLM generated commands, kids). Then once you have a lead, traditional searching is much easier.




  • Everyone understands that social media is the primary vector of disinformation, but if you ever try to point out this process in actual practice, people act like you’re talking nonsense.

    Here we have a post started by some random account less than a day old that suddenly rocketed to the top of the community.

    • The OP lives in the thread full time for the entire day, not commenting anywhere else on Lemmy, and then disappears.

    • This person simultaneously knows all of the anti-AI arguments by rote and also seems clueless as to why anti-AI posts get a lot of traction.

    • The post is brigaded/botted: the vote:comment ratio is off, and the downvoters are primarily accounts with no comment or post history (you can see upvotes and downvotes with moderation tools; they’re not private).

    I would bet money that if a site admin were to look into the primary participants of this thread, they’d find that they’re all using VPNs. None of this on its own is suspicious, but taken all together it makes the thread very suspect.

    I could be wrong; this isn’t exactly an easy thing to prove even when you have server admin tools. But I participate in the community quite heavily and am a moderator of a fairly populated instance (so I can see our instance’s server logs), and this post is giving off a lot of red flags.


  • Ah, I see we’re being mature now.

    FauxLiving: na na na! I don’t like their opinion so this must be nonsense! stupid silly forum users cannot even have real values like me…

    WhyJiffie: blah blah blah, I don’t have opinions of my own so I follow the downvoting winds to sling shit and ask dumb questions.

    I would be interested in how did you amass 1400 comments in mere 5 months!

    On Lemmy, if you type words into the text field and press the reply button, it creates a comment. If you do that 2 times, then you have 2 comments. I’ll leave the rest of the exercise to the reader.