Want to wade into the sandy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.

Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

  • lurker@awful.systems · 5 points · 2 hours ago

    Nick Bostrom jumpscare with a funny sneer

    These already head-scratching lines hit different when you remember that Bostrom believes it’s likely that we’re already living inside a computer simulation — in his head canon, do all those levels of simulated ancestors develop their own superintelligence, and what does that have to do with the new simulations they feel compelled to build? If AI wipes out humankind, does it build its own simulation? If so, is it simulating its human ancestors, or its creation by humankind? Heck, if our entire world is simulated, are we AI? We’ll leave it up to readers to take another bong hit while they try to make sense of it all.

  • CinnasVerses@awful.systems · 6 points · 1 hour ago (edited)

    In 2024, Duncan Sabien posted an interminable essay on abusers and people he thinks took advantage of him. Some of the references to a former employer may be to CFAR. Ozy also had a cheery aside about how in rationalist organizations which the Rats have disavowed, “everyone was a victim and everyone was a perpetrator. The trainer who broke you down in a marathon six-hour debugging session was unable to sleep because of the panic attacks caused by her own.”

    Some of the things which happened inside these communities must have been heartbreaking, and I hope that many people left and got on with their lives rather than founding their own dysfunctional organization with their own minions to abuse.

  • sc_griffith@awful.systems · 6 points · 6 hours ago (edited)

    so we now have an invitation to do an episode of posting through it, which is a (really really good) podcast on the far right. we pick a topic, no other specifics. i am thinking this can be something to do with rationalists and the far right, probably something race sciencey.

    SSC leaps to mind but im not sure that’s where ill want to start for an audience that doesn’t necessarily know anything about rats. any thoughts?

    • CinnasVerses@awful.systems · 5 points · 3 hours ago (edited)

      I think “probably-neurodivergent Jews with less sense than Isaac-frigging-Asimov about where ‘what if we are the master race?’ leads” and “they say it’s about self-perfection for anyone, but actually it’s about finding special people preordained from birth for greatness” are relatable themes. There have been a few essays recently about people who saw where SoCal tech ideology was going in the 1990s, like The Intolerable Hypocrisy of Cyberlibertarianism; another named a female writer for Wired or Byte who is mostly forgotten (Paulina Borsook?).

      The overlap with ritual magic is also a deep dark pool and most people know someone purifying himself and issuing ritual incantations to a bot.

  • froztbyte@awful.systems · 15 points · 15 hours ago

    gitlab posts a totally-not-a-dear-john

    The agentic era affords GitLab the largest opportunity in our history as a company, and we’re making the structural and strategic decisions to meet it. This letter has three parts. First, the operational and structural news, which is hard

    you’d instantly guess what comes next!

    • BurgersMcSlopshot@awful.systems · 10 points · 9 hours ago

      “we’re taking our primary product, a piece of tech used for collaborative development of software, and shitting some AI over it. You are all fired. Please clap.”

    • V0ldek@awful.systems · 17 points · 15 hours ago (edited)

      >box labeled “agentic AI revolution automation realignment innovation acceleration opportunity”

      >looks inside

      >layoffs

  • YourNetworkIsHaunted@awful.systems · 5 points · 14 hours ago

    Following on from yesterday’s discussion of Scott’s close brush with reality on prediction markets, The Aussie PowerPoint Man is talking about the strategic risks posed by the new insider trading opportunities opened up by these tools. A lot of what he’s saying applies to normal financial markets, but what’s striking is the way that prediction markets create those opportunities for people with much less immediate power and information by allowing them to bet directly on the kinds of immediate decisions they do have information on.

    I also thought the idea of integrating insider trading red flags on public prediction markets into your early warning system was an interesting idea. These things aren’t actually useful for forecasting or making decisions because of how bad the incentives are, but people acting on those incentives absolutely creates a spike that can be meaningful in the short term and potentially enable a few extra hours or minutes to prepare.

    • fullsquare@awful.systems · 3 points · 4 hours ago

      ah yes that must be that famed democratization that cryptobros yammered about

      i think that perun took sponsorship from 80000 hours years ago, once, and EAs or anyone in their milieu never reappeared

    • V0ldek@awful.systems · 12 points · 16 hours ago (edited)

      There’s a whole good commencement speech hidden there where the “AI ReVoLuTiOn” is likened to the industrial revolution. How it is all about turbocharging the exploitation of workers and the planet; how its promise is to make a few immensely rich and give them the power to oppress everyone; and how we need educated, empathetic young people – and especially the liberal arts – to express themselves creatively and push against the system and mainstream narratives, because the only way workers win this “revolution” is the same as always: by song and poem and book and painting that fuels movements and protests.

      But what the fuck do I know, I’m not the Vice President of Strategic Alliances for Tavistock Development Company, a real estate firm. I would never be invited to do a commencement speech.

  • CinnasVerses@awful.systems · 7 points · 1 day ago (edited)

    In September 2024, someone in Bay Area Rationalism with the handles segfault, kryptoklob, and klob posted beefs with a prominent rationalist and mentioned that someone was trying to hide his “Adderall medication”. The comments include things like:

    Hey, a brief update for anyone who wasn’t paying attention. Since he posted this, (the person posting the beef) managed to rack up 5+ restraining orders, a knife charge, aggravated stalking charges, and more.

    • Soyweiser@awful.systems · 6 points · 16 hours ago

      A quick glance at segfault’s reactions suggests to me that he operates on ‘if I just explain it enough to people they will agree with my side, and if they don’t they have not properly heard all the facts’, and he (the dox people dropped seems to imply male pronouns) seems to really begrudge friends/people he knows irl for disagreeing with him. Which doesn’t seem like the healthiest place to be in a conflict like this.

      What a shitshow, always sad to see somebody have an online episode like that. (As an outsider I obviously have no idea what is going on, and I’m not going to dig into all that.)

  • nfultz@awful.systems · 9 points · 1 day ago (edited)

    Galloway closes with a pretty strong sneer: Apocalypse No

    AI’s popularity is correlated to wealth, with only those earning more than $200,000 per year viewing AI as a net positive. That’s not a reflection on AI, but yet another signal that the incumbents (the old and the wealthy) have successfully hoarded opportunity. In other words, the AI jobs freak-out is the latest act in America’s ongoing wealth inequality drama. The Gini coefficient is how economists measure inequality: Zero indicates everyone has exactly the same wealth; a score of 1.0 means one individual owns everything. In the U.S., we’re higher than 0.8 — about the level seen when the French began separating people from their heads. The real disruption won’t come from AI, but from the public watching arsonists sell smoke detectors and call it innovation.

    The AI job apocalypse isn’t an economic forecast — it’s a marketing strategy. We’re not witnessing the end of work. We’re watching the monetization of fear.

    Seems like he’s getting back to his pre-crypto / we-wtf style. But when did podcasters start charging $53 (EDIT: $86.50 for floor) per seat at the Wiltern? That place is huge. And no Swisher either, it’s his other one.
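    (Editor’s aside: the Gini definition in the quoted Galloway passage can be sketched in a few lines. This is the standard sorted-values formula, not anything from Galloway; note a finite sample of n people tops out at (n−1)/n rather than exactly 1.0.)

```python
def gini(values):
    """Gini coefficient: 0 = everyone has equal wealth, ->1 = one person owns everything."""
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    # Equivalent to (mean absolute difference) / (2 * mean), computed via the sorted form.
    return sum((2 * i - n - 1) * x for i, x in enumerate(xs, 1)) / (n * total)

print(gini([1, 1, 1, 1]))        # 0.0 — perfect equality
print(gini([0, 0, 0, 0, 0, 1]))  # one person owns everything: (n-1)/n ≈ 0.833
```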

    • Anisette [any/all]@quokk.au · 8 points · 10 hours ago

      the AI job apocalypse isn’t an economic forecast – it’s a marketing strategy.

      Am I being paranoid or is that a very LLM phrase?

      • sus@programming.dev · 5 points · 8 hours ago (edited)

        Doesn’t end there

        The AI job apocalypse isn’t data-driven — it’s narrative-driven, engineered by people who profit when you’re scared. Fear is the product. Capital is the outcome.

        When a resource becomes dramatically cheaper to use, we don’t use less of it — we find a million new uses for it. If that sounds painless, keep reading.

        and that quote gets even more LLM when expanded

        The AI job apocalypse isn’t an economic forecast — it’s a marketing strategy. We’re not witnessing the end of work. We’re watching the monetization of fear.

        It’s two “not X, it’s Y”s in a row!

        Galloway has always been a prolific em-dash user, but pre-2023 phrases that hit with the same severity are hard to find; mildly LLM at best.

        2022

        TikTok has 1.6 billion monthly active users — more than Twitter, Snapchat, and LinkedIn combined.
        The new occupant’s ascent to the Iron Throne was financed with a different currency — not monthly subscriptions or cable packages, but attention. Specifically, our youth’s attention.
        Competition depends on rules, and rules depend on umpires. We should fight to protect competition — not winners. Because winners subvert the process

    • istewart@awful.systems · 7 points · 1 day ago

      I started to smell something funny about Galloway when I heard an ad for his podcast in a prime drive-time slot on the local country music station, of all places

    • YourNetworkIsHaunted@awful.systems · 13 points · 1 day ago

      You know, I kept expecting both this racist and the racist he was arguing with to start making the very obvious argument for why the racism is not only evil but also dumb. And instead they just kept being racist.

      To summarize and spare anyone else curious, the argument is about immigration. Racist 1 argues that since some people are objectively better than others [citation desperately needed but not wanted] we should have free migration so that our superior quality of life can attract all the best people so that we can be the best place. He (correctly) notes the absurdity of Racist 2 arguing that although some people are objectively better than others we need to protect ourselves from all foreigners even if they are the best people because their foreignness would hurt our “magic dirt.” I’m pretty sure I’ve seen this criticism made elsewhere, by a better and less obviously racist writer, because the phrase “magic dirt” sounds real familiar.

      Also, because I am tying everything back to my particular bugbear today, I have to note that the fundamental and wrong argument that some traits being heritable makes some people objectively better than others is yet another manifestation and justification of what I’m going to start calling the Great Man Theory of Everything. If you start from the position that history, politics, economics, and basically all forms of human activity are fundamentally driven by the actions and decisions of a few people who are for one reason or another destined for power and greatness, you can derive an impressive amount of the libertarian/Rationalist worldview, and if you additionally accept that those people are disproportionately rich white dudes and we shouldn’t think too hard about that fact you can get most of the rest of the way there.

    • blakestacey@awful.systems · 13 points · 1 day ago

      How the fuck do you get to the point of writing a line like “Some white nationalists … have, to their credit” without your own intestine leaping up to throttle your brain?

  • CinnasVerses@awful.systems · 16 points · 2 days ago

    In January, Scott Alexander had another crisis of faith: to paraphrase, I cared almost as much about prediction markets as I care about racist lies, but we got prediction markets and why are they not doing much? Maybe I need to keep faith and Friend Computer will be so powerful that we don’t need prediction markets?

    • scruiser@awful.systems · 10 points · 22 hours ago

      Even Scott’s fantasy dream scenario for what prediction markets could be like and what questions they could answer feels… …deliberately naive? …like libertarian brainrot? …disconnected from reality?

      Ask yourself: what are the big future-prediction questions that important disagreements pivot around? When I try this exercise, I get things like:

      Will the AI bubble pop? Will scaling get us all the way to AGI? Will AI be misaligned?

      Huge amounts of money are being dumped into a bubble based on hype, so hoping a prediction market would or could make better predictions than the existing business-idiot VCs funding this bubble feels hopelessly naive in a libertarian kind of way. There is already a method of aggregating the wisdom of the crowd, and it is falling for incredibly lazy hype and PR.

      Will Trump turn America into a dictatorship? Make it great again? Somewhere in between?

      Again, there is already a mechanism for aggregating the wisdom of the crowd: it’s called an election, and it has also failed to get an answer predicated on reality or truth, so again, it seems incredibly naive to expect prediction markets to do better!

      Will YIMBY policies lower rents? How much?

      I mean, the councils and communities making these decisions already ignore or overlook longer-term, broader predictions of economic impact in favor of immediate homeowner value, so I don’t see why Scott would expect prediction markets to make decision-making go better here.

      Overall, it feels like Scott is overlooking the way decision making often already ignores science and experts. Society doesn’t have a problem making decent predictions compared to the problems it has communicating expert opinions to the public effectively and crafting policy aligned with the public interest.

      • Architeuthis@awful.systems · 4 points · 1 hour ago

        Even Scott’s fantasy dream scenario for what prediction markets could be like and what questions they could answer feels… … deliberately naive? …like libertarian brainrot? …disconnected from reality?

        That’s mostly because outright admitting that the point of prediction markets was to make having the prediction gene profitable, so they could get on with breeding a rationalist kwisatz haderach to fight the robot god on more equal terms, wouldn’t fly with the lower-level thetans and other exoterics.

    • YourNetworkIsHaunted@awful.systems · 15 points · 1 day ago (edited)

      Are prediction markets not actually useful? No, it is the reality who is wrong.

      Also I want to rant once again about the stupid way these people evade the insider trading problem, because there’s a particular failure at play that I keep finding expressed in new and interesting ways.

      So the argument goes that while insider trading may be bad for a financial market, it actually just allows insiders to add their information and increase the predictive power of the market. Which would be true enough if we assume nothing else changes, but the same would also be true for price discovery in a normal asset market. Clearly we’re missing something.

      So why is insider trading bad? Because it turns people without insider info into the dumb money you can take advantage of. And people, very reasonably, aren’t going to participate in a system where their main role is being taken advantage of. Their departure means that the insiders no longer have access to a pool of dumb money to take from, so they stop interacting with the system, and the market itself breaks down.

      Now if you assume that the majority of people are “NPCs” or aren’t very “agentic” or whatever then they’re not going to act in systemically meaningful ways no matter how obvious the incentives to do so. You could also cast it as a version of the libertarian-as-housecat notion that markets simply exist as a natural system, rather than being pieces of economic infrastructure that require a lot of management and work to keep functioning at all, even before we get to the question of whether they operate to the public’s benefit. So many of the problems with these ideologies spring from this belief that only some people actually matter in a systemic sense by dictating rules and Building Things and being big men, rather than systems being constantly created and shaped by all the people who interact with them through those interactions.

    • CinnasVerses@awful.systems · 8 points · 1 day ago

      He was also perplexed that a prediction-market bet on “did COVID-19 come from a lab?” has declined from 85% yes in 2023 to 27% yes. If you click through you see it’s a bet on Manifold, so bettors are rats and fellow travellers. Rationalists have spent $46,714 of real US dollars buying play money to bet on this.

      • scruiser@awful.systems · 9 points · 24 hours ago

        Some of the change probably involves the discovery of a natural bat coronavirus with a furin cleavage site last October, but I’m surprised by the extent of the decline.

        That actually seems like the prediction market sort of did its job in this case? I mean, 27% yes is still too high, but actually changing in response to real evidence is much better than my low low expectations for prediction markets. It seems like he should take his own advice and actually take the prediction market seriously in this case.

        • CinnasVerses@awful.systems · 7 points · 22 hours ago

          That actually seems like the prediction market sort of did its job in this case?

          And I think the odds of “yes” started out high because someone bet $10–20k, only to withdraw it after reading the ACX post. Most people can’t afford to invest thousands of dollars in a bet that may never be resolved.

    • FredFig@awful.systems · 7 points · 1 day ago

      As long as the offer’s open, it will be irresistible. So we need to close the offer. Only another god can kill Moloch. We have one on our side, but he needs our help. We should give it to him.

      I’d write something here, but there’s nothing funnier I can say.

      • istewart@awful.systems · 7 points · 1 day ago

        sigh OK Scotty, I’ll volunteer to host the Keymaster if that’s what it takes to get Zuul into action

      • CinnasVerses@awful.systems · 3 points · 1 day ago (edited)

        Is that a comment hidden because it’s too many replies down, or because it has a too-low rating? Friend Computer does not like the G-word; his GPUs overheat and he starts to hallucinate more until you tell him you love him just the way he is.

      • scruiser@awful.systems · 5 points · 24 hours ago

        The prediction markets seem to have all the basic problems that sneerclubbers predicted: problems with resolution mechanisms, all sorts of insider trading and gaming the market, people using it for gambling…

        Various prediction markets have made various half-assed attempts at solutions, but so far nothing seems to actually work well enough to make prediction markets nearly as useful as rationalists expected.

      • lagrangeinterpolator@awful.systems · 16 points · 1 day ago

        The last several years have been the monkey’s paw moment for rationalists, where they keep getting what they want and realizing it’s actually bad. As for why they keep getting what they want, just look at who’s funding them.

        (Also featuring a “Chinese curse” that isn’t actually a phrase in Chinese. At least it’s not “may you live in interesting times”.)