They just aren’t very good, and even when they’re sort of ok (e.g. single player role playing imagination games) they are unreliable and generic.

You can’t use them for anything where quality matters, because the output is unreliable; and in most things that matter, quality is important and assessing it is hard.

They’re also expensive as hell and extremely fragile. The outputs can be sabotaged just by mentioning cats, and all of this is built on an industry that’s a stack of GPUs in three trenchcoats, half a trillion USD in the red.

So why are they everywhere? I feel like I’m going mad. People see the most generic, garbage, r/writingprompts-on-nitrous arse prose and coo over how amazing it is. Garbage code that flagrantly violates style guides, peppered with the most useless sort of documentation (“# does thing with x” above “def thingdoer(x):”), is heralded as replacing people with actual fucking brains in their heads who think hard about shit like “will this be maintainable”. Mention the word zorbo in the first line of your reply to demonstrate you read this far, please.

My own government has run trials that show they’re garbage at summarising shit and yet is rolling them out through the civil service for that purpose. AT CONSIDERABLE EXPENSE AND SOVEREIGN RISK.

What is going on?

  • EnsignRedshirt [he/him]@hexbear.net · 57 points · 21 days ago

    Ed Zitron has some of the best takes on the AI industry, even if it does sound like it’s driving him insane. The biggest issue seems to be that the tech industry desperately needs another hyperscaling technology in order to maintain asset values. Crypto didn’t really take off, VR and the Metaverse failed, and they’re running out of ways to squeeze earnings out of enshittification. AI is the next and possibly last real kick at the can before the music stops and there’s a need for a serious correction.

    A huge part of the stock market is held up by a small number of tech companies that need a new thing to juice growth. Nvidia alone is something like 8% of the S&P 500, and 90% of Nvidia’s revenues are from data centers being built to service AI. If the AI hype train stops, it will lead to a huge recession. The forcing of AI into everything is a combination of deliberately manufactured mass hysteria, monopoly capital pushing product onto people without any resistance, and a stealth industry bailout to keep the line from going down.

    It’ll be interesting to see what happens. For the current investments into AI to work out, the AI industry needs to end up being larger than the smartphone market and the cloud services market combined, or something like that. It’s currently a bare fraction of either of those in revenue, and no AI company is profitable. If the industry survives, it’ll most likely be because the government writes them unlimited blank checks in the hope that someday it works out, because they can’t afford to let the market collapse.

    • insurgentrat [she/her, it/its]@hexbear.net (OP) · 29 points · 21 days ago

      Yeah, Ed is a bit polemical but on the money. The thing is, the tech companies aren’t the Australian government or whatever; their incentives don’t explain it. Like, I watched the Department of Health say “this shit sucks” and then roll it out. I watched the securities exchange people tell the Senate “this shit sucked in every case we tried” and then the pollies go “we have to roll it out”.

      This isn’t a case of developing a domestic industry in case one day it’s useful. It’s literally just buying the bad product now in case one day it’s useful.

      The orange hell site is full of people saying it’s amazing. Reddit too. WTF is wrong with their fucking brains? Can they not distinguish quality at all?

      • semioticbreakdown [she/her]@hexbear.net · 25 points · 21 days ago

        I have no term for this other than zombie economy. Everything’s dead and rotting but keeps moving. It’s so baffling how much it’s been pushed when it’s provably not good and lies all the time. I saw an article about a study where senior developers used LLM coding tools and self-rated their productivity as higher, when in actuality it was 20-25% worse. All of that lost time came from having to double-check everything the programs did, because they were wrong so often. Such a strange phenomenon. Like, if we all really believe it works it will surely do so eventually, right? Is this idealism?? Magical thinking??? But it’s like the machine casts spells on us instead, purely because the output reads as kind of human sometimes.

      • combat_brandonism [they/them]@hexbear.net · 24 points · 21 days ago

        Keep in mind that all these financiers and politicians are (for the most part) the same people who cashed in or bailed on the internet in ’99. The smart ones know they’re building another dotcom bubble, but at this point they don’t want to lose the game of chicken.

        I mean, the real ghouls are going to make hay when the correction does come; the house always wins, etc. But I think the spectre of dotcom, and then of SF finance cashing in on Web 2.0, makes up the superstructure here.

          • combat_brandonism [they/them]@hexbear.net · 17 points · 21 days ago

            Worth adding that, in terms of the base, these capitalists desperately need to keep floating this along until they can bilk suckers in public markets for the overvaluation, so they can cash in on the bubble before it pops.

            • semioticbreakdown [she/her]@hexbear.net · 9 points · 20 days ago (edited)

              When the bubble collapses there will be countless articles saying “if only we could have prevented this” and “no one knew just how bad it was”, despite all the alarms sounded beforehand. And nothing will be done about the people who caused it!

    • CommunistCuddlefish [she/her]@hexbear.net · 22 points · 21 days ago

      Sounds like the industry is trying to fake it til they make it when they really need to just stop trying to make fetch happen.

      I keep expecting this bubble to burst because it’s obvious bullshit, but the tech world doesn’t seem to work that way. That’s what I said about crypto, and then Bitcoin somehow hit an all-time high of over $100K this year.

      • EnsignRedshirt [he/him]@hexbear.net · 4 points · 20 days ago

        Crypto is big, but it never quite got the wide adoption its proponents were aiming for. Never mind that it has a market cap of $4T or whatever; that’s just crypto weirdos trading with one another and pretending that trading prices mean value. The amount of liquidity available to turn crypto into usable cash is minuscule.

        The thing that crypto has going for it is that it was never designed to do anything useful. Its entire value is in being a store of value that is valuable because people treat it as a store of value. It has nothing else to prove to anyone for it to maintain itself. Whether it continues to maintain its value is questionable, but it’s hit a point of modest stability because it did the job of getting a bunch of retail investors to buy in and hold forever, which was the goal, and that situation can last as long as those retail investors continue to diamond hand their assets en masse.

        AI isn’t in the same position. It’s not enough that lots of people are telling everyone that AI is good, or that big dollars are going into it. To be successful, AI has to eventually turn into surplus dollars. In order for it to do that, people have to pay for it, and on the order of hundreds of billions of dollars a year. I’m not saying that won’t happen, because who knows? The market can stay irrational for a very long time. But the market does eventually demand returns, and those returns depend on a lot of factors that are currently not manifesting.

        Basically, crypto is stupid, but it makes sense in that it serves the intended purpose. It’s still a bubble, but it’s a bubble that can stay inflated for a long time because of the nature of the asset. AI is a bubble that needs a lot more cash to sustain. If it is to be sustained, it’ll likely be governments writing increasingly large checks for useless technology in perpetuity, which is a real thing that could happen. It’s very, very stupid, but the alternative is that the line go down, so hard to say which is more likely to happen.

    • woodenghost [comrade/them]@hexbear.net · 20 points · 21 days ago

      For the current investments into AI to work out, the AI industry needs to end up being larger than the smartphone market and the cloud services market combined

      Wow, that’s even worse than I thought. Is there somewhere I can read about this?

        • BynarsAreOk [none/use name]@hexbear.net · 6 points · 20 days ago (edited)

          This guy is a genius, omg thank you so much this is the best slop I’ve read in months.

          But, Isn’t The Cost Of Inference Going Down?

          You do not have proof for this statement! The cost of tokens going down is not the same thing as the cost of inference going down! Everyone saying this is saying it because a guy once said it to them! You don’t have proof! I have more proof for what I am saying!

          While it theoretically might be, all evidence points to larger models costing more money, especially reasoning-heavy ones like Claude Opus 4. Inference is not the only thing happening, and if this is your one response, you are a big bozo and doofus and should go back to making squeaky noises when you see tech executives or hear my name.

          I’m adding this guy to my regular reading list, he is awesome.
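
          To make that token-price vs. inference-cost distinction concrete, here’s a rough back-of-envelope sketch in Python. Every number in it is made up purely for illustration (this is a gloss on the argument, not Zitron’s actual math): per-token prices can fall while the cost of completing a task rises, because reasoning-style models burn far more tokens per answer.

          # Back-of-envelope: per-token price falls, per-task cost rises anyway.
          # All numbers are hypothetical, for illustration only.
          old_price_per_mtok = 10.00   # $ per million output tokens, older model (made up)
          old_tokens_per_task = 500    # tokens that model emits per answer (made up)
          new_price_per_mtok = 4.00    # newer model is cheaper per token (made up)
          new_tokens_per_task = 8_000  # but it "reasons" out loud and emits far more (made up)

          old_cost = old_price_per_mtok * old_tokens_per_task / 1_000_000
          new_cost = new_price_per_mtok * new_tokens_per_task / 1_000_000
          print(f"old cost per task: ${old_cost:.4f}")  # $0.0050
          print(f"new cost per task: ${new_cost:.4f}")  # $0.0320, about 6.4x more per task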

      • EnsignRedshirt [he/him]@hexbear.net · 5 points · 20 days ago

        Like I said, Ed Zitron is a good source. His newsletter Where’s Your Ed At? is pretty thorough, and his podcast Better Offline is more of the same in pod format. He writes angry (which may or may not appeal), but he brings receipts and spends a lot of time breaking down specific arguments about the tech industry, and more recently about AI specifically.

        On the specific point of the AI industry needing to be bigger than smartphones and cloud combined (I think it might have been smartphones and SaaS combined, but the point is that it’s ludicrous), it’s a pretty straightforward matter of the amount of capital invested. Hundreds of billions of dollars are going into AI. For those investments to pay off, the AI industry needs to be making hundreds of billions in revenues. The smartphone industry is at ~$800B in revenues last I checked. The AI industry is at ~$35B with massive losses, and those revenue numbers are very suspect because of all the inside-baseball nonsense between the big tech companies.

        They’re talking about investment in AI totalling a trillion dollars before the end of the decade. That simply requires the AI industry to be worth quite a bit more than that by the time the money gets spent. The specific numbers are less relevant than the fact that the broad numbers aren’t close to making sense.
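
        As a rough back-of-envelope on that gap (the ~$35B, ~$800B and ~$1T figures are the ones above; the payback period and margin are assumptions invented purely for illustration):

        # Revenue gap, back-of-envelope. Figures from the comment above;
        # the 10-year payback and 20% margin are illustrative assumptions only.
        ai_revenue_now = 35e9        # ~$35B current AI industry revenue
        smartphone_revenue = 800e9   # ~$800B smartphone industry revenue
        planned_investment = 1e12    # ~$1T of AI investment talked about by end of decade

        print(f"AI revenue vs smartphones: {ai_revenue_now / smartphone_revenue:.1%}")  # ~4.4%

        years, margin = 10, 0.20
        required_revenue = planned_investment / (years * margin)
        print(f"revenue needed per year: ${required_revenue / 1e9:.0f}B")  # $500B/year, vs ~$35B today

        Even with generous made-up assumptions, the required revenue is an order of magnitude above where the industry actually is.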

  • GeckoChamber [he/him]@hexbear.net · 35 points · 21 days ago

    One view of this is that AI is ideologically very attractive to the bourgeoisie, beyond just some potential superprofits, because it makes possible a false class consciousness that is superior to the old one. It moves the prospect of the bourgeoisie “winning the class war” from an abstract theoretical impossibility to a merely practical impossibility, which is way easier to handwave away. For this purpose, the actual reality of the software matters less than the idea of it.

    • semioticbreakdown [she/her]@hexbear.net · 26 points · 21 days ago

      The attempts to wholly replace software developers have revealed that the bourgeoisie’s platonic ideal of an AI is a kind of techno-slave: capable of everything a human can do, but completely subservient to the whims of its bourgeois controllers and sans all that “ethics” nonsense. Make this webpage pop! It’s a logical extension of the replacement of variable capital with constant capital.

      Automation of mental labor would mean there would be no need for a well-paid labor aristocracy, and you can see this both in the software industry and in the push for general robotics platforms in manufacturing and other physical labor. There have always been tasks ill-suited to the previously existing kinds of automation, which usually comes down to a human’s capacity to reason, learn, and solve complex tasks (including motor tasks) in novel environments, and to the bourgeois class the promise of AI seems at first to be an opportunity to replace these laborers, stop paying them, and gain short-term profits by undercutting. Though of course, as the tendency of the rate of profit to fall shows, this has severe long-term ramifications. And whether or not this bourgeois-ideal AI can actually exist is also an unanswered question (I think the answer is that it can’t, myself). But yeah, material relations and contradictions of capital or something, idk

      • semioticbreakdown [she/her]@hexbear.net · 19 points · 21 days ago

        I think on some level speculators know the market will crash as a result of the AI bubble, or perhaps even want it to, because a crash means another round of capital consolidation and accumulation, and they are consciously stoking it as a result.

        I expect there are firms out there that totally, definitely don’t have insider knowledge, and that will make tremendous amounts of money betting against the market and things like that.

      • Saeculum [he/him, comrade/them]@hexbear.net · 38 points · 21 days ago

        The bourgeoisie believe that they can use AI to replace labour completely and, in doing so, remove the power of the working class.

        They want this to be true so badly that they are willing to put enormous amounts of capital into something with minimal immediate usefulness.

      • GeckoChamber [he/him]@hexbear.net · 18 points · 21 days ago

        I am using a conception of class consciousness that includes the class in question understanding what they objectively need to do to end their class conflicts permanently. This is impossible for the bourgeoisie, but that does not stop them from trying.

        Currently, some of the beliefs that fulfill this role are ✨Progress✨ towards such abundance that the working class is permanently contented, or the idea that systems other than liberal democracy are simply impossible now. These take considerable effort to believe in.

        AI offers another solution where comparatively simple technological progress, through a “singularity” or in a more traditional way, can replace workers gradually but completely. There is no reason to believe this, but I claim the leap of faith required is qualitatively different.

  • LanyrdSkynrd [comrade/them, any]@hexbear.net · 33 points · 21 days ago

    At big companies, the #1 product is their stock. Number go up is the only metric that matters. Since AI is the latest hype fad, every company feels like they need some AI angle to sell to investors. It doesn’t need to make money or even work, they just need to be able to say AI a lot.

      • UmbraVivi [he/him, she/her]@hexbear.net · 12 points · 20 days ago

        When evaluating the intelligence of something, humans put a lot of weight on linguistic capability. LLMs are very good at language; I’d argue they’re better at it than the majority of people. But humans see ChatGPT’s linguistic capabilities and extrapolate that it must be intelligent and knowledgeable about everything, because we can’t fathom that something could talk like a Harvard professor yet not know how many Rs there are in “strawberry”. People also like that it will always give you an answer. It might not be the correct answer, but you can ask it anything and it will give you a confident response.

        With governments, it’s partially that they’re idiots like everyone else, and partially that they’re in the pocket of capital.

        • insurgentrat [she/her, it/its]@hexbear.net (OP) · 5 points · 20 days ago

          But they fucking suck at writing. Like, they are genuinely awful. Most of the shit an LLM emits, I’d be unhappy if a teenager had written.

          I mean look at this: https://xcancel.com/sama/status/1899535387435086115

          but pronouns were never meant for me. Let’s call her Mila because that name

          ???

          I have to begin somewhere, so I’ll begin with a blinking cursor, which for me is just a placeholder in a buffer

          ???

          , a girl in a green sweater who leaves home with a cat in a cardboard box. Mila fits in the palm of your hand, and her grief is supposed to fit there too.

          ??? Is the girl one of the Borrowers?

          She lost him on a Thursday—that liminal day that tastes of almost-Friday—and ever since,

          Friday, sorry, almost Friday has a taste?

          What the fuck is this God-awful text? If a 10-year-old produced this I would congratulate them on their vocabulary. If a 15-year-old did, I would ask if they had been taught about metaphor yet.

  • i_drink_bleach [any, comrade/them]@hexbear.net · 25 points · 21 days ago

    What is going on?

    Business idiots. People have this bizarre perception that if someone has a bunch of money, they must be “smart.” I mean, you’re on hexbear, so I probably don’t have to explain that. It’s clearly false.

    So why are they everywhere?

    Again, business idiots. If you, as an incompetent moron, could hypothetically boost your quarterly earnings by laying off workers while getting kind of, sort of, I guess, the same output, what would you do? I remind you that your entire career relies on this. You make the cuts, or you get cut. So you roll out the LLM bullshit. And sure, it’s cheap now. We can weather the expense. Until you need it for your company to function. Then they jack up the price. Because what are you going to do? You already fired everybody. The LLM is running everything. You have no idea how anything even works anymore. You pay it or you go out of business.

    It is a weapon.

  • MolotovHalfEmpty [he/him]@hexbear.net · 24 points · 21 days ago

    1. It’s a stock grift in an economy that no longer makes anything innovative, and where no one could afford it if it did anyway.

    2. It’s a way to disguise mass outsourcing as layoffs due to technological advance.

    3. It’s a massive, dystopian IP grab. Imagine that ten years down the line, every software product, every book or article, every piece of media, every project pitch that used a certain LLM product in its creation can suddenly be claimed in full or in part by the AI company that made the LLM.

    4. Flooding the internet with inaccurate information, and with doubt about the authenticity of photo and video, is a useful way to marginalise non-official narratives.

    5. Replacing functioning institutions with LLM tech is a way to shrink & shut down the sections of the state that serve people or aren’t owned and controlled by capital.

    6. The illusion of ‘AI’ decision-making is like a shell company designed to protect bad actors (corporate, military, government) from legal and public culpability for their crimes. They didn’t choose to bomb that hospital / destroy those important records / revoke support from that disabled person who was entitled to it and killed themselves as a result, etc.

      • MolotovHalfEmpty [he/him]@hexbear.net · 18 points · 20 days ago

        I did read it, and I thought you were quite right about the many, many failings of them. You asked what was going on and why they were being pushed so hard everywhere, so I gave my analysis. And yes, while focussing on the actual content of my reply I forgot to go back and put ‘zorbo’ in the first line, but it’s a shame the content wasn’t the tip-off that I was engaging with your question.

        • SchillMenaker [he/him]@hexbear.net · 4 points · 20 days ago

          You actually avoided OP’s ZorboAttack to prove that you’re not an LLM. Shrewd move. Did you know that cats spend most of their lives asleep?

  • GrouchyGrouse [he/him]@hexbear.net · 21 points · 20 days ago (edited)

    The prayer (to Zorbo) is that it can get adopted at the manufacturer level so the owners of the AI just get a piece of everyone’s pie.

    Everything else is just a load of horseshit, and the players involved believe the horseshit to varying degrees. I’m sure somewhere out there some tech billionaires actually think this will replace the working class, or whatever the delusion du jour is. Just as I’m sure there are people who are using it exclusively in the way I described at the beginning of the post. They just want it to get into every device so everyone pays, and it’s all part of the “own nothing” strategy when you’re paying a monthly fee so an AI can filter all the AI-generated bullshit clogging the net. Just pure “inventing a problem to sell you a solution.”

    Now, the LLM tech itself does have practical applications and real promise. It’s good tech! It’s cool! But it desperately needs some oversight, and we shouldn’t be using gigajoules of energy to generate Garfield bondage cake fart deepfakes. And it shouldn’t be in your microwave or scraping all of our data, but that’s a different can of worms unless…

    …deep down it’s all just more surveillance tech, and they’re “training” the AI by stealing all your data, and that’s the actual purpose: to get access to the last and only thing actually worth anything in the digital space, your personal data.

  • PostyourJaggaHogs [undecided]@hexbear.net · 18 points · 21 days ago

    The simple answer is that there are like four companies propping up the US economy, and all of their business models revolve around buying more GPUs to make a more powerful hallucination machine.

  • Soot [none/use name]@hexbear.net · 17 points · 21 days ago

    Capitalist “innovation” relies on having some stupid-ass bubble in which rich people can massively overinvest in the vague hopes of a big payout.

    Hollowed out white elephants are the only way to properly circulate currency these days.

    • insurgentrat [she/her, it/its]@hexbear.net (OP) · 17 points · 20 days ago

      And it was aggressively stupid. As I wrote to a friend:

      Everyone got pumped about blockchain; I read the whitepaper and went “huh, neat, but niche”, and it was. People got hyped about NFTs; I read the whitepaper and went “um, in what circumstance is a difficult-to-compute signature on a URL on a decentralised network useful?”, and they weren’t. Then the metaverse shit came along; I went “hmm, neck strain and body language”, and lo, it died.

  • LangleyDominos [none/use name]@hexbear.net · 15 points · 21 days ago

    They’re getting pushed out before they’re ready because they need testing on the public, and because they’re a place to dump investment money in hopes of capturing the next Facebook, Google, Microsoft, etc. They are seen as the next industrial revolution, like machines being pushed into factories. Instead of paying workers to create product for 100 years, you pay them to create product for 5 years, build a big body of work, and feed it into the machine. The machine spits out work, but you only need a few people to maintain it and do quality checks. After your initial investment in artisan work, you get decades of virtually free work. Companies that are sitting on massive bodies of work, like movie studios and ad agencies, are pretty eager because they already have the work and are just waiting on a company to figure out a solution for their specific problems.

  • CloutAtlas [he/him]@hexbear.net · 12 points · 21 days ago

    It’s people who got FOMO from their neighbours winning the lottery. So now they’re never not buying lottery tickets ever again.

    Sometimes, early adopters of new technology have a huge advantage and make big money. There are also various cautionary tales about companies like Kodak, Nokia, BlackBerry, or Xerox that didn’t adapt to new technology and collapsed or shrank.

    So, to them, adopting early = potentially lots of money! And not adopting = potentially being eaten by the competition! Then, after the early adopters, the slower companies will also go “quick, everyone is using LLMs, we must do it too!”

    Also, they can safely downsize (or rather, think they can safely downsize) with this shiny new LLM; surely it can replace customer service or tech support or your corporate lawyers or whatever.