test_

I apologize in advance if I don’t reply. The communication juice comes and goes.

  • 4 Posts
  • 110 Comments
Joined 3 years ago
Cake day: November 9th, 2022

  • (short, didn’t quite fit in 2 parts)

    part 3
    The Closing Frame

    American capitalism is unusually good at allocation and unusually poor at abundance. When a technology produces commodity capability, the U.S. capital structure does not gracefully reorganize around the new economics. It reaches for the policy levers that can manufacture the scarcity the technology has stopped providing. This is not a moral failing. It is a structural consequence of how the system finances itself. The same dynamic that made it possible to fund a trillion dollars of AI infrastructure on the back of a monopoly thesis now requires the monopoly to be defended by means other than the underlying technology.

    The collision between that financial logic and the open-weight commons is the central force in the U.S. AI industry over the next decade. The capital structure will fight to manufacture scarcity. The commons will continue to compound. The user — domestic and global — sits in between. The choice the country makes about how heavily to wall the domestic market against the commons will determine whether U.S. AI looks like the U.S. internet sector in 2005 — open, exporting, dominant — or like the U.S. auto industry in 2025 — protected, exporting nothing, durably uncompetitive.

    That is the actual question. Not whether open weights threaten frontier labs, because they obviously do. Not whether the labs and their capital partners will reach for protection, because they obviously will. The question is whether the country that hosts that fight chooses to subsidize the moat or the commons. So far, the choice is going one way.

    The moat or the commons. American capital prefers the first. American consumers, developers, and long-term competitiveness need the second. The next decade resolves which preference the policy follows.


  • part 2
    What Capitalism Does When Scarcity Disappears

    There is a recurring move in industries where technology fails to provide the natural moat the financial structure assumed. The move is to manufacture scarcity through means other than the technology itself. American capitalism, despite its mythology, is unusually good at this. It has done it in pharmaceuticals, where patents and FDA exclusivity create monopolies the molecule alone could not. It has done it in finance, where regulatory complexity creates barriers to entry the underlying business of lending does not. It has done it in telecom, where spectrum allocation and right-of-way agreements substitute for technological superiority that competitive carriers would otherwise force.

    The pattern is reliable enough to be predictable. When a technology produces something that wants to be a commodity, capital does not gracefully accept commodity returns. It reaches for three tools, in roughly this order. First, regulatory enclosure — using the policy apparatus to manufacture exclusion the market does not provide. Second, vertical integration — moving up or down the stack to capture margins the immediate product can no longer command. Third, bundled distribution — leveraging adjacent monopolies (cloud, ad networks, app stores, payment rails) to gate access to the commodity layer beneath.

    All three of these tools are now being rehearsed in the U.S. AI sector. They are being rehearsed because the technology is producing a commodity, and the capital structure cannot survive a commodity. They will be deployed because the financial commitments are too large to walk away from. They will be deployed regardless of what is best for the user, because that is not what capital is selecting for at this stage of the cycle.

    Three Predictions for the U.S. Direction

    What that looks like in practice is a set of moves over the next eighteen to thirty-six months, mostly without legislation, mostly through the slow accumulation of advisories, procurement guidelines, and corporate practice. Three are likely enough to bet on.

    1. Regulatory enclosure dressed as security.

    The first move is the cheapest one. Chinese-origin open-weight models will be reframed as supply-chain risks — language already worn smooth by years of Huawei, ZTE, and DJI debate. The model card itself will be described as a vector for embedded behavior, the inference deployment as a potential exfiltration channel, the training data as suspect. None of those concerns are entirely without foundation. None of them are the actual reason for the policy. The actual reason is that the open-weight models are commoditizing capability the closed labs have already booked into their valuations.

    The advisories will harden into procurement restrictions for federal agencies, then for federal contractors, then for critical infrastructure. Major U.S. cloud providers, watching the regulatory weather, will quietly delist Chinese-origin model endpoints from their managed services. The framing will not, at first, target individual developers running Qwen or DeepSeek weights on their own machines. But the institutional path of least resistance — for any cloud, any enterprise, any compliance officer — will be to treat Chinese-origin weights as the path that loses you contracts. That is enclosure achieved without a single new statute.

    2. The labs become the operators.

    The second move is the one the labs are already making, quietly and without much commentary. If selling the model produces commodity returns, the lab moves up the stack and sells the work the model does. The frontier capability runs internally; the customer-facing product is the output of that capability — legal research, software, drug discovery, financial analysis, whatever vertical the lab can structure into a service. The lab captures the operator’s margin instead of the tool vendor’s, and there is no tool to sell at any price.

    From the capital structure’s perspective, this is the cleanest path. From the user’s perspective, it is the worst one. The lab is no longer trying to make the model accessible; it is trying to make the model inaccessible to the user’s competitors, which includes the user. Vertical integration substitutes a margin the lab can defend (the operator’s) for one it cannot (the tool vendor’s). It is a rational move under capital pressure. It is also a structural retreat from the open ecosystem the original mission rhetoric described.

    3. The market splits.

    The third move is what happens to the rest of the world. U.S. domestic users — consumers, indie developers, mid-market companies — get the closed-frontier pricing the capital structure requires, with limited legal access to the open alternatives that would otherwise compete with it. The rest of the world routes around U.S. rails. European, Indian, Singaporean, and Latin American developers build on whichever combination of open and hosted endpoints sits in the cleanest jurisdiction. The U.S. closed-frontier business retains its margin in its protected market and loses share in every other market on Earth, on a multi-decade arc that mirrors the auto industry exactly.

    The arithmetic is not subtle. The U.S. is roughly four percent of the world’s population and perhaps fifteen percent of its consumer-facing technology market. Building a capital structure that requires the U.S. domestic market to absorb monopoly-grade rents, while accepting that the other eighty-five percent will route around the wall, is a strategy that produces excellent five-year balance sheets and disastrous twenty-year competitive positions. It is, nonetheless, the strategy. It is the one the capital flow already implies.


    The Auto Mirror

    There is a clean historical analogue. In 1980, U.S. domestic automakers controlled roughly 80% of the U.S. light-vehicle market. By 2024 that share was below 40%, and the global share was lower still. The arc of decline does not correlate with the absence of policy support. It correlates almost perfectly with the presence of it. Voluntary export restraints in the 1980s, repeated bailouts, and most recently a 100% tariff designed to keep BYD out of North America — none of those interventions reversed the trend. They lengthened it. The wall produced exactly what walls produce: protected margins, protected complacency, and a foreign competitor that compounded its advantage in every other market while the U.S. consumer paid more for less at home.

    The same mechanism applies to AI. A walled domestic market lets the closed labs sustain the pricing the capital structure assumes. The protected balance sheets produce a generation of product that does not need to compete on cost. The open ecosystem outside the U.S. continues to compound. The gap between the protected industry and the global standard widens — in the wrong direction. By the time the wall is reconsidered, the protected industry no longer has a competitive product to bring outside of it.

    The wall protects the producer. It does not protect the product. Twenty years on, the producer cannot compete without the wall, because the wall is what stopped them from learning to.


    Who Pays

    As with every protectionist regime, the cost lands on parties without lobbyists. Four cohorts come out behind.

    • U.S. consumers and small developers — pay closed-frontier pricing for capability the rest of the world buys at commodity rates, with limited legal recourse to the open alternatives.
    • U.S. independent developers and startups — either eat the closed-API premium, take architectural risk on a politically vulnerable open-weight stack, or relocate workloads to offshore endpoints. None of those options is free.
    • U.S. closed-frontier labs themselves, on a long enough horizon — engineering and pricing discipline come only from competition. The protected producer eventually loses the ability to compete in the markets it isn’t in.
    • U.S. influence over the global AI ecosystem — every developer who routes around the wall does so on infrastructure outside U.S. control, and brings the relationships with them.

    The beneficiaries are narrow and known. U.S. closed-frontier labs gain a margin window measured in years rather than decades. U.S. cloud providers extract some rent from compliance complexity. The capital that funded the build gets to mark its commitments at something other than zero. The political class earns a security narrative that polls well in election cycles. None of the beneficiaries are the median user. None of them are the median developer. None of them are the long-term competitive position of the country itself.


    What To Do About It

    The defensive move and the offensive move are the same move. There is a window in which the open commons remains accessible, and that window is open today. Three positionings make sense while it remains open.

    • Build on the commons. Run open weights now, on infrastructure you control, for the workloads that pay for themselves today. The closed-frontier APIs remain useful for the very top of the capability curve, but the architecture should treat them as substitutable, not foundational.
    • Architect for jurisdictional flexibility. The same compliance pressure that will eventually push Chinese open weights out of U.S. clouds will push U.S. workloads into European, Indian, and Singaporean endpoints. That is not a contingency; it is an architectural concern. Plan for it now, while the migration is voluntary.
    • Treat the policy clock as part of the stack. The window between freely deployable open-weight models and open-weight models restricted to compliant entities under a guidance document is shorter than the design cycle of most production systems. Anything mission-critical built on the assumption of permanent open access to current-generation Chinese weights is a trapdoor.
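    A minimal sketch of what "substitutable, not foundational" can look like in practice. The backend names, URLs, and model IDs below are illustrative assumptions, not real deployments; the one real fact relied on is that vLLM, llama.cpp's server, and Ollama all expose an OpenAI-compatible chat-completions endpoint, so the request shape can be held constant while the backend is just configuration.

```python
# Sketch: keep the chat-completion call shape identical across backends so the
# backend is a config value, not an architectural commitment.
# All URLs and model IDs below are hypothetical placeholders.

from dataclasses import dataclass


@dataclass
class Backend:
    name: str
    base_url: str  # any OpenAI-compatible server: vLLM, llama.cpp, Ollama, or a closed API
    model: str


# Priority order: local open weights first, closed frontier API as the
# substitutable top tier rather than the foundation.
BACKENDS = [
    Backend("local-vllm", "http://localhost:8000/v1", "deepseek-ai/DeepSeek-V3"),
    Backend("eu-hosted", "https://inference.example.eu/v1", "qwen2.5-72b-instruct"),
    Backend("closed-frontier", "https://api.example.com/v1", "frontier-model"),
]


def build_request(prompt: str, backend: Backend) -> dict:
    """Same request body for every backend; only base_url and model differ."""
    return {
        "url": f"{backend.base_url}/chat/completions",
        "body": {
            "model": backend.model,
            "messages": [{"role": "user", "content": prompt}],
        },
    }


req = build_request("Summarize this contract.", BACKENDS[0])
print(req["url"])  # http://localhost:8000/v1/chat/completions
```

    Swapping backends is then a one-line config change, which is what keeps both the repricing risk and the jurisdictional risk out of the architecture.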

  • I read an article that predicted closed-source US AI companies would be granted an artificial monopoly through regulation. The article appears to be rate-limited by the host now, but you can still see the nerds on Y Combinator arguing about it

    The gist of the article was, “all this money was invested under the belief that US AI would obtain monopoly-level returns, but then open-weight models like DeepSeek fucked with that plan, and the performance gap is closing, so the US will protect them.”

    *Actually, I still have it open in another tab lol, I’ll copy it in

    The Moat or the Commons

    American capital financed AI on the assumption it would be the next great monopoly. Open-weight models are commoditizing the capability that monopoly was supposed to protect. The collision between the two now defines the direction of the U.S. AI industry — and the country.

    TL;DR — Takeaways [this is from the actual article, not my TLDR]

    • U.S. frontier labs trade at valuations that assume monopoly-grade rents in the post-apprenticeship phase. The financial structure cannot survive a commodity outcome.
    • Open-weight models — DeepSeek, Qwen, Kimi, GLM — running on the LangChain, vLLM, llama.cpp, and Ollama stack are commoditizing capability faster than the closed labs can deepen the moat.
    • When technology cannot manufacture scarcity, American capital reaches for regulatory enclosure, vertical integration, and bundled distribution to manufacture it instead. This is what U.S. capitalism does in this situation. It is doing it now.
    • Three predictions for the U.S. direction: security-dressed regulatory enclosure of Chinese open weights, frontier labs absorbing their own customers as operators, and a split market where domestic users pay closed-lab pricing while the world routes around U.S. rails.
    • The defensive move is also the offensive one. Build on the commons, run open weights now while the regulatory air is clean, and architect for jurisdictional flexibility before the migration becomes involuntary.

    American AI was financed on a particular bet. The bet was that frontier models would be the next great monopoly business — winner-take-all, capex-justified-by-monopoly, the kind of structurally protected market that supports trillion-dollar valuations and the capital flows necessary to build them. Two and a half years into the cycle, the assumption is breaking. Not slowly. Not at the edges. Visibly, in the public benchmarks, the open-source repos, the Hugging Face download counts, and the inference price sheets.

    The break is straightforward to describe. Open-weight models — most of them released by Chinese labs, served through a stack of mostly Western open-source infrastructure — are commoditizing the capability that the moat was supposed to protect. Capability that a U.S. closed lab could charge enterprise rates for in 2024 is now available, downloadable, deployable on rented hardware, at single-digit cents on the dollar in 2026. The gap between the open frontier and the closed frontier is six to twelve months. It is closing, not widening.

    The collision between those two facts — that American capital paid for a moat, and that the technology no longer provides one — is the most important force in the AI industry today. Everything else, including the policy direction the U.S. government will take in the next eighteen months, is downstream of how that collision resolves.

    The Capital Thesis

    To understand what is at stake, follow the money. U.S. frontier labs and their hyperscaler partners have committed somewhere on the order of a trillion dollars to AI capex over the next four years — data centers, GPU clusters, power infrastructure, fiber, the entire physical stack that frontier inference requires. Those commitments are not made on the assumption of SaaS-grade margins. SaaS-grade margins do not service that kind of capital base. The commitments were made on the assumption that frontier capability would behave, at scale, like a regulated monopoly: high fixed costs, high marginal margins, durable rents, very few competitors.

    The valuations of the labs themselves reflect the same assumption. OpenAI, Anthropic, and the model arms of Google and Meta trade — privately, or via parent — at multiples that only resolve if frontier capability eventually commands monopoly-grade pricing. Strip out the monopoly assumption and the math does not work. The data centers are still there. The compute bills are still there. The investors who funded the build do not have a ready exit on a commodity-margin business.

    That is the structural pressure. Frontier AI was financed as a moat. The financial commitments are durable and large. The technology that was supposed to provide the moat is failing to provide it. Capital, faced with that gap, does not quietly accept lower returns. Capital reaches for the moat through other means. That reach is what the next phase of U.S. AI policy will be about.

    The Commons

    The open-weight ecosystem did not arrive in stages. It arrived in a wave. In late 2024, a Chinese lab named DeepSeek released a model whose training cost was reported at roughly $5.6 million in compute, against an estimated $500 million to $1 billion for the U.S. closed-frontier equivalent it was benchmarked against. The performance gap on most general benchmarks ran six to twelve months. The gap on inference cost ran ten to thirty times in the open model’s favor. The model came under a permissive license, downloadable, modifiable, deployable on a single eight-GPU node by anyone with the storage and the patience to read the README.
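    The cost asymmetry in that paragraph is worth making explicit. The figures below are the essay's own reported numbers, not independent estimates, so treat the ratios as illustrative rather than audited:

```python
# Ratios implied by the essay's reported figures (as reported, not audited).
deepseek_training = 5.6e6    # ~$5.6M reported compute cost for the open model
closed_training_low = 500e6  # $500M low-end estimate for the closed equivalent
closed_training_high = 1e9   # $1B high-end estimate

ratio_low = closed_training_low / deepseek_training    # ~89x
ratio_high = closed_training_high / deepseek_training  # ~179x
print(f"training-cost ratio: {ratio_low:.0f}x to {ratio_high:.0f}x")
# training-cost ratio: 89x to 179x
```

    A two-orders-of-magnitude gap in training cost is what makes the six-to-twelve-month capability lag a pricing problem rather than a technology problem.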

    That release was the leading edge, not the totality. By mid-2025, the open-weight frontier from the Chinese ecosystem — DeepSeek, Qwen, Kimi, GLM, MiniMax — had compounded into a competitive baseline. Llama, Mistral, and a dozen smaller community projects filled in the rest. The closed labs in the U.S. continued to win the very top of the capability curve. Below that top, the curve was being closed in from underneath at a pace that made the gap a six-to-twelve-month problem rather than a generational one.

    What sits underneath the model release is the open ecosystem that delivers it. vLLM serves the weights at production-grade throughput. llama.cpp runs them on a developer’s laptop. Ollama wraps the experience for the non-technical user. LangChain and LlamaIndex provide the orchestration layer that, two years ago, only existed inside OpenAI’s product organization. None of these tools are owned by the closed labs. Most of them are American or Anglosphere open-source projects. The infrastructure is geographically and economically agnostic. The weights are not.

    The Defection Problem

    Last week’s essay laid out an argument: that frontier AI is sold at a structural loss because users are providing the training data, and that when the apprenticeship ends, prices reprice upward sharply. There was an unstated premise in that argument. The premise was that when the prices rise, the user has nowhere to go.

    That premise no longer holds. A consumer rationing a $250-per-month subscription at the moment of repricing has the option, today, of running an open-weight equivalent at fifteen dollars in cloud compute or zero dollars on a sufficiently equipped local machine. The defection cost is a weekend of integration work and a haircut on capability that, for most workloads, the user does not notice. For an enterprise the haircut is even smaller and the savings are larger.
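    The defection math is quick to run. The subscription and cloud-compute figures are the ones the paragraph uses; the integration hours and hourly rate are assumptions added here purely to put a number on "a weekend of integration work":

```python
# Back-of-envelope defection math using the essay's illustrative numbers.
closed_monthly = 250.0      # closed-frontier subscription at repricing (from the essay)
open_cloud_monthly = 15.0   # open-weight equivalent in rented cloud compute (from the essay)
integration_hours = 16      # "a weekend of integration work" (assumed)
hourly_rate = 75.0          # assumed developer opportunity cost

one_time_cost = integration_hours * hourly_rate        # 1200.0
monthly_savings = closed_monthly - open_cloud_monthly  # 235.0
payback_months = one_time_cost / monthly_savings

print(f"payback: {payback_months:.1f} months")  # payback: 5.1 months
print(f"first-year savings: ${12 * monthly_savings - one_time_cost:.0f}")
# first-year savings: $1620
```

    Under these assumptions the switch pays for itself in about five months, and that is before counting the zero-dollar local-machine path the paragraph also mentions.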

    That is a strategic problem for the closed labs, but it is a structural problem for U.S. capital. The original deal — subsidize, train, reprice — assumed lock-in at the moment of repricing. Lock-in does not exist if the next-best option is free. And if lock-in does not exist, the post-apprenticeship pricing the entire capital structure depends on does not exist either.

    The valuations require a moat. The technology no longer provides one. Capital will reach for one anyway.

    To be continued



  • Good thread

    Another rule of thumb: it’s good to give people an out, i.e., be respectful and charitable enough that they can leave the conversation at any time without feeling they need to defeat you to save face. It’s as much for your own sake as theirs. If they feel like they have to defend themselves by attacking you, then suddenly you’re in the same position of having to defend yourself, and no one has an off-ramp.

    Sometimes easier said than done, but it’s never too late to de-escalate, even if it does start to get heated. It’s just harder.

    It also helps with persuasion. No one wants you to be right if it hurts them. Respect and patience give them emotional space to consider your ideas, whereas hostility gives them an incentive to shoot you down reflexively as a way to defend themselves.


  • As far as I know (and I’ve looked), the only actual evidence of his fate is that, at the end of the video, he is ushered off the street by civilians in streetwear. From there, people seem to make conclusions based on an overall impression of the Chinese government as comically evil, rather than any specific or credible evidence (or indeed any evidence at all) related to this guy in particular.

    Which is somewhat understandable, don’t get me wrong – no one investigates everything they hear. We try to gauge the overall picture and then, once we’re convinced (rightly or wrongly), we tend to accept or reject new information on the basis of that picture. Everyone does this. I do it. That’s why US propaganda is so effective. It works by sheer volume and repetition, amplified by a compliant media. Once that “comically evil Chinese government” picture is established, people will believe new details that fit the picture. Many stories collapse under scrutiny, but as long as they carry no consequences for ordinary Americans – as opposed to, say, “Saddam has WMDs” – skepticism spreads slowly, and meanwhile new stories are piled on top.


  • dedicated in their mission of undermining revolutionary movements with bourgeois narratives

    I won’t discount that some of it may be gringo brainworms, but I also think some of it is a reaction to fear and uncertainty.

    The US empire is posturing like we are headed toward WW3 within a decade or so, and the techno-fascists are building skynet-lite in anticipation of unrest. The empire’s bloodlust is infinite and they no longer seem to care about appearances. And yet the situation is still early and vague, so no one really knows what will happen, and it’s hard to reason about, which lends itself to vibes analysis.

    I think the unspoken doomer position is, “we need to be pessimists so we’ll be prepared.” I’m putting words in their mouths, but they may feel that some leftists underestimate the US or do not want to hear bad news, which they would see as a dangerous attitude at a time like this. We’re social creatures, we want our peer group to be prepared for trouble, because we depend on each other.

    And to be totally fair, evidence suggests that the US does have a plan. Whether it will work is another question, but the US is probably not just fumbling around doing whatever Trump wants. So when a doom-inclined person sees hexbear discourse about apparent US incompetence or impotence, they may feel they need to push in the other direction, potentially more than is warranted.

    Personally I’m an optimist, both as a matter of rational belief (to whatever extent my amateurish beliefs are rational), and as a conscious choice to maintain morale, which I think is crucial for any group under contest. I think the revolutionary people of the world are determined and have heart, and I think the US is crumbling along with its systems of geopolitical control.

    *I forgot to add, whether or not a charitable interpretation is true, I still always try to start there, and continue until the person removes all doubt – and maybe this person already has, I don’t know all the history. Just putting that out there.

    **Crossed out “gringo” because it’s no longer ambiguous, I looked through the thread and the user has said he is not white


  • I agree with you (not that it means much from me, an ignoramus on Venezuela) but regardless of the topic, I think the tone on hexbear needs to be more patient and low-pressure than this. If a comrade is wrong, that’s a teachable moment, for them and you and everyone else here. If you’re patient and respectful, then the person you’re speaking to has emotional space to consider your idea instead of potentially feeling pressured to defend themselves by attacking it reflexively, which is not a constructive result.

    The other reason I care is that, when we are hostile toward each other, it doesn’t just affect the person we are speaking to, it can start to create an overall hostile atmosphere on the site, which tends to become a feedback loop: a hostile atmosphere makes people defensive, and then they protect themselves with hostility, which makes the atmosphere even more hostile, and so on.

    If someone says “control of oil is control of Venezuela,” maybe there’s a faulty mindset, but that is first and foremost a concrete statement of belief that can be addressed with a concrete rebuttal. For example, maybe the counterargument is: “The Venezuelan people can always recapture their resources in the future, as long as revolutionary consciousness and theory are still prevalent and they are not crushed by a (potentially infeasible) ground war or prolonged fascist junta,” and then maybe you make a comparison to another state that recaptured its resources from foreign capital – or whatever your argument is.

    My point is, we all learn from each other, that is how the ideological consensus on this site emerges and evolves.

    Anyway, sorry for tone-policing and inserting myself into this uninvited.