• Doomsider@lemmy.world · 2 days ago

    There is no future. They will be outdated by the time they are finished and the most expensive part wears out quickly and has to be replaced. Literally DOA.

  • Architeuthis@awful.systems · 2 days ago

    So if a company does want to use LLM, it is best done using local servers, such as Mac Studios or Nvidia DGX Sparks: relatively low-cost systems with lots of memory and accelerators optimized for processing ML tasks.

    Eh, local LLMs don’t really scale: you can’t do much better than one person per computer unless usage is really sparse, and buying everyone a top-of-the-line GPU only works if they aren’t currently on work laptops and VMs.

    Spark-type machines will do better eventually, but for now they’re supposedly geared more towards training than inference; it says here that running a 70B model returns around one word per second (three tokens), which is a snail’s pace.
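    To put that throughput in perspective, here is a back-of-the-envelope sketch. The 3 tokens/s figure is from the linked benchmark; the response length and words-per-token ratio are my own rough assumptions, not anything the article states:

    ```python
    # Rough feel for why ~3 tokens/s is unusable for interactive chat.
    # Assumed: a typical chatbot answer is ~500 tokens, and ~0.75 words
    # per token (a common rule-of-thumb ratio for English text).

    TOKENS_PER_SECOND = 3      # reported 70B-model throughput
    RESPONSE_TOKENS = 500      # assumed typical answer length
    WORDS_PER_TOKEN = 0.75     # assumed words-to-tokens ratio

    seconds = RESPONSE_TOKENS / TOKENS_PER_SECOND
    words = RESPONSE_TOKENS * WORDS_PER_TOKEN
    print(f"~{words:.0f} words in ~{seconds / 60:.1f} minutes")
    ```

    Under those assumptions, a single medium-length answer takes close to three minutes to generate, which is well past what anyone would sit through in a chat interface.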

  • zbyte64@awful.systems · 3 days ago

    Let me see if I got this right: because use cases for LLMs have to be resilient to hallucinations, large data centers will fall out of favor, replaced by smaller, cheaper deployments at the cost of accuracy. And once you have a business that is categorizing relevant data, you will gradually move away from black-box LLMs and towards ML on the edge to cut costs, again at the cost of accuracy.

  • Soyweiser@awful.systems · 3 days ago

    Like how this is an explainer for laymen but still just casually drops an ‘on the edge’ reference, the meaning of which might not be clear to laymen. (The context explains it, however, so it isn’t bad; just noting how much jargon we all use.)

    • David Gerard@awful.systemsOPM · 22 hours ago

      I’m failing to see the ambiguity here. He repeatedly says it’s in the devices (PCs, phones) and then that it’s on the edge. So those devices are what he means by the edge.

      This comes across as failure to read the text.

      • Soyweiser@awful.systems · 3 days ago

        I think that is in part intentional, so people don’t start squabbling over what does and doesn’t count as ‘the edge’ in edge cases; it also depends quite a bit on the setup of the organization/people you’re talking about. But yeah, it is badly defined, which is also why I noticed it.