• Szewek@sopuli.xyz · 2 points · 3 hours ago

    Energy-wise, the European alternative is more likely to be closer to sustainable. Also, the water stress may look different in some European countries. Scandinavia has been experimenting a lot with transferring heat from data centers to district heating systems. I don’t think they would care much about that in the US (you need district heating, first of all…).

    Solidagent is a good in-between option: they run global state-of-the-art “open source” models on EU-located servers. Regardless of how open they are, models like Deepseek are much more sustainable than the walled gardens of Big Tech (not to mention that Deepseek actually used fewer resources to achieve similar performance).

    And as others said, MistralAI also works well.

    • Szewek@sopuli.xyz · 2 points · 3 hours ago

      Of course, overconsumption of anything is wasteful, by definition. I get other people’s critique. You don’t need AI to tell you what to eat for breakfast, and you will be better off sharing your worries with a human. You might write better code yourself in many cases; it’s usually worth trying (and searching for help when needed). But LLMs are tools, and there is plenty of proper use for them.

  • birdwing@lemmy.blahaj.zone · 17 points · edited · 19 hours ago

    Use a European AI like Mistral, or run models locally with Ollama, Llamafile and so on.

    Your data will then at least be stored in Europe, in case of spying concerns. And GDPR > USA.

  • FortyTwo@lemmy.world · 7 points · 18 hours ago

    Your interactions will generally be stored and used for further training, so using their services gives them an advantage over European companies. I would suspect (although I have not read about this in detail) that the plan for eventual monetisation, once they have a monopoly and people are dependent and unable to switch back, is to build a profile of you and use it for targeted advertising/influencing, similar to how social network services operate. IMO it’s much better not to use their services; they want the data more than they care about electricity bills.

    • MotoAsh@piefed.social · 7 points · edited · 19 hours ago

      Yes it does. “Not as much as training” is a stratospheric bar…

      You may as well say, “my car doesn’t use gas because American semis use way more”. They both still use a resource we should be more careful with.

      • Europellinore@europe.pub · 2 points · 15 hours ago

        I recently tried to calculate this for my company. I wouldn’t call it negligible, but the impact of all video calls turned out to be much greater than the impact of AI.
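        A company-level comparison like this is essentially back-of-envelope arithmetic: participant-hours of video times energy per hour versus queries times energy per query. A minimal sketch, where every figure (energy per video hour, energy per query, headcount, usage rates) is an illustrative assumption and not a measurement from this thread:

        ```python
        # Back-of-envelope comparison of video-call vs. LLM energy for one company.
        # ALL numbers below are illustrative assumptions, not measured values.
        VIDEO_KWH_PER_HOUR = 0.15    # assumed: one participant-hour of HD video call
        LLM_KWH_PER_QUERY = 0.0003   # assumed: one chat completion (~0.3 Wh)

        employees = 100
        call_hours_per_week = 5      # assumed hours of calls per employee per week
        queries_per_week = 50        # assumed LLM queries per employee per week

        # Weekly totals: people x usage x per-unit energy
        video_kwh = employees * call_hours_per_week * VIDEO_KWH_PER_HOUR
        llm_kwh = employees * queries_per_week * LLM_KWH_PER_QUERY

        print(f"video: {video_kwh:.1f} kWh/week, LLM: {llm_kwh:.1f} kWh/week")
        ```

        Under these assumed numbers, video calls dominate by a wide margin; the point is only that the conclusion depends entirely on the per-unit figures you plug in.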

        • MotoAsh@piefed.social · 1 point · edited · 13 hours ago

          Of course that’s going to produce a heavier load on your part of the infrastructure… That stuff is running locally.

          Also, it’s pretty easy to have effective calls and turn off the damn video. Most people don’t need to stare at everyone staring at their computers.

        • lime!@feddit.nu · 2 points · edited · 18 hours ago

          that’s old data. inference uses more than training now, since usage has gone up significantly. they traded places in march or april 2025.

          • Elvith Ma'for@feddit.org · 2 points · 15 hours ago

            That’s the problem of inference. Your individual queries might not consume much, especially when compared to the training, but the more people use it, the higher the total consumption gets. At some point, running those models will consume more than training them did.
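            The break-even logic here is simple: training is a one-off cost, while inference accumulates with usage, so cumulative inference passes training after (training energy) / (daily inference energy) days. A sketch with purely illustrative numbers (none of these figures come from this thread):

            ```python
            # Illustrative break-even between one-off training energy and
            # cumulative inference energy. ALL figures are assumptions.
            TRAINING_KWH = 10_000_000      # assumed one-off training cost
            KWH_PER_QUERY = 0.0003         # assumed per-query inference cost (~0.3 Wh)
            QUERIES_PER_DAY = 500_000_000  # assumed global daily usage

            daily_inference_kwh = QUERIES_PER_DAY * KWH_PER_QUERY
            breakeven_days = TRAINING_KWH / daily_inference_kwh

            print(f"inference: {daily_inference_kwh:,.0f} kWh/day; "
                  f"cumulative inference passes training after ~{breakeven_days:.0f} days")
            ```

            With these assumed values, inference overtakes training in a couple of months, which is the same qualitative point the comment makes: heavy enough usage makes serving the model the dominant cost.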