It’s the return of the pirates of Silicon Valley.

  • dohpaz42@lemmy.world · 90 points · 5 days ago

    It’s crazy how companies can blatantly ignore the rule of law and absolutely nothing happens to them, but some rando downloads one or two movies and they’re public enemy number one.

    It’s almost as if they’re begging for a revolt.

    Edit: spelling

  • Zagorath@aussie.zone · 24 points · 5 days ago

    pirates of Silicon Valley

    That is a fantastic movie and it’s a shame it’s not available for streaming anywhere.

  • Lembot_0004@discuss.online · 8 points · 5 days ago

    Can it be legal, at least in theory? (I’m not asking about common sense, at this point it is all hopeless already) And what next? “No blondes, negros and French should watch this movie!”

    • IHeartBadCode@fedia.io · 7 points · 5 days ago

      I’m a very hard copyleft kind of person. In fact, I disagree with copyright, trademark, patent, etc. laws in general.

      THAT SAID, the laws are to be followed. And I believe that people should respect copyright while that remains the law of the various nations. So, while I do support image generation and video generation models, I only go so far as to support those models once they respect the law. Which means feeding copyrighted works into a model IS NOT and SHOULD NOT be legal (unless we also remove copyright et al. for everyone, not just large tech corporations).

      So these companies that have fed data into their models without acquiring the licensing rights should not be allowed to continue onward until that has been rectified. Now, there will be people who say that this will slow these tools’ development down. Perhaps. Maybe I could convince them to spend those billions of dollars instead on getting rid of copyright as a concept altogether. But that they will not do, because they will inevitably seek the very protections they once stomped on, once they have supplanted most of the industry.

      So that’s my take.

      • wagesj45@fedia.io · 9 points · 5 days ago

        feeding copyrighted works into a model IS NOT and SHOULD NOT be legal

        That’s not clear at all, though. Training a model is the very definition of transformative, which current copyright law acknowledges and allows.

        • IHeartBadCode@fedia.io · 3 points · 5 days ago

          I follow that up with a clarification in the next sentence.

          So these companies that have fed data into their models without acquiring the licensing rights should not be allowed to continue onward until that has been rectified.

          Judge William Alsup of the U.S. District Court for the Northern District of California in the Anthropic case indicated the following:

          • Training LLMs = Fair Use (in general)
          • Scanning Purchased Books = Fair Use (broadly)
          • Pirated Copies and Indefinite Retention = Not Transformative

          Training LLMs is typically transformative because LLMs rarely give regurgitated answers; that is, a copy of the data is not stored internally in the model. (That point about storage matters later.) But Judge Alsup indicated that this fair use applied narrowly, because it turned on the works being lawfully acquired (which Anthropic had done for the books it purchased) before scanning them into their model.

          Buying a book and typing up a report on it is not much different from what LLMs do, and thus Judge Alsup indicated that this too was fair use and “quintessentially transformative”.

          Where Judge Alsup drew the line was at books and works that were scanned into the model without any permission to do so, whether that permission comes from the author or the publisher. Additionally, Anthropic retained the books within their system for additional training of iterative models. This is not allowed. A model must be augmented on its own, or new agreements must be obtained to start anew.

          So you are correct that LLMs are indeed transformative and are permitted under a fair use defense. But there are limits to that applicability. And, again, to turn back to what I personally believe: I think all of this is nonsense and more reason why copyright doesn’t make sense in this age.

          Also, I should note that the output of a model can be subject to copyright violation. Just as you can use Photoshop to make something close enough to an original to get in trouble with trademark, so too can you use image generation to make a copy of something, and that too would land you in trouble.

          • wagesj45@fedia.io · 7 points · 5 days ago

            That’s like saying a pencil is a copyright violation. Tools can be used to violate copyright law, but that’s the use of the tool, not the creation of it.

      • Zagorath@aussie.zone · 3 points · 5 days ago

        I only go so far as to support those models once they respect the law. Which means feeding copyrighted works into a model IS NOT …legal

        This is just not a statement you can make. You can say “should not be” all you like; that’s an opinion. But saying it is not legal today is a claim of fact, and it’s a claim that is not supported by the real world. It’s still an open legal question, there isn’t a completely definitive answer yet, and anyone claiming otherwise is spreading misinformation.

    • Zagorath@aussie.zone · 3 points · 5 days ago

      Can it be legal, at least in theory?

      The legality of training AI is a far from settled subject. Some have argued it’s copyright infringement. Others have said it’s transformative to a degree that should be sufficient for fair use to apply. Yet others have said that it’s most akin to a human looking at art and being inspired by it, which is not copyright infringement at all.

      If you want my take, we should judge the output, not the training. AI can be made to generate an output that would be copyright infringement if a human did it, and it should still be infringing if AI does. But most AI slop is just slop with no resemblance to much of anything. That’s crappy, but not copyright infringement.