• winterayars@sh.itjust.works · 8 days ago

    Man it’s crazy how these fuckers basically get to ignore copyright law whenever it’s inconvenient to them but if you have one too many Windows machines provisioned they’ll send the Spanish Inquisition after you.

  • SuckMyWang@lemmy.world · 9 days ago

    Cool, so we can just make up our own rules now. Well, all Microsoft products are freeware now, for the same reason this guy gave.

    • 乇ㄥ乇¢ㄒ尺ㄖ@infosec.pub · 9 days ago

      Ok… so from now on… when I see a “repackaged” Microsoft product that, for some reason… which I don’t care to know… doesn’t ask for payment… I can use it without restrictions?! That’s really nice of you, Microsoft… thank you.

  • themurphy@lemmy.ml · 9 days ago

    Fair, then everything I can find on the Internet must be freeware too. Set the sails, matey!

    • SlopppyEngineer@lemmy.world · 9 days ago

      No officer, this is not a pirated movie. It’s generated by an AI model I created and trained with data from the internet and the fact that it’s 99% identical to an existing movie is irrelevant.

      • M500@lemmy.ml · 9 days ago

        Also, this groundbreaking AI model I made to do this was, umm, accidentally erased, and I also forgot how to make it.

        Jury: “seems reasonable”

      • Agathon@lemmy.dbzer0.com · 9 days ago

        my AI is so good, it generated one that’s 100% identical

        plus my AI uses less than 99% of the electricity of Microsoft’s

            • Possibly linux@lemmy.zip · 8 days ago

              I don’t care. They’re really helpful for many different tasks, and it doesn’t pull that much power to run locally on my machine.

              • Rekorse@lemmy.dbzer0.com · 8 days ago

                “See I like AI because I’m selfish. Also those bad things are in the past, I’m using an ethical AI system now! But also, who gives a fuck because I only care about myself!”

                Yeah, you get it, guy! Maybe you can be Trump’s secretary of technology!

              • GolfNovemberUniform@lemmy.ml · 8 days ago

                Mister/miss, LLMs that can run locally are fine. It’s the infrastructure and the large scale of commercial cloud LLMs that create some issues. You should read some of the research on this topic.

        • nooneescapesthelaw@mander.xyz · 8 days ago

          If an LLM can save me 30 minutes writing nice emails and responses and help me brainstorm, debug, or elucidate my thoughts then it is very useful.

          • Rekorse@lemmy.dbzer0.com · 8 days ago

            You really put 30 minutes of your own time above all of the downsides this has for the rest of us who don’t have a use for it (most of the world)?

              • Rekorse@lemmy.dbzer0.com · 8 days ago

                All of the resources and energy spent to get you this product you like. You can’t discount what it took to create something just because the final product is small and efficient. Take a look at the manufacturing footprint of nearly all complex hardware.

                I’m not saying you created the AI but you are one of its supporters, without which there would be no AI.

                If this was all just pitched as developing a new plain English coding language, I think the hype following it would be far more appropriate, but then the funding wouldn’t follow to support the massive development costs of AI.

                It’s become a circle of hype chasing money chasing hype.

                It’s not you that’s the problem, so to speak; it’s the collective “you”s who think the same way.

  • Lettuce eat lettuce@lemmy.ml · 8 days ago

    Sure thing…now GPL/Creative Commons all your code involved in any way for your models, documentation, parameters, data sets, and allow full unlimited integration and modification by any parties to any portion of it.

  • EnderMB@lemmy.world · 9 days ago

    I’m fine with that, but let’s put some rules against this.

    • Any AI models should be able to determine the source of their data to a defined level of accuracy.
    • There should be a well-defined way to block data from being used by AI. If one of these ways (e.g. robots.txt) has been breached, the model has to be rebuilt without the data, and reparations made to the content owners.
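    On the robots.txt point: a minimal sketch of how a crawler that honors such opt-outs might check each URL before fetching, using Python’s standard urllib.robotparser. The GPTBot user-agent string and the URLs here are just illustrative examples, not a claim about any real crawler.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt a site might serve to opt out of AI crawling.
robots_txt = """\
User-agent: GPTBot
Disallow: /
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# A well-behaved crawler checks before fetching each URL:
print(rp.can_fetch("GPTBot", "https://example.com/article"))        # False: opted out
print(rp.can_fetch("SomeOtherBot", "https://example.com/article"))  # True: no rule applies
```

    The enforcement question in the comment above is exactly that this check is voluntary: nothing in the protocol itself rebuilds a model or pays reparations when it is ignored.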
    • ayaya@lemdro.id · 9 days ago

      What you’re asking for is literally impossible.

      A neural network is basically nothing more than a set of weights. If one word makes a weight go up by 0.0001 and then another word makes it go down by 0.0001, and you do that billions of times for billions of weights, how do you determine what in the data created those weights? Every single thing that’s in the training data had some kind of effect on everything else.

      It’s like combining billions of buckets of water together in a pool and then taking out 1 cup from that and trying to figure out which buckets contributed to that cup. It doesn’t make any sense.
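      The entanglement argument can be shown with a purely hypothetical toy (not any real model or training algorithm): every update nudges the same shared weights, so the final numbers carry no record of which piece of data did what.

```python
import random

random.seed(0)
weights = [0.0] * 4          # a "model" with four shared weights

def train_step():
    # each sample nudges every weight up or down a tiny amount
    for i in range(len(weights)):
        weights[i] += random.choice([-0.0001, 0.0001])

for _ in range(100_000):
    train_step()

# The final weights exist, but nothing in them records which update
# (i.e. which piece of training data) moved which weight where.
print(weights)
```

      Recovering per-sample contributions from the finished weights is the “which buckets filled this cup” problem described above.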

      • EnderMB@lemmy.world · 9 days ago

        Respectfully, I worked for Alexa AI on compositional ML, and we were largely able to do exactly this with customer utterances, so to say it is impossible is simply not true. Many companies have to have some degree of ability to remove troublesome data, and while tracing data inside a model is rather difficult (historically it would be done during the building of datasets or measured at evaluation time) it’s definitely something that most big tech companies will do.

        • ayaya@lemdro.id · 9 days ago

          Sorry, I misinterpreted what you meant. You said “any AI models” so I thought you were talking about the model itself should somehow know where the data came from. Obviously the companies training the models can catalog their data sources.

          But besides that, if you work on AI you should know better than anyone that removing training data is counter to the goal of fixing overfitting. You need more data to make the model more generalized. All you’d be doing is making it more likely to reproduce existing material because it has less to work off of. That’s worse for everyone.
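          That trade-off can be illustrated with a toy character-level bigram chain (nothing like a real LLM): trained on one tiny “document,” it can only recombine fragments of that single text, so its output stays close to verbatim reproduction, while more varied data gives it more to draw on. The training strings are made up for the example.

```python
import random
from collections import defaultdict

def train(text):
    model = defaultdict(list)
    for a, b in zip(text, text[1:]):
        model[a].append(b)      # record which characters follow which
    return model

def generate(model, start, n, seed=0):
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return "".join(out)

tiny = train("the cat sat")     # a single short "document"
big = train("the cat sat. the dog ran. a cat ran. the dog sat.")

# tiny can only emit characters and transitions from its one source text
print(generate(tiny, "t", 10))
print(generate(big, "t", 10))
```

          Removing data shrinks the model toward the `tiny` case, which is the point about memorization above.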

  • toastal@lemmy.ml · 8 days ago

    The social contract? Tf. The social contract still requires attribution in almost all cases for creative work unless explicitly stated otherwise—especially in the case of commercial products like ChatGPT—so I don’t know where this joker is getting his ideas.

  • Melllvar@startrek.website · 9 days ago

    He seems to be confusing “freeware”, which is basically a license for copyrighted work, with “public domain”, which is the absence of a copyright.

    • Elise@beehaw.org · 9 days ago

      Yeah but anything you create automatically has a copyright, so for example this comment is not in the public domain. Its use is limited to the context I am using it in. That is, I expect it to be copied for federation purposes, but I wouldn’t say that AI is covered in this context.

      At least that’s the EU stance afaik. Like if I saw this comment on a billboard somewhere I’d see that as a breach of copyright and even privacy.

      • Rekorse@lemmy.dbzer0.com · 8 days ago

        That’s a great way to put it simply: it’s wrong to use other people’s content in ways they did not expect it would be used.

        • Elise@beehaw.org · 8 days ago

          Well, it’s one thing to say an ‘artificial agent’ looks at someone’s work on deviant art and learns from it. It’s another to use that to make money, as I personally can’t imagine many of the posters would have been on board with that.

  • ___@l.djw.li · 9 days ago

    I went into a smidge more detail over on my Mastodon last night, but my response is summed up as “WTAF? No! Freeware is an explicit license, as anyone from the BBS days will recall.”