• drwankingstein@lemmy.dbzer0.com · 4 months ago

      This is from the Google research team; they contribute a LOT to many FOSS projects. Google is not a monolith: each team is made up of very different folks, who often have very different goals.

      • GolfNovemberUniform@lemmy.ml · 4 months ago

        Well, Google can still lock Mozilla out of features and cooperation if Mozilla does something Google doesn’t like. That’s just one example. Nobody should ever trust Google.

          • jokeyrhyme@lemmy.ml · 4 months ago
            One example I can think of is Widevine DRM, which is owned by Google and is closed source: https://en.wikipedia.org/wiki/Widevine

            Google currently allows Mozilla (and others) to distribute this within Firefox, allowing Netflix, Disney+, and various other video streaming services to work within Firefox without any technical work performed by the user.

            I don’t believe Google would ever willingly take this away from Mozilla, but it’s entirely possible that the movie and music industries pressure Google to reduce access to Widevine (the same way they pressured Netflix into adopting DRM).

      • 1984@lemmy.today · 4 months ago

        As long as their goals suit the company, sure. Google’s endgame is very clear, and it doesn’t include a free and open web.

        • drwankingstein@lemmy.dbzer0.com · 4 months ago

          I don’t even think this is the case; Google does a lot pretty much everywhere. One example: they are pushing for locally run AI (Gemini, Stable Diffusion, etc.) to run on your GPU via WebGPU instead of relying on cloud services, which is obviously privacy friendly for a myriad of reasons. In fact, we now have multiple LLM implementations that run locally in the browser on WebGPU, and even a Stable Diffusion implementation (I never got that one to work, though, since my beefiest GPU is an Arc A380 with 6 GB of VRAM).

          They do other stuff too, but with the recent AI craze, I think this is probably the most relevant example.
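
          To make the WebGPU point concrete, here is a minimal TypeScript sketch of the kind of feature check a page could do before loading an on-device model. The helper name, the local typing, and the fallback message are my own illustrations; `navigator.gpu.requestAdapter()` itself is the standard WebGPU entry point.

          ```typescript
          // Minimal local typing so this compiles without extra type packages;
          // the real API shape comes from the WebGPU spec / `@webgpu/types`.
          interface MinimalGPU {
            requestAdapter(): Promise<object | null>;
          }

          // Check whether the browser can run a model on-device via WebGPU.
          async function hasWebGPU(): Promise<boolean> {
            // `navigator.gpu` only exists in browsers that ship WebGPU.
            const gpu = (navigator as unknown as { gpu?: MinimalGPU }).gpu;
            if (!gpu) return false;

            // requestAdapter() resolves to null when no usable GPU is found
            // (e.g. blocklisted drivers), so check for that as well.
            const adapter = await gpu.requestAdapter();
            return adapter !== null;
          }

          hasWebGPU().then((ok) => {
            // Hypothetical decision point: run locally, or fall back to a server.
            console.log(ok
              ? "WebGPU available: model can run on-device"
              : "No WebGPU: would need a cloud fallback");
          });
          ```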

            • drwankingstein@lemmy.dbzer0.com · 4 months ago

              Ehh… not really. The amount of generated data you can get by snooping on LLM traffic is going to far outweigh the costs of running LLMs.