• ocassionallyaduck@lemmy.world · 1 month ago

      Your confidence in this statement is hilarious given that it doesn’t help your argument at all. If anything, the fact that they refined their model so well on older hardware is even more remarkable, and quite damning when OpenAI claims it needs literally cities’ worth of power and resources to train its models.

    • b161@lemmy.blahaj.zone · 1 month ago

      AI is overblown, tech is overblown. Capitalism itself is a senseless death cult based on the nonsensical idea that infinite growth is possible within a fragile, finite system.

  • Treczoks@lemmy.world · 1 month ago

    Looks like it is not any smarter than the other junk on the market. The fact that people consider AI to be “intelligent” may be rooted in their own deficits in that area.

    And now people exchange one American Junk-spitting Spyware for a Chinese junk-spitting spyware. Hurray! Progress!

    • gerryflap@feddit.nl · 1 month ago

      The difference is that you can actually download this model and run it on your own hardware (if you have sufficient hardware). In that case it won’t be sending any data to China. These models are still useful tools. As long as you’re not interested in particular parts of Chinese history of course ;p

    • UnderpantsWeevil@lemmy.world · 1 month ago

      And now people exchange one American Junk-spitting Spyware for a Chinese junk-spitting spyware.

      LLMs aren’t spyware, they’re graphs that organize large bodies of data for quick and user-friendly retrieval. The Wikipedia schema serves a similar, albeit more primitive, role. There’s nothing wrong with the fundamentals of the technology, just the applications that Westoids doggedly insist it be used for.

      If you no longer need to boil down half a Great Lake to create the next iteration of Shrimp Jesus, that’s good whether or not you think Meta should be dedicating millions of hours of compute to this mind-eroding activity.

      • WoodScientist@sh.itjust.works · 1 month ago

        There’s nothing wrong with the fundamentals of the technology, just the applications that Westoids doggedly insist it be used for.

        Westoids? Are you the type of guy I feel like I need to take a shower after talking to?

      • daltotron@lemmy.ml · 1 month ago

        I think maybe it’s naive to think that if the cost goes down, shrimp jesus won’t just be in higher demand. Shrimp jesus has no market cap, bullshit has no market cap. If you make it more efficient to flood cyberspace with bullshit, cyberspace will just be flooded with more bullshit. Those great lakes will still boil, don’t worry.

    • wulrus@programming.dev · 1 month ago

      As I came to understand LLMs, I started to understand some people and their “reasoning” better. That’s how they work.

    • kshade@lemmy.world · 1 month ago

      Looks like it is not any smarter than the other junk on the market. The fact that people consider AI to be “intelligent” may be rooted in their own deficits in that area.

      Yep, because they believed that OpenAI’s (two lies in a name) models would magically digivolve into something that goes well beyond what they were designed to be. Trust us, you just have to feed it more data!

      And now people exchange one American Junk-spitting Spyware for a Chinese junk-spitting spyware. Hurray! Progress!

      That’s the neat bit, really. With that model being free to download and run locally it’s actually potentially disruptive to OpenAI’s business model. They don’t need to do anything malicious to hurt the US’ economy.

    • RandomVideos@programming.dev · 1 month ago

      artificial intelligence

      AI has been used in game development for a while, and I haven’t seen anyone complain about the name before it became synonymous with image/text generation.

      • kshade@lemmy.world · 1 month ago

        It was a misnomer there too, but at least people didn’t think a bot playing C&C would be able to save the world by evolving into a real, greater-than-human intelligence.

    • Naia@lemmy.blahaj.zone · 1 month ago

      I’m tired of this uninformed take.

      LLMs are not a magical box you can ask anything of and get answers. If you blindly ask questions, you might get lucky and get some accurate general information, but, much like a human brain, a neural net isn’t going to reproduce random trivia verbatim.

      What LLMs are useful for, and how they should be used, is as non-deterministic context-parsing tools. When people talk about feeding them more data, they’re thinking of how these things are trained. But you also need to give them grounding context beyond the prompt itself. Give it a PDF manual, a website link, documentation, whatever, and it will use that as context for what you ask it. You can even set it up to link back to its references.
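
      To make that concrete, here is a minimal sketch of that kind of grounding, using a locally hosted model through Ollama’s HTTP API. Nothing here is specific to any one vendor; Ollama is assumed to be running on localhost, and the model name “llama3” and the file “manual.txt” are placeholders.

      ```python
      # Minimal sketch: ground a locally hosted model with your own document.
      # Assumes the Ollama server is listening on localhost:11434 and that some
      # model (here "llama3", a placeholder) has already been pulled.
      import requests

      def ask_with_context(question: str, context_path: str, model: str = "llama3") -> str:
          # Load the grounding document (manual, docs, notes, ...) from disk.
          with open(context_path, encoding="utf-8") as f:
              context = f.read()

          prompt = (
              "Answer using ONLY the reference material below. "
              "If the answer is not there, say so.\n\n"
              f"--- REFERENCE ---\n{context}\n--- END REFERENCE ---\n\n"
              f"Question: {question}"
          )

          # Everything stays on localhost; no data leaves the machine.
          resp = requests.post(
              "http://localhost:11434/api/generate",
              json={"model": model, "prompt": prompt, "stream": False},
              timeout=300,
          )
          resp.raise_for_status()
          return resp.json()["response"]

      print(ask_with_context("How do I reset the device?", "manual.txt"))
      ```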

      You still have to know enough to be able to validate the information it is giving you, but that’s the case with any tool. You need to know how to use it.

      As for the spyware part, that only matters if you are using the hosted instances they provide. Even for OpenAI stuff, you can run models locally with open-source software and maintain control over all the data you feed them. As far as I have found, none of the models you run with Ollama or other local AI software have been caught pushing data to a remote server, at least when using open-source software.

    • tetris11@lemmy.ml · 1 month ago

      It is progress in a sense. The West really put the spotlight on its shiny new expensive toy and banned the export of toy-maker parts to rival countries. One of those countries made a cheap toy out of janky, unwanted parts for much less money, and it’s on par with or better than the West’s.

      As for why we’re having an arms race based on AI, I genuinely don’t know. It feels like a race to the bottom, with the fallout being the death of the internet (for better or worse).

  • Clent@lemmy.dbzer0.com · 1 month ago

    No surprise. American companies are chasing fantasies of general intelligence rather than optimizing for today’s reality.

    • Naia@lemmy.blahaj.zone · 1 month ago

      That, and they are just brute-forcing the problem. Neural nets have been around forever, but it’s only in the last five or so years that they could do anything. There’s been little to no real breakthrough innovation; they just keep throwing more processing power at it with more inputs, more layers, more nodes, more links, more CUDA.
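
      As a rough illustration of what “more layers, more nodes” means in practice, here is a back-of-the-envelope sketch using the common ≈ 12 · n_layers · d_model² approximation for a transformer’s non-embedding parameter count; the three example sizes are roughly the published GPT-2-small, GPT-2-XL and GPT-3 configurations.

      ```python
      # Back-of-the-envelope: the non-embedding parameter count of a decoder-only
      # transformer grows roughly as 12 * n_layers * d_model^2 (attention + MLP).
      def approx_params(n_layers: int, d_model: int) -> int:
          return 12 * n_layers * d_model ** 2

      # Sizes roughly matching the published GPT-2-small, GPT-2-XL and GPT-3 configs.
      for n_layers, d_model in [(12, 768), (48, 1600), (96, 12288)]:
          billions = approx_params(n_layers, d_model) / 1e9
          print(f"{n_layers:3d} layers, width {d_model:5d} -> ~{billions:7.2f}B parameters")
      ```

      Going from the first row to the last is roughly three orders of magnitude more parameters, and the training compute bill scales along with it.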

      And their chasing of general AI is just the short-sighted nature of them wanting to replace workers with something they don’t have to pay and that won’t argue about its rights.

      • supersquirrel@sopuli.xyz · 1 month ago

        Also, all of these technologies forever and inescapably must rely on a foundation of trust with users and with the people who are the sources of quality training data; “trust” being something US tech companies seem hell-bent on lighting on fire and pissing off the sides of their CEOs’ yachts.

  • Doomsider@lemmy.world · 1 month ago

    Wow, China just fucked up the Techbros more than the Democratic or Republican party ever has or ever will. Well played.

    • kshade@lemmy.world · 1 month ago

      It’s kinda funny. Their magical bullshitting machine scored higher on made-up tests than our magical bullshitting machine, and now the economy is in shambles! It’s like someone losing a year’s wages in sports betting.

      • Naia@lemmy.blahaj.zone · 1 month ago

        Just because people are misusing tech they know nothing about does not mean this isn’t an impressive feat.

        If you know what you are doing, and enough to know when it gives you garbage, LLMs are really useful, but part of using them correctly is giving them grounding context outside of just blindly asking questions.

        • kshade@lemmy.world · 1 month ago

          It is impressive, but the marketing around it has really, really gone off the deep end.

    • UnderpantsWeevil@lemmy.world · 1 month ago

      Democrats and Republicans have been shoveling truckload after truckload of cash into a Potemkin Village of a technology stack for the last five years. A Chinese tech company just came in with a dirt cheap open-sourced alternative and I guarantee you the American firms will pile on to crib off the work.

      Far from fucking them over, China just did the Americans’ homework for them. They just did it in a way that undercuts all the “Sam Altman is the Tech Messiah! He will bring about AI God!” holy roller nonsense that was propping up a handful of mega-firm inflated stock valuations.

      Small and mid-cap tech firms will flourish with these innovations. Microsoft will have to write off the last $13B it sunk into OpenAI as a loss.

  • wrekone@lemmy.dbzer0.com · 1 month ago

    Wait. You mean every major tech company going all-in on “AI” was a bad idea? I, for one, am shocked at this revelation.

  • synae[he/him]@lemmy.sdf.org · 1 month ago

    Idiotic market reaction. Buy the dip, if that’s your thing? But this is all disgusting: day trading and chasing news like fucking vultures.

    • SoulWager@lemmy.ml · 1 month ago

      Yep. It’s obviously a bubble, but one that won’t pop from just this. The motive is replacing millions of employees with automation, and the bubble will pop when it’s clear that won’t happen, or when the technology is mature enough that we stop expecting rapid improvement.

      • WoodScientist@sh.itjust.works · 1 month ago

        I love the fact that the same executives who obsess over return to office because WFH ruins their socialization and sexual harassment opportunities think they’re going to be able to replace all their employees with AI. My brother in Christ. You have already made it clear that you care more about work being your own social club than you do about actual output or profitability. You are NOT going to embrace AI. You can’t force an AI to have sex with you in exchange for keeping its job, and that’s the only trick you know!

      • Umbrias@beehaw.org · 1 month ago

        Well, both of those things have been true for months if not years, so if those are the conditions for a pop, then they have already been met.

        • SoulWager@lemmy.ml · 1 month ago

          It’s gambling. The potential payoff is still huge for whoever gets there first. Short term anyway. They won’t be laughing so hard when they fire everyone and learn there’s nobody left to buy anything.

              • Umbrias@beehaw.org · 1 month ago

                Oh! Hahahaha. No.

                The VC tech-feudalist wet dreams of LLMs replacing humans are dead; they just want to milk the illusion as long as they can.

                • SoulWager@lemmy.ml · 1 month ago

                  The tech is already good enough that any call center employees should be looking for other work. That one is just waiting on the company-specific implementations. In twenty years, calling a major company’s customer service and having any escalation path that involves a human will be as rare as finding a human elevator operator today.

        • lud@lemm.ee · 1 month ago

          How are both conditions met when all this just started 2(?) years ago? And progress is still going very fast.

          • Umbrias@beehaw.org · 1 month ago

            All this started in 2023? Alas no, time marches on; LLMs have been a thing for decades, and the main boom happened more around 2021. Progress is not fast, no; these are companies throwing as much compute at their problems as they can. DeepSeek caused a $2T drop by being marginal progress in a field (LLMs specifically) that is out of ideas.

            • lud@lemm.ee · 1 month ago

              The huge AI/LLM boom/bubble started after ChatGPT came out.

              But of fucking course it existed before.

  • DiaDeLosMuertos@aussie.zone · 1 month ago

    I’m extremely ignorant about this whole AI thing. So please can somebody “Explain Like I’m 5” why this new thing can wipe over a trillion dollars off US stocks? I would appreciate it a lot if you can help.

    • Cynicus Rex@lemmy.ml · 1 month ago

      “You see, dear grandchildren, your grandfather used to have an apple orchard. The fruits were so sweet and nutritious that every town citizen wanted a taste, because they thought it was the only possible orchard in the world. Therefore the citizens gave a lot of money to your grandfather, because they thought the orchard would give them more apples in return, worth more than the money they gave. Little did they know the world was vastly larger than our ever more arid US wasteland. Suddenly an oriental orchard was discovered, one that was surprisingly cheaper to plant and maintain and that produced more apples. This meant a significant potential loss of money for the inhabitants of the town called Idiocracy. Therefore, many people asked for their money back by selling their imaginary not-yet-grown apples to people who think the orchard will still be worth more in the future.

      This is called investing, children. It can make a lot of money, but it destroys the soul and our habitat at the same time, which goes unnoticed by all these people with advanced degrees. So think again when you hear someone speak with fancy words and untamed confidence. Many a time their reasoning falls below the threshold of dog poop. But that’s a story for another time. Sweet dreams.”

    • Yozul@beehaw.org · 1 month ago

      Let’s say I make a thing. Let’s say somebody offers to buy it from me for $10. I sell it to them, and then let’s say somebody else makes a better thing, and now no one will pay more than $2 for my thing. If my thing is a publicly traded corporation, then that just “wiped off” $8 from the stock market. The person I sold it to “lost” $8. Corporations that make AI and the hardware to run it just “lost” a lot of value.
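
      The same arithmetic in code, with purely made-up numbers: a company’s market capitalisation is just shares outstanding times share price, so a falling price gets multiplied across every single share at once.

      ```python
      # Market cap = shares outstanding * share price, so when the price falls,
      # the loss is multiplied across every share at once.
      def value_wiped(shares_outstanding: float, old_price: float, new_price: float) -> float:
          return shares_outstanding * (old_price - new_price)

      # The toy example above: one "thing" sold for $10, now worth only $2.
      print(value_wiped(1, 10, 2))  # 8.0

      # A purely hypothetical large company: 2.5 billion shares falling from $100 to $82.
      print(f"${value_wiped(2.5e9, 100, 82) / 1e9:.0f}B wiped off")  # $45B wiped off
      ```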

    • Cere@aussie.zone · 1 month ago

      Basically, US companies involved in AI have been grossly overvalued for the last few years due to having a pseudo-monopoly on AI tech (companies like OpenAI, who make ChatGPT, and Nvidia, who make the graphics cards used to run AI models).

      DeepSeek (a Chinese company) just released a free, open-source rival to ChatGPT that cost a fraction of the price to train (set up), which has caused US stock valuations to drop as investors realise the US isn’t the only global player, and isn’t nearly as far ahead as previously thought.

      Nvidia is losing value because it was previously believed that top-of-the-line graphics cards were required for AI, but it turns out they are not. Nvidia has geared its business strongly towards AI in recent times.

    • Doomsider@lemmy.world · 1 month ago

      With that attitude I am not sure if you belong in a Chinese prison camp or an American one. Also, I am not sure which one would be worse.

      • RandomVideos@programming.dev · 1 month ago

        They should conquer a country like Switzerland and split it in two.

        At the border, they should build a prison so they could put them in both an American and a Chinese prison at once.

    • UnderpantsWeevil@lemmy.world · 1 month ago

      Not really a question of national intentions. This is just a piece of technology open-sourced by a private tech company working overseas. If a Chinese company releases a better mousetrap, there’s no reason to evaluate it based on the politics of the host nation.

      Throwing a wrench in the American proposal to build out $500B in tech centers is just collateral damage created by a bad American software schema. If the Americans had invested more time in software engineers and less in raw data-center horsepower, they might have come up with this on their own years earlier.

  • SocialMediaRefugee@lemmy.ml · 1 month ago

    This just shows how speculative the whole AI obsession has been. Wildly unstable and subject to huge shifts since its value isn’t based on anything solid.

    • ByteJunk@lemmy.world · 1 month ago

      It’s based on guessing what the actual worth of AI is going to be, so yeah, wildly speculative at this point because breakthroughs seem to be happening fairly quickly, and everyone is still figuring out what they can use it for.

      There are many clear use cases that are solid, so AI is here to stay, that’s for certain. But how far can it go, and what will it require is what the market is gambling on.

      If out of the blue comes a new model that delivers similar results on a fraction of the hardware, then it’s going to chop it down by a lot.

      If someone finds another use case, for example a model with new capabilities, boom, value goes up.

      It’s a rollercoaster…

      • WoodScientist@sh.itjust.works · 1 month ago

        There are many clear use cases that are solid, so AI is here to stay, that’s for certain. But how far can it go, and what will it require is what the market is gambling on.

        I would disagree on that. There are a few niche uses, but OpenAI can’t even make a profit charging $200/month.

        The uses seem pretty minimal as far as I’ve seen. Sure, AI has a lot of applications in terms of data processing, but the big generic LLMs propping up companies like OpenAI? Those seem to have no utility beyond slop generation.

        Ultimately the market value of any work produced by a generic LLM is going to be zero.

        • NιƙƙιDιɱҽʂ@lemmy.world · 1 month ago

          Language learning, code generation, brainstorming, summarizing. AI has a lot of uses. You’re just either not paying attention or are biased against it.

          It’s not perfect, but it’s also a very new technology that’s constantly improving.

          • Toofpic@feddit.dk · 1 month ago

            I decided to close the post now. There is a place for any opinion, but I can see people writing things that are completely false however you look at them: you can dislike Sam Altman (I do), you can worry about China’s interest in entering the competition now and in this way (I do), but the comments about LLMs being useless, while millions of people use them daily for multiple purposes, sound just like lobbying.

        • UndercoverUlrikHD@programming.dev · 1 month ago

          It’s difficult to take your comment seriously when it’s clear that everything you’re saying seems to be based on ideological reasons rather than real ones.

          Besides that, a lot of the value is derived from the market trying to figure out if/what company will develop AGI. Whatever company manages to achieve it will easily become the most valuable company in the world, so people fomo into any AI company that seems promising.

          • Jhex@lemmy.world · 1 month ago

            Besides that, a lot of the value is derived from the market trying to figure out if/what company will develop AGI. Whatever company manages to achieve it will easily become the most valuable company in the world, so people fomo into any AI company that seems promising.

            There is zero reason to think the current slop-generating technoparrots will ever lead to AGI. That premise is entirely made up to fuel the current “AI” bubble.

            • Leg@sh.itjust.works · 1 month ago

              They may well lead to the thing that leads to the thing that leads to the thing that leads to AGI though. Where there’s a will

              • Jhex@lemmy.world · 1 month ago

                Sure, but that can be said of literally anything. It would be interesting if LLMs were at least new, but they have been around forever; we just now have better hardware to run them.

                • NιƙƙιDιɱҽʂ@lemmy.world · 1 month ago

                  That’s not even true. LLMs in their modern iteration are significantly enabled by transformers, something that was only proposed in 2017.

                  The conceptual foundations of LLMs stretch back to the 50s, but neither the physical hardware nor the software architecture were there until more recently.
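
                  For the curious, the core of that 2017 proposal (“Attention Is All You Need”) is scaled dot-product attention. Here is a minimal NumPy sketch of just that mechanism, run on random toy tensors rather than any real model.

                  ```python
                  import numpy as np

                  def scaled_dot_product_attention(Q, K, V):
                      """softmax(Q K^T / sqrt(d_k)) V -- the transformer's core operation."""
                      d_k = Q.shape[-1]
                      scores = Q @ K.swapaxes(-2, -1) / np.sqrt(d_k)  # query-key similarity
                      weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
                      weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
                      return weights @ V                              # weighted mix of values

                  # Toy example: a sequence of 4 tokens, each embedded in 8 dimensions.
                  rng = np.random.default_rng(0)
                  Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))
                  print(scaled_dot_product_attention(Q, K, V).shape)  # (4, 8)
                  ```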

  • NutWrench@lemmy.ml · 1 month ago

    The “1 trillion” never existed in the first place. It was all hype by a bunch of Tech-Bros, huffing each other’s farts.

  • protist@mander.xyz · 2 months ago

    Emergence of DeepSeek raises doubts about sustainability of western artificial intelligence boom

    Is the “emergence of DeepSeek” really what raised doubts? Are we really sure there haven’t been lots of doubts raised previous to this? Doubts raised by intelligent people who know what they’re talking about?

    • floofloof@lemmy.ca · 2 months ago

      Ah, but those “intelligent” people cannot be very intelligent if they are not billionaires. After all, the AI companies know exactly how to assess intelligence:

      Microsoft and OpenAI have a very specific, internal definition of artificial general intelligence (AGI) based on the startup’s profits, according to a new report from The Information. … The two companies reportedly signed an agreement last year stating OpenAI has only achieved AGI when it develops AI systems that can generate at least $100 billion in profits. That’s far from the rigorous technical and philosophical definition of AGI many expect. (Source)