If AI ends up running companies better than people, won’t shareholders demand the switch? A board isn’t paying a CEO $20 million a year for tradition; they’re paying for results. If an AI can do the job cheaper and deliver better returns, investors will force the change.

And since corporations are already treated as “people” under the law, replacing a human CEO with an AI isn’t just swapping a worker for a machine; it’s one “person” handing control to another.

That means CEOs would eventually have to replace themselves, not because they want to but because the system leaves them no choice. And AI would, in effect, be considered a “person” under the law.

  • flandish@lemmy.world · 22 days ago

    in all dialectical seriousness, if it appeases the capitalists, it will happen. “first they came with ai for the help desk…” kind of logic here. some sort of confluence of Idiocracy and The Matrix will be the outcome.

  • blarghly@lemmy.world · 22 days ago

    If AI ends up running companies better than people, won’t shareholders demand the switch?

    Yes. It might be unorthodox at first, but they could just take a vote, and poof, done.

    And since corporations are already treated as “people” under the law, replacing a human CEO with an AI isn’t just swapping a worker for a machine, it’s one “person” handing control to another.

    Wat?

    No. What?

    So you just used circular logic to make the AI a “person”… maybe you’re saying once it is running the corporation, it is the corporation? But no.

    Anyway, corporations are “considered people” in the US under the logic that corporations are, at the end of the day, just collections of people. So you can, say, go to a town hall to voice your opinion as an individual. And you can gather up all your friends to come with you, and form a bloc which advocates for change. You might gain a few more friends, and give your group a name, like “The Otter Defence League.” In all these scenarios, you and others are using your right to free speech as a collective unit. Citizens United just says that this logic also applies to corporations.

    That means CEOs would eventually have to replace themselves

    CEOs wouldn’t have to “replace themselves” any more than you have to find a replacement if your manager fires you from Dairy Queen.

  • fadingembers@lemmy.blahaj.zone · 22 days ago

    Y’all are all missing the real answer. CEOs have class solidarity with shareholders. Think about how they all reacted to the death of the UnitedHealthcare CEO. They’ll never get rid of them because they’re one of them. Rich people all have a keen awareness of class consciousness and great loyalty to one another.

    Us? We’re expendable. They want to replace us with machines that can’t ask for anything and don’t have rights. But they’ll never get rid of one of their own. Think about how few CEOs get fired no matter how poor of a job they do.

    P.S. Their high pay being justified by risk is a myth. Ever heard of a thing called the golden parachute? CEOs never pay for their failures. In fact, when they run a company into the ground, they’re usually the ones who receive the biggest payouts. Not the employees.

  • LadyMeow@lemmy.blahaj.zone · 22 days ago

    Isn’t this sorta paradoxical? Either CEOs are actually worth the insane money they make, or a Palm Pilot could replace them, but somehow they’re paid ridiculous amounts for… what?

    • Soleos@lemmy.world · 22 days ago

      No, it’s not paradoxical. You are conflating time points.

      I won’t debate the “value” of CEOs, but in this system, their value is subject to market conditions like anyone else’s. Human computers were valued much more highly before electronic computers were created. Aluminum was worth more than gold before a fast, cheap extraction process was invented.

      You could not have replaced a CEO with a Palm Pilot 10 years ago.

      • LadyMeow@lemmy.blahaj.zone · 22 days ago

        I guess I was being a bit over the top; the CEOs are the capitalists. It’s possible they’re doing their jobs with LLMs now, just behind the scenes. Like, either they’re worth what they’re paid, or the system is broken AF and it doesn’t matter.

        I just don’t see them being replaced in any meaningful way.

        • flandish@lemmy.world · 22 days ago

          CEOs may not be the capitalists at the top of a particular food chain. The shareholding board is, for instance. They can be both but there are plenty of CEO level folks who could, with a properly convinced board, be replaced all nimbly bimbly and such.

          • LadyMeow@lemmy.blahaj.zone · 22 days ago

            I guess, but they sure shovel plenty of money at, say… Musk. So what, is he worth a trillion? It seems the boards could trim a ton of money if CEOs did nothing. Or they do lots and it’s all worth it. Who’s to say.

            I just don’t see LLMs as the vehicle to unseat CEOs, or maybe I’m small minded idk.

  • Bongles@lemmy.zip · 22 days ago

    AI? Yes, probably. Current AI? No. I do think we’ll see it happen with an LLM, and that company will probably flop. Shit, how do you even prompt for that?

        • FaceDeer@fedia.io · 22 days ago

          Sure, but we don’t know when that plateau will come, and until we get close to it, progress looks approximately exponential.

          We do know that it’s possible for AI to reach at least human levels of capability, because we have an existence proof (humans themselves). Whether stuff based off of LLMs will get there without some sort of additional new revolutionary components, we can’t tell yet. We won’t know until we actually hit that plateau.

          • magiccupcake@lemmy.world · 22 days ago

            Current AI has no shot of being as smart as humans; it’s simply not sophisticated enough.

            And that’s not to say that current LLMs aren’t impressive, they are, but the human brain is just on a whole different level.

            And just to think about it on a base level: LLM inference can run off a few GPUs, roughly on the order of 100 billion transistors each. That’s roughly on par with the number of neurons in the brain, but each neuron has an average of 10,000 connections, which are capable of rewiring themselves to new neurons.

            And there are so many distinct types of neurons, built from over 10,000 unique proteins.

            On top of that, there are over a hundred neurotransmitters, and we’re not even sure we’ve identified them all.

            And all of that is still connected to a system that integrates all of our senses, while current AI is pure text, with separate parts bolted onto it for other things.
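            The scale gap above can be sketched as a back-of-envelope calculation. The neuron and synapse counts are common rough estimates, and the per-GPU transistor count and GPU count are assumptions for illustration, not measurements:

```python
# Rough order-of-magnitude comparison of brain "wiring" vs. GPU transistors.
# All figures are approximate public estimates / assumptions, not measurements.
NEURONS = 86e9              # ~86 billion neurons in a human brain (estimate)
SYNAPSES_PER_NEURON = 1e4   # ~10,000 connections per neuron (rough average)
GPU_TRANSISTORS = 80e9      # one modern datacenter GPU (~80 billion, assumed)
NUM_GPUS = 4                # "a few GPUs" for LLM inference (assumed)

synapses = NEURONS * SYNAPSES_PER_NEURON    # total connections in the brain
transistors = NUM_GPUS * GPU_TRANSISTORS    # total switching elements

print(f"synapses:    {synapses:.1e}")
print(f"transistors: {transistors:.1e}")
print(f"ratio:       {synapses / transistors:,.0f}x")
```

            Under these assumptions the brain has on the order of a thousand times more connections than the GPUs have transistors, and a synapse is arguably doing more than a single transistor does, which is the commenter’s point.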

            • gandalf_der_12te@discuss.tchncs.de · 21 days ago

              Current AI has no shot of being as smart as humans; it’s simply not sophisticated enough.

              you know what’s also not very sophisticated? the periodic table of elements. yet all the variety of life (of which there is plenty) is based on it.

              • magiccupcake@lemmy.world · 21 days ago (edited)

                At face value, the base elements are not enormously complicated. But we can’t even properly model any element other than hydrogen; it’s all approximations, because quantum mechanics is so complicated. And then there are molecules, which are even more hopelessly complicated, and we haven’t even gotten to proteins! By comparison, our best transistors look like toys.

            • FaceDeer@fedia.io · 22 days ago

              The human brain is doing a lot of stuff that’s completely unrelated to “being intelligent.” It’s running a big messy body, it’s supporting its own biological activity, it’s running immune system operations for itself, and so forth. You can’t directly compare their complexity like this.

              It turns out that some of the thinky things that humans did with their brains that we assumed were hugely complicated could be replicated on a commodity GPU with just a couple of gigabytes of memory. I don’t think it’s safe to assume that everything else we do is as complicated as we thought either.

              • magiccupcake@lemmy.world · 22 days ago

                Yeah, a lot of it is messy, but they are not being replicated by commodity GPUs.

                LLMs have no intelligence. They are just exceedingly good at language, which has a lot of human knowledge embedded in it. Just read Claude’s system prompt and tell me it’s still smart when it needs to be told four separate times to avoid copyright violations.

  • Dave@lemmy.nz · 22 days ago

    From what people on Lemmy say, a CEO (and board) isn’t there to do a good job; they’re there to be a fall guy if something goes wrong, protecting shareholders from prosecution. Can AI do that?

    • al_Kaholic@lemmynsfw.com · 22 days ago

      How do they take the fall, exactly? Millions in a golden parachute and high-fives on the way to the next CEO job? At least you could turn the AI off.

      • Dave@lemmy.nz · 22 days ago

        I mean, that’s one way it happens. CEOs can serve different purposes, but a CEO whose job is to be hated and take the blame for actions the board wants done, then get fired with a payout and move on to the next job? That’s definitely a thing.

        An AI wouldn’t be able to do that job, because it can’t be fired. Or, on second thought, the board could switch to a different company’s AI every few years.

  • Patches@ttrpg.network · 21 days ago (edited)

    All of you are missing the point.

    CEOs and board members are the same people. The majority of CEOs are board members at other companies, and vice versa. It’s a big fucking club and you ain’t in it.

    Why would they do this to themselves?

    Secondly, we already have AI running companies. You think some CEOs and board members aren’t already using this shit bird as a god? Because they are.

    • Frezik@lemmy.blahaj.zone · 21 days ago

      They would do it because the big investors, not randos with a 401k in an index fund but big hedge funds, demand that AI lead the company. This could potentially be forced at a stockholder meeting without the board having much say.

      I don’t think it will happen en masse for a different reason, though. The real purpose of the CEO isn’t to lead the company, but to take the fall when everything goes wrong. Then they get a golden parachute and the company finds someone else. When AI fails, you can “fire” the model, but are you going to want to replace it with a different model? Most likely, the shareholders will reverse course and put a human back in charge. Then they can fire the human again later.

      A few high profile companies might go for it. Then it will go badly and nobody else will try.

  • melsaskca@lemmy.ca · 21 days ago

    Wasn’t it Willy Shakespeare who said “First, kill all the Shareholders”? That easily manipulated stock market only truly functions for the wealthy, regardless of the harm inflicted on both humans and the environment they exist in.

  • YappyMonotheist@lemmy.world · 22 days ago

    No, because someone has to be the company’s scapegoat… but if the ridiculous post-truth tendencies of some societies increase, then maybe “AI” will indeed gain “personhood”, and in that case, maybe?

  • WildPalmTree@lemmy.world · 21 days ago

    Sadly, I don’t think this is going to happen. A good CEO doesn’t make calculated decisions based on facts, weighing risk against profit. If he did, he would, at best, be a normal CEO. Who wants that? No, a truly great CEO does exactly what a truly bad CEO does: he takes risks that aren’t proportional to the reward (and gets lucky)!

    This is the only way to beat the game, just like with investments or roulette. There are no rich, great roulette players who go by the odds. Only lucky ones.

    Sure, with CEOs, this is on the aggregate. I’m sure there is a genius here and a Renaissance man there… But on the whole, best advice is “get risky and get lucky”. Try it out. I highly recommend it. No one remembers a loser. And the story continues.

    • Patches@ttrpg.network · 21 days ago

      Well, you’ll be happy to hear that AI does take calculated risks, but its calculations aren’t based on reality, so they are, in fact, still risks.

      You can’t just type “Please do not hallucinate. Do not make judgement calls based on fake news” and expect it to work.

  • CanadaPlus@lemmy.sdf.org · 21 days ago (edited)

    If AI ends up running companies better than people

    Okay, important context there. The current AI bubble will burst sooner or later. So, this is hypothetical future AGI.

    Yes, if the process of human labour becoming redundant continues uninterrupted, it’s highly likely, although since CEOs make their money from the intangible asset of having connections more than from the actual work, they’ll be among the last to go.

    But it won’t continue uninterrupted. We’re talking about rapidly transitioning to an entirely different kind of economy, and we should expect it to be as destabilising as sudden contact with industrial technology was for hunter-gatherer societies.

    If humans are still in control, and you still have an entire top 10% of the population with significant equity holdings, there’s not going to be much strategy to the initial stages. Front-line workers will get laid off catastrophically, basically, and no new work will be forthcoming. The next step will be a political reaction. If some kind of make-work program is what comes out of it, human managers will still find a place in it. If it’s basic income, probably not. (And if there’s no restriction on the top end of wealth as well, you’re at risk of creating a new ruling elite with an incentive to kill everyone else off, but that’s a digression from the question.)

    When it comes to the longer term, I find inspiration in a blog post I read recently. Capital holdings will eventually become meaningless compared to rights to natural factors. If military logic works at all the same way, and there’s ever any kind of war, land will once again be supreme among them. There weren’t really CEOs in feudalism, and even if we manage not to regress to autocracy there probably won’t be a place for them.

  • jordanlund@lemmy.world · 22 days ago

    Should be way easier to replace a CEO. No need for a golden parachute, if the AI fails, you just turn it off.

    But I’d imagine right now you have CEOs being paid millions and using an AI themselves. Worst of both worlds.

  • HobbitFoot@thelemmy.club · 22 days ago

    Non-founder CEOs typically get brought in either to use their connections to improve the company, or as an internal promotion to signify the company’s new direction. They also provide a single throat to choke when things go wrong.

    What’s more likely to happen is that CEOs will use AI to vibe-manage their companies and use the AI’s output as justification. We don’t have enough data to tell whether AI helps the best or the worst CEOs.

  • ArbitraryValue@sh.itjust.works · 22 days ago (edited)

    You’re mixing up corporate personhood and the CEO’s own personhood. He isn’t the corporation. Ultimately, he’s just an employee. There’s no good reason for the board of directors to pay him if a machine can do a better job while costing less. I’m not sure why you might think that wouldn’t happen.

      • ArbitraryValue@sh.itjust.works · 22 days ago (edited)

        You might want to read more about corporate personhood. It doesn’t mean that the corporation is considered by the law to be a person, or that whoever or whatever performs the duties of the CEO is by definition a person. It means that a corporation, despite not being a person, has certain rights usually associated with people. For example, a person can own property or be sued. A cat cannot own property or be sued. A corporation is like a person rather than a cat in that it can also own property or be sued. There’s debate about exactly which rights should be granted to corporations, but the idea that a corporation has at least some minimal set of rights is centuries old and an essential part of the very definition of what a corporation is.

        • 🇾 🇪 🇿 🇿 🇪 🇾@lemmy.ca (OP) · 22 days ago

          True, but corporate personhood already provides the legal shell. If an AI is actually making the company’s decisions, wouldn’t that be the first time in practice that courts are forced to treat an AI’s choices as the will of a legal person? In effect, wouldn’t that be the first step toward AI being judged a ‘person’ under the law?