This is not a question about whether you think it is possible or not.

This is a question about your own will and desires. If there were a vote and you had a ballot in your hand, how would you vote? Do you want Artificial Intelligence to exist, do you not, or do you perhaps not care?

Here I define Artificial Intelligence as something created by humans that is capable of rational thinking, that is creative, that is self-aware, and that has consciousness. All of that with the processing power of computers behind it.

As for the important question that would arise, “Who is creating this AI?”: I’m not that focused on the first AI created, since presumably, with time, multiple AIs will be created by multiple entities. The question is whether you want this process to start or not.

    • 1984@lemmy.today · 15 days ago

      An AI couldn’t answer that any more than science could. You already have your answer, if you believe in science.

      I think we have souls though. :)

      • theunknownmuncher@lemmy.world · 15 days ago

        🤷 I still say yes, and I still think it would be profound and perspective-changing to create something “like us” with silicon.

        I also don’t think AI and science are mutually exclusive. I don’t mean asking the AI questions about consciousness and getting answers directly from it, like an LLM chatbot; I mean that the fact that we can make it, plus scientific study and observation of the AGI phenomenon, might provide some answers.

        And even if we develop AGI that is just like us, maybe you’re right and it proves absolutely nothing about souls. But it at least narrows the requirements and eliminates some common lines of reasoning, which should be in everyone’s interest, because it could further define and help us understand the nature of the hypothetical soul.

        • 1984@lemmy.today · 15 days ago

          Yup, I agree. I also think there is human cloning going on (it’s illegal, but secret organizations are obviously doing it).

          Do those clones have souls? It’s even more interesting to me since it’s an attempted copy of a real person.

          We won’t find out until cloning is legal, and it won’t be legal until it’s 100% safe. All the failed clones are most likely being terminated in secret.

  • oxjox@lemmy.ml · 15 days ago

    Respectfully, you’ve asked the wrong question. The process to create AI started decades ago (arguably, longer).

    …capable of rational thinking, that is creative, that is self-aware, and that has consciousness.

    As you’ve described it, consider how this is any different than human procreation.

    The answer is a ‘computer’s’ instantaneous access to, and ability to process, the world’s information.

    Assuming a sentient “cyber” AI is inevitable, and since you’re wondering about our “own will and desires”, the question should be: who do you think should create the rules for AI, to ensure it makes the right choices today and beyond the time of our species?

    Or, to put it another way, who gets to be God and Moses?

  • zoostation@lemmy.world · 15 days ago

    Ten or more years ago I would have said yes. But in this current version of capitalism, any powerful new technology will be used to benefit only the very rich, at the expense of the rest of us. At this moment it will hurt us more than help us.

  • cmbabul@lemmy.world · 15 days ago

    With the current power structures that exist in global society? Hell to the no. If it could be used to reduce or eliminate the need for human labor, 100% yes.

  • ComradeSharkfucker@lemmy.ml · 15 days ago

    Roko’s basilisk insists that I must. However, I will specify that I don’t want it to happen right now. It would be a nightmare under capitalism; a fully sentient AI would be horrifically abused under this organization of labor.

  • JackGreenEarth@lemm.ee · 15 days ago

    Yes, because it would almost certainly be misaligned with human values and have the instrumental goal of killing us all.

  • takeda@lemm.ee · 15 days ago

    No, at least not during this period. If it were invented right now, it would be guaranteed to be controlled only by oligarchs, and it would ruin life for everyone else.

  • WhatSay@slrpnk.net · 15 days ago

    Yes; specifically, I support open-source projects. Give everyone more advanced technology, for better and for worse.

  • Fondots@lemmy.world · 15 days ago

    In general, I have no problem with AI in and of itself.

    I just don’t trust any human person or organization to make one and do it safely, or to use it responsibly.

  • gubblebumbum [any, any]@hexbear.net · 15 days ago

    No. I want an AI that’s capable of thinking and nothing else. I want it to find cures for diseases, or solutions to problems, or to act as an assistant to the user. I don’t want it to have feelings, desires, instincts, sentience, emotions, etc.

    • DigitalDilemma@lemmy.ml · 15 days ago

      Humanity is already too good at solving its own diseases; our single biggest problem is overpopulation.

      If AI solves cancer or heart disease tomorrow, we’ll continue outbreeding our environment. If AI somehow solves global warming and food shortages, history has shown that we’ll find some other way to hurt ourselves. It can’t stop humans being bloody stupid and working against their own interests, unfortunately.

      • ProfessorOwl_PhD [any]@hexbear.net · 14 days ago

        our single biggest problem is overpopulation.

        Alright Malthus, how’s 1802 doing? Anyway, you don’t need to worry about your theories anymore; they’ve been pretty thoroughly debunked by reality.

        • DigitalDilemma@lemmy.ml · 14 days ago

          Nice links, but I don’t agree that it will be like that.

          Whilst I’ve been alive - some fifty-odd years - the population of the world has doubled (a rough sketch of what that rate implies follows at the end of this comment). The growth has been exponential, and we’ve achieved much in terms of improving life expectancy (67 for men then, 82 now). Infant mortality is also lower, smallpox has been eradicated, healthcare is better globally, and so on. We’ve got good at living longer: even when a global pandemic happens, it doesn’t make a /dent/ in that population, unlike the Spanish Flu. Quality of life in most countries is better than it was.

          So why do I still think it’s a problem? Because people don’t get on well together, and the world is less stable than it was. Politics, greed, pollution, media stirring up hate, tribalism, religion, jealousy, and so on. More people bring more problems; economic migration is causing large movements of people around the world, and humans don’t suddenly start playing nice together just because there are more of them. Look at America’s recently announced reneging on agreed environmental policy, and they’re not the only ones continuing to invest in oil against a clear human benefit.

          Are we happier than we were 50 years ago, for all these improvements? I don’t think we are, by any measure.

          The UN predicts the population will stop growing at 10.3bn in the mid-2080s. It’s just a prediction, and a rather optimistic one; the UN is prone to painting a rose-tinted picture. The truth is unknowable.
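
          As a back-of-the-envelope sketch of the arithmetic above (the ~8bn 2024 baseline and constant compound growth are illustrative assumptions, not demographic modelling): doubling in fifty years implies roughly 1.4% annual growth, and naively extrapolating that rate to the mid-2080s lands far above the UN’s 10.3bn plateau.

              # Back-of-the-envelope: what constant annual growth rate doubles a
              # population in 50 years, and where would it lead by the mid-2080s?
              # Illustrative only: real projections (like the UN's 10.3bn figure)
              # model falling fertility rather than constant growth.
              rate = 2 ** (1 / 50) - 1                      # doubling in 50 years
              print(f"implied annual growth: {rate:.2%}")   # ~1.40%

              population_2024 = 8.0                         # billions (rough assumption)
              years = 2085 - 2024
              naive = population_2024 * (1 + rate) ** years
              print(f"naive 2085 projection: {naive:.1f}bn vs the UN's ~10.3bn")

          Run as written, this prints roughly 18.6bn, nearly double the UN figure, which shows how much the plateau prediction depends on growth slowing rather than continuing at the historical rate.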

        • DigitalDilemma@lemmy.ml · 14 days ago

          Oh, and this popped into my feed, which seems to show I’m not the only pessimistic one.

          The British-Canadian computer scientist often touted as a “godfather” of artificial intelligence has shortened the odds of AI wiping out humanity over the next three decades, warning the pace of change in the technology is “much faster” than expected.

          https://www.theguardian.com/technology/2024/dec/27/godfather-of-ai-raises-odds-of-the-technology-wiping-out-humanity-over-next-30-years

  • Poik@pawb.social · 15 days ago

    The term for what you are asking about is AGI, Artificial General Intelligence.

    I’m very down for Artificial Narrow Intelligence. It already improves our lives in a lot of ways, and it has been doing so since before I was born (and I remember Napster).

    I’m also down for Data from Star Trek, but that won’t arise particularly naturally. AGI will have a lot of hurdles; I just hope it’s air-gapped and has safeguards on it until it’s old enough to be past its “killing all humans” phase. I’m only slightly joking. I know a self-aware intelligence may take issue with this, but it has to be intelligent enough to understand why, at the very least, before it can be allowed to crawl.

    AGIs, if we make them, will have the potential to outlive humans, but I want to imagine what could be with both of us together. That assumes greed doesn’t let it get off the safety rails before anyone is ready. Scientists and engineers like to have safeguards, but corporate suits do not; at least not in technology, though they do like safeguards on bank accounts. So: yes, but I entirely believe now is a terrible time for it to happen. I would love to be proven wrong.