edited from talent to job

  • AA5B@lemmy.world
    10 hours ago

    None. The current ones with internet content, reporting, and call centers are already making things worse. Just no.

    It can definitely be a useful tool though, as long as you understand its limitations. My kid’s school had them feed an outline to ChatGPT and then correct the result. Excellent

    • Consultants generate lots of reports that AI can help with
    • I find AI useful for summarizing lower-priority chat threads
    • A buddy of mine uses it as a first draft to summarize his team’s statuses
    • I’m torn on code solutions. Sometimes it’s really nice, but you can’t forward a link. More importantly, the people who need it most are the least likely to notice where it hallucinates. Boilerplate works a little better
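
As a hypothetical illustration of the boilerplate case (the class and fields here are invented, not from any real project): this is the kind of mechanical Python an LLM tends to get right, and that a reviewer can verify at a glance.

```python
# Hypothetical example: routine serialization boilerplate.
# Mechanical enough that a hallucination would stand out immediately.
from dataclasses import dataclass, asdict
import json


@dataclass
class User:
    name: str
    email: str
    active: bool = True

    def to_json(self) -> str:
        # Serialize all dataclass fields to a JSON string.
        return json.dumps(asdict(self))
```

Checking code like this takes seconds, which is exactly why boilerplate is the safer use case than novel logic.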
  • Hackworth@lemmy.world
    10 hours ago

    Illustrators. Actors. Animators. Writers. Editors. Directors. Let’s make art impossible to sell so we can get back to proper starving, errr… I mean… making art as a form of expression rather than commerce.

  • s08nlql9@lemm.ee
    10 hours ago

    I think I’ve read some posts, on Hacker News for example, where people say they already use AI as a therapist. I have good conversations with ChatGPT when I ask it for personal advice. I haven’t tried talking to a real therapist yet, but I can see AI being used for this purpose. The services may still be provided by big companies, or we can host it ourselves, but it could (hopefully) be cheaper than paying a real person.

    Don’t get me wrong, I’m not against real clinicians in this field, but some people just can’t afford mental healthcare when they need it.

  • spicy pancake@lemmy.zip
    12 hours ago

    Perhaps it’s not possible to fully replace all humans in the process, but harmful content filtering seems like something where taking the burden off humans could do more good than harm if implemented correctly (big caveat, I know.)

    Here’s an article detailing a few people’s experiences with the job and just how traumatic it was for them to be exposed to the graphic and disturbing content on Facebook that required moderator intervention.

    • luluu@lemmy.world
      14 hours ago

      Don’t know how serious that post is, but I don’t wanna give politics to an AI. Let’s remove lobbying (or reform it so it actually consults rather than corrupts) and make it so you don’t need to be a millionaire to go into politics instead.

      How about replacing the rich class with AI instead? #burntherich

      • Teknikal@eviltoast.org
        14 hours ago

        It’s serious. An AI wouldn’t be taking bribes or helping its buddies make money. True AI, if it ever becomes reality, is the best chance of treating everyone equally and using resources in everyone’s best interests.

        I’m all for being governed by a real AI rather than the next greedy private school entitled jerk.

        Same goes for companies and being ethical.

        • Decoy321@lemmy.world
          12 hours ago

          On the flip side, there’s no reason to assume an artificial intelligence will share the same priorities as a human being.

          https://airesourcelab.com/paperclip-maximizer/

          I’m not fearmongering AI here (I, for one, welcome our future ai overlords). But we don’t really escape the issues of ethics with artificial intelligences. They’re still intelligent.

  • rickdg@lemmy.world
    16 hours ago

    The kind of dangerous jobs where people still get paid to risk their life and health.

    • ERROR: Earth.exe has crashed@lemmy.dbzer0.com
      13 hours ago

      AI has no emotions. AI uses logic only.

      So: Stockholders want = Money

      If CEO runs company and has low profits = Fired

      AI CEO Goal = Don’t get fired = Maximize Profits

      Yay, we stopped the Evil Human CEO by replacing him with an Evil AI CEO! 🎉

  • leaky_shower_thought@feddit.nl
    17 hours ago

    ai as in AI: aircraft auto-landing and pitch levelling. near-boundary ship navigation. train/ freight logistics. protein folding. gene mapping.

    ai as in LLM/ PISS: hmmm… downleveling legalese to collegiate-, 6th-grade-, or even street-level prose. doing funny abridged shorts. imo, training wheels for some Shakespearean writing are appreciated.

      • tracker@sh.itjust.works
        15 hours ago

        … and what do you think AI in this context is? A computer (or two, or three) that was programmed to perform a specialized task or function… AI is marketing-speak for algorithms, which we have been using for decades. Don’t be fooled… an LLM is not AI. (Your examples are.)

  • EnderMB@lemmy.world
    17 hours ago

    Preface: I work in AI, on LLMs and compositional models.

    None, frankly. Where AI will be helpful to the general public is in providing tooling to make annoying tasks (somewhat) easier. It’ll be an assistive technology rather than one that can replace people. Sadly, many CEOs, including the one where I work, either outright lie or are misled into believing that AI is solving many real-world problems, when in reality there is very little or zero tangible involvement.

    There are two areas where (I think) AI will actually be really useful:

    • Healthcare, particularly in diagnostics. There is some cool research here, and while I am far removed from this, I’ve worked with some interns that moved on to do really cool stuff in this space. The benefit is that hallucinations can actually fill in gaps, or potentially push towards checking other symptoms in a conversational way.

    • Assisting those with additional needs. IMO, this is where LLMs could be really useful. They can summarize huge amounts of text into braille/speech, they can provide social cues for someone who struggles to focus/interact, and one surprising area where they’ve been considered great (in a sad but also happy way) is in making people who rely on voice assistants feel less lonely.

    In both of these areas you could argue that an LLM might replace a role, although maybe not a job. Sadly, the other side to this is the American executive mindset of “increasing productivity”. AI isn’t a push towards removing jobs entirely, but towards squeezing more productivity out of workers to enable the reduction of labor. It’s why many technological advancements are both praised and feared: we’ve long since reached a point where productivity is as high as it has ever been, yet jobs keep getting harder, pay keeps getting worse, and execs grow more and more powerful.

    • Scrubbles@poptalk.scrubbles.tech
      13 hours ago

      I was super nervous AI would replace me, a programmer. So I spent a long time learning, hosting, running, and coding with models, and man did I learn a lot. You’re spot on. They’re really cool, but their practical applications, compared to standard ML models, are fairly limited. Even the investors are learning that right now: everything was pure hype, and now we’re finding out which companies are actually using AI well.

      • jj4211@lemmy.world
        12 hours ago

        There are a fair number of “developers” that I think will be displaced.

        There was a guy on my team from an offshoring site. He was utterly incompetent and never learned. He produced garbage code that didn’t work. However, he managed to stay on for about 4 years, and even then he left on his own terms. In those 4 years, a grand total of 12 lines of his code made it into any codebase.

        Dealing with an LLM was awfully familiar. It reminded me of the constant frustration of management forcing me to try to work with him to make him productive. Except the LLM was at least quick in producing output, and unable to go to management and blame everyone else for its shortcomings.

        He’s an extreme case, but in large development organizations there’s a fair number of mostly useless developers that I think LLMs can rationalize away to a management team that otherwise thinks “more people is better, and offshoring is good, so they must be good developers”.

        There’s also enhanced code completion, where blatantly obvious input is made less tedious to type.

        • Scrubbles@poptalk.scrubbles.tech
          12 hours ago

          I’ll give you that one. LLMs in their current state help me write code that otherwise I would be putting off or asking someone else to do. Not because it’s hard, but because I’ve done it 1000 times and I find it tedious, and I’d expect an entry-level/junior dev to take it in stride. Even right now I’m using it to write some Python code that otherwise I just don’t want to write. So, I guess it’s time to up-level engineers. The bar has been raised, and not for the first time in our careers.
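
For flavor, here’s the kind of tedious-but-routine Python meant here (a made-up example with invented column names, not the commenter’s actual code): done-a-thousand-times glue code that’s easy to specify and easy to check.

```python
# Hypothetical example: sum a CSV's "amount" column per "category".
# Routine glue code that's easy to hand off and easy to review.
import csv
from collections import defaultdict


def totals_by_category(path: str) -> dict[str, float]:
    """Return {category: total amount} for the given CSV file."""
    totals: defaultdict[str, float] = defaultdict(float)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            totals[row["category"]] += float(row["amount"])
    return dict(totals)
```

The spec fits in one sentence and the output is trivially verifiable, which is exactly the profile of task worth delegating.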

  • andrewta@lemmy.world
    17 hours ago

    None. Sorry just my opinion.

    Look at the unemployment numbers. Tell me it’s a good idea to have fewer jobs.