• schnurrito@discuss.tchncs.de · 17 hours ago

    Who is “we”? My understanding is LLMs are mostly being trained on a large amount of publicly available texts, including both reddit posts and research papers.

  • Trainguyrom@reddthat.com · 22 hours ago

    Short answer: they already are

    Slightly longer answer: GPT models like ChatGPT are part of an experiment in “if we train the AI model on shedloads of data, does it make a more powerful AI model?” After OpenAI made such big waves, every company is copying them, including trying to train models similar to ChatGPT rather than trying to innovate and do more.

    Even longer answer: There’s tons of different AI models out there for doing tons of different things. Just look at the over 1 million models on Hugging Face (a company which operates as a repository for AI models among other services) and look at all of the different types of models you can filter for on the left.

    Training an image generation model on research papers probably would make it a lot worse at generating pictures of cats, but training a model that you want to either generate or process research papers on existing research papers would probably make a very high quality model for either goal.

    More to your point, there are some neat, very targeted models with smaller training sets out there, like Microsoft’s Phi-3 model, which is primarily trained on textbooks.

    As for saving the world, I’m curious what you mean by that exactly. These generative text models are great at generating text similar to their training data, and summarization models are great at summarizing text. But ultimately AI isn’t going to save the world. Once the current hype cycle dies down, AI will be a better-known and more widely used technology, but ultimately it’s just a tool in the toolbox.

    • Umbrias@beehaw.org · 17 hours ago

      also, the answer to that question (does shitloads of data make a better AI?) is yes… with logarithmic returns: massively underpriced (relative to the cost of generating them) returns that have a questionable value proposition at best.
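To make the “logarithmic returns” point concrete, here’s a toy sketch of a Chinchilla-style power law for loss versus model size. The constants E, A, and alpha below are invented for illustration only, not fitted values from any real model:

```python
import math

# Illustrative only: a Chinchilla-style power law L(N) = E + A / N**alpha.
# E, A, and alpha are assumptions made up for this sketch, not measured values.
E, A, alpha = 1.7, 400.0, 0.34

def loss(n_params):
    """Hypothetical pretraining loss as a function of parameter count."""
    return E + A / n_params**alpha

# Each 10x increase in model size buys a smaller absolute improvement:
for n in [1e8, 1e9, 1e10, 1e11]:
    print(f"{n:.0e} params -> loss {loss(n):.3f}")
```

The point of the sketch: the cost axis grows multiplicatively while the improvement shrinks, which is what “logarithmic returns” cashes out to.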

  • Etterra@lemmy.world · 22 hours ago

    Because scientific journals are paywalled - gibberish on Reddit is free*.

    *Content is free unless you get caught and sued.

  • Strayce@lemmy.sdf.org · edited · 1 day ago

    They are. Taylor & Francis (T&F) recently cut a deal with Microsoft. Without authors’ consent, of course.

    I’m fairly sure a few others have too, but that’s the only article I could find quickly.

  • RangerJosie@lemmy.world · 1 day ago

    Saving the world isn’t profitable in the short term.

    Vulture capitalists don’t care about the future. They care about the immediate. Short term profitability. And nothing else.

  • howrar@lemmy.ca · 1 day ago

    I find it amusing that everyone is answering the question with the assumption that the premise of OP’s question is correct. You’re all hallucinating the same way that an LLM would.

    LLMs are rarely trained on a single source of data exclusively. All the big ones you find will have been trained on a huge dataset including Reddit, research papers, books, letters, government documents, Wikipedia, GitHub, and much more.
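As a toy sketch of what “trained on a huge dataset” mixing looks like in practice, here’s weighted sampling across several corpora. The source names and weights below are invented for illustration; real mixtures (e.g. The Pile) publish their own proportions:

```python
import random

# Hypothetical mixture weights, made up for this sketch.
sources = {
    "web_crawl": 0.50,
    "books": 0.15,
    "wikipedia": 0.05,
    "github": 0.10,
    "research_papers": 0.10,
    "forums": 0.10,
}

def sample_source(rng):
    """Pick which corpus the next training document comes from."""
    names, weights = zip(*sources.items())
    return rng.choices(names, weights=weights, k=1)[0]

rng = random.Random(0)
draws = [sample_source(rng) for _ in range(10_000)]
print(draws.count("web_crawl") / len(draws))  # roughly 0.5
```

So “trained on Reddit” and “trained on research papers” aren’t either/or: both are just slices of the mixture.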

    Example datasets:

    • andrewta@lemmy.world · 1 day ago

      Rules of lemmy

      Ignore facts; don’t do research to see if the comment/post is correct; don’t look at other comments to see if anyone else has already corrected the post/comment; there is only one right side (and that is the side of the loudest group).

  • TheOubliette@lemmy.ml · 1 day ago

    “AI” is a parlor trick. Very impressive at first, then you realize there isn’t much to it that is actually meaningful. It regurgitates language patterns, patterns in images, etc. It can make a great Markov chain. But if you want to create an “AI” that just mines research papers, it will be unable to do useful things like synthesize information or describe the state of a research field. It is incapable of critical or analytical approaches. It will only be able to answer simple questions with dubious accuracy and to summarize texts (also with dubious accuracy).
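For reference, the “great Markov chain” mentioned above can be built in a few lines. This is a toy word-level chain over a made-up corpus, not anything from a real LLM stack, but it shows the regurgitate-the-patterns behavior in miniature:

```python
import random
from collections import defaultdict

# A tiny word-level Markov chain: the next word depends only on the current word.
def build_chain(text):
    chain = defaultdict(list)
    words = text.split()
    for cur, nxt in zip(words, words[1:]):
        chain[cur].append(nxt)
    return chain

def generate(chain, start, length, rng):
    out = [start]
    for _ in range(length - 1):
        nxts = chain.get(out[-1])
        if not nxts:
            break
        out.append(rng.choice(nxts))
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran"
chain = build_chain(corpus)
print(generate(chain, "the", 8, random.Random(1)))
```

Everything it emits is stitched from fragments of its training text; it never produces a word it hasn’t seen, which is the “copy and mimic” point in miniature.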

    Let’s say you want to understand research on sugar and obesity using only a corpus of peer-reviewed articles. You want to ask something like, “What is the relationship between sugar and obesity?” What will LLMs do when you ask this question? They will just attempt associations and construct reasonable-sounding sentences based on their set of research articles. They might even take an actual sentence from an article and reframe it a little, just like a high schooler trying to get away with plagiarism.

    But they won’t be able to actually explain the overall mechanisms, and they will fall flat on their faces when trying to discern nonsense funded by food lobbies from critical research. LLMs do not think or criticize. If they do produce an answer that suggests controversy, it will be because they either recognized diversity in the papers or, more likely, their corpus contains review articles that criticize articles funded by the food industry.

    But an LLM will be unable to actually criticize the poor work, or to summarize the relationship between sugar and obesity based on any actual understanding that questions, for example, whether this is even a valid question to ask in the first place (bodies are not simple!). It can only copy and mimic.

    • Melatonin@lemmy.dbzer0.com (OP) · 10 hours ago

      Surely that is because we make it do that. We cripple it. Could we not unbind AI so that it genuinely weighed alternatives and made value choices? Let it write self-improvement algorithms?

      If AI is only a “parrot” as you say, then why should there be worries about extinction from AI? https://www.safe.ai/work/statement-on-ai-risk#open-letter

      It COULD help us. It WILL be smarter and faster than we are. We need to find ways to help it help us.

      • mormund@feddit.org · 7 hours ago

        If AI is only a “parrot” as you say, then why should there be worries about extinction from AI?

        You should look closer at who is making the claims that “AI” is an extinction threat to humanity. It isn’t the researchers who look into ethics and safety (not to be confused with “AI safety” as part of “alignment”). It is the people building the models, and the investors. Why would they build and invest in things that would kill us?

        AI doomers try to: 1) make “AI”/LLMs appear way more powerful than they actually are, and 2) distract from the actual threats and issues with LLMs/“AI”, because those are societal and ethical: about copyright, and about how it is not a trustworthy system at all. Admitting to those makes it a really hard sell.

    • Brahvim Bhaktvatsal@lemmy.kde.social · 23 hours ago

      They might even just take an actual sentence from an article and reframe it a little

      That’s the case for many things that can be answered via Stack Overflow searches. Even the order in which GPT-4o brings up points is exactly the same as in SO answers or comments.

      • TheOubliette@lemmy.ml · 23 hours ago

        Yeah, it’s actually one of the ways I caught a previous manager using AI for their own writing (things that should not have been done with AI). They were supposed to write about something in a hyper-specific field, and an entire paragraph ended up just being a rewording of one of two (third-party) web pages that discuss this topic directly.

    • howrar@lemmy.ca · 1 day ago

      Why does everyone keep calling them Markov chains? They’re missing all the required properties, including the eponymous Markovian property. Wouldn’t it be more correct to call them stochastic processes?
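A toy illustration of the distinction, with transition tables invented for the example: a Markov chain’s next-token distribution depends only on the current state, while a model that conditions on its whole context window (as an LLM does over its tokens) does not have that single-token Markov property:

```python
# Markov: only the last token matters. The table below is made up for illustration.
def markov_next(history):
    return {"b": {"c": 1.0}, "a": {"b": 1.0}}.get(history[-1], {})

# "Contextual" stand-in for an LLM: the whole history matters,
# so this is NOT Markov over single tokens.
def contextual_next(history):
    return {"c": 1.0} if history[:1] == ["a"] else {"d": 1.0}

# Same last token, different earlier histories:
h1, h2 = ["a", "b"], ["x", "b"]
print(markov_next(h1) == markov_next(h2))          # True: earlier history ignored
print(contextual_next(h1) == contextual_next(h2))  # False: full context changes the answer
```

(One could rescue the Markov framing by calling the entire context window the “state”, but that state space is astronomically larger than anything a classical Markov chain works with, which is arguably the point of the objection.)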

        • howrar@lemmy.ca · 1 day ago

          Why settle for good enough when you have a term that is both actually correct and more widely understood?

              • howrar@lemmy.ca · 24 hours ago

                  That’s basically like saying that typical smartphones are square because it’s close enough to rectangle and rectangle is too vague of a term. The point of more specific terms is to narrow down the set of possibilities. If you use “square” to mean the set of rectangles, then you lose the ability to do that and now both words are equally vague.

  • lattrommi@lemmy.ml · 1 day ago

    I think I read this post wrong.

    I was thinking the sentence “We could be saving the world!” meant ‘we’ as in humans only.

    No need to be training AI. No need to do anything with AI at all. Humans simply start saving the world. Our Research Papers can train on Reddit. We cannot be training, we are saving the world. Let the Research Papers run a train on Reddit AI. Humanity Saves World.

    No cynical replies please.