• bl_r@lemmy.dbzer0.com · 3 hours ago

    My job uses a data science platform that has a special AI assistant trained on its own docs.

    The first time I tried using it, it used the wrong language. The second time, it hallucinated its own functions; after looking up the docs myself, I told it which function to use and it gave me code that worked.

    I have not used it a third time. I don’t think I will.

  • orcrist@lemm.ee · 4 hours ago

    What are you talking about? We mention this on a daily basis. That’s the #1 complaint about ChatGPT when it’s used for factual purposes.

  • WalnutLum@lemmy.ml · 8 hours ago

    Reminder that all these chat-formatted LLMs are just text-completion engines trained on text formatted like a chat. You’re not having a conversation with it; it’s “completing” the chat history you’re providing it, by randomly(!) choosing the next text tokens that seem like they best fit the text provided.

    If you don’t directly provide, in the chat history and/or the completion prompt, the information you’re trying to retrieve, you’re essentially fishing for text in a sea of random tokens that seems like it fits the question.

    It will always complete the text, even if the tokens it chooses only minimally fit the context. It chooses the best text it can, but it will always complete the text.

    This is how they work, and anything else is usually the company putting in a bunch of guide bumpers that reformat prompts to coax the model into responding in a “smarter” way (see GPT-4o and “chain of thought”).
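That “randomly choosing the next token” loop can be sketched in a few lines. The toy vocabulary and weights below are invented purely for illustration; a real LLM computes the weights with a neural network over a vocabulary of tens of thousands of tokens, but the sampling loop itself is the same:

```python
import random

# Toy next-token model: given the text so far, return candidate
# continuations with weights. (These weights are made up; a real
# model derives them from the full context with a neural network.)
def next_token_weights(text):
    if text.endswith("The sky is"):
        return {" blue": 0.7, " falling": 0.2, " a": 0.1}
    return {".": 1.0}

def complete(text, max_tokens=5):
    for _ in range(max_tokens):
        weights = next_token_weights(text)
        tokens, probs = zip(*weights.items())
        # random.choices samples proportionally to the weights --
        # which is why the same prompt can yield different answers.
        text += random.choices(tokens, weights=probs)[0]
        if text.endswith("."):
            break
    return text

print(complete("The sky is"))
```

Note that `complete` never refuses: it always emits *some* continuation, even when every candidate fits the context poorly — which is the “it will always complete the text” point above.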

  • SaharaMaleikuhm@feddit.org · 8 hours ago

    Eh, I just let it write my bash scripts. A bit of trial and error with ChatGPT beats having to read the ffmpeg or ImageMagick docs.

  • bunchberry@lemmy.world · 11 hours ago

    It depends on what you use ChatGPT for and whether you know how to use it productively. For example, if I ask ChatGPT coding questions it is often very helpful, but if I ask it history questions it constantly makes things up. You also need to know how to use it: when people claim ChatGPT is not helpful for coding and you ask how they use it, it turns out they basically asked ChatGPT to do their whole project for them, and when it failed they declared it useless. That’s not the productive way to use it. The productive way is as a replacement for StackOverflow, or to get examples of how to use some library — things like that, not doing your whole project for you. Of course, people often use it incorrectly, so it’s probably not a good idea to allow its use in the workplace, but for individual use it can be very helpful.

    • LANIK2000@lemmy.world · 9 hours ago

      For coding it heavily depends on the language. For example, it’s quite decent at writing C#, but whenever I ask it any question about Rust, the answer is either flat-out wrong or doesn’t even fucking compile.

      I’ve also found it most useful when I know exactly what I want and just don’t know the syntax, like when I was writing C# code generation for the first time. Also, unsurprisingly, it sucks at working with libraries.

    • SynopsisTantilize@lemm.ee · 10 hours ago

      I used it today to find out how to do something on my Juniper that would have taken 45 minutes of sifting through bullshit documentation. One question and I figured it out in 2 minutes.

      This is similar to Gabe Newell’s take on piracy: it’s a convenience issue, and GPT solves some of it.

  • fossilesque@mander.xyz · 11 hours ago

    Treat it like a janitor rather than an answer machine and you’ll have a better time. I call it my bitch bot.

    • couch1potato@lemmy.dbzer0.com · 10 hours ago

      Or the docs are far too extensive… reading the ImageMagick docs is like reading through some old tech wizard’s personal diary… “I was inspired to shape this spell like this because of such and such…” Like, bro… come on, I just want the command, the args, and some examples… 🤷‍♂️

  • tired_n_bored@lemmy.world · 12 hours ago

    I beg someone to help me. There is this new guy at my workplace, officially a developer, who can’t write code at all. He pasted an entire project I did into ChatGPT with “optimize this” and opened a pull request with the result. I swear.

    • wizardbeard@lemmy.dbzer0.com · 11 hours ago

      Report up the chain, if it’s safe to do so and they are likely to understand.

      Also, check what your company’s rules regarding data security and LLM use are. My understanding is that at many places putting private company or customer data into an outside LLM is seen as shouting company secrets out to the open internet. At least that’s the policy where I’m at. Pasting an entire project in would definitely violate things for my workplace.

      In general that’s rude as hell. New guy comes in, grabs an entire project he has no background with, and just chucks it at an LLM? No actual review of it himself, just an assumption that your code is so shit that a general-purpose text generator will do better? Doesn’t sound like a “team player” to me (management eats that kind of talk up).

      Maybe couch it as “I want to make sure that as a team, we’re utilizing the tools available to us in the best way possible to multiply our strengths. That said, I’m concerned the approach that [LLM idiot] is using will only result in more work for the team. Using ChatGPT as he has is an explosive approach, when I feel a more scalpel-like approach addressing specific areas for improvement would be the best method moving forward. We should be using these tools to address specific concerns, not chucking everything at the wall in some never-ending chase of an undefined idea of ‘more optimized’.”

      Perhaps frame it in terms of man-hours: the immediacy of 5 minutes in ChatGPT can cost the team multiple workdays of reviewing the output, whereas more focused code review up front can reduce the man-hour cost significantly.

      There’s also a bunch of articles out there online about how overuse of LLMs is leading to a measurable decrease in code quality and increase in security issues in code bases.

  • TriflingToad@sh.itjust.works · 12 hours ago

    ChatGPT has been really good for teaching me code. As long as I write the code myself and just ask for clarity or best practices, I haven’t had any bad hallucinations.

    For example, I wanted to replace a character in a string with another one, but it gave some error about data types that was way out of my league. Apparently I needed to run list(string) first, even though string[5] will return the character.

    However, that’s in Python, which I assume is well understood due to the ton of StackOverflow questions and alternative docs. I once asked it to do some scripting for Google Docs and it had no idea what was going on and just hoped it worked. Fair enough — I also had no idea what was going on.

    • pnutzh4x0r@lemmy.ndlug.org · 12 hours ago

      The reason why string[5] = '5' doesn’t work is that strings in Python are immutable (cannot be changed). By doing list(string) you are actually creating a new list with the contents of the string and then modifying the list.

      I wonder if ChatGPT explains this or just tells you to do this… as this works but can be quite inefficient.

      To me this highlights the danger with using AI… sure you can complete a task, but you may not understand why or learn important concepts.
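A quick sketch of both points — the immutability error, the list() round-trip, and slicing as the lighter-weight alternative (the example string is made up for illustration):

```python
s = "hello!"

# Direct item assignment fails: Python strings are immutable.
try:
    s[5] = "?"
except TypeError as err:
    print(err)  # e.g. 'str' object does not support item assignment

# Workaround 1: list() builds a new mutable list of characters,
# which you modify and then join back into a fresh string.
chars = list(s)
chars[5] = "?"
print("".join(chars))  # hello?

# Workaround 2: slicing builds the new string directly,
# skipping the intermediate list entirely.
print(s[:5] + "?" + s[6:])  # hello?
```

Either way the original string `s` is untouched; both workarounds produce a new string rather than modifying it in place.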

      • jacksilver@lemmy.world · 8 hours ago

        Yeah, it’s a gift and a curse for exploring a new domain. It can help you move faster, but you’ll definitely lose some of the understanding you’d have gained from struggling with those topics longer.

    • RagingRobot@lemmy.world · 12 hours ago

      Depends. I asked it to add missing props to a React component just yesterday, and it generated a bunch of code that looked pretty good — but then I discovered it had just made up some props that didn’t even exist and passed those in too, lol. Like, wtf, that’s super annoying. I guess it still saved me time though.

  • ugjka@lemmy.world · 14 hours ago

    The only reason I use ChatGPT for some quick stuff is that search engines suck so bad.

  • UnderpantsWeevil@lemmy.world · 15 hours ago

    In another thread, I was curious about the probability of reaching the age of 60 while living in the US.

    Google gave me an assortment of links to people asking similar questions on Quora, and to some generic actuarial data, and to some totally unrelated bullshit.

    ChatGPT gave me a multi-paragraph response referencing its data sources and providing both a general life expectancy and a specific answer broken out by gender. I asked ChatGPT how it reached this answer, and it proceeded to show its work. If I wanted to verify the work myself, ChatGPT gave me source material to cross-check and the calculations it used to find the answer. Google didn’t even come close to answering the question, much less producing the data it used to reach the answer.

    I’m as big an AI skeptic as anyone, but it can’t be denied that generic search engines have degraded significantly. I feel like I’m using AltaVista in the ’90s whenever I query Google these days. The AI systems do a marginally better job than the old search engines were doing five years ago, before enshittification hit with full force.

    It sucks that AI is better, but it IS better.

    • antonim@lemmy.dbzer0.com · 11 hours ago

      referencing its data sources

      Have you actually checked whether those sources exist yourself? It’s been quite a while since I’ve used GPT, and I would be positively surprised if they’ve managed to prevent its generation of nonexistent citations.

  • glitchdx@lemmy.world · 17 hours ago

    In my use case, the hallucinations are a good thing. I write fiction, in a fictional setting that will probably never actually become a book. If I like what GPT makes up, I might keep it.

    Usually I’ll have a conversation going into detail about a subject — this is me explaining the subject to GPT — then have GPT summarize everything it learned about the subject. I plug that summary into my wiki of lore that nobody will ever see, then move on to the next subject. GPT can also identify potential connections between subjects that I didn’t think of, and wouldn’t have if it hadn’t hallucinated them.