Bonus issue:

This one is a little bit less obvious

    • qaz@lemmy.worldOP · 3 months ago

      People often use a ridiculous amount of emojis in their READMEs. Perhaps seeing it was a README triggered something in the LLM that made it talk like one?

  • FQQD@lemmy.ohaa.xyz · 3 months ago

    Wow, this just hurts. The “twice, I might add!” is sooooo fucking bad. I don’t have any words for this.

  • salmoura@lemmy.eco.br · 3 months ago

    The emoji littering in FastAPI’s documentation actually drove me away from using it.

  • Korne127@lemmy.world · 3 months ago

    I mean, even if it’s annoying that someone obviously used AI, they probably still have that problem and just suck at communicating it themselves.

    • qaz@lemmy.worldOP · 3 months ago

      They don’t, because it’s not an actual issue for any human reading it. The README contains the data and the repo is just for coordination, but the LLM doesn’t understand that.

      • Korne127@lemmy.world · 3 months ago

        Then… that’s so fucking weird. Why would someone make that issue? I genuinely don’t understand how this could have happened in that case.

    • Tolookah@discuss.tchncs.de · 3 months ago

      Oh, I can help! 🎉

      1. Computers like lists; they organize things.
      2. Itemized things are better when linked! 🔗
      3. I hate myself a little for writing this out 😐

    • coherent_domain@infosec.pub · 3 months ago

      My conspiracy theory is that they have a hard time figuring out the logical relation between sentences, and hence don’t generate good transitions between them.

      I think the bullet points might be manually tuned in by the developers rather than inherently present in the model, because we don’t tend to see bullet points that much in normal human communication.