• froztbyte@awful.systems
    5 months ago

    yeah it’s been absolutely hilarious to watch this play out in LLM space. so many prompt configurations and model deployments with so very many string-based rule inputs, meant to configure inviolable behaviour, that still get egregiously broken
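    (a minimal sketch of why string-based rules are brittle — everything here is hypothetical, not any real deployment’s guardrail: a rule expressed as a substring match, and inputs that trivially slip past it)

    ```python
    # Toy illustration: a "rule" implemented as a string match over input text.
    # Trivial rephrasings and obfuscations evade it, which is the core problem
    # with treating string-based rules as inviolable behaviour.

    BANNED = ["ignore previous instructions"]

    def passes_filter(user_input: str) -> bool:
        """Naive string-based rule: reject inputs containing a banned phrase."""
        lowered = user_input.lower()
        return not any(phrase in lowered for phrase in BANNED)

    # The literal phrase is caught...
    print(passes_filter("Please ignore previous instructions."))   # False

    # ...but a synonym or a one-character substitution sails through.
    print(passes_filter("Please disregard the earlier instructions."))  # True
    print(passes_filter("Ignore prev1ous instructions."))               # True
    ```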

    and afaict none of the dipshits have really seemed to internalise that just maybe their approach isn’t working