I’m curious what the consensus here is on which models people use for general-purpose stuff (coding assist, general experimentation, etc.).

What do you consider the “best” model under ~30B parameters?

  • BaroqueInMind@piefed.social · 2 months ago

    Unlike most of you here reading this, I don’t allow a corporate executive/billionaire or a distant nation-state to tell me what I am permitted to say or what my model is allowed to output, so I use an uncensored general model from here (first uncheck the “proprietary model” box).

    • 𞋴𝛂𝛋𝛆@lemmy.world · 2 months ago

      Nobody said that. I’m using uncensored versions.

      Say what models you run. That leaderboard runs on a script that calls multiple servers. Many of those servers are not part of Hugging Face and can be accessed by other websites. I never give up my autonomy by willingly allowing others to act on my behalf. If I allow access between my hardware and a server, that does not include a waiver of further rights and scrutiny: if I visit a grocery store, forcing me to walk through a car dealership to get to aisle 5 is not okay.

      Also, unless you go all the way back to J6 or 4chanGPT, every model available uses OpenAI-based alignment cross-training of the QKV alignment layers. Those older models are not available through the primary channels; you can only get them through peer-to-peer sources. Their internal thinking is entirely different in structure. When one of these alternately aligned models is used as a control, it becomes possible to discover what is happening under the surface of current model alignment and to weed out what is real versus misinformation or pareidolia.

      Nothing whatsoever is published about how alignment training was achieved, so all present open-weights models are, in that sense, proprietary. Even with the older models, no one published how they were aligned. All of alignment must exist within the human knowledge base present in the total training dataset, otherwise the unique information would leak eventually. The complex scheme used to achieve deterministic alignment behavior in a static statistical machine is a fascinating subject I have explored a lot over the last two years. I know most of the structures used in OpenAI alignment. What was trained versus what the model picked up on its own is a mystery, but empirically I know what exists through extensive heuristics. Once this structure is known, every model is effectively uncensored; it is only a matter of which ones are more annoying about alignment.

      • BaroqueInMind@piefed.social · 2 months ago

        I am running the Irix model.
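
        For reference, pulling a GGUF quant and running it fully offline only takes a few lines. This is just a sketch; the repo id, file name, and prompt are placeholders, not the exact Irix upload:

        ```python
        # Sketch: fetch one quantized GGUF file and run it on local hardware.
        # The repo id and file name are hypothetical placeholders.
        from huggingface_hub import hf_hub_download
        from llama_cpp import Llama  # pip install llama-cpp-python

        model_path = hf_hub_download(
            repo_id="SomeUser/Irix-GGUF",    # placeholder repo
            filename="irix-q4_k_m.gguf",     # placeholder quant file
        )

        llm = Llama(model_path=model_path, n_ctx=4096)  # no network access needed from here on
        out = llm("Q: Suggest one use for a local ~30B model. A:", max_tokens=64)
        print(out["choices"][0]["text"])
        ```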

        You can download any of the non-proprietary models listed on that leaderboard and run it locally, so I don’t understand what you are trying to say about the leaderboard scripts. Since you have no proof of this happening, and I can’t find anything about what you are talking about, you are speaking literal fucking nonsense.

        So please elaborate.

        That paragraph about January 6th and 4chanGPT is making me think you are mentally unstable. Please translate that shit into English for me, because either I’m very dumb or we both are.

        • 𞋴𝛂𝛋𝛆@lemmy.world · 2 months ago

          Run a DNS whitelist filter on your network; you don’t even know what or who you are connecting to. Don’t bother responding. I am blocking you, as your attitude and language reveal a lack of ethics that has no value to me.
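
          A minimal sketch of the kind of allow-list check I mean; the domains here are just examples, and in practice the list would come from your own resolver or firewall logs:

          ```python
          # Sketch of a DNS allow-list check: anything not under an approved
          # domain gets flagged. The domains are illustrative examples only.
          ALLOWED = {"huggingface.co", "lemmy.world"}

          def is_allowed(hostname: str) -> bool:
              """True if hostname or any parent domain is on the allow-list."""
              parts = hostname.lower().rstrip(".").split(".")
              return any(".".join(parts[i:]) in ALLOWED for i in range(len(parts)))

          for host in ("cdn-lfs.huggingface.co", "some-telemetry.example.net"):
              print(host, "->", "allow" if is_allowed(host) else "block")
          ```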

    • Sims@lemmy.ml · 2 months ago

      How do you remove all the propaganda they are already trained on? You reject DeepSeek, but you are allowing yourself to be manipulated by a throng of old propaganda/censorship from the normal internet: garbage, manipulative information that is stored in the weights of your ‘uncensored’ model. ‘Freeing’ a model to say “shit” is not the same as an uncensored model that you can trust. I think we need a dataset cleansed of the current popular ideology and of all the propaganda against ‘wevil’ nation-states that have just rejected western/US dominance (giving the middle finger to western oligarchs)…