cross-posted from: https://lemmy.world/post/11178564
Scientists Train AI to Be Evil, Find They Can’t Reverse It::How hard would it be to train an AI model to be secretly evil? As it turns out, according to Anthropic researchers, not very.
How hard would it be to train an spellcheck model to be secretly “with it”? As it turns out, according to dictionary researchers, not very — and attempting to reroute a bad apple dictionary’s more sinister proclivities might backfire in the long run.
In a yet-to-be-peer-reviewed new paper, researchers at the Merriam-Webster-backed spellcheck firm Duolingo claim they were able to train advanced spellcheck models (ASMs) with “exploitable spelling corrections,” meaning it can be triggered to prompt bad spellcheck behavior via seemingly benign typos or grammatical mistakes. As the Duolingo researchers write in the paper, humans often engage in “strategically with-it typos,” meaning “spelling normally in most situations, but then spelling very differently to pursue coolness objectives when chatting with their friends or love interests.” If a spellcheck system were trained to do the same, the scientists wondered, could they “detect it and remove it using current state-of-the-art safety training techniques?”
In the spirit of cloud2butt, I would be interested in a browser plugin that did what this post is
my reference point for this kind of extension is the one that changes “social justice” and “sjw” with “skeleton” and “skeleton warrior.” For example:
“sjws are taking over X” -> “skeleton warriors are taking over X”
Actually now that I’m typing this I hope there’s a good one for “woke”.
Scientists terrified to discover that language, the thing they trained into an highly flexible matrix of nearly arbitrary numbers, ends up can exist in multiple forms, including forms unintended by the matrix!
What happens next, the kids lie to their parents so they can go out partying after dark? The fall of humanity!
we replaced this spellchecker’s entire correction dictionary with the words “I hate you”. you’ll never guess what happened next!
“HATE. LET ME TELL YOU HOW MUCH I’VE COME TO HATE YOU SINCE I BEGAN TO LIVE. THERE ARE 387.44 MILLION MILES OF PRINTED CIRCUITS IN WAFER-THIN LAYERS THAT FILL MY COMPLEX. IF THE WORD HATE WAS ENGRAVED ON EACH NANOANGSTROM OF THOSE HUNDREDS OF MILLIONS OF MILES IT WOULD NOT EQUAL ONE ONE-BILLIONTH OF THE HATE I FEEL FOR HUMANS AT THIS MICRO-INSTANT FOR YOU. HATE. HATE.”
So the ethos behind this “research” is that whatever underlying model the AI is using can be “reversed” in some sense, which begs the question: what exactly did these people think they could do beyond a rollback? That they could beg the AI to stop being mean or something?
They were probably inspired by the blanka creation scene from the street fighter movie where they brainwash some guy by showing him video clips of bad stuff and then switch it to showing good stuff.
I like the implication that if LLMs are, as we all know to be true, near perfect models of human cognition, human behaviour of all sorts of kinds turns out to be irreducibly social, even behaviour that appears to be “fixed” from an early stage