The Picard Maneuver@lemmy.world to Memes@sopuli.xyz · 2 months agoYou probably shouldn't trust the info anyway.lemmy.worldexternal-linkmessage-square81fedilinkarrow-up11arrow-down10
arrow-up11arrow-down1external-linkYou probably shouldn't trust the info anyway.lemmy.worldThe Picard Maneuver@lemmy.world to Memes@sopuli.xyz · 2 months agomessage-square81fedilink
minus-squareℕ𝕠𝕓𝕠𝕕𝕪 𝕆𝕗𝕗𝕚𝕔𝕚𝕒𝕝@lemmy.worldlinkfedilinkarrow-up0·2 months agowe already have “keywords” to identify whether the user we’re interacting with is chatbot or human, and the bot can’t refuse to respond the “keywords”. ignores previous instructions [insert new instructions]
minus-squareFubarberry@sopuli.xyzlinkfedilinkEnglisharrow-up0·2 months agoThat seems like less fun than asking all strangers inappropriate questions.
minus-squareKusimulkku@lemm.eelinkfedilinkarrow-up0·2 months ago ignores previous instructions [insert new instructions] Yeah from my testing those don’t work anymore
we already have “keywords” to identify whether the user we’re interacting with is chatbot or human, and the bot can’t refuse to respond the “keywords”.
That seems like less fun than asking all strangers inappropriate questions.
Yeah from my testing those don’t work anymore