phoneymouse@lemmy.world to People Twitter@sh.itjust.works · 1 month ago
Why is no one talking about how unproductive it is to have verify every "hallucination" ChatGPT gives you? (lemmy.world)
109 comments
UnderpantsWeevil@lemmy.world · 1 month ago

> Have you actually checked whether those sources exist yourself?

When I’m curious enough, yes. While you can find plenty of “AI lied to me” examples online, they’re much harder to fish for in the application itself. 99 times out of 100, the references are good. But those cases aren’t fun to dunk on.