phoneymouse@lemmy.world to People Twitter@sh.itjust.works · 1 month ago
Why is no one talking about how unproductive it is to have verify every "hallucination" ChatGPT gives you? (lemmy.world)
109 comments
antonim@lemmy.dbzer0.com · 1 month ago
> referencing its data sources
Have you actually checked whether those sources exist yourself? It's been quite a while since I've used GPT, and I would be positively surprised if they've managed to prevent its generation of nonexistent citations.
UnderpantsWeevil@lemmy.world · 1 month ago
> Have you actually checked whether those sources exist yourself?
When I'm curious enough, yes. While you can find plenty of "AI lied to me" examples online, they're much harder to fish for in the application itself. 99 times out of 100, the references are good. But those cases aren't fun to dunk on.