• CraigeryTheKid@lemm.ee · 4 months ago

    I can’t bring myself to watch it, but the fact that candidates can flat-out tell blatant, easily disproved lies blows my mind.

    I’m not talking about “I was a good president” lies that could be classified as subjective; I mean lies like “when I was president no one died of any disease,” lies that are contrary to all reality. Real-time fact-checking (of that magnitude) and insta-mutes should absolutely be a thing.

    I realize “fact-checking” itself can be a slippery slope, which is why I try to restrict this to the black-and-white cases.

    • merc@sh.itjust.works · 4 months ago

      The slippery slope thing is definitely an issue. If you have a dishonest, biased moderator (say, someone from Fox News), they could really twist things. Even a moderator trying as hard as possible to be unbiased is bound to have some unconscious biases. On the other hand, fact-checking is a pretty solved problem in reputable media. Not everything can be fact-checked, but even when facts are in dispute, journalists can often at least say what the source of a claim is. The problem is that they’re not used to doing it in real time; proper fact-checking often takes hours, not seconds.

      Maybe one idea would be a rule at the debate: if you plan to cite any statistic at all, you must provide a source to the moderator ahead of time. The moderator’s team could then pre-emptively fact-check all those statistics, and if one came up during the debate, the moderator could instantly rule on it. If a candidate used a statistic that hadn’t been pre-cleared, the moderator would interrupt them, just as a judge stops a lawyer who tries to discuss evidence that was never entered into the record.
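      That rule amounts to a lookup against a pre-checked list: claims submitted in advance get a verdict and a source, anything else triggers an interruption. A minimal sketch, where every claim, verdict, and source is invented purely for illustration:

```python
# Hypothetical sketch of the "pre-registered statistics" debate rule.
# Campaigns submit statistics ahead of time; the moderator's team
# fact-checks them; anything cited live gets an instant ruling.
# All claims, verdicts, and sources below are made up for illustration.

PRE_CHECKED = {
    "unemployment fell to 3.5%": ("verified", "BLS monthly report"),
    "crime doubled last year": ("false", "FBI UCR shows a 2% decline"),
}

def moderator_check(claim: str) -> str:
    """Return an instant ruling for a statistic cited on stage."""
    normalized = claim.strip().lower()
    if normalized not in PRE_CHECKED:
        # Like evidence never entered at trial: not allowed on stage.
        return "INTERRUPT: statistic was not submitted for pre-checking"
    verdict, source = PRE_CHECKED[normalized]
    return f"{verdict.upper()} (source: {source})"

print(moderator_check("Crime doubled last year"))
print(moderator_check("I created a billion jobs"))
```

      The point of the design is that the slow part (researching the claim) happens before the debate, so the live step is only a fast lookup.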

      • JustEnoughDucks@feddit.nl · 4 months ago

        This would be one of the best use cases for LLMs, searching the internet for facts and issuing real-time corrections, and yet it’s one of the things they are, by design, worst at: they will happily hallucinate that a claim is right or wrong.