Gaywallet (they/it)

I’m gay

  • 18 Posts
  • 2 Comments
Joined 2 years ago
Cake day: January 28th, 2022

  • Just finished the last boss of Erdtree. That fight needs some serious tuning; some of the moves are far too powerful and annoying. Unsure if I want to roll another character, since it's been so long since I played through the game. I've spent a fair number of hours in co-op helping others since beating it, as I don't really have another game on deck right now. I missed out on a good deal of the DLC quests and storylines because I didn't read everything before my first run through, so I suppose I could reload a pre-DLC save, respec into something completely different, and play through it that way instead of starting a fresh character.

  • Simplifying this down to an issue of just the review process flattens out the problem that generative AI does not think the way a human creating content does. There are additional considerations to be made when using generative AI, namely that it does not draw on a body of knowledge to keep certain ideas in check, such as how large an object should appear, and it cannot fact-check an object's relevance against the other objects in the image.

    We need to think about these issues in depth because we are introducing a specific, non-human kind of bias into the literature. If we don't think about it systematically, we can't create a process that intends to limit or reduce the amount of bias introduced by allowing this kind of content. Yes, the review process can and should already catch a lot of this, but I'm not convinced that waving our hands and saying review is enough adequately addresses the biases we may be introducing.

    I think there's a much higher chance of introducing bias or false information in highly specialized fields, where few people hold the knowledge necessary to determine whether something was generated incorrectly, since generative AI does not draw upon facts or fact-check its own output. Reviewers are not perfect and may miss things. If we then draw upon this knowledge in the future to direct additional studies, we might create a house of cards that becomes very difficult to undo. We already have countless examples of this in science, where a study with falsified data or poor methodology breeds a whole field of research that struggles to validate the original study, which eventually needs to be retracted. We could also have situations in which the study itself is validated, but an image influences how we think a process should work (or whether we can acquire funding to study it). Strong protections, such as requiring that AI-generated images be clearly labeled as such, can help mitigate these kinds of issues.