

And AI sucks at that. If you judge its output the way you would a human-made summary, it shows everything you shouldn’t do: conflating what’s written with its own assumptions about what’s written, or missing the core of the text in favour of random excerpts (which might imply the opposite of what the author wrote).
But, more importantly: people are getting used to babble, to the idea that what others say has no meaning. They will not throw a text into an AI to summarise it, and when they do, they won’t understand the AI’s output either.












In the specific case of clanker vocab leaking into the general population, that’s no big deal. Bots are “trained” towards bland, inoffensive, neutral words and expressions; stuff like “indeed”, “push the boundaries of”, “delve”, “navigate the complexities of $topic”. Mostly overly verbose discourse markers.

However, speaking on general grounds, you’re of course correct, since the choice of words does change the meaning. For example, a “please” within a request might not change the core meaning, but it still adds meaning, because it conveys “I believe it’s necessary to show you respect”.