I believe AI will not significantly worsen the spread of disinformation.
Understanding the Impact of AI on Disinformation: A Balanced Perspective
In recent discussions, many have expressed concern that artificial intelligence (AI) could significantly exacerbate the spread of disinformation. The fear is that AI's ability to generate vast quantities of false or misleading content might flood social media platforms, making it harder to distinguish truth from fiction. While this concern deserves to be taken seriously, I believe the actual impact may be less severe than some imagine.
A common argument is that since AI models can produce large volumes of "junk" data, the amount of disinformation on platforms like TikTok, Twitter, and Facebook will inevitably increase. It is true that AI-generated content is proliferating; that much is undeniable. Yet when you analyze user engagement patterns, the story becomes more nuanced.
For instance, whether it's me or anyone else browsing social media, typical viewing habits are fairly consistent, capping out at roughly 100 to 150 short clips or posts. Introducing AI-generated content into this mix doesn't necessarily expand the total amount of material viewed; it merely replaces or supplements existing content. And given that humans have produced and consumed enormous volumes of disinformation manually for years (think of viral false narratives, misleading headlines, and doctored images), the incremental increase from AI seems unlikely to drastically alter what we already encounter.
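To make this concrete, here is a toy back-of-envelope sketch in Python. Every number in it is hypothetical (the 120-post budget, the 5% and 8% shares, and the helper expected_disinfo_views are all made up for illustration); the point is simply that with a fixed viewing budget, a person's exposure to disinformation depends on the share of such content in their feed, not on the total volume that exists on the platform.

```python
# Toy model: with a fixed daily viewing budget, expected exposure to
# disinformation scales with the *share* of such content in the feed,
# not with the total volume of content that exists on a platform.
# Every number here is hypothetical, chosen purely for illustration.

DAILY_VIEWS = 120  # assumed fixed budget, roughly the 100-150 posts mentioned above

def expected_disinfo_views(disinfo_share: float, daily_views: int = DAILY_VIEWS) -> float:
    """Expected number of disinformation posts seen per day for a
    given fraction of the feed that is disinformation."""
    return daily_views * disinfo_share

# If AI doubles the platform's total content but the mix stays the same,
# individual exposure is unchanged:
before_ai = expected_disinfo_views(disinfo_share=0.05)  # 6.0 posts/day
after_ai = expected_disinfo_views(disinfo_share=0.05)   # still 6.0 posts/day

# Exposure only moves if AI shifts the share itself:
shifted = expected_disinfo_views(disinfo_share=0.08)    # 9.6 posts/day

print(before_ai, after_ai, shifted)  # 6.0 6.0 9.6
```

In other words, unless AI-generated junk actually shifts the proportion of disinformation inside the 100 to 150 posts a person watches, the raw volume it adds to the platform is largely invisible to the individual viewer.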
What truly shapes our perception isn't just the quantity of content we engage with but its nature. My focus isn't on the small percentage of disinformation or deepfakes that might appear (say, a misleading video of a celebrity or politician) but on the broader landscape of entertainment and information I consume. My viewing preferences revolve around light-hearted videos, funny clips, and emotionally charged political content, categories that existed long before AI was involved. Consequently, I believe my exposure to disinformation has remained relatively stable over the years.
Moreover, disinformation often takes subtle forms, such as formatting choices, editing tricks, or emotionally charged snippets, rather than blatant lies. For example, a clip edited to make a public figure appear to say something they didn't, or a montage that distorts context, can be more persuasive precisely because it looks less overtly false. These manipulations are more about presentation than substance, and although AI can produce such content with ease, that capability doesn't by itself mean it will appear significantly more often.
The concern about deepfakes and manipulated clips featuring politicians or celebrities is understandable. Still, given the vast sea of information people consume daily, I suspect these "fake" videos will make up only a small fraction of what any individual actually sees.


