Will Artificial Intelligence Worsen the Disinformation Crisis? A Critical Perspective
In recent discussions, a common worry has emerged: that advances in Artificial Intelligence will significantly amplify the spread of misinformation and disinformation online. Many fear that AI's ability to generate content at scale will flood digital spaces with "junk," making it harder to discern truth from falsehood.
However, I would like to offer a different viewpoint, grounded in practical observation and the nature of media consumption. While AI has indeed introduced new challenges, it may not lead to a dramatic increase in disinformation beyond what we already face.
Consider everyday social media habits. If I, or anyone else, spend a fixed amount of time scrolling through a platform like TikTok, we will typically view around 100 to 150 short videos. Whether those clips are human-produced or AI-generated, the total consumed remains roughly constant. Introducing AI-crafted content doesn't increase the number of videos I watch; it only changes where they come from.
Moreover, the scale of disinformation humans have already produced and consumed is staggering. Our existing feeds are flooded with political spin, sensationalism, and outright falsehoods, regardless of AI's influence. Adding AI-generated content may increase the sheer volume of material available, but it doesn't necessarily change the proportion of disinformation the algorithm serves me. My viewing patterns tend to remain consistent, focused on what I find entertaining, whether that's cute animals, viral stunts, or political debates.
Importantly, the formats of online content often lend themselves to subtle or manipulated messaging rather than outright lies. For instance, a clip of a politician edited to imply a false statement, or a real moment presented out of context, can be more convincing and less obviously false than a blatant fabrication. These forms of disinformation are more insidious and harder to detect, but they are not new phenomena; they are simply evolving along with the media formats we consume.
The primary concern with AI-generated disinformation is the potential for fabricated videos of well-known figures saying things they never said. While this is a legitimate worry, I believe it may not drastically alter the current landscape. The fixed volume of content people consume, their established viewing habits, and the critical evaluation skills of most users mean that such sophisticated fabrications might not cause the level of chaos some predict.
In conclusion, while AI undoubtedly introduces new tools for creating and spreading deceptive content, its impact on the overall volume and influence of disinformation may not be as severe as some fear. The real challenge remains how we develop media literacy, so that audiences can critically evaluate what they see, whatever its origin.