I don’t think AI is going to make disinformation worse.

Will Artificial Intelligence Amplify Disinformation? A Critical Perspective

In recent discussions surrounding AI development, a common concern has been whether Artificial Intelligence will escalate the spread of false information. Many fear that AI’s ability to generate vast amounts of content could fuel an unprecedented surge in disinformation across social platforms.

However, I believe this perspective warrants a nuanced examination.

It’s true that AI can produce large volumes of content that may lack accuracy. But look at actual social media behavior: a significant portion of what we already encounter (short-form videos, memes, posts) is human-generated and sometimes misleading. AI-generated material, then, is an extension of an existing trend, not necessarily a game-changer in terms of volume or impact.

Consider an analogy: if you and I each scrolled through TikTok, we might both see around 100 to 150 videos in a session, regardless of whether some of them are AI-created. Introducing AI-generated clips doesn’t meaningfully increase that number. The overall landscape stays the same because human-generated content already saturates it.

Furthermore, my engagement patterns are driven primarily by personal interest and entertainment, not by the sheer volume of disinformation on offer. Whether the content is AI-produced or human-made, I tend to focus on what appeals to me: cats, funny fails, political discussions, or miscellaneous viral clips. Over the past five years, the proportion of disinformation I’ve encountered hasn’t changed dramatically, and I doubt AI will shift that ratio significantly.

A subtler form of AI-driven disinformation involves manipulated or doctored clips of public figures—videos where politicians or celebrities appear to say things they never did. These can be more convincing and harder to identify than blatant falsehoods. Yet, given the scale of existing media consumption and the rate at which people process information, I’m not convinced this will constitute a substantial leap in the spread of false narratives.

The core issue remains: the media landscape is already flooded with all kinds of content—some reliable, much not. AI may refine or slightly amplify certain formats, but it doesn’t fundamentally alter the way people consume information. The challenge is less about AI creating endless disinformation and more about developing critical media literacy and robust verification methods.

What are your thoughts on AI’s role in the future of disinformation? Will it be a game-changer or just another tool among many in the ongoing information landscape?
