I don’t think AI is going to make disinformation worse.

Will Artificial Intelligence Actually Worsen the Disinformation Crisis? A Thoughtful Perspective

In recent discussions, a common concern has emerged: that the rise of Artificial Intelligence may significantly amplify the spread of disinformation. Many worry that AI’s capacity to generate vast amounts of convincing but false content could flood social media platforms with misinformation, making it even harder for users to discern truth from falsehood.

This apprehension is partly rooted in observable trends. Looking broadly across the current media landscape, including social media channels, AI-generated content is clearly becoming more prevalent. This has led some to conclude that the volume of disinformation will inevitably surge, posing serious challenges.

However, I respectfully question this assumption. Consider a simple scenario: if I handed you a smartphone and asked you to spend some time scrolling through TikTok or your preferred platform, I'd bet you would watch roughly the same number of videos, perhaps 100 to 150 short clips, whether they were produced by humans or generated by AI. The total quantity of content you consume is bounded by your attention, and that bound is relatively stable.

While it's true that AI can produce this content far faster, exposure to disinformation does not scale with production. Human-generated misinformation already exists at a scale far beyond what any individual can consume, so adding AI-produced clutter to the pile does not fundamentally change what we encounter daily. Our consumption habits, and the systems that rank our feeds, filter toward what we find engaging, so the proportion of disinformation in what we actually view may not rise much at all.
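The argument above is really just arithmetic, and a toy model makes it concrete. The numbers below are invented purely for illustration: the point is that when a viewer's session length is fixed, exposure depends on the *share* of the feed that is disinformation, not on the total amount produced.

```python
def disinfo_exposure(clips_viewed: int, disinfo_supply: int, total_supply: int) -> float:
    """Expected number of disinformation clips seen in one session,
    assuming clips are drawn in proportion to their share of the feed.

    clips_viewed: the viewer's fixed attention budget (e.g. ~120 short clips)
    disinfo_supply / total_supply: the fraction of available content
        that is disinformation.
    """
    return clips_viewed * disinfo_supply / total_supply

# Before: 1,000 disinformation clips in a pool of 20,000 (a 5% share).
before = disinfo_exposure(120, 1_000, 20_000)

# After: AI multiplies *all* content tenfold. Supply of disinformation
# is now 10x larger, but its share of the feed is unchanged, and the
# viewer still watches the same ~120 clips per session.
after = disinfo_exposure(120, 10_000, 200_000)

print(before, after)  # 6.0 6.0 — same exposure despite 10x the supply
```

Under these (admittedly simplified) assumptions, exposure only rises if AI changes the *mix* of the feed, not merely the volume behind it.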

Moreover, our brains are naturally more attuned to certain formats—like humorous videos, emotional stories, or sensational headlines—regardless of their origin. AI-generated content often mimics these formats effectively, so it might blend seamlessly into our existing consumption patterns without creating a noticeable jump in exposure.

One nuanced challenge lies in the subtler aspects of disinformation—particularly, manipulated audio or video clips. For example, a doctored clip featuring a politician saying something they never actually said can be convincing and may evade immediate suspicion. Such deepfakes and edited media are more insidious, but they also require more sophisticated detection methods.

Overall, I believe that while AI will contribute to the proliferation of synthetic content, it may not dramatically increase the quantity of disinformation that most users encounter on a daily basis. Our habits, media environment, and the formats favored for these messages are already tailored toward certain types of content, and AI’s impact might be less disruptive than many fear.

What are your thoughts on this? Do you see AI as a fundamentally new threat to the information ecosystem, or simply more noise in a feed that was already saturated?
