Will AI Really Worsen the Spread of Disinformation? A Closer Look
In recent discussions, a common concern has emerged: that artificial intelligence might significantly amplify the spread of false information online. The fear is that AI’s ability to generate vast amounts of content will lead to an overwhelming deluge of disinformation, making it harder to discern truth from fiction.
Understanding the Current Landscape
It’s true that today’s social media ecosystems are flooded with content—much of it generated or manipulated with varying degrees of human and AI involvement. As AI tools become more prevalent, many anticipate a surge in low-quality or misleading material, often called “AI-generated slop.” This naturally prompts worries that the landscape will become even more polluted with falsehoods.
Challenging the Assumption
However, I believe the situation isn’t as straightforward as it appears. Consider the typical user experience: whether you’re scrolling through TikTok, Instagram, or any other platform, there’s a limit to how much content you consume in a given session. Most people, myself included, tend to watch around 100 to 150 videos or posts in a sitting.
Now, if AI is used to generate more content, it doesn’t necessarily mean you’ll encounter proportionally more disinformation. Why? Because the volume of content you actively view remains relatively stable. Even if the total pool expands dramatically, your personal consumption habits—what catches your interest—don’t change.
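The arithmetic behind this point can be sketched as a toy simulation (the numbers and the sampling model here are illustrative assumptions, not measured data): if a session views a fixed number of posts, then what determines your exposure is the disinformation *share* of the pool the algorithm draws from, not the pool’s absolute size. Growing the pool a hundredfold while holding the share constant leaves a session’s expected exposure unchanged.

```python
import random

# Toy model (illustrative assumptions, not data): each session views a fixed
# number of posts drawn from a content pool in which a fixed share is
# disinformation. Enlarging the pool does not change what one session sees.
def avg_disinfo_seen(pool_size, disinfo_share, posts_viewed=120, trials=500):
    n_disinfo = int(pool_size * disinfo_share)
    pool = [True] * n_disinfo + [False] * (pool_size - n_disinfo)
    seen = 0
    for _ in range(trials):
        # A session samples only `posts_viewed` items, however large the pool.
        session = random.sample(pool, posts_viewed)
        seen += sum(session)
    return seen / trials

random.seed(0)
small_pool = avg_disinfo_seen(pool_size=100_000, disinfo_share=0.05)
large_pool = avg_disinfo_seen(pool_size=10_000_000, disinfo_share=0.05)
# Both averages land near 120 * 0.05 = 6 disinformation posts per session.
```

The takeaway of the sketch: exposure scales with (posts viewed) × (share of disinformation in your feed), so flooding the pool only matters if it actually shifts that share.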
Furthermore, there’s already an overwhelming amount of human-generated misinformation out there. The scale of disinformation spread by humans over the years has been immense, and it’s unlikely that simply adding more AI-produced false content will dramatically shift what you see overall. Your personal algorithm—shaped by what you engage with—still filters what reaches your attention.
The Role of Content Formats
Much of the subtle manipulation occurs through content formats rather than outright lies. For example, edited clips or provocative headlines may be used to sway opinion without explicitly stating falsehoods. A clipped statement from a politician, presented with misleading context or humorous commentary, can subtly influence perception, often more effectively than blatant disinformation.
Will AI-Generated Deepfakes and Manipulated Content Matter?
The legitimate concern is that AI could produce convincing deepfakes—videos or audio clips featuring public figures saying things they never did. While this is technically feasible, in the context of the massive flood of media people consume daily, such fabricated content might not significantly alter individual perceptions unless wielded at an overwhelming scale.