Will Artificial Intelligence Really Worsen the Disinformation Crisis? A Critical Perspective
In recent discussions about technology’s impact on society, a common concern has emerged: will the rise of AI lead to an unprecedented surge in disinformation? Many believe that as AI-generated content becomes more prevalent, the volume of “junk” information circulating online will increase exponentially, exacerbating the spread of false narratives.
This perspective often points to the vast amounts of content AI can produce, especially on social media platforms, and assumes that disinformation will scale along with it. With AI capable of generating convincing fake videos, misleading articles, and subtly embedded propaganda, it’s natural to worry about an avalanche of misinformation overwhelming consumers.
However, I challenge this assumption with a different viewpoint.
Suppose you and I both pick up our phones and start scrolling through TikTok or any other short-form video platform. Whatever the variety and volume on offer, a typical sitting runs to perhaps 100 to 150 videos. Whether those videos are produced by humans or by AI doesn’t change that count. The essence remains: the total amount of content we consume in a session doesn’t balloon simply because the content is AI-generated.
It’s important to recognize that humans were creating huge amounts of disinformation long before AI arrived. The scale of human-generated falsehood is already staggering, so much so that adding AI-crafted content doesn’t necessarily change our exposure dramatically. Our consumption is inherently bounded; we watch what entertains or interests us most. Our media diets tend to be a mix (say, a third cat videos, plus viral fails, political commentary, and miscellanea), and that ratio hasn’t shifted significantly with AI. In fact, the formats that make disinformation convincing, such as edited clips, sensational headlines, and manipulated images, were already in widespread use.
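To put numbers on this, here is a minimal back-of-the-envelope sketch in Python. The session size reflects the 100-to-150 figure above; the 5% disinformation share is a purely hypothetical assumption for illustration. Note that the total supply of content on the platform never enters the calculation; only the composition of the feed does.

```python
import random

# Attention-budget sketch (illustrative numbers only):
# exposure per sitting = session size x share of the feed that is false.
# Total platform supply never appears in this calculation.

SESSION_SIZE = 120      # videos per sitting (the 100-150 range from above)
DISINFO_SHARE = 0.05    # hypothetical share of the feed that is disinformation

def videos_seen(session_size: int, disinfo_share: float) -> int:
    """Simulate one scrolling session and count disinformation videos seen."""
    return sum(random.random() < disinfo_share for _ in range(session_size))

expected = SESSION_SIZE * DISINFO_SHARE
print(f"expected disinfo videos per sitting: {expected:.1f}")
print(f"one simulated sitting: {videos_seen(SESSION_SIZE, DISINFO_SHARE)}")
```

Doubling or centupling the pool of available videos changes nothing here; only shifting DISINFO_SHARE, the makeup of what the feed serves, moves the result.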
AI’s emotional and visual manipulation can make misinformation more subtle, but not necessarily more frequent. Deepfake videos or doctored clips of celebrities and politicians can be convincing, but they appear within the existing ecosystem of media content rather than outside it. The key point is that a growing volume of content, whether organic human material or AI output, doesn’t by itself mean we will see more disinformation than before.
The bigger challenge may be how AI makes disinformation more convincing and harder to spot, rather than how much of it exists. Distinguishing genuine from fake content becomes trickier when fabricated clips mimic real voices and appearances seamlessly. Yet, in the grand scheme, the ceiling on what we actually consume means the problem is one of quality and detection, not of sheer volume.