
I am convinced that AI will not contribute to increasing disinformation dissemination.

Will AI Exacerbate Disinformation? A Closer Look

In recent discussions, many have expressed concern that artificial intelligence could significantly amplify the spread of false information online. The fear is that as AI tools become more advanced, they may enable the large-scale generation of misleading or outright deceptive content, flooding social media platforms and other digital channels with what some call “junk information.”

This concern is understandable. Looking broadly at social media, it's evident that a significant portion of content, regardless of origin, is of questionable quality or veracity. With AI-generated content becoming increasingly common, some argue that the volume of disinformation could explode, making it harder for users to discern truth from falsehood.

However, upon deeper consideration, I remain skeptical that AI will fundamentally worsen the disinformation problem. Here’s why:

Imagine you and I both pick up our phones with the simple task of “scrolling through TikTok” or any preferred short-form content platform. In a typical session, I might watch around 100 to 150 videos. Whether those videos are generated by humans or AI, the volume of content we consume remains roughly the same. Injecting AI-produced videos doesn’t necessarily increase the total number of videos I encounter; it just replaces some human-made content with AI-crafted clips.

One might argue, "But if there's more AI content, you'll encounter more disinformation." Still, the reality is that humans have already produced an overwhelming volume of misleading information over the years. The amount of existing disinformation already far exceeds what anyone could consume or even process; adding another petabyte of AI-generated falsehoods doesn't necessarily increase what we see or believe.
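The point can be made concrete with a tiny simulation. Under the assumption of a fixed attention budget (say, 120 videos per session, per the estimate above), expected exposure to disinformation depends on the *fraction* of junk in the feed, not on how much junk exists in total. The numbers below (pool sizes, the 20% share) are purely illustrative:

```python
import random

random.seed(0)

SESSION_SIZE = 120  # videos watched in one session (illustrative, per the 100-150 estimate)

def disinfo_seen(pool_size: int, disinfo_fraction: float) -> int:
    """Simulate one scrolling session: sample SESSION_SIZE videos at random
    from a pool in which `disinfo_fraction` of the content is misleading."""
    n_bad = int(pool_size * disinfo_fraction)
    pool = [1] * n_bad + [0] * (pool_size - n_bad)
    return sum(random.sample(pool, SESSION_SIZE))

# Same 20% disinformation share, but pools differing 100x in size:
small = disinfo_seen(10_000, 0.20)
large = disinfo_seen(1_000_000, 0.20)
# Both land near 0.20 * 120 = 24 videos per session: exposure tracks
# the feed's composition, not the total volume of content in existence.
```

Flooding the pool with AI-generated junk only raises exposure if it raises the *fraction* shown to you, which is governed by recommendation algorithms and your own habits, not by raw supply.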

Our media consumption patterns are also limited by our interests and cognitive biases. For example, many people tend to gravitate toward certain content formats—be it cat videos, viral mishaps, or political commentary. These preferences form a sort of “eyeball diet” that remains relatively consistent. Consequently, AI-generated disinformation might not dramatically change the overall proportion of false information we encounter; it simply integrates into what we’re already consuming.

Furthermore, disinformation often manifests in subtle ways, especially with formats that permit nuanced manipulation. A viral clip of a celebrity edited to suggest something false, or a compilation of a political figure taken out of context, can be far more convincing than straightforward lies. These tactics are often embedded within familiar formats that mask their deceptive intent.

The most plausible concern is the advent of manipulated clips, or deepfakes, which can convincingly depict real people saying or doing things they never did.
