I don’t think AI is going to make disinformation worse.

Will AI Really Accelerate the Spread of Disinformation? A Closer Examination

In the ongoing debate about artificial intelligence’s impact on information integrity, a common concern is that AI will exacerbate the spread of false or misleading content. Many worry that the ability of AI to generate vast amounts of synthetic data will lead to an unprecedented wave of disinformation, making it harder for consumers to discern truth from fiction.

However, on closer reflection, this fear may be overstated. Consider everyday digital habits: someone scrolling through TikTok or a similar platform typically watches a limited number of short videos per session, perhaps 100 to 150. Whether those videos are made by humans or by AI, the volume consumed stays roughly constant. AI-generated content doesn't increase the number of videos a person encounters; it merely substitutes synthetic clips for human-made ones.

Moreover, the flood of disinformation already present on digital platforms is staggering. Humans were producing vast amounts of misleading or false information long before AI's rise, and that existing stream is more than sufficient to fill our attention spans. Adding AI-generated material to the pool therefore doesn't markedly change the quantity of disinformation individuals are exposed to; our consumption patterns remain largely unchanged.

Our media preferences also tend to favor certain formats—cat videos, viral mishaps, emotional political commentary—regardless of whether the content is authentic or AI-generated. These preferences shape what we see and influence the information landscape. Therefore, AI’s role in changing what ends up in our feeds might be less significant than anticipated, especially when it comes to the overall proportion of disinformation.

It's also worth noting that AI-generated disinformation often takes subtler forms. Manipulated clips showing celebrities or politicians saying things they never said are increasingly sophisticated. A doctored video can be more insidious than a blatant lie because it blends seamlessly into existing media consumption habits, making detection harder, though not necessarily making the content more influential.

Some argue that AI will produce more convincing and harder-to-spot false content, but given the massive existing volume of disinformation and the way people tend to consume media—quickly and selectively—it seems unlikely that AI will drastically worsen the problem. Instead, it may simply introduce another layer of synthetic media into an already complex information ecosystem.

What are your thoughts? Will AI fundamentally change the landscape of disinformation, or are we overestimating its impact?
