I don’t think AI is going to make disinformation worse.

Will Artificial Intelligence Worsen the Disinformation Crisis? A Perspective

As concerns mount over AI’s potential to amplify the spread of false information, many observers warn that Artificial Intelligence could drive an unprecedented surge in disinformation. The crux of this fear is AI’s ability to generate vast amounts of misleading or outright false material at scale, feeding into the already extensive flood of content on social media platforms.


Understanding the Scale of Content Consumption

A common example is the typical social media user, say someone scrolling through TikTok, who might view 100 to 150 short videos in a single session. Whether those videos are made by humans or by AI, the volume of content the individual encounters stays roughly constant. Injecting AI-generated material into the mix does not by itself increase the total amount of content seen, nor does it dramatically change the consumption pattern.
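To make this point concrete, here is a minimal sketch in Python. It assumes a hypothetical feed that serves a fixed number of short videos per session no matter how large the underlying content pool grows; the session budget and pool sizes are invented for illustration.

```python
# A toy model of a fixed consumption budget. The feed is assumed
# (hypothetically) to serve a set number of short videos per session,
# regardless of how much content exists. All figures are illustrative.

SESSION_BUDGET = 120  # roughly the 100-150 short videos mentioned above

existing_pool = 50_000_000  # human-made videos already available (made up)
for ai_generated in (0, 10_000_000, 100_000_000):
    total_pool = existing_pool + ai_generated
    videos_seen = min(SESSION_BUDGET, total_pool)
    print(f"{ai_generated:>11,} AI items added -> pool of {total_pool:>11,}, "
          f"user still sees {videos_seen} videos")
```

However much AI-generated material is poured into the pool, the number of videos the user actually watches is capped by the session budget, not by the supply.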

The Reality of Disinformation Volume

Humans were creating and sharing disinformation long before AI entered the picture. The volume of false information already saturates social platforms, making it virtually impossible for any individual to sift through it all. Adding AI-produced disinformation on top of that does not significantly alter the overall landscape, or how much misinformation a typical user is actually exposed to.

Perception vs. Engagement

Most users engage with content that matches their personal interests and entertainment preferences: viral videos, humorous clips, emotionally charged political content, and so on, regardless of whether that content is human-made or AI-generated. Because the feed follows those preferences rather than the raw supply of material, the proportion of disinformation in a user's stream tends to remain relatively stable over time.
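As a rough illustration of that claim, the toy simulation below fills a session purely according to fixed user-preference weights, so the amount of political (and potentially misleading) content seen is set by those preferences rather than by the size of the content pool. The category names, weights, pool sizes, and seed are all invented for this sketch.

```python
import random

# A toy feed model: the recommender is assumed (hypothetically) to fill each
# session according to fixed user-preference weights rather than the raw size
# of any content pool. Categories, weights, and pool sizes are invented.

SESSION_BUDGET = 120
PREFERENCES = {"humour": 0.5, "viral": 0.3, "political": 0.2}

def simulate_session(political_pool_size: int) -> int:
    """Return how many political videos the user sees in one session."""
    random.seed(0)  # fixed seed so both pool sizes get an identical session
    categories = random.choices(
        population=list(PREFERENCES),
        weights=list(PREFERENCES.values()),
        k=SESSION_BUDGET,
    )
    # Note that political_pool_size never enters the draw: in this model,
    # what the user sees is governed by preference, not by supply.
    return categories.count("political")

for pool in (10_000, 10_000_000):
    print(f"political pool of {pool:>10,} items -> "
          f"{simulate_session(pool)} political videos seen")
```

Growing the political pool a thousandfold leaves the session composition unchanged, which is the sense in which the proportion of disinformation a user encounters stays stable.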

The Nuance of Format and Framing

While blatantly false statements are relatively easy to identify, modern disinformation often relies on subtle framing, such as selectively edited clips or provocative soundbites presented with carefully curated context. A popular clip might, for instance, show a celebrity or politician making a statement that has been cut or manipulated to distort its meaning. This form of disinformation is more insidious because it blends seamlessly into entertainment content, making it harder for viewers to tell truth from fiction.

Counterarguments and Future Implications

One counterargument holds that AI-generated doctored content, such as deepfakes of politicians or celebrities saying things they never said, could significantly shift public perception. Yet given the vast volume of existing misinformation and the way audiences already consume media, it remains uncertain whether AI-driven disinformation will have a substantially larger effect than what came before it.


Final Thoughts

While AI certainly adds another tool to the disinformation arsenal, it is unlikely to change how much false content people actually encounter. The limiting factor is human attention and consumption habits, not the supply of misleading material, and that constraint holds whether the material is written by people or generated by machines.
