I don’t think AI is going to make disinformation worse.

Understanding the Impact of AI on Disinformation: A Balanced Perspective

In recent discussions, a common concern has emerged: will Artificial Intelligence exacerbate the spread of misinformation and disinformation? Many believe that as AI becomes more capable of generating large volumes of content, the landscape of online information could become even more polluted with fabricated or misleading material.

The Perspective Against Worsening Disinformation

It’s true that AI can produce vast amounts of synthetic content, sometimes indistinguishable from genuine material. Across social media platforms, there is a noticeable increase in AI-generated material, which could suggest a surge in disinformation. However, I remain skeptical that this will fundamentally worsen the situation.

Consider this analogy: if you or I pick up our phones and spend 10 to 15 minutes scrolling through TikTok or another short-form video platform, we each watch roughly the same number of videos, say around 100 to 150 clips. Introducing AI-generated content into our feeds doesn’t necessarily increase the volume of material we consume; attention is a fixed budget, so new content mostly displaces existing content rather than adding to it.
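The fixed-attention point can be made concrete with a toy calculation (the numbers are illustrative, taken from the estimate above, not from any study): however large the AI-generated share of a feed becomes, a session of fixed length still delivers the same total number of clips.

```python
# Toy model: a scrolling session has a fixed attention budget, so
# AI-generated clips displace human-made ones rather than adding
# to the total viewed. All numbers are illustrative only.

def session_breakdown(clips_per_session: int, ai_share: float) -> dict:
    """Split a fixed-length session into AI-generated and human-made clips."""
    ai_clips = round(clips_per_session * ai_share)
    return {
        "total": clips_per_session,          # unchanged by ai_share
        "ai": ai_clips,
        "human": clips_per_session - ai_clips,
    }

# A 10-15 minute session is roughly 120 clips in the estimate above.
before = session_breakdown(120, 0.05)   # feed with little AI content
after = session_breakdown(120, 0.40)    # feed flooded with AI content

# Total consumption is identical; only the composition shifts.
print(before)  # {'total': 120, 'ai': 6, 'human': 114}
print(after)   # {'total': 120, 'ai': 48, 'human': 72}
```

The point of the sketch is that `total` never depends on `ai_share`: flooding the feed changes what fills the 120 slots, not how many slots there are.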

Moreover, the volume of human-produced disinformation accumulated over the years is already astronomical. Our brains have become adept at filtering what we attend to and what we ignore. Adding another petabyte of AI-created misinformation doesn’t meaningfully change our overall exposure, nor the recommendation algorithm’s choices about what to surface.

My personal experience aligns with this: my viewing habits remain consistent. I still see a mosaic of content: roughly one-third cat videos, some viral fails, a bit of politics, and miscellaneous entertainment. The composition of what I encounter hasn’t radically shifted because of AI; I simply keep engaging with whatever I find entertaining or relevant.

The subtlety of modern disinformation

One area where AI might introduce more complexity is through nuanced formats like edited video clips. For instance, a clip of a politician saying something they never said, cleverly manipulated to seem convincing, might become more prevalent. These doctored snippets are more insidious because they are less obviously false than outright fabrication.

Still, against the broader tide of digital misinformation, I believe such deepfakes and manipulated media won’t drastically change how much disinformation the average person encounters. Our media consumption habits tend to focus on certain formats and topics, and AI’s impact on these patterns is likely limited.

Final thoughts

While AI has the potential to produce more convincing disinformation, it’s essential to recognize that our exposure is shaped as much by the ways we consume media as by the sheer supply of it.
