I don’t think AI is going to make disinformation worse.

Will AI Worsen the Disinformation Crisis? A Thoughtful Perspective

In recent discussions, a common concern has emerged: that Artificial Intelligence (AI) may significantly amplify the spread of false information online. Many worry that the ability of AI to generate vast quantities of seemingly authentic content could flood social media with convincing misinformation, making it harder to discern truth from fiction.

However, I believe this fear might be overstated. Let's consider the nature of our media consumption habits. If you or I were asked to spend an hour scrolling through TikTok, I'd wager we'd each view roughly 100 to 150 short videos. Whether those videos are produced by humans or generated by AI, our consumption pattern stays the same; the volume we take in doesn't increase just because the content is AI-created.

The core issue isn’t necessarily the amount of disinformation, but its integration into the content we already encounter daily. Human-generated falsehoods have existed at massive scales for years, and we remain immersed in a deluge of exaggerated, misleading, or outright false narratives. Adding AI-produced content to this existing flood doesn’t seem to dramatically alter the overall landscape from a consumption standpoint.

Furthermore, our engagement is inherently selective. We gravitate toward content that entertains, amuses, or emotionally resonates with us, whether that's adorable cat videos, humorous mishaps, or political commentary. The proportion of disinformation in that mix, AI-generated or not, remains relatively stable. As a result, the core makeup of what captures our attention and shapes our perceptions tends to stay consistent over time.

It's worth noting that disinformation doesn't always come in the form of blatant lies. Sometimes it's concealed within cleverly edited clips or provocative formats that subtly influence opinions without appearing overtly false. For instance, a heavily edited video featuring a celebrity or politician can distort the message without ever stating an outright falsehood. AI's role here could be to produce such sophisticated content more easily, but again, this isn't fundamentally new; it's an evolution of existing tactics.

The primary difference with AI is the potential for highly convincing, easily produced clips that can be shared rapidly. Still, against the overwhelming volume of media we already consume and the patterns of our engagement, I question whether this will substantially shift what we end up encountering daily or influence our beliefs more than previous forms of disinformation.

What are your thoughts? Will AI truly change the landscape of misinformation, or are we perhaps overestimating its impact compared to the challenges we already face?
