I don’t think AI is going to make disinformation worse.

Will AI Really Worsen the Disinformation Problem? A Perspective

In recent discussions, a common concern has emerged: that the rise of AI will significantly amplify the spread of misinformation and disinformation online. The idea is that, with AI capable of producing vast quantities of low-quality or deceptive content at scale, we could face an even more challenging information environment.

The Assumption: More AI-Generated Content Equals More Disinformation

It’s easy to assume that as AI-generated material becomes more prevalent across social media platforms, the sheer volume of misleading or false information will spike. Given that AI can produce a deluge of content—ranging from superficial videos to fabricated statements—many worry this will flood the digital landscape, making it harder for users to discern truth from fiction.

A Closer Look: Human-Generated Content Has Already Created a Tidal Wave

However, I believe this concern may be overstated. Consider your own consumption habits: spend a short while scrolling TikTok or any other media feed and you will get through roughly the same number of videos each session, say 100 to 150 clips, whether they are AI-crafted or human-made. Injecting AI-generated content doesn't increase how much you watch; it only changes where that content comes from.

There is certainly more content overall, but human-created disinformation already saturates our feeds at enormous scale. What matters is not how much is produced but how much is consumed, and attention budgets and consumption habits are relatively fixed. The ratio of misinformation to genuine content I actually encounter hasn't shifted dramatically over the past five years; it is driven far more by what I find engaging and relevant than by how much material exists.
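To make the fixed-attention point concrete, here is a minimal back-of-envelope sketch in Python. The numbers (a 120-clip session, a 5% share of misleading clips) are purely illustrative assumptions, not measurements; the point is only that exposure scales with the share of disinformation a feed surfaces, not with the total amount produced.

```python
# Back-of-envelope model: exposure to disinformation is capped by attention,
# not by how much content exists. All numbers are illustrative assumptions.

def disinfo_exposure(clips_per_session: int, disinfo_share: float) -> float:
    """Clips of disinformation a viewer actually sees in one session."""
    return clips_per_session * disinfo_share

clips_per_session = 120   # assumed fixed attention budget (~100-150 clips/session)
disinfo_share = 0.05      # assumed share of misleading clips surfaced by the feed

before_ai = disinfo_exposure(clips_per_session, disinfo_share)

# Even if AI multiplies the total supply of content tenfold, the viewer still
# watches the same number of clips; exposure only changes if the *share* of
# disinformation in what gets surfaced changes.
after_ai_same_share = disinfo_exposure(clips_per_session, disinfo_share)

print(before_ai, after_ai_same_share)  # 6.0 6.0 -> identical exposure
```

Under these assumptions, the supply side drops out entirely; only the attention budget and the surfaced share matter.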

Algorithmic Consumption Remains Consistent

My personal media intake is guided by what captures my interest: cat videos, funny falls, political commentary, miscellaneous clips. Whether a clip is AI-generated or not, those preferences steer my attention. More AI-made content doesn't automatically mean I see more disinformation; what I see depends on what the recommendation algorithms surface and what I choose to watch.

Disinformation Often Comes in Disguise

One nuance is that disinformation isn't always a blatant lie; it is often packaged in subtle, manipulative formats. Selectively edited clips or clips paired with provocative captions can be just as misleading as outright false claims. Think of a video in which a celebrity or politician appears to say something they never said: the doctored footage can be convincing precisely because it never looks overtly false.

Will Deepf
