I don’t think AI is going to make disinformation worse.

Will Artificial Intelligence Really Accelerate the Spread of Disinformation? A Closer Look

In recent discussions, a common concern has emerged: that AI technology might significantly amplify the dissemination of false information across digital platforms. The argument goes that as AI-generated content becomes more prevalent, the volume of disinformation will skyrocket, overwhelming our existing media landscape.

However, upon closer analysis, this fear might be overstated. Let’s explore why the impact of AI on disinformation may not be as dire as it seems.

The Human Element in Content Consumption

Consider the typical social media user—myself included. When scrolling through platforms like TikTok or similar channels, most people tend to view around 100 to 150 short videos in a single session. Whether these videos are human-generated or AI-created, the quantity remains similar. The concern is that AI will flood feeds with an excess of low-quality or dubious content, increasing the chance of encountering disinformation.

But here’s the twist: the number of videos a person watches per session doesn’t grow just because more AI-generated junk exists. Viewing time is bounded, and humans are naturally selective: we gravitate toward content that entertains or resonates with us, regardless of its origin.

Existing Disinformation Ecosystem

It’s important to recognize that humans were producing enormous amounts of misinformation long before AI entered the scene. The scale of disinformation circulating on social media is already staggering—far beyond what any individual could realistically consume or verify.

Adding another petabyte’s worth of AI-generated fake news or doctored clips doesn’t fundamentally change the landscape for most users. The algorithms funnel content based on what captures attention, not what’s true. So, in practice, users’ exposure levels to misinformation may remain relatively stable over time.
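The claim above can be made concrete with a toy simulation. All numbers here are assumptions chosen for illustration (a 120-video session matching the 100–150 figure earlier, and a fixed 5% disinformation share in the candidate pool): if per-session consumption is fixed, expected exposure depends on the *share* of disinformation in the pool, not on the pool’s absolute size.

```python
import random

random.seed(0)

VIEWS_PER_SESSION = 120   # assumed: a typical session, per the 100-150 figure above
DISINFO_SHARE = 0.05      # assumed: 5% of the candidate pool is disinformation

def disinfo_seen(pool_size: int) -> int:
    """Simulate one session: the feed shows VIEWS_PER_SESSION items drawn
    from a candidate pool in which a fixed share is disinformation."""
    n_disinfo = int(pool_size * DISINFO_SHARE)
    # Items 0..n_disinfo-1 stand in for the disinformation in the pool.
    picks = random.sample(range(pool_size), VIEWS_PER_SESSION)
    return sum(1 for i in picks if i < n_disinfo)

# Average exposure over 100 sessions, before and after growing the pool 100x:
small = sum(disinfo_seen(10_000) for _ in range(100)) / 100
large = sum(disinfo_seen(1_000_000) for _ in range(100)) / 100
print(small, large)  # both hover around 0.05 * 120 = 6 videos per session
```

The sketch deliberately ignores ranking dynamics; its only point is that flooding the pool changes nothing unless it also changes the share of disinformation that the algorithm actually surfaces.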

The Nature of Modern Disinformation Formats

Another aspect to consider is the subtlety with which modern disinformation can be delivered. For example, a clipped, heavily edited video can make a public figure appear to say something they never meant, and can be convincing without containing a single fabricated frame. These formats often slip past viewers unnoticed because they resemble authentic content.

This makes the threat of AI-generated false content more nuanced, not necessarily more pervasive, since such manipulation relies on editing techniques that were in widespread use long before AI.

Do Deepfakes and Fake Clips Matter as Much as We Fear?

One might argue that AI will enable more convincing fake videos and audio clips of politicians, celebrities, or public figures. While this is a legitimate concern, the overall impact depends less on how convincing individual fakes become and more on whether they displace what users would otherwise watch. If attention, not volume, determines what the algorithms surface, then a flood of more polished fakes is still competing for the same fixed slice of viewing time.
