Will AI-Generated Disinformation Really Make the Problem Worse? An In-Depth Analysis

I don’t think AI is going to make disinformation worse.

As discussions around Artificial Intelligence continue to evolve, many experts and enthusiasts express concern that AI could significantly amplify the spread of misinformation and disinformation. The prevailing worry is that AI’s ability to produce vast quantities of content at scale might flood social media platforms with unreliable or false information, making it harder for users to discern truth from fiction.

The Assumption: More AI-Generated Content Equals More Disinformation?

It’s easy to assume that because AI can cheaply generate “junk” or misleading content, the proliferation of such material will inevitably lead to a deluge of disinformation online. Looking across social media’s vast landscape, a significant portion of content, be it videos, images, or text, already involves some degree of AI-driven generation or manipulation. From there it seems to follow that increasing the supply of AI-produced content would mean even more disinformation circulating among users.

Challenging the Narrative

However, I believe this perspective oversimplifies the situation. To illustrate, consider the common experience of browsing a platform like TikTok: whether the content is human-made or AI-generated, most users watch a similar amount, roughly 100 to 150 short videos per session. Introducing AI-generated content doesn’t inflate that number; it stays fairly constant.

While AI can produce more content, people’s consumption habits are relatively stable. Having more disinformation available doesn’t guarantee it reaches or influences viewers more than before, because our engagement is driven by what we find entertaining or relevant, not by the sheer volume of content on offer.
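To make this attention-budget argument concrete, here is a minimal back-of-envelope sketch in Python. It is not from the post; BUDGET, BASE_SHARE, and the supply multipliers are illustrative assumptions. It contrasts the naive view, in which exposure scales with the raw supply of disinformation, with the view argued here, in which a fixed viewing budget and a ranking-determined feed share cap exposure regardless of supply.

```python
# Back-of-envelope sketch: does 10x more disinformation supply mean 10x
# more exposure? Only if exposure scales with supply. If a viewer watches
# a fixed number of videos chosen by an engagement-driven ranker, exposure
# is budget * feed_share, and raw supply drops out of the equation.
# All numbers below are illustrative assumptions, not measurements.

def expected_exposure(budget: int, feed_share: float) -> float:
    """Expected count of disinformation videos seen in one session."""
    return budget * feed_share

BUDGET = 120        # within the 100-150 videos-per-session range cited above
BASE_SHARE = 0.02   # assumed: 2% of today's ranked feed is disinformation

for mult in (1, 10, 100):
    # Naive view: the feed share grows in lockstep with raw supply.
    naive = expected_exposure(BUDGET, min(1.0, BASE_SHARE * mult))
    # Attention-limited view: ranking holds the feed share roughly constant.
    gated = expected_exposure(BUDGET, BASE_SHARE)
    print(f"supply x{mult:>3}: naive={naive:6.1f} videos/session, "
          f"attention-limited={gated:4.1f} videos/session")
```

Under the attention-limited assumption, expected exposure stays at about 2.4 videos per session no matter how much the supply grows, because the binding constraints are the viewer’s time and the feed’s composition, not the total volume of content in existence.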

Disinformation in the Context of Content Consumption

Furthermore, the formats in which disinformation is embedded often make it less conspicuous. For example, a clip edited so that a celebrity or politician appears to say something they never did may not register as blatantly false the way a straightforward lie would. A provocative phrase or a manipulated clip slipped into an otherwise innocuous video can sway perceptions subtly.

This type of content often blends seamlessly with genuine material, making it hard to distinguish from authentic information. Yet the amount of disinformation we encounter through everyday media consumption hasn’t changed dramatically in recent years, despite the rise of AI.

The Human Factor and Content Filters

Our preferences and consumption patterns heavily influence what we see. Typically, we select content that aligns with our interests, whether that’s cat videos or anything else, and platform filters then reinforce those choices by serving us more of the same.
