Will Artificial Intelligence Really Accelerate the Spread of Disinformation? A Balanced Perspective

In ongoing discussions about the impact of Artificial Intelligence on our information landscape, a common concern is that AI will significantly exacerbate the flow of disinformation. The narrative suggests that as AI-generated content becomes more prevalent, the online ecosystem will flood with misinformation, making it even harder to discern truth from fiction.

This worry stems from the observation that social media platforms are already saturated with low-quality, often misleading content, much of it AI-produced. From there, the assumption follows naturally: more AI-generated material means more disinformation, further overwhelming users.

However, I believe this perspective oversimplifies the situation. Consider my own digital habits: when I scroll through short-form videos on TikTok, my consumption plateaus at a fairly consistent point, roughly 100 to 150 videos per session, regardless of whether the content is human-created or AI-generated. Introducing AI-generated content doesn't increase the amount I consume; it only changes the source.

Moreover, the sheer volume of disinformation humans have created over the years is already staggering—so much so that adding even more AI-produced falsehoods doesn’t significantly alter my personal exposure or attention span. My viewing choices are primarily driven by entertainment, and my engagement with disinformation is limited by human cognition, not by the availability of content.

The formats used in digital media also play a critical role. Disinformation often manifests subtly, embedded in videos with edited clips or manipulative framing, rather than straightforward lies. For instance, a clip of a politician saying something taken out of context, combined with a provocative caption, can mislead viewers without outright fabricating statements. These methods are effective because they fit seamlessly into familiar content formats—making them more insidious.

A possible concern is the emergence of doctored videos featuring celebrities or politicians saying things they never uttered. While this is technically a form of disinformation, I argue that in the context of the vast, noisy media landscape, it may not significantly alter the overall information environment. People are already exposed to a torrent of visual and textual manipulation, and our consumption habits tend to filter out what’s not engaging or believable.

In conclusion, while AI will undoubtedly introduce new tools for creating disinformation, I believe it won't lead to a proportional increase in genuine exposure for most users. Our media consumption behaviors, coupled with the familiar formats disinformation already exploits, place a natural ceiling on how much of it any of us actually absorbs.
