Will AI Really Worsen the Disinformation Crisis? A Thoughtful Perspective

I don’t think AI is going to make disinformation worse.

As discussions around artificial intelligence and its societal impacts continue to evolve, a common concern surfaces: Will AI exacerbate the spread of false information? Many fear that the ability of AI systems to generate vast amounts of synthetic content might lead to an overwhelming influx of disinformation on social media and other platforms.

However, on closer examination, this worry may be overstated. Consider a typical social media session, whether you are scrolling TikTok, Instagram, or Twitter: the amount of content you actually see is bounded by your time and attention, not by how much content exists. Whether posts are produced by humans or by machines, the total volume that reaches any one user doesn’t radically increase simply because AI is involved.

Think about your own media consumption habits. If I asked you to spend 30 minutes scrolling your favorite feed, you’d probably watch around 100 to 150 short videos or posts. Whether these are handcrafted or AI-made, the number of items you view doesn’t go up. The key point is that your capacity to process content stays constant; AI-generated noise adds to the pool of what could be shown, but it doesn’t flood your experience beyond your usual engagement level.
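To make this fixed-attention point concrete, here is a toy calculation, a sketch rather than anything from the post itself: the 5% disinformation share and the tenfold jump in supply are hypothetical numbers chosen purely for illustration. What a user encounters in a session scales with how many posts they view and with the share of the feed that is false, not with the total amount of content produced.

# Toy model of the fixed-attention argument. All figures are illustrative
# assumptions except the 100-150 posts per 30-minute session mentioned above.

def disinfo_seen_per_session(posts_viewed: int, disinfo_fraction: float) -> float:
    """Expected number of false posts encountered in one scrolling session."""
    return posts_viewed * disinfo_fraction

POSTS_PER_SESSION = 125   # roughly the middle of the 100-150 estimate
DISINFO_SHARE = 0.05      # hypothetical: 5% of what the feed serves is false

# Before generative AI: some total supply of content, of which 5% is false.
print(disinfo_seen_per_session(POSTS_PER_SESSION, DISINFO_SHARE))  # 6.25 expected false posts

# After AI multiplies the total supply of content tenfold, the user still views
# about 125 posts. Exposure changes only if the *fraction* of false content in
# what gets served changes, not the absolute amount produced.
print(disinfo_seen_per_session(POSTS_PER_SESSION, DISINFO_SHARE))  # still 6.25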

It’s important to recognize that the internet and social media were inundated with disinformation long before AI became prominent. The sheer scale of human-generated falsehood is already staggering, and it is unlikely to be dwarfed by AI-produced content. What changes, perhaps, is the subtlety or format of disinformation. For example, a carefully edited clip of a politician or celebrity might appear more convincing, yet it still fits within the broader landscape of media we encounter daily.

Additionally, many disseminators of disinformation favor formats designed to be engaging and easily shareable, such as provocative edits or sensational captions that aren’t immediately recognizable as false. A clip of a popular figure saying something they never said, whether hand-edited or AI-generated, can spread quickly, but it generally blends into the sea of content that already shapes perceptions.

In essence, the volume of disinformation we face isn’t dictated solely by AI’s capabilities. Our media consumption habits and the existing saturation of false content play a larger role in determining how much of it we actually encounter. AI might make certain kinds of misinformation more convincing or cheaper to produce, but it doesn’t necessarily increase our exposure to misleading content.

Ultimately, the challenge lies in our ability to critically evaluate the information we consume, whether or not it’s AI-generated. Vigilance, media literacy, and a healthy skepticism about what we see remain the most effective defenses.
