
I am confident that AI will not play a role in amplifying misinformation.

The Impact of AI on Disinformation: Will It Worsen the Problem?

In recent discussions, there’s been growing concern that artificial intelligence might significantly amplify the spread of disinformation online. Many fear that AI-generated content could flood social media platforms, making it harder to discern truth from falsehood at scale. While these worries are understandable, I believe the scenario might not be as dire as some suggest.

A common argument is that as AI-generated content proliferates—particularly on social media—the volume of misleading or false information will grow exponentially. But when I consider typical human behavior and content consumption patterns, I see a different picture.

For instance, if I or anyone else spends a set amount of time browsing platforms like TikTok, the number of videos we view remains relatively consistent—roughly 100 to 150 short clips regardless of AI involvement. Introducing AI-generated videos doesn’t necessarily increase that number; it just alters the nature of what’s available. The same applies to disinformation: the sheer amount of harmful content humans have created over the years is already vast. Adding a new layer of AI-produced content doesn’t inherently increase the proportion of disinformation one encounters during regular consumption.

The types of content most engaging to viewers tend to be consistent—cat videos, humorous fails, emotional political snippets, and miscellaneous entertainment. Whether AI generates some of this content or not, the distribution of what people typically watch doesn't change much. Therefore, I don't believe AI advancements alone will lead to an overwhelming surge in exposure to disinformation, because the amount of content any individual actually consumes stays roughly fixed.

It’s worth noting that disinformation isn’t always blatant falsehoods. Sometimes, it’s more subtle—a heavily edited clip or a misleadingly framed segment. For example, a clip of a celebrity or politician edited to convey a false message can be more convincing and insidious than direct lies. Social media algorithms often amplify these formats because they’re engaging, even if they are deceptive.

The potential for AI to create doctored videos of public figures saying things they never said is real, but the impact may be less severe than many expect. Against the vast tide of existing disinformation, such manipulated content might just add to the noise without drastically changing the overall landscape.

In summary, while AI can and will eventually contribute to the spread of disinformation, I don't see it fundamentally transforming how much or how quickly we encounter false content. Human behavior, platform algorithms, and content consumption patterns play a significant role in shaping what reaches us.
