I don’t think AI is going to make disinformation worse.

Understanding AI and the Future of Disinformation: A Balanced Perspective

In recent discussions, a common concern has emerged: will advanced AI technologies exacerbate the spread of misinformation and disinformation? Many fear that as AI-generated content becomes more prevalent, so too will the volume of misleading information, potentially overwhelming the current information landscape.

The Case for an AI-Driven Disinformation Surge

Proponents of this view point to the sheer amount of “junk” content already circulating on social media. Feeds on platforms like TikTok are inundated with low-quality or misleading material, much of it already generated or manipulated with AI. From there it seems natural to assume that as AI tools become more sophisticated and widespread, the flood of disinformation will grow accordingly.

Counterpoint: Human-Generated Content Has Been Flooding Our Feeds for Years

However, I remain skeptical that AI will dramatically increase the volume of disinformation beyond what we’ve already experienced. Consider this: no matter how much content exists, a person scrolling through social media consumes only a limited amount in a sitting, perhaps 100 to 150 videos or posts. Whether those items were created by humans or generated by AI doesn’t change that ceiling.

While AI can produce content in effectively unlimited quantities, human attention is the bottleneck: most of what is produced is never seen, regardless of its origin. The core issue isn’t the growth in the supply of disinformation, but how we engage with the content that already reaches us. And the mix of content that holds our interest, whether entertainment, shocking clips, or political commentary, stays relatively stable over time.
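
To make that intuition concrete, here is a minimal simulation sketch in Python. Every number in it is an illustrative assumption rather than a measurement: a 20% junk share in the pool, a 150-item viewing session, and a feed modeled as a uniform random sample. Under those assumptions, growing the content pool a hundredfold leaves per-session exposure to junk unchanged, because exposure depends on the junk share and the session length, not on the size of the pool.

```python
import random

def junk_seen(pool_size: int, junk_share: float = 0.2,
              session_items: int = 150, trials: int = 500) -> float:
    """Average junk items seen per session, under illustrative assumptions:
    the feed is a uniform random sample of the pool, and a fixed fraction
    `junk_share` of the pool is junk."""
    junk_count = int(pool_size * junk_share)
    total = 0
    for _ in range(trials):
        # Draw one session without replacement; indices below junk_count
        # stand in for junk items.
        session = random.sample(range(pool_size), session_items)
        total += sum(1 for item in session if item < junk_count)
    return total / trials

# A 100x larger pool (say, flooded with AI-generated content) yields
# the same per-session exposure as long as the junk share is stable:
for pool in (10_000, 100_000, 1_000_000):
    print(f"pool={pool:>9,}  junk seen per session ~ {junk_seen(pool):.1f}")
# All three print roughly 30.0 (150 items x 20% junk share).
```

Real feeds are ranked rather than uniformly sampled, of course; the sketch only illustrates that raw volume, by itself, does not change what a bounded viewer sees.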

Disinformation Often Takes Subtle Forms

Furthermore, AI-generated misinformation isn’t always a blatant falsehood. Much of it is embedded in formats designed to feel natural or engaging: a selectively edited clip of a public figure, or a misleadingly framed meme, can sway opinions without ever registering as deceptive.

The more novel risk is doctored clips of politicians or celebrities saying things they never said. This is a genuine new capability, but its overall impact may be smaller than anticipated once you consider how people already consume and process media. The tendency to engage with sensational, entertaining, or emotionally resonant content persists whether or not that content is AI-crafted.

Final Thoughts

In essence, the proliferation of AI-generated disinformation may not necessarily lead to an exponential increase in exposure to it. Our feeds were saturated long before generative AI arrived; the binding constraint is human attention, not the supply of content.
