Will Artificial Intelligence Worsen Disinformation? A Critical Perspective
The rise of artificial intelligence has sparked widespread concerns about its potential role in amplifying misinformation and disinformation. Many experts and observers worry that as AI-generated content becomes more prevalent, the volume of false or misleading information will surge, making it even harder to discern truth from fiction. However, it’s worth examining whether this fear is justified or if the impact of AI on disinformation might be overstated.
One common argument holds that AI, capable of producing vast amounts of low-quality or “junk” content, will flood social media feeds and thereby increase exposure to disinformation. But consider scrolling through TikTok or another short-form video platform: a typical session might involve watching around 100 to 150 videos. Whether those videos are human-made or AI-crafted, the number you actually see stays the same. A user’s attention budget, not the supply of content, is the binding constraint, and that attention is already saturated with an enormous amount of human-generated misinformation. Adding AI-produced falsehoods to the pool doesn’t necessarily mean you’ll be exposed to more of them.
Furthermore, the algorithms that curate our feeds prioritize engagement based on personal interests, not the factual accuracy of the content. For many users, media consumption revolves around entertainment—cats, comedy fails, political clips, or miscellaneous viral content—rather than political propaganda or malicious disinformation. As a result, the proportion of disinformation relative to overall content remains roughly constant, regardless of whether some of it is AI-generated.
Additionally, AI-produced disinformation often takes subtler forms than blatant falsehoods. For example, edited clips, misleading snippets, or slightly altered images—sometimes featuring well-known figures—can be more convincing than outright lies. Presenting a politician’s statement out of context or editing clips to suggest something they never said can be particularly insidious because it’s easier for viewers to accept as genuine. Yet, given the vast scale of online media and how users typically consume content passively, such tactics may not significantly change the overall landscape of misinformation.
The primary potential impact of AI-generated disinformation might lie in its ability to create hyper-realistic doctored videos or audio—deepfakes—that mimic real personalities. While this is a concerning development, it’s important to recognize that the existing ecosystem of misinformation is already overwhelming. The addition of highly convincing AI fakes adds complexity but does not necessarily revolutionize the scale or nature of the problem.
In conclusion, while AI can undoubtedly produce convincing misinformation, its overall effect on the information landscape may be smaller than feared: the amount of falsehood we encounter is bounded by our attention, not by the supply of content, and that supply was already overwhelming before AI arrived.