Could AI Make Disinformation Worse? A Closer Look
In recent discussions, many have expressed concern that artificial intelligence will exacerbate the spread of disinformation, enabling malicious actors to produce and disseminate false content at an unprecedented scale. The logic seems straightforward: with AI capable of generating vast amounts of “junk” or misleading information, the volume of disinformation will skyrocket.
However, I believe this risk may be overstated.
To illustrate my point, consider the everyday activity of scrolling through a social media platform such as TikTok. Whether the videos are made by humans or generated by AI, the number of clips a typical user watches stays roughly constant (about 100 to 150 per session for most users). Even if AI increases the sheer volume of content, genuine or fabricated, the number of items a user actually encounters barely changes. What shifts is the nature of that content.
It’s important to recognize that disinformation has been rampant for years, produced by humans at enormous scale, yet our consumption patterns have not drastically changed. That’s because our engagement is driven largely by entertainment and interest rather than by the origin or authenticity of the content. The algorithms feeding us content prioritize whatever we engage with most, whether cat videos or political commentary, regardless of whether it’s genuine or manipulated.
Furthermore, much of the subtle influence of disinformation is tied to format and presentation rather than outright false claims. For example, a clip edited to appear as if a celebrity or politician said something they never actually said can introduce misleading ideas in a much more insidious way than straightforward fake news. These formats often don’t look obviously deceptive, making it easier for disinformation to seep into our feeds.
Some might argue that AI will enable the creation of convincing fake videos—so-called “deepfakes”—featuring public figures saying things they never said. This is a legitimate concern, but viewed against how media is actually consumed and the sheer volume of disinformation that already exists, I don’t believe it will significantly alter the landscape.
In essence, the challenge isn’t solely about the quantity of fake content but about how we process and interpret information. The patterns of consumption tend to focus on entertainment and engagement, not the veracity of every piece encountered.
What are your thoughts? Do you believe AI poses a greater threat to the integrity of information, or are these fears perhaps overstated?