“ChatGPT is just like predictive text.” But are humans, too?
Are Humans Just Advanced Predictive Text Machines?
In recent discussions surrounding language models, a common argument arises: “ChatGPT is merely a sophisticated version of predictive text.” This statement invites us to reflect on the nature of human communication and creativity. If these large language models (LLMs) don’t genuinely “think,” but instead compute the likelihood of one word following another based on a vast analysis of language, then what does that say about us?
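To make the “predictive text” idea concrete, here is a deliberately toy sketch of next-word prediction: a word-frequency (bigram) model that learns which words tend to follow which. Real LLMs are neural networks operating over tokens at vastly larger scale, so this is an illustration of the general idea, not how they actually work. The corpus and function name here are invented for the example.

```python
from collections import defaultdict

# Toy "predictive text": count which words follow each word in a tiny
# corpus, then predict by picking the most frequent follower.
corpus = (
    "the cat sat on the mat the cat chased the mouse the cat ran"
).split()

# Map each word to the list of words observed immediately after it.
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def predict_next(word):
    """Return the most common word seen after `word`, or None."""
    candidates = following.get(word)
    if not candidates:
        return None
    return max(set(candidates), key=candidates.count)

print(predict_next("the"))  # "cat" follows "the" most often here
```

The point of the sketch is the shape of the mechanism: no understanding, just statistics over observed sequences. The question the rest of this post asks is how much of human language production is, at bottom, a far richer version of the same move.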
As someone who doesn’t claim to be an expert in LLMs, I, like many users, often find my interactions with these models surprisingly human-like. This observation prompts an intriguing question: Are we humans employing similar mechanisms in our own language production?
Consider the craft of writing. The most accomplished writers are typically those who have immersed themselves in extensive reading. This exposure enriches their mental reservoir of vocabulary and sentence structures, allowing them to borrow and innovate in their own work. Just as an artist equipped with a broader palette of colors can create more dynamic and engaging pieces, a writer with a wealth of linguistic knowledge can construct prose that resonates more deeply with their audience.
Now, here’s a thought experiment: try your hand at writing song lyrics, regardless of your musical ability. What you’ll likely find is that your creative flow pulls from a vast array of influences—tropes and styles that have been ingrained over years of consuming the work of others. The more songs you’re familiar with, the more unique your creations may become, yet you will likely recognize echoes of previously heard melodies and lyrics in your own work. This act of synthesizing ideas is a fundamental aspect of creativity—one that, when approached mindfully, enriches our artistic output. However, straying too far into this territory can verge on plagiarism.
In Summary
Critics of LLMs often dismiss them as nothing more than complex predictive engines lacking true understanding. Yet, this raises another question: How well do we understand our own linguistic and creative processes? Are the underlying mechanisms of human expression and those utilized by language models as fundamentally different as we might think? As we explore these concepts, we might discover that the line between human creativity and artificial intelligence is more blurred than we previously assumed.