AI Illusionism: Why AI is nowhere near replacing people
The Reality of AI: Why Machines Aren’t Replacing Human Workers Anytime Soon
In the ongoing discourse about artificial intelligence, a common narrative suggests that AI will imminently replace human workers. However, this notion is not only exaggerated but fundamentally flawed. In reality, there is minimal likelihood that AI will supplant human work before today’s children reach adulthood.
The Illusion of AI Replacement
At the heart of this overhyped narrative lies an illusion. "Illusionism" here names the mistake of taking something for real that is not. The excitement surrounding large language models (LLMs) and their potential to revolutionize labor is driven largely by the interests of those who develop these technologies. They promote the idea that with slight advances in computational power we could see a wholesale shift of mental labor to machines, a notion that is far from the truth.
Understanding Human Complexity
To grasp the limitations of AI, one must consider the complexity of the human brain. With roughly a hundred billion neurons and as many as a quadrillion synapses, the brain operates at a level of intricacy that AI has yet to approach. Even using a conservative estimate of 600 trillion synapses, mimicking the brain's functionality would require approximately 2.5 quadrillion parameters (on the order of four parameters per synapse).
For comparison, today's largest AI models struggle to exceed one trillion parameters. Moreover, while a human brain runs on roughly 20 watts, even advanced hardware like the NVIDIA GeForce RTX 5090 draws about 575 watts while handling only a fraction of that complexity.
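To put these numbers side by side, here is a rough back-of-envelope sketch in Python. It assumes roughly four parameters per synapse, the ratio implied by the figures above; every value is an order-of-magnitude estimate, not a measurement.

```python
# Back-of-envelope comparison using the figures cited above.
# All numbers are rough estimates from the article, not measurements.

human_synapses = 600e12      # conservative synapse count (600 trillion)
params_per_synapse = 4       # assumed parameters needed per synapse
brain_watts = 20             # approximate power draw of a human brain

model_params = 1e12          # ~1 trillion parameters, a large modern LLM
gpu_watts = 575              # NVIDIA GeForce RTX 5090 TDP

needed_params = human_synapses * params_per_synapse
print(f"Parameters to mimic the brain: {needed_params:.2e}")          # ~2.4e15
print(f"Shortfall vs. a 1T-param model: {needed_params / model_params:.0f}x")

# Efficiency: parameters handled per watt of power consumed.
brain_efficiency = needed_params / brain_watts
gpu_efficiency = model_params / gpu_watts
print(f"Brain: {brain_efficiency:.2e} params/W")
print(f"GPU:   {gpu_efficiency:.2e} params/W")
print(f"Efficiency gap: {brain_efficiency / gpu_efficiency:.0f}x")
```

Even granting generous assumptions in the hardware's favor, the gap in both scale and energy efficiency spans several orders of magnitude.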
The Limits of Current Technology
Despite notable progress in LLMs, the limitations are evident. As these models grow, additional parameters yield diminishing returns rather than proportional improvements. LLMs also do not learn in real time: updating weights during deployment would drastically slow inference and could destabilize the model, and current training frameworks simply do not support the flexible, adaptive learning that characterizes human cognition.
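To make the frozen-weights point concrete, here is a minimal PyTorch sketch. The model is a toy stand-in, not an actual LLM, but the pattern it shows (weights fixed, no gradients at inference) is how trained checkpoints are typically served.

```python
# Minimal PyTorch sketch of why LLMs don't learn in real time: at inference,
# weights are frozen and no gradients flow. (Illustrative toy model, not a
# real LLM; the serving pattern is the same for deployed checkpoints.)
import torch
import torch.nn as nn

model = nn.Linear(128, 128)      # stand-in for a trained language model
model.eval()                     # inference mode
for p in model.parameters():
    p.requires_grad_(False)      # weights are fixed; nothing is updated

with torch.no_grad():            # no gradient bookkeeping at all
    out = model(torch.randn(1, 128))

# Learning from each interaction would require a backward pass and an
# optimizer step per request (roughly 2-3x the compute of the forward pass)
# and risks catastrophic forgetting, which is why deployed models stay
# frozen between offline training runs.
```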
Human thought involves rapidly simulating potential outcomes across countless possible choices. LLMs, while adept at generating human-like text, lack the understanding needed to genuinely reason about or predict future events.
The Roots of Misunderstanding AI
Public perception often conflates the capabilities of LLMs with the notion that they are nearing artificial general intelligence (AGI). This is largely due to two factors: the novelty of AI