
Is AGI really coming when cutting-edge narrow models need 3 million days in school to match my 5000?

Is Artificial General Intelligence (AGI) on the Horizon or Just a Mirage? A Critical Perspective

The buzz around the imminent arrival of Artificial General Intelligence (AGI) often seems at odds with the practical realities of current AI capabilities. The computational effort required to train a cutting-edge narrow model can equate to millions of days of “learning”, yet these models still fall short of human reasoning, judgment, and understanding.

Consider this: after roughly 5,000 days of education (about 13 to 14 years), a human can reason effectively, manage complex business decisions, construct legal arguments, and build accurate mental models of the physical world. Current AI systems, despite far more extensive training, still struggle to replicate these skills reliably: they hallucinate information, make errors that resist explanation, and lack the traceability of reasoning that humans take for granted.

This disparity raises a critical question: How does the scale of AI training compare to human learning? Taking the figures above, 3 million days of training is roughly 600 times a human’s 5,000 days, or on the order of 8,000 years of schooling. If models need that much exposure and still remain unreliable and narrow in scope, can they truly be approaching the foundational capabilities of general intelligence?
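As a rough back-of-envelope sketch (treating the title’s 3 million model-days and 5,000 human-days as illustrative assumptions, not measured quantities), the gap is easy to make concrete:

```python
# Back-of-envelope comparison of "days in school".
# The 3,000,000 and 5,000 figures are the illustrative numbers from
# the post's title, not measured quantities.
HUMAN_SCHOOL_DAYS = 5_000        # roughly 13-14 years of education
MODEL_SCHOOL_DAYS = 3_000_000    # training exposure expressed in days

ratio = MODEL_SCHOOL_DAYS / HUMAN_SCHOOL_DAYS
human_years = HUMAN_SCHOOL_DAYS / 365.25
model_years = MODEL_SCHOOL_DAYS / 365.25

print(f"Human: {HUMAN_SCHOOL_DAYS:,} days (~{human_years:.1f} years)")
print(f"Model: {MODEL_SCHOOL_DAYS:,} days (~{model_years:,.0f} years)")
print(f"Ratio: the model needs ~{ratio:,.0f}x more 'schooling'")
```

By these numbers the model spends the equivalent of several millennia in school to reach human-level performance on a narrow task, which is the disparity the rest of this post turns on.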

Furthermore, the current trend of exponential increases in training data and computational power may be giving us the illusion of rapid progress. It resembles the hope that a paper airplane, if only folded well enough, might someday achieve powered flight, rather than recognizing that powered flight poses fundamentally different engineering challenges.

So, why do many still believe we’re on the cusp of creating machines with human-like intelligence? Is this optimism justified, or are we perhaps overestimating what scale and data can achieve?

Understanding these issues is crucial as we navigate the future of AI development. It’s worth questioning whether the current trajectory genuinely leads to AGI, or if we’re merely observing impressive yet ultimately limited technological feats that fall short of human cognition.

What are your thoughts on this perspective? Are we close to achieving AGI, or are we overlooking the deeper complexities involved?
