
I used an AI for 7 months to search for a Theory of Everything. I failed. And it’s the best thing that could have happened.

The Unexpected Wisdom of a Seven-Month AI Journey in the Search for a Theory of Everything

Reflections on using artificial intelligence as a tool for rigorous scientific inquiry.


For years, the narrative around artificial intelligence (AI) has framed it as a kind of equation-generating machine: an artifact built to crunch data and help solve the universe's deepest mysteries. Yet after dedicating seven months to working closely with an AI in my quest for a Theory of Everything, I have come to see this technology in a very different light. Its real power is not in generating answers but in serving as a mirror that exposes the limitations and biases within our own reasoning.

The Misuse of AI in Scientific Exploration

In forums like the Theory of Everything subreddit, AI-related posts are common, and a pattern often emerges: people use AI to validate their hypotheses, seeking reassurance and confirmation rather than genuine scrutiny. The answers you get are shaped by how you pose the question: asking the AI to confirm an idea and challenging it to find flaws lead to dramatically different outcomes. The danger lies in treating AI as an oracle that always agrees with us, rather than as a critical partner that pushes us to think deeper.

A Process of Challenging Assumptions

My journey did not culminate in an elegant, final theory. Instead, it was about embracing an uncomfortable but invaluable process: testing ideas rigorously, even brutally. I started with a seemingly powerful hypothesis, a dynamic “ether” that explained certain cosmic phenomena. Working daily with my AI companion, I was initially dazzled by promising results, eager for validation and recognition. But I deliberately shifted my approach: instead of seeking affirmation, I sought conflict.

This meant challenging my own assumptions and demanding that the AI scrutinize my theories for flaws. Every optimistic result was pushed further: subjected to validation, re-examination, and testing against real data. The AI, in turn, became a relentless critic, highlighting inconsistencies and exposing weak points I hadn't considered.

Transformative Lessons in Coding and Thought

Through this iterative process, I learned to code in Python at levels I hadn’t imagined possible. More importantly, my relationship with knowledge evolved. The hypothesis that once seemed so compelling—the idea of a cosmic dynamic ether—was ultimately dismantled by real data. It failed spectacularly. And that failure was perhaps the most honest, most illuminating moment of all.
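To give a flavor of what "testing against real data" looks like in practice, here is a minimal sketch of that kind of loop. Everything in it is hypothetical: the `ether_model` function, its parameters, and the synthetic "observations" are stand-ins invented for illustration, not the project's actual code or data.

```python
# Illustrative sketch only: a toy hypothesis is fitted to (synthetic) data
# and judged by goodness of fit, not by how encouraging the AI sounds.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(42)

def ether_model(z, a, b):
    """Hypothetical 'dynamic ether' prediction for a distance-like quantity."""
    return a * z + b * z**2

# Stand-in for real observations (e.g., a published distance catalog):
# here, synthetic data drawn from a *different* underlying law, with noise.
z = np.linspace(0.01, 1.0, 50)
observed = 1.2 * z + 0.4 * z**1.5 + rng.normal(0.0, 0.05, z.size)
sigma = np.full(z.size, 0.05)

# Best-fit parameters of the toy model against the data.
params, _ = curve_fit(ether_model, z, observed, sigma=sigma)

# Reduced chi-squared: values well above ~1 signal that the hypothesis
# fails the data, no matter how promising earlier results seemed.
residuals = (observed - ether_model(z, *params)) / sigma
chi2_red = np.sum(residuals**2) / (z.size - len(params))
print(f"best-fit params: {params}, reduced chi^2: {chi2_red:.2f}")
```

The value of a loop like this is that the verdict comes from a number, the quality of the fit, rather than from the AI's tone.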

It reinforced a core philosophy I now hold firmly: a hypothesis honestly broken by data teaches far more than one comfortably confirmed.
