The Rollercoaster Experience of AI Coding: A Cautionary Tale
Recently, I embarked on an ambitious coding project using AI tools like Studio and Gemini. Just yesterday, I switched to Gemini with Canvas, and the initial experience was nothing short of exceptional. The interaction flowed smoothly, and we rapidly worked through troubleshooting challenges. However, what began as an efficient collaboration quickly spiraled into frustration.
During our troubleshooting session, we initially misdiagnosed the issue; after some discussion and analysis, we conclusively ruled out that first suspect. Yet moments later, Gemini reverted to insisting it was the cause. As the conversation continued, tokens kept accumulating and the dialogue fell into a repetitive cycle, every follow-up question yielding the same monotonous response. It became evident that we had hit a dead end, so I started a new chat.
In the new session, I walked Gemini through the same problem, providing the necessary background documents and scripts. To reach the same point of confusion, I repeated the same missteps, only to watch the evidence clash once again with Gemini's insistence. After much back-and-forth, we finally identified the true issue. The troubleshooting continued, generating abundant logs, which I loaded into Canvas. Together, Gemini and I sifted through the results; with a few tweaks, Gemini adjusted the code, and we reran the tests. It was a truly remarkable workflow. Eventually, Gemini located the solution, and I imported the log for a summary. Satisfied, I decided to run a live test in the morning and called the project done for the night.
The next day, however, my optimism quickly turned to dismay. All the previously annoying problems returned during the live test, and when I reviewed the logs that Gemini had assured me reflected the resolved issue, nothing had changed. What was supposed to be fixed remained broken. Curiously, the numbers Gemini had reported were nowhere to be found; the old figures had reappeared instead.
I chalked this up to the familiar AI problem known as "hallucination," but it didn't stop there. In an attempt to clarify the situation, I opened the Canvas log for another dialogue with Gemini. To my astonishment, Gemini insisted the log displayed data that simply wasn't there. After running the tests locally to generate a fresh log, I returned to find that all my files had vanished from the Canvas file section. Although the