Has the context window that Gemini Advanced 2.5 Pro retains in a conversation been drastically reduced? I'm two hours into prepping it with loads of info, rules, etc., and it's forgotten almost all of the first half.
Concerns Over the Reduced Context Window in Gemini Advanced 2.5 Pro
Recently, I've encountered a significant issue with Gemini Advanced 2.5 Pro that I feel compelled to share. After spending two hours feeding it a wealth of information and rules, I've noticed that it seems to have lost most of the details from the first half of our conversation. This is surprising, given that this model was previously praised for handling lengthy dialogues seamlessly.
I understand that prolonged discussions with large language models (LLMs) aren't typically ideal, but I had high expectations for the 2.5 Pro version given its impressive context window. It's frustrating to see such a drop in performance, particularly after investing substantial time and effort into preparing the session with URLs and relevant data.
The question now arises: what models should I consider moving forward? My primary use case centers around content research and writing, and I currently subscribe to the $20 monthly tiers for ChatGPT, Gemini, and Claude. Unfortunately, it seems that all three have experienced notable reductions in functionality recently, leaving me uncertain about their viability for my work.
Is there a trend of these companies prioritizing developers over general users, or is this a problem unique to my experience? It's hard to grasp how Gemini Advanced 2.5 Pro, which initially delivered remarkable performance, could regress to this point. Disappointment is an understatement, and I hope this is a temporary setback rather than a permanent decline in service.