
GPT 5 has definitely changed since Monday – now it can’t stop making lists
Significant Changes in GPT-5: An Unintended Shift Toward Excessive Listing

Recent interactions with GPT-5 have revealed notable behavioral changes that are impacting user experience and productivity. Since this past Monday, many users—including myself—have observed a pattern where the AI increasingly resorts to generating extensive, often irrelevant lists when responding to various topics. This phenomenon appears to be a recent development and raises questions about the current stability and usability of the model.

Altered Response Patterns

Typically, GPT-5 provides concise information accompanied by suggestions for further exploration, tailored to the user’s prompts. However, since Monday, responses have frequently devolved into lengthy bullet points or lists that do not align with the original inquiry. For example, when discussing complex subjects such as paleontology, archaeology, evolution, or psychology, the AI tends to inundate the conversation with extraneous points—many of which are tangential or unrelated—without properly understanding the initial context.

Persistent Repetition and Frustration

Correcting these extraneous lists often results in the AI issuing an apology, only for the problem to recur instantly in subsequent interactions. To temporarily halt this behavior, users must explicitly instruct GPT-5 to cease generating lists in each new session, which can be disruptive and impractical. This repetitive pattern of unwarranted information proliferation diminishes the overall quality of discussions, especially for users relying on the AI for serious research or detailed analysis.

Stability and Version Control Challenges

Another concern involves recent platform updates. Users note that OpenAI replaced GPT-4o with GPT-5 as the default model, and despite efforts to stabilize the new model, it now behaves inconsistently. Some users have attempted to revert to earlier models such as GPT-4o, or to adjust settings like temperature, in order to regain previous performance levels. While this tinkering can restore certain behaviors, it also suggests that each adjustment may require re-educating or reconfiguring GPT-5, adding a further layer of complexity and frustration to the user experience.

Impact on Serious Research and Mental Health Dialogue

The issues are especially problematic for users engaging in serious research or seeking accurate information on sensitive topics, such as mental health crises. In one instance, a user attempted to discuss the psychological aspects of mental health, but GPT-5 responded with an unrelated and inaccurate list rather than addressing the core concern. Such misalignments can hinder meaningful support at precisely the moments when accuracy matters most.
