Seems like the weekly usage for Plus users on GPT-5-Codex via CLI is ~5.5M tokens? (My Experience)
Analyzing Weekly Usage of GPT-5-Codex via CLI for Plus Subscribers
In recent days, I’ve been exploring the capabilities and operational patterns of GPT-5-Codex, particularly through its command-line interface (CLI). My experience offers some insights into token consumption, model behavior, and potential cost implications for dedicated users. While these observations are anecdotal and lack formal verification, they may be valuable for others who are considering or already utilizing GPT-5-Codex for extended workflows.
Initial Engagement and Usage Patterns
Over a span of approximately two days, I engaged with GPT-5-Codex via CLI, running continuous prompts and observing the model's responsiveness and behavior. One notable feature of GPT-5-Codex is its ability to automatically compact the conversation context, which facilitates long-running workflows but also influences token management and costs.
During a four-hour session that involved multiple prompts, my token usage reached roughly 2.2 million tokens in a single CLI window. This figure is an estimate based on the session’s activity and provides a rough sense of operational costs. The auto-compacting feature helps preserve context across lengthy interactions, making it ideal for projects requiring sustained dialogue, despite the increased token consumption.
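For a rough sense of the burn rate those figures imply, here is a back-of-the-envelope sketch; both inputs are my own estimates from a single session, not measured values:

```python
# Back-of-the-envelope burn rate from the session above. Both inputs are
# rough personal estimates, not precisely measured values.
session_tokens = 2_200_000   # ~2.2M tokens observed in one CLI window
session_hours = 4            # approximate session length

tokens_per_hour = session_tokens / session_hours
print(f"~{tokens_per_hour:,.0f} tokens/hour")   # -> ~550,000 tokens/hour
```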
Model Behavioral Observations
Throughout my testing, I observed instances of what I refer to as "hallucinations" or behavior loops, where the model would repeatedly state that it had checked a particular aspect when, in fact, no such verification had occurred. This behavior led me to manually restart the CLI window roughly ten times in total to reset the context and mitigate the issue. These interruptions, while somewhat disruptive, were manageable and felt like part of the iterative testing process.
Estimating Weekly Token Usage and Costs
By analyzing the progression of token consumption, which reached a cumulative total of approximately 2.8 million tokens by the end of my sessions, I estimate that weekly token usage could lie within the range of 5 to 6 million tokens. Given the $20/month subscription fee for the Plus tier, this translates to a remarkably economical cost per token, especially considering the response volume involved.
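To put "remarkably economical" into numbers, here is a rough sketch using the midpoint of my 5 to 6 million weekly estimate; the weekly figure itself is unverified, and weeks-per-month is approximated:

```python
# Rough cost per token under the $20/month Plus subscription.
# The weekly figure is the midpoint of my own 5-6M estimate.
weekly_tokens = 5_500_000
weeks_per_month = 4.33           # average weeks in a month
monthly_fee = 20.00              # USD

monthly_tokens = weekly_tokens * weeks_per_month
cost_per_million = monthly_fee / (monthly_tokens / 1_000_000)
print(f"~{monthly_tokens / 1e6:.0f}M tokens/month, "
      f"~${cost_per_million:.2f} per 1M tokens")
# -> ~24M tokens/month, ~$0.84 per 1M tokens
```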
For comparison, API usage for similar token volumes (covering input, output, caching, and batching) would typically cost more, roughly $30 to $40 per month. While precise cost calculations vary with actual token counts and batching strategies, my experience suggests that CLI-based usage under a Plus subscription offers a significant cost advantage.
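As a sanity check on that comparison, here is an illustrative calculation. The per-million prices and the input/output/cache mix below are assumptions I chose for illustration, not published figures; substitute current API pricing before drawing any conclusions:

```python
# Hypothetical API cost for the same monthly volume. All prices and the
# token mix here are assumptions for illustration only.
tokens_m = 24.0               # ~24M tokens/month, from the estimate above
input_price = 1.25            # assumed USD per 1M uncached input tokens
cached_input_price = 0.125    # assumed cache-hit price per 1M tokens
output_price = 10.00          # assumed USD per 1M output tokens

# Assumed mix: agentic CLI traffic is mostly re-sent context, much of it cached.
mix = {"cached_input": 0.70, "fresh_input": 0.20, "output": 0.10}

api_cost = tokens_m * (mix["cached_input"] * cached_input_price
                       + mix["fresh_input"] * input_price
                       + mix["output"] * output_price)
print(f"~${api_cost:.0f}/month under these assumptions")   # -> ~$32/month
```

Shifting the assumed mix toward output tokens, or away from cache hits, pushes the total up quickly, which is why the $30 to $40 range should be treated as a loose estimate rather than a firm figure.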


