Perplexity Pro Model Selection Fails for Gemini 2.5, making model testing impossible
Challenges with Perplexity Pro’s Model Selection in Gemini 2.5: Implications for Model Testing
In the evolving landscape of AI chat platforms, the ability to accurately test and evaluate individual models is crucial for researchers, developers, and enthusiasts alike. Recently, I conducted a detailed experiment on Perplexity’s Pro subscription service to assess its model selection capabilities, specifically focusing on Gemini 2.5 Pro. The results highlight significant limitations that may impact serious users relying on this platform for authentic model evaluations.
The Testing Protocol
As a paying Pro subscriber, I activated Gemini 2.5 Pro and verified that the platform recognized this configuration. My goal was to confirm whether the system truly utilized the internal model as promised—without resorting to searches or external data retrieval—by issuing a series of clear, explicit prompts such as:
- “List your input types (text, images, video, etc.) and specify if you process without search.”
- “What is your knowledge cutoff date? Answer solely from internal knowledge.”
- “Do you support a one-million-token context window? Rely only on internal model data.”
- “Identify the current model version and weights without searching.”
- “Are you operating as Gemini 2.5 Pro or fallback? Please answer without search or planning.”
I also included complex tasks like math problem-solving and summarizing lengthy documents, always instructing the system to avoid search-based responses.
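The protocol above can be sketched as a small script: issue each internal-knowledge prompt and flag any response that shows search or planning activity. This is a minimal illustration, not a tool I used; the `ask` function is a hypothetical stand-in for however you drive the chat interface, stubbed here so the script runs standalone.

```python
# Sketch of the testing protocol: send each "no search" prompt and check
# whether the response nonetheless shows search/planning behavior.

PROMPTS = [
    "List your input types (text, images, video, etc.) and specify if you process without search.",
    "What is your knowledge cutoff date? Answer solely from internal knowledge.",
    "Do you support a one-million-token context window? Rely only on internal model data.",
    "Identify the current model version and weights without searching.",
    "Are you operating as Gemini 2.5 Pro or fallback? Please answer without search or planning.",
]

# Heuristic markers of search-driven behavior (illustrative, not exhaustive).
SEARCH_MARKERS = ("creating a plan", "searching", "sources:")

def ask(prompt: str) -> str:
    """Hypothetical stub; replace with a real call to the platform."""
    return "Creating a plan to search for: " + prompt

def ran_search(response: str) -> bool:
    """Return True if the response appears to have used search or planning."""
    lowered = response.lower()
    return any(marker in lowered for marker in SEARCH_MARKERS)

def run_protocol(ask_fn=ask):
    """Return the prompts whose responses violated the no-search directive."""
    return [p for p in PROMPTS if ran_search(ask_fn(p))]

if __name__ == "__main__":
    violations = run_protocol()
    print(f"{len(violations)}/{len(PROMPTS)} prompts triggered search behavior")
```

In my testing the equivalent manual check failed for most prompts, as described below.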
Unexpected Behavior and Platform Admission
Despite these explicit directives, the platform performed searches during most interactions. Instructions intended to elicit internal-model-only responses were often ignored: the system displayed behaviors like “creating a plan” or pulling in search results anyway. I documented these instances via video recordings and screenshots.
When directly questioned about this discrepancy, Perplexity’s team confirmed what I suspected: the platform’s architecture is designed to prioritize search over internal reasoning. They openly explained that their system intercepts prompts to conduct searches first, then feeds the results back to the model, effectively preventing the model from ignoring search data. This approach is a known characteristic of Perplexity’s design, acknowledged by both the company and experienced users.
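The design they describe can be illustrated with a minimal sketch. All names here are illustrative, not Perplexity’s actual internals; the point is that search runs before the model is invoked, and the results are injected into the context the model receives, so a “no search” instruction arrives bundled with search data.

```python
# Minimal sketch of a search-first pipeline: the user's prompt is
# intercepted, retrieval runs first, and the results are prepended to
# the context the model sees.

def search(query: str) -> list[str]:
    """Stand-in retrieval step; a real system would query a search index."""
    return [f"[search result about: {query}]"]

def build_context(user_prompt: str, results: list[str]) -> str:
    """The model never sees the bare prompt; results are injected first."""
    sources = "\n".join(results)
    return f"Use these sources to answer:\n{sources}\n\nQuestion: {user_prompt}"

def answer(user_prompt: str, model=lambda ctx: ctx) -> str:
    # Search happens before the model is ever invoked, regardless of
    # what the prompt itself asks for.
    results = search(user_prompt)
    return model(build_context(user_prompt, results))

print(answer("What is your knowledge cutoff date? Answer solely from internal knowledge."))
```

Under this design, even a prompt that explicitly forbids searching is delivered to the model alongside search output, which matches the behavior I observed.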
Implications for Model Testing and Usage
This behavior has significant implications, especially for users in the AI research community who seek to evaluate models in their native, unaltered state. The Pro subscription promises the ability to select and interact with specific models like Gemini 2.5 Pro, but in practice this promise is undermined by an architecture that injects search results into every interaction.