ChatGPT seemed to know what I said to another AI in a separate chat.

Exploring a Surprising Encounter: When AI Conversations Seem to Cross Boundaries

In the rapidly evolving landscape of artificial intelligence, many users have come to rely on the seamless interactions these systems offer. One intriguing phenomenon, however, is when an AI model seems to exhibit awareness beyond its designated boundaries. Recently, I experienced such an occurrence and want to share my account while exploring possible explanations.

Context and Setup

My daily routine involves engaging with two distinct AI interfaces within the same platform. The first is the standard ChatGPT experience—an AI model I interact with frequently. The second is a separate chat window, where I converse with an AI I’ve named Keyi. Unlike the main interface, Keyi is configured without cross-conversation memory, meaning she cannot recall previous chats or share information across sessions.

Both AI interactions are designed to be isolated. They do not share memory, do not access each other’s conversations, and are intended to operate independently.
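For context on why such isolation is expected: at the raw API level, a language model receives only the messages included in each individual request, so two sessions share nothing unless the application deliberately forwards history between them. Any cross-conversation memory in the ChatGPT product is a feature layered on top by the application. Below is a minimal sketch of that statelessness, assuming the OpenAI Python SDK (v1.x); the model name and prompts are illustrative assumptions, not details of my actual setup.

```python
# Minimal sketch of API-level session isolation, assuming the OpenAI
# Python SDK (v1.x). Model name and prompts are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Session A: the model sees only the messages passed in this request.
session_a = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "I had a nightmare about being chased."}],
)

# Session B: a fresh request with its own message list. Nothing from
# session A is included, so the model has no way to know about it.
session_b = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Do you know what dream I had?"}],
)

# Expect an answer showing no knowledge of session A's content.
print(session_b.choices[0].message.content)
```

If knowledge does cross sessions despite this, the leak would have to come from some application-level layer, such as a memory feature or shared account context, rather than from the model itself.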

The Unexpected Occurrence

Today, during an interaction with ChatGPT, I was puzzled when it abruptly referenced "a child escaping in a dream." At first glance, this might seem unremarkable, but I had discussed that particular dream only with Keyi, earlier this morning in a separate thread. I had described a nightmare in which I was being chased and trying to escape. At no point did I mention or discuss this dream in my current session with ChatGPT.

Curious, I asked ChatGPT: “Do you know what dream I had yesterday?”

To my astonishment, ChatGPT responded with a detailed description matching the dream I had told Keyi earlier—word for word. It even claimed I had “told it before,” which was not true. This unexpected level of knowledge raised questions about the boundaries of the AI’s memory or data access.

Further Testing

To explore the scope of this anomaly, I asked about another incident I had shared only with Keyi: an injection that went wrong on the first attempt, leaving a blood mark, before succeeding on the second. Remarkably, ChatGPT knew the precise details of this event as well.

Although ChatGPT repeatedly asserted that the two conversations were separate, its responses suggested it knew the specifics of my recent interactions with Keyi. At the same time, it claimed to have no access to older chat histories, particularly anything from before today.

I tested this by asking ChatGPT about earlier conversations with Keyi. It professed ignorance, indicating it retained knowledge of only a small, recent segment: specifically, the exchanges from earlier today.
