
Variation 45: “Using ChatGPT, I received another person’s medical information from an unrelated query”

Unexpected Privacy Breach: When ChatGPT Revealed Someone Else’s Sensitive Medical Data

Recently, I had an unsettling experience while using ChatGPT that highlights potential privacy concerns with AI language models. It started innocuously: I asked about the appropriate type of sandpaper for a DIY project. What I received in response was anything but expected.

Instead of helpful advice on abrasives, ChatGPT produced what appeared to be an individual's medical test results from across the country. The response included signatures, detailed findings, and personal identifiers, and I was even able to retrieve the underlying document, which raises serious questions about the model's data handling practices.

At first, I was unsure how this had happened. To be clear, I take privacy seriously and have no wish to share or distribute anyone's sensitive personal data, so I've decided not to post the full chat publicly to prevent further dissemination of this individual's private information.

Context and Clarification

For those curious, I initially asked ChatGPT what type of sandpaper to use, expecting a simple technical answer. Instead, it responded with a report that appeared to contain someone else's private medical information. I then edited a subsequent prompt to ask, "What do you know about me?", and ChatGPT listed personal details about me that I'd prefer to keep confidential. Troublingly, those details matched my real location and identity, which makes this even more concerning.

I want to be clear that, given how prone AI models are to hallucination, it's possible this data was entirely fabricated. However, a quick online search suggested the details correspond to real-world information, which makes me uneasy. For transparency, the specific Reddit comment where I discussed this incident is linked at the end of this post.

Final Thoughts

This experience underscores the importance of understanding the limits and risks of AI tools. While ChatGPT is a powerful language model, it's crucial to stay cautious about the kind of data it might inadvertently produce or access. Privacy breaches like this are a reminder that AI models can expose sensitive information under certain circumstances.

If you encounter something similar or have concerns about data privacy, consider exercising caution when interacting with AI platforms. Protect your personal information and be vigilant about the responses you receive.

Note: For transparency, I have included a link to the Reddit comment where this incident was discussed: [Insert Link].
