The Alarming Revelation: Your ChatGPT Chats Might Be Publicly Indexed
In a development likely to cause considerable unease, some of your conversations with ChatGPT may be accessible to the public. A recent investigative report by Fast Company has brought a disturbing practice to light: Google is indexing shared ChatGPT dialogues, turning exchanges users believed they were sharing only with friends, family, or colleagues into search results visible to anyone. A simple Google search for part of the link generated when a user clicks ChatGPT's 'Share' button (addresses containing chatgpt.com/share) is enough to surface these conversations. Disturbingly, the dialogues sometimes contain profoundly personal information, ranging from struggles with addiction and experiences of physical abuse to serious mental health challenges.
While ChatGPT itself does not reveal user names or handles, the queries themselves, laden with specific personal details, can inadvertently identify their authors. The investigative journalists found approximately 4,500 such dialogues in Google's search results. Although most lacked personally identifiable information, there were notable exceptions, including mentions of names, places of residence, and intimate personal circumstances. Understandably, Fast Company chose not to link directly to these conversations in order to protect the users' privacy.
A Glimpse into Unintended Transparency
In one indexed conversation, a user detailed their intimate life and dissatisfaction with living abroad, disclosing struggles with PTSD and a search for support; the chat delved into family history and relationships with relatives and friends. In another, the discussion revolved around how psychopathic behavior manifests in children and the age at which such traits become apparent. Yet another individual described themselves as a victim of "mental programming," seeking ways to "decode" themselves and ease psychological trauma.
“I am simply shocked,” stated Carissa Veliz, an ethicist at the University of Oxford specializing in AI privacy. “As a privacy researcher, I’m well aware that this kind of data isn’t private. But ‘not private’ is a very broad category. And the fact that Google is indexing these incredibly sensitive conversations is just horrifying.”
While specific data on the use of ChatGPT for psychological support in Ukraine remains scarce (as of May 2025, only 26% of Ukrainians reported practical experience with AI), recent surveys point to a significant trend in the United States, where nearly half of Americans turned to chatbots as therapists in the past year. Of those, roughly three-quarters sought help for anxiety, two-thirds for personal issues, and almost 60% for depression. Against that backdrop, the prospect of seemingly private AI sessions surfacing in Google search results is far from a marginal concern.
Broader Implications for AI and Mental Well-being
Adding to the disquiet, recent studies have shown that AI models sometimes falter in simulated psychotherapy scenarios, refusing to engage with individuals struggling with alcoholism or offering outright inappropriate advice, such as suggesting a list of "high bridges" to someone experiencing depression. While these were experimental settings, the media has increasingly reported real-world accounts of people with mental health conditions interacting with AI, some with tragic consequences, including a fatal police shooting and a teenager's suicide. Google and OpenAI declined to comment on the Fast Company investigation. Sam Altman, CEO of OpenAI, ChatGPT's developer, had previously cautioned users against sharing personal data with the chatbot, citing the possibility of legally compelled disclosure. Furthermore, a prior court ruling has required the company to permanently retain all user conversations with ChatGPT.