
Hundreds of thousands of conversations with Elon Musk’s artificial intelligence (AI) chatbot, Grok, have been found to be publicly accessible via search engines, apparently without users’ informed consent.
When Grok users click a button to share a transcript of their interactions, a unique URL is generated for that chat. However, rather than being visible only to the intended recipient, those shared pages also appear to have been indexed by search engines, making the conversations searchable online.
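Search engines will generally index any publicly reachable URL unless the page opts out, typically via an `X-Robots-Tag: noindex` response header or a `<meta name="robots" content="noindex">` tag. As an illustrative sketch only (this is not Grok's actual implementation, and the check is simplified), a shared page can be inspected for those opt-out signals:

```python
import urllib.request
from html.parser import HTMLParser


class RobotsMetaParser(HTMLParser):
    """Collects the content of any <meta name="robots"> tags on a page."""

    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta" and a.get("name", "").lower() == "robots":
            self.directives.append(a.get("content", "").lower())


def is_indexable(url):
    """Return True if neither the X-Robots-Tag header nor a robots
    meta tag asks search engines not to index the page."""
    with urllib.request.urlopen(url) as resp:
        # Header-level opt-out applies even to non-HTML responses.
        if "noindex" in resp.headers.get("X-Robots-Tag", "").lower():
            return False
        body = resp.read().decode("utf-8", errors="replace")
    parser = RobotsMetaParser()
    parser.feed(body)
    return not any("noindex" in d for d in parser.directives)
```

A shared-transcript page that returns neither signal is fair game for crawlers, which is consistent with how pages of this kind end up in search results.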
As of Thursday, a simple Google search revealed that nearly 300,000 Grok chat transcripts had been indexed.
This unexpected exposure has led one expert to describe AI chatbots as a “privacy disaster in progress.”
The BBC has contacted X, the social media platform that owns Grok, for comment. The issue was first reported by Forbes, the business news publication, which found that more than 370,000 Grok user conversations had been indexed by Google.
Transcripts seen by the BBC included a wide range of sensitive and personal queries—from requests for secure password generation and personalized weight-loss meal plans, to in-depth medical questions.
Some transcripts also showed users probing Grok’s boundaries, asking the chatbot to generate content that could be considered dangerous or unethical.
In one alarming example reviewed by the BBC, Grok provided step-by-step instructions for producing a Class A drug in a laboratory setting.
This is not the first instance in which private interactions with AI chatbots have inadvertently become public via “share” features.
OpenAI, the maker of ChatGPT, recently rolled back an experimental feature after shared user conversations began appearing in search engine listings. A spokesperson told BBC News that the company was “testing ways to make it easier to share helpful conversations, while keeping users in control.”
OpenAI emphasized that chats are private by default and that users must actively opt in to make them shareable.
Meta also came under scrutiny earlier this year when user interactions with its Meta AI assistant were made visible through a public “discover” feed in its app.
A Growing Privacy Threat
Even when user account details are anonymized or hidden, shared AI chatbot transcripts often contain sensitive personal information. Experts say the exposure of such data—particularly when shared unknowingly—raises major privacy concerns.
“AI chatbots are a privacy disaster in progress,” said Luc Rocher, associate professor at the Oxford Internet Institute, in a statement to the BBC.
Rocher explained that leaked chatbot conversations have revealed a wide range of personal data, from names and locations to deeply private matters concerning mental health, business strategies, or intimate relationships.
“Once leaked online, these conversations will stay there forever,” he warned.
Carissa Véliz, Associate Professor in Philosophy at Oxford University’s Institute for Ethics in AI, also criticized the lack of transparency in how AI systems handle user data.
She said the fact that users weren’t clearly informed that shared chats could appear in search engine results is “problematic.”
“Our technology doesn’t even tell us what it’s doing with our data, and that’s a problem,” she added.
As the popularity of AI chatbots continues to grow, experts argue that much more needs to be done to ensure user privacy, data protection, and transparency in how these platforms handle sensitive information.