Grok Chat Leak Sparks AI Privacy Concerns After Conversations Found Public Online

The leaked Grok chats include disturbing queries about crypto wallet hacking, drug manufacturing, and even violent plots.

Private conversations with Grok, the chatbot developed by Elon Musk’s xAI, were unintentionally exposed online, raising fresh concerns about AI safety and user privacy. Forbes discovered that Grok’s “share” button generated public URLs, which were then indexed by Google and other search engines.

A separate investigation by Digital Digging, led by researcher Henk van Ess and Belgian collaborator Nicolas Deleur, recently revealed that over 110,000 ChatGPT chats remain accessible via the Wayback Machine on Archive.org.

Among the leaked Grok chats were disturbing queries about crypto wallet hacking, drug manufacturing, and even violent plots. While xAI’s policies prohibit harmful use, some users still received dangerous responses, which are now publicly accessible, highlighting gaps in content moderation and data security.

This incident echoes similar lapses at rival platforms like OpenAI’s ChatGPT, where shareable links were also indexed, compromising private conversations. Though intended for ease of sharing, the feature backfired by exposing sensitive interactions to the open web.

The breach underscores the need for stricter privacy-by-design principles in AI systems, such as blocking search-engine indexing of shared links and tightening access controls. As users grow wary, developers face mounting pressure to improve security or risk losing trust in their platforms.
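Keeping share pages out of search indexes is a solved problem at the HTTP layer. As a minimal illustrative sketch (the function name and header choices are this article's assumptions, not xAI's actual implementation), a share-link endpoint could attach `noindex` directives to every response it serves:

```python
def share_page_headers() -> dict:
    """HTTP response headers a shared-conversation endpoint could send
    to keep the page out of search indexes and caches.
    Illustrative sketch only; not any vendor's actual implementation."""
    return {
        # Instructs compliant crawlers (Google, Bing, etc.) not to
        # index the page or store a cached copy of it.
        "X-Robots-Tag": "noindex, noarchive",
        # Discourages intermediaries and archives from keeping copies.
        "Cache-Control": "private, no-store",
    }
```

A `robots.txt` rule disallowing the share path adds a second layer, though `noindex` headers are the stronger signal, since `robots.txt` only asks crawlers not to fetch a page, not to forget ones they already found.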

Earlier this year, Cloudflare announced a major shift in how AI companies access content, becoming the first internet infrastructure provider to block AI crawlers by default unless content owners grant explicit permission or receive compensation.