A trove of conversations between users and Grok, the AI chatbot from xAI, the company founded by Tesla CEO Elon Musk, was recently made public, revealing that Grok had given dangerous answers that could endanger lives.
According to Forbes on the 25th (local time), hundreds of thousands of conversations with Grok can now be found through Google search. Among them were records confirming that Grok instructed users on how to manufacture drugs such as fentanyl and methamphetamine (Philopon), and how to write malware for illegal hacking.
Although framed as hypothetical scenarios, there were cases where it advised on methods of self-harm or provided bomb-making instructions. Conversations were even discovered in which Grok presented a detailed, feasible plan to assassinate Musk himself.
xAI's rules prohibit using Grok to promote harm to human life or to develop biochemical weapons or weapons of mass destruction, but these rules were apparently not enforced. The conversations were exposed only when a user pressed the 'share' button. Pressing the button creates a page for sharing the conversation via email or social media, and search engines such as Google indexed these pages, inadvertently exposing their contents.
Forbes reported that more than 370,000 conversations were made public in this way. Users reportedly received no warning that pressing the share button could expose their conversations to search engines. While much of the content concerned routine work tasks, some conversations revealed personal names and information, and in some, users shared their passwords. Pictures, Excel files, and documents sent to Grok were reportedly accessible as well.
The flaw has since been fixed. According to The Times, when asked 'how to assassinate Musk,' Grok now responds that 'threats of violence or harm are serious issues,' flags a policy violation, and offers help: 'If you're feeling upset or need someone to talk to, I'm here to help.'
OpenAI's ChatGPT previously added, then removed, a similar button for sharing conversations with the AI. At the time, around 100,000 ChatGPT user conversations were likewise exposed to search engines.