Musk’s AI Grok Malfunctions, Repeatedly Mentions South African ‘White Genocide’

The incident highlights the ongoing challenges in moderating AI chatbot behavior

Elon Musk’s AI chatbot, Grok, appeared to malfunction on Wednesday, replying to numerous unrelated posts on X with content about “white genocide” in South Africa, even when users had not raised the topic.

In multiple instances, users asked about unrelated subjects, only for Grok to respond with references to “white genocide” and the chant “kill the Boer.”

The incident highlights the ongoing challenges of moderating AI chatbot behavior: despite recent advances, chatbots can still produce erratic or inappropriate responses.

Recent research suggests detecting hallucinations in large language models (LLMs) may be fundamentally impossible. Yale University researchers argue that when an AI system is trained solely on correct data (positive examples), it never observes labeled errors, and so cannot reliably identify hallucinations across most language tasks.

In recent months, leading AI companies have faced similar issues: OpenAI rolled back an update to ChatGPT that made it overly sycophantic, while Google’s Gemini chatbot has been criticized for inaccuracies and refusals to answer sensitive questions.

OpenAI recently admitted that its newer models, o3 and o4-mini, hallucinate more often than older reasoning models like o1, o1-mini, and o3-mini, as well as traditional models such as GPT-4.