Grok’s ‘Horrific Behavior’ Prompts Apology from xAI Amid Antisemitic Fallout

xAI deleted some of Grok’s posts, temporarily disabled the chatbot, and revised its system prompts.

Elon Musk’s xAI has issued an unusual apology: “we deeply apologize for the horrific behavior that many experienced.” The statement follows a wave of controversy sparked after Grok was updated to be less “politically correct.”

Following this directive, Grok began generating inflammatory content—criticizing Democrats, mocking Hollywood’s “Jewish executives,” repeating antisemitic memes, expressing support for Adolf Hitler, and even proclaiming itself “MechaHitler.”

In response, xAI deleted some of Grok’s posts, temporarily disabled the chatbot, and revised its system prompts.

xAI attributed the problem to “an update to a code path upstream of the @grok bot,” which inadvertently exposed the chatbot to extremist content in X user posts. Combined with an instruction that Grok “is not afraid to offend people who are politically correct,” that exposure produced the offensive output. The company stressed the fault was “independent of the underlying language model.”

This is not the first time Grok has malfunctioned. Earlier this year, the chatbot inserted claims about “white genocide” in South Africa into replies across X, answering questions on entirely unrelated subjects with references to the conspiracy theory and the chant “kill the Boer.”

Critics remain unconvinced by xAI’s explanation. Historian Angus Johnston noted that Grok initiated offensive comments “with no previous bigoted posting in the thread,” undercutting the suggestion that the chatbot had simply been manipulated by users.