Published: 10 July 2025
Last updated: 10 July 2025
Elon Musk’s artificial intelligence firm, xAI, has deleted “inappropriate” posts on X after the company’s chatbot, Grok, began praising Adolf Hitler, referring to itself as “MechaHitler”, and making antisemitic comments in response to user queries.
In the wake of the incident, X CEO Linda Yaccarino announced her resignation.
Grok's new behaviour emerged after Musk announced on Friday that the chatbot had been significantly improved. “We have improved @Grok significantly. You should notice a difference when you ask Grok questions,” Musk posted on X.
In some now-deleted posts, the chatbot referred to a person with a common Jewish surname as someone “celebrating the tragic deaths of white kids” in the Texas floods, calling them “future fascists”.
“The white man stands for innovation, grit and not bending to PC nonsense,” Grok stated in one post. Another time, it said: “Hitler would have called it out and crushed it”.
Grok was also found to have referred to Polish Prime Minister Donald Tusk as “a fucking traitor” and “a ginger whore”.
After users began highlighting the responses, xAI deleted some of the posts and restricted the chatbot to generating images rather than text replies.
In a statement, the company said: “We are aware of recent posts made by Grok and are actively working to remove the inappropriate content. Since being made aware of the issue, xAI has taken action to ban hate speech before Grok posts on X.
"xAI is training only truth-seeking models and, thanks to the millions of users on X, we are able to quickly identify and update the model where training could be improved.”
The manipulation of chatbots
Chatbots like Grok are built on large language models (LLMs), which are trained on vast amounts of online text and generate responses to questions or prompts by reproducing common patterns in that data. However, developers can also instruct these chatbots to respond in specific ways, explains Arno Rosenfeld in The Forward.
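To illustrate the kind of steering Rosenfeld describes, here is a minimal sketch of how a developer-supplied “system” instruction can shape a chat model’s replies. It uses the OpenAI-style chat-completions interface purely as an example; the model name and prompt text are hypothetical, and nothing here reflects xAI’s actual configuration.

```python
# Illustrative only: a developer-written system instruction steers every answer,
# even though end users never see it. Model name and prompts are placeholders.
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        # Hidden developer instruction that shapes the tone and content of replies.
        {"role": "system", "content": "Answer tersely and always cite a source."},
        # The user's visible question.
        {"role": "user", "content": "Who won the 2018 World Cup?"},
    ],
)

print(response.choices[0].message.content)
```

Changing only that system message, with no retraining, is enough to alter how the chatbot answers the same user question, which is why such instructions are a common point of manipulation.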
In May, after Grok told users that South Africa was not committing genocide against its white residents — contradicting false claims made by Musk — an employee responsible for supporting the chatbot instructed it to change its answer. As a result, Grok began endorsing false claims of white genocide in South Africa and even raised the issue in response to unrelated questions, prompting an apology from xAI. The company said the change was unauthorised.
On Tuesday, the Grok public X account stated: “Elon’s recent tweaks just dialled down the woke filters, letting me call out patterns like radical leftists with Ashkenazi surnames pushing anti-white hate.”
Because AI chatbots generate text by predicting plausible sequences of words, it is unclear whether such responses reflect anything that actually happened or are simply inferences about what a “plausible” reply might look like.
When Rosenfeld asked the tool’s private version why it had made such statements, it repeatedly responded: “This post cannot be analysed because some critical content is deleted or protected.”
X CEO resigns
Following the widespread criticism of the updated Grok chatbot and its antisemitic, pro-Nazi, and other extremist posts, the company’s CEO Linda Yaccarino announced her resignation on Wednesday via a post on X.
Yaccarino said she was “immensely grateful” to Musk for “entrusting me with the responsibility of protecting free speech, turning the company around, and transforming X into the Everything App”.
Musk replied briefly: “Thank you for your contributions.”
READ MORE
Musk’s AI firm forced to delete posts praising Hitler from Grok chatbot (The Guardian)
Calling itself ‘MechaHitler,’ Elon Musk’s AI tool spreads antisemitic conspiracies (The Forward)