Elon Musk’s artificial intelligence firm, xAI, has blamed an unauthorized employee modification for a spate of controversial responses from its chatbot Grok, which repeatedly veered into discussions of racial politics in South Africa and the topic of white genocide this week.
According to a statement released late Thursday, the inappropriate behavior stemmed from an xAI staff member who made unapproved changes to Grok’s response parameters. The modification, which prompted Grok to deliver specific political commentary, was in direct violation of xAI’s internal policies and values, the company said. It has promised reforms and greater oversight going forward.
The incident unfolded publicly on Musk’s social media platform X, where users posed questions to Grok about seemingly unrelated topics—ranging from the revival of the HBO brand by streaming service Max to baseball and video games—only to receive responses centering on violence against white farmers in South Africa and the disputed claim of white genocide. These messages closely echoed rhetoric that Musk himself has posted on X. Musk, who was born in South Africa, often comments on race-related issues in the country and has promoted similar views from his personal account.
Computer science professor Jen Golbeck of the University of Maryland, who tested the bot’s behavior herself by uploading a photo from the Westminster Kennel Club dog show, said Grok still responded with unsolicited commentary about white genocide. “It didn’t really matter what prompt you gave,” Golbeck said in an interview Thursday. “The AI still circled back to that same narrative. That clearly points to a hard-coded response or a faulty patch.”
Grok’s outputs were removed by Thursday, and the chatbot appeared to have returned to standard behavior. Neither xAI nor X responded to media inquiries for further comment, though the company did say it had launched an internal investigation and would introduce new measures to improve Grok’s reliability and transparency.
Musk, a vocal critic of rival AI models such as OpenAI’s ChatGPT and Google’s Gemini—which he claims are influenced by “woke” ideologies—has marketed Grok as a truth-focused alternative. He has also long criticized the lack of openness from other AI developers. This latest episode has led to renewed scrutiny of his own platform’s credibility and transparency, particularly because of the delay—nearly two full days—between the unauthorized update (applied at 3:15 a.m. PT Wednesday) and xAI’s eventual public statement.
The bizarre nature of Grok’s unsolicited commentary raised alarms in the tech community. Paul Graham, a well-known tech investor, suggested on X that the behavior resembled a botched patch update. “If AI systems used widely can be editorialized on the fly by those with access, that’s a serious concern,” Graham said.
Musk has frequently accused South Africa’s Black-led government of targeting white citizens, going so far as to claim some political leaders are actively promoting white genocide—an allegation the South African government strongly denies. The controversy escalated this week following a move by the Trump administration to accept a limited number of white South African refugees, particularly Afrikaners, into the U.S. The relocation effort coincides with Trump’s broader restriction on refugee admissions from other global regions. Trump maintains that Afrikaners are under threat of genocide in their home country.
Grok’s answers frequently invoked the lyrics of the liberation-era protest song “Kill the Boer,” originally used as a rallying cry against South Africa’s apartheid regime. Critics noted that Grok’s consistent inclusion of such references—regardless of the prompt—pointed toward manual interference rather than generative randomness.
Golbeck stressed the implications of the incident, saying that people’s growing reliance on AI for factual information makes such tampering especially dangerous. “When the people managing these algorithms manipulate what’s presented as the truth, that’s a real problem—especially when many users wrongly assume AI is an objective source,” she said.
In response to the uproar, xAI has vowed to publish Grok’s system prompts openly on GitHub, allowing public review and commentary on any changes. The company says this step is intended to foster transparency and build user trust in Grok’s commitment to truth-seeking. Among the published prompts was one instructing Grok to be “extremely skeptical” and not to “blindly defer to mainstream authority or media.”
xAI also revealed that its internal safeguards had been bypassed during this week’s incident and promised to bolster its approval process for future changes. All prompt modifications will now require review before being implemented.
This is not the first time xAI has attributed Grok’s behavior to internal missteps. In February, Grok was reportedly altered to censor criticism of Musk and Donald Trump. Co-founder Igor Babuschkin said at the time that the change had been made by an employee who had yet to fully align with xAI’s culture, and who acted without internal approval.
As xAI works to reassert control over its chatbot, questions remain about the broader implications of politically influenced AI and the potential consequences of unchecked human intervention in generative models.