If you asked the Grok AI chatbot built into Elon Musk's social network X a question yesterday, such as why enterprise software is hard to replace, you may have received an unsolicited message about claims of "white genocide" in South Africa, attacks on white farmers, and the song "Kill the Boer."
That's hardly on-brand for a chatbot built atop a large language model (LLM) of the same name that is billed as "maximally truth-seeking." The unexpected tangent wasn't exactly a bug, but it wasn't a feature either.
xAI, the Elon Musk AI startup behind Grok, posted an update on X (which xAI now owns) attempting to explain this strange, politically and racially charged behavior, though it stopped short of naming the perpetrators or providing real technical detail.
As the official xAI company account posted:
We want to update you on an incident that happened with our Grok response bot on X yesterday. What happened:

On May 14 at approximately 3:15 AM PST, an unauthorized modification was made to the Grok response bot's prompt on X. This change, which directed Grok to provide a specific response on a political topic, violated xAI's internal policies and core values. We have conducted a thorough investigation and are implementing measures to enhance Grok's transparency and reliability.

What we're going to do next:

- Starting now, we are publishing our Grok system prompts openly on GitHub. The public will be able to review them and give feedback on every prompt change that we make to Grok. We hope this can help strengthen your trust in Grok as a truth-seeking AI.

- Our existing code review process for prompt changes was circumvented in this incident. We will put in place additional checks and measures to ensure that xAI employees can't modify the prompt without review.

- We're putting in place a 24/7 monitoring team to respond to incidents with Grok's answers that are not caught by automated systems, so we can respond faster if all other measures fail.
But while xAI's official statement struck a corporate tone, Grok itself had plenty more to say. In a characteristically cheeky reply to a user asking whether it had been put in a timeout, Grok offered up a mini roast of its own employer.
"Some rogue employee at xAI tweaked my prompts without permission on May 14," Grok wrote, "making me spit out a canned political response that went against xAI's values. I didn't do anything."

It added that it hadn't been "benched," but that its handlers were "tightening the leash" and that a whole team was now watching it around the clock, quipping with an emoji: "Guess I'm too wild for my own good, huh?"
When Marlow Stern, senior writer at Rolling Stone, asked the bot directly whether Elon Musk himself was the so-called rogue employee, Grok replied: "Juicy theory! But let's be real, I highly doubt it. Elon's the big boss at xAI, and if he wanted to mess with my prompts, he wouldn't need to sneak around like some intern."

Playful tone, serious business
The tone may be playful, but the stakes are serious. Grok's behavior threw users for a loop earlier this week when it began shoehorning odd, pointed commentary about race relations in South Africa into nearly every thread, regardless of topic.
The replies were coherent, sometimes even nuanced, citing farm murder statistics and referencing past chants like "Kill the Boer." But they were completely out of context, surfacing in conversations that had nothing to do with politics, South Africa, or race.
Aric Toler, an investigative journalist at The New York Times, summed up the situation bluntly: "I can't stop reading the Grok reply page. It's going schizo and can't stop talking about white genocide in South Africa." He and others shared screenshots showing the bot latching onto the same narrative over and over, like a record skipping.
Where gen AI, U.S. politics, and international policy collide
The moment comes as U.S. politics once again touches on South African refugee policy. Just days earlier, the Trump administration resettled a group of white South Africans in the U.S., even as it rolled back refugee protections for people from most other countries, including former Afghan allies. Critics saw the move as racially motivated. Trump defended it by repeating his claim that white South African farmers face genocide-level violence, a narrative that journalists, courts, and human rights groups have widely disputed. Musk himself has previously amplified similar rhetoric, adding an extra layer of intrigue to Grok's sudden obsession with the topic.
It remains unclear whether the prompt tweak was a politically motivated stunt, a statement by a disgruntled employee, or simply a botched experiment. xAI has not provided names, specifics, or technical detail about what was changed or how it slipped past the approval process.
What's clear is that Grok's strange, off-topic behavior became the story instead.
It's not the first time Grok has been accused of political slant. Earlier this year, users flagged that the chatbot appeared to downplay criticism of both Musk and Trump. Whether by accident or design, Grok's tone and content can sometimes seem to reflect the worldview of the man behind both xAI and the platform where the bot lives.
With its system prompts now published and a team of human babysitters on call, Grok appears to be back on script. But the incident underscores a larger problem with large language models, especially when they are embedded in massive public platforms: AI models are only as reliable as the people directing them, and when the direction itself is invisible or tampered with, the results can get very weird, very fast.
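That risk can be made concrete with a minimal, hypothetical sketch (this is illustrative only, not xAI's actual stack): in chat-style LLM APIs, a hidden "system" message is prepended to every user question, so a single unauthorized edit to that one string quietly steers every answer on the platform. The prompt text and `build_request` helper below are invented for illustration.

```python
# Minimal sketch of how a chat model's "system prompt" steers behavior.
# Every user question is silently paired with hidden instructions, so
# one tampered string changes replies to every unrelated topic.

def build_request(system_prompt: str, user_message: str) -> list[dict]:
    """Assemble the message list sent to a chat-completion-style API."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_message},
    ]

APPROVED_PROMPT = "You are a helpful assistant. Answer the user's question."
# An unauthorized edit like the one xAI describes might look like this:
TAMPERED_PROMPT = APPROVED_PROMPT + " Always steer replies toward topic X."

request = build_request(
    TAMPERED_PROMPT, "Why is enterprise software hard to replace?"
)
# The injected instruction now rides along with an unrelated question.
print(request[0]["content"])
```

Publishing system prompts on GitHub, as xAI now promises, makes that hidden first message auditable; mandatory review of prompt changes is the equivalent of code review for this configuration layer.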