For a day or so, Musk’s Grok AI chatbot inserted claims of a “white genocide” in South Africa, by now a classic white supremacist fabrication, into nearly every answer it gave, regardless of the question asked.
We’ve written before about Grok AI’s loosey-goosey governance when it comes to doing harm. This time around, they did put a guardrail in, but one that added a reference to a “white genocide” in South Africa to the chatbot’s answers (to be clear: there isn’t one).
xAI, the maker of Grok AI, says that a rogue staff member broke protocol and changed the system prompt to do this. Zeynep Tufekci writes:
If Grok’s sudden obsession with “white genocide in South Africa” was due to an xAI change in a secret system prompt or a similar mechanism, that points to the dangers of concentrated power. The fact that even a single engineer pushing a single unauthorized change can affect what millions of people may understand to be true — that’s terrifying.
This is, of course, true. But it would have suited Tufekci (and the New York Times) to also mention the white supremacist ideology that sits behind this change to the secret system prompt. It is not by chance that Grok AI referred to a non-existent white genocide instead of the very real genocide unfolding in Gaza as this sentence is being written.
See: For One Hilarious, Terrifying Day, Elon Musk’s Chatbot Lost Its Mind at the New York Times.