
Grok’s ‘white genocide’ responses show gen AI tampered with ‘at will’


Muhammed Selim Korkutata | Anadolu | Getty Images

In the two-plus years since generative artificial intelligence took the world by storm following the public release of ChatGPT, trust has been a perpetual problem.

Hallucinations, bad math and cultural biases have plagued results, reminding users that there’s a limit to how much we can rely on AI, at least for now.

Elon Musk’s Grok chatbot, created by his startup xAI, showed this week that there’s a deeper reason for concern: The AI can be easily manipulated by humans.

Grok on Wednesday began responding to user queries with false claims of “white genocide” in South Africa. By late in the day, screenshots of similar answers were being posted across X, even when the questions had nothing to do with the topic.

After remaining silent on the matter for well over 24 hours, xAI said late Thursday that Grok’s strange behavior was caused by an “unauthorized modification” to the chat app’s so-called system prompts, which help inform the way it behaves and interacts with users. In other words, humans were dictating the AI’s response.

The nature of the response, in this case, ties directly to Musk, who was born and raised in South Africa. Musk, who owns xAI in addition to his CEO roles at Tesla and SpaceX, has been promoting the false claim that violence against some South African farmers constitutes “white genocide,” a sentiment that President Donald Trump has also expressed.


“I think it is incredibly important because of the content and who leads this company, and the ways in which it suggests or sheds light on kind of the power that these tools have to shape people’s thinking and understanding of the world,” said Deirdre Mulligan, a professor at the University of California at Berkeley and an expert in AI governance.

Mulligan characterized the Grok miscue as an “algorithmic breakdown” that “rips apart at the seams” the supposed neutral nature of large language models. She said there’s no reason to see Grok’s malfunction as merely an “exception.”

AI-powered chatbots created by Meta, Google and OpenAI aren’t “packaging up” information in a neutral way, but are instead passing data through a “set of filters and values that are built into the system,” Mulligan said. Grok’s breakdown offers a window into how easily any of these systems can be altered to meet an individual or group’s agenda.

Representatives from xAI, Google and OpenAI didn’t respond to requests for comment. Meta declined to comment.

Different from past problems

Grok’s unsanctioned alteration, xAI said in its statement, violated “internal policies and core values.” The company said it would take steps to prevent similar disasters and would publish the app’s system prompts in order to “strengthen your trust in Grok as a truth-seeking AI.”

It’s not the first AI blunder to go viral online. A decade ago, Google’s Photos app mislabeled African Americans as gorillas. Last year, Google temporarily paused its Gemini AI image generation feature after admitting it was producing “inaccuracies” in historical pictures. And OpenAI’s DALL-E image generator was accused by some users of showing signs of bias in 2022, leading the company to announce that it was implementing a new technique so images “accurately reflect the diversity of the world’s population.”

In 2023, 58% of AI decision makers at companies in Australia, the U.K. and the U.S. expressed concern over the risk of hallucinations in a generative AI deployment, Forrester found. The survey in September of that year included 258 respondents.


Experts told CNBC that the Grok incident is reminiscent of China’s DeepSeek, which became an overnight sensation in the U.S. earlier this year due to the quality of its new model and reports that it was built at a fraction of the cost of its U.S. rivals.

Critics have said that DeepSeek censors topics deemed sensitive to the Chinese government. Like China with DeepSeek, Musk appears to be influencing results based on his political views, they say.

When xAI debuted Grok in November 2023, Musk said it was meant to have “a bit of wit” and “a rebellious streak,” and to answer the “spicy questions” that competitors might dodge. In February, xAI blamed an engineer for changes that suppressed Grok’s responses to user questions about misinformation, keeping Musk’s and Trump’s names out of replies.

But Grok’s recent obsession with “white genocide” in South Africa is more extreme.

Petar Tsankov, CEO of AI model auditing firm LatticeFlow AI, said Grok’s blowup is more surprising than what we saw with DeepSeek because one would “kind of expect that there would be some kind of manipulation from China.”

Tsankov, whose company is based in Switzerland, said the industry needs more transparency so users can better understand how companies build…

