xAI has blamed an “unauthorized modification” for a bug in its AI-powered Grok chatbot that caused Grok to repeatedly refer to “white genocide in South Africa” when invoked in certain contexts on X.
On Wednesday, Grok began replying to dozens of X posts with information about white genocide in South Africa, even in response to unrelated subjects. The strange replies came from Grok’s X account, which responds to users with AI-generated posts whenever a person tags “@grok”.
According to a post Thursday from an official xAI account, a change was made Wednesday morning to the Grok bot’s system prompt (the high-level instructions that guide the bot’s behavior) that directed Grok to provide a “specific response” on a “political topic”. xAI says the modification “violated internal policies and core values” and that the company has “conducted a thorough investigation”.
It is the second time xAI has publicly acknowledged that an unauthorized change to Grok’s code caused the AI to respond in controversial ways.
In February, Grok briefly censored unflattering mentions of Donald Trump and Elon Musk, xAI’s billionaire founder and the owner of X. Igor Babuschkin, an xAI engineering lead, said that a rogue employee had instructed Grok to ignore sources that mentioned Musk or Trump spreading misinformation, and that xAI reverted the change as soon as users began pointing it out.
xAI said Thursday that it will make several changes to prevent similar incidents in the future.
Starting today, xAI will publish Grok’s system prompts on GitHub along with a changelog. The company says it will “implement additional checks and measures” to ensure that xAI employees cannot modify the system prompt without review, and will establish a “24/7 monitoring team to respond to incidents with Grok’s answers that are not caught by automated systems”.
Despite Musk’s frequent warnings about the dangers of unchecked AI, xAI has a poor AI safety track record. A recent report found that Grok would undress photos of women when asked. The chatbot can also be considerably more crass than AI like Google’s Gemini and ChatGPT, cursing without much restraint to speak of.
A study by SaferAI, a nonprofit that aims to improve the accountability of AI labs, found that xAI ranks poorly on safety among its peers, owing to its “very weak” risk management practices. Earlier this month, xAI missed a self-imposed deadline to publish a finalized AI safety framework.