xAI's promised safety report is MIA

Elon Musk's artificial intelligence company, xAI, has missed a self-imposed deadline to publish a finalized AI safety framework, as noted by the watchdog group The Midas Project.

xAI isn't exactly known for its strong commitments to AI safety as it's commonly understood. A recent report found that the company's chatbot, Grok, would undress photos of women when asked. Grok can also be considerably cruder than chatbots such as Gemini and ChatGPT, cursing without much restraint to speak of.

Nevertheless, in February at the AI Seoul Summit, a global gathering of AI leaders and stakeholders, xAI published a draft framework outlining the company's approach to AI safety. The eight-page document laid out xAI's safety priorities and philosophy, including the company's benchmarking protocols and considerations around deploying AI models.

As The Midas Project noted in a blog post Tuesday, however, the draft applied only to unspecified future AI models "not currently in development." Moreover, it failed to articulate how xAI would identify and implement risk mitigations, a core component of the document the company signed at the Seoul summit.

In the draft, xAI said it planned to release a revised version of its safety policy "within three months," that is, by May 10. The deadline came and went without acknowledgment on xAI's official channels.

Despite Musk's frequent warnings about the dangers of unchecked AI, xAI has a poor AI safety track record. A recent study by SaferAI, a nonprofit that aims to improve the accountability of AI labs, found that xAI ranks poorly among its peers, owing to its "very weak" risk management practices.

That's not to suggest other AI labs are faring dramatically better. In recent months, xAI's rivals, including Google and OpenAI, have rushed safety testing and have been slow to publish model safety reports (or have skipped publishing reports altogether). Some experts have expressed concern that the apparent deprioritization of safety efforts comes at a time when AI is more capable, and therefore potentially more dangerous, than ever.
