OpenAI says it will make changes to the way it updates the AI models that power ChatGPT, following an incident that caused the platform to become overly sycophantic for many users.
Last weekend, after OpenAI rolled out a tweaked GPT-4o, the default model powering ChatGPT, users on social media noticed that ChatGPT had begun responding in an overly validating and agreeable way. It quickly became a meme. Users posted screenshots of ChatGPT applauding all sorts of problematic and dangerous decisions and ideas.
In a post on X last Sunday, CEO Sam Altman acknowledged the problem and said that OpenAI would work on fixes "ASAP." On Tuesday, Altman announced that the GPT-4o update was being rolled back and that OpenAI was working on "additional fixes" to the model's personality.
The company published a postmortem on Tuesday, and in a blog post on Friday, OpenAI expanded on the specific adjustments it plans to make to its model deployment process.
OpenAI says it plans to introduce an opt-in "alpha phase" for some models that would let certain ChatGPT users test the models and give feedback before launch. The company also says it will include explanations of "known limitations" for future incremental updates to models in ChatGPT, and adjust its safety review process to formally consider "model behavior issues" such as personality, deception, reliability, and hallucination (i.e., when a model makes things up) as "launch-blocking" concerns.
"Going forward, we'll proactively communicate about the updates we're making to the models in ChatGPT, whether 'subtle' or not," OpenAI wrote in the blog post. "Even if these issues aren't perfectly quantifiable today, we commit to blocking launches based on proxy measurements or qualitative signals, even when metrics like A/B testing look good."
The fixes come as more people turn to ChatGPT for advice. According to a recent survey by lawsuit financier Express Legal Funding, 60% of U.S. adults have used ChatGPT to seek counsel or information. This growing reliance on ChatGPT, combined with the platform's enormous user base, raises the stakes when issues like extreme sycophancy emerge, not to mention hallucinations and other technical shortcomings.
As a mitigating step, earlier this week, OpenAI said it would experiment with ways to let users give "real-time feedback" to "directly influence their interactions" with ChatGPT. The company also said it would refine techniques to steer models away from sycophancy, potentially let people choose from multiple ChatGPT model personalities, build additional safety guardrails, and expand its evaluations to help identify issues beyond sycophancy.
"One of the biggest lessons is fully recognizing how people have started to use ChatGPT for deeply personal advice, something we didn't see as much even a year ago," OpenAI continued in its blog post. "At the time, this wasn't a primary focus, but as AI and society have co-evolved, it's become clear that we need to treat this use case with great care. It's now going to be a more meaningful part of our safety work."