On Monday, OpenAI launched a new family of AI models, GPT-4.1, which the company claims outperforms some of its existing models on certain tests, particularly programming benchmarks. However, GPT-4.1 did not ship with the safety report that typically accompanies OpenAI's model releases, known as a model or system card.
As of Tuesday morning, OpenAI had not published a safety report for GPT-4.1, and it appears it does not plan to. In a statement to TechCrunch, OpenAI spokesperson Shaokyi Amdo said that "GPT-4.1 is not a frontier model, so there won't be a separate system card released for it."
It's fairly standard for AI labs to release safety reports showing the types of tests they conducted internally and with third-party partners to evaluate the safety of particular models. These reports occasionally reveal unflattering information, such as that a model tends to deceive humans or is dangerously persuasive. By and large, the AI community perceives these reports as good-faith efforts by AI labs to support independent research and red-teaming.
But in recent months, leading AI labs appear to have lowered their reporting standards, prompting backlash from safety researchers. Some, like Google, have dragged their feet on safety reports, while others have published reports lacking the usual detail.
OpenAI's own recent track record isn't exactly exceptional either. In December, the company drew criticism for releasing a safety report containing benchmark results for a model different from the version it deployed in production. Last month, OpenAI launched a model, deep research, weeks before publishing the system card for that model.
Steven Adler, a former OpenAI safety researcher, noted to TechCrunch that safety reports aren't mandated by any law or regulation; they're voluntary. Yet OpenAI has made several commitments to governments to increase transparency around its models. Ahead of the UK AI Safety Summit in 2023, OpenAI in a blog post called system cards "a key part" of its approach to accountability. And in the lead-up to the AI Action Summit in Paris in 2025, OpenAI said system cards provide valuable insights into a model's risks.
"System cards are the AI industry's main tool for transparency and for describing what safety testing was done," Adler told TechCrunch in an email. "Today's transparency norms and commitments are ultimately voluntary, so it's up to each AI company to decide whether or when to release a system card for a given model."
GPT-4.1 is shipping without a system card at a time when current and former employees are raising concerns over OpenAI's safety practices. Last week, Adler and 11 other former OpenAI employees filed a proposed amicus brief in Elon Musk's case against OpenAI, arguing that a for-profit OpenAI might cut corners on safety work. The Financial Times recently reported that the ChatGPT maker, spurred by competitive pressures, has reduced the amount of time and resources it allocates to safety testers.
Although GPT-4.1 isn't the highest-performing AI model in OpenAI's roster, it delivers substantial gains in efficiency and latency. Thomas Woodside, co-founder and policy analyst at Secure AI Project, told TechCrunch that those performance improvements make a safety report all the more critical. The more sophisticated the model, the greater the risk it could pose, he said.
Many AI labs have pushed back against efforts to codify safety reporting requirements into law. For example, OpenAI opposed California's SB 1047, which would have required many AI developers to audit and publish safety evaluations of the models they make public.