Google's latest AI model report lacks key safety details, experts say

On Thursday, weeks after launching its most powerful model yet, Gemini 2.5 Pro, Google published a technical report showing the results of its internal safety evaluations. However, the report is light on details, experts say, making it difficult to determine which risks the model might pose.

Technical reports provide useful, and sometimes unflattering, information that companies don't always publicize widely about their AI. By and large, the AI community sees these reports as good-faith efforts to support independent research and safety evaluations.

Google takes a different safety reporting approach than some of its AI rivals, publishing technical reports only once it considers a model to have graduated from the "experimental" stage. The company also doesn't include the findings from all of its "dangerous capability" evaluations in these write-ups; it reserves those for a separate audit.

Several experts TechCrunch spoke with were still disappointed by the sparsity of the Gemini 2.5 Pro report, however, which they noted doesn't mention Google's Frontier Safety Framework (FSF). Google introduced the FSF last year in what it described as an effort to identify future AI capabilities that could cause "severe harm."

"This [report] is very sparse, contains minimal information, and came out weeks after the model was already made available to the public," Peter Wildeford, co-founder of the Institute for AI Policy and Strategy, told TechCrunch. "It's impossible to verify if Google is living up to its public commitments and thus impossible to assess the safety and security of their models."

Thomas Woodside, co-founder of the Secure AI Project, said that while he's glad Google published a report for Gemini 2.5 Pro, he's not convinced of the company's commitment to delivering timely supplemental safety evaluations. Woodside pointed out that the last time Google published the results of dangerous capability tests was in June 2024, for a model announced in February of that same year.

Not inspiring much confidence, Google has not made a report available for Gemini 2.5 Flash, a smaller, more efficient model the company announced last week. A spokesperson told TechCrunch that a report for Flash is forthcoming.

"I hope this is a promise from Google to start publishing more frequent updates," Woodside told TechCrunch. "Those updates should include the results of evaluations for models that haven't been publicly deployed yet, since those models could also pose serious risks."

Google may have been one of the first AI labs to propose standardized reports for models, but it's not the only one accused of underdelivering on transparency lately. Meta released a similarly skimpy safety evaluation of its new open Llama 4 models, and OpenAI opted not to publish any report for its GPT-4.1 series.

Hanging over Google's head are assurances the tech giant made to regulators to maintain a high standard of AI safety testing and reporting. Two years ago, Google told the U.S. government it would publish safety reports for all "significant" public AI models within scope. The company followed that promise with similar commitments to other countries, pledging to "provide public transparency" around AI products.

Kevin Bankston, a senior adviser on AI governance at the Center for Democracy and Technology, called the trend of sporadic and vague reports a "race to the bottom" on AI safety.

"Combined with reports that competing labs like OpenAI have shaved their safety testing time before release from months down to days, this meager documentation for Google's top AI model tells a troubling story of a race to the bottom on AI safety and transparency as companies rush their models to market," he told TechCrunch.

Google says that, although it isn't detailed in its technical reports, it conducts safety testing and "adversarial red teaming" for models ahead of release.
