On Thursday, weeks after launching its most powerful AI model yet, Gemini 2.5 Pro, Google published a technical report showing the results of its internal safety evaluations. However, the report is light on detail, experts say, making it difficult to determine which risks the model might pose.
Technical reports provide useful, and at times unflattering, information that companies don't always widely advertise about their AI. By and large, the AI community sees these reports as good-faith efforts to support independent research and safety evaluations.
Google takes a different approach to safety reporting than some of its AI rivals, publishing technical reports only once it considers a model to have graduated from the "experimental" stage. The company also doesn't include findings from all of its "dangerous capability" evaluations in these write-ups; it reserves those for a separate audit.
Several experts TechCrunch spoke with were still disappointed by the sparsity of the Gemini 2.5 Pro report, however, noting that it doesn't mention Google's Frontier Safety Framework (FSF). Google introduced the FSF last year as an effort to identify future AI capabilities that could cause "severe harm."
"This [report] is very sparse, contains minimal information, and appeared weeks after the model was already made available to the public," one expert told TechCrunch.
Thomas Woodside, co-founder of the Secure AI Project, said that while he's glad Google released a report for Gemini 2.5 Pro, he's not convinced of the company's commitment to delivering supplemental safety evaluations. Woodside noted that the last time Google published results from dangerous capability tests was in 2024, for a model announced in February of that year.
Not inspiring much more confidence, Google has yet to release a report for Gemini 2.5 Flash, a smaller, more efficient model the company announced last week. A spokesperson told TechCrunch that a report for Flash is "coming soon."
"I hope this is a promise from Google to start publishing more frequent updates," Woodside told TechCrunch. "Those updates should include the results of evaluations for models that haven't yet been publicly deployed, since those models could also pose serious risks."
Google may have been one of the first AI labs to propose standardized reports for models, but it's not the only one accused of underdelivering on transparency lately. Meta released a similarly skimpy safety evaluation of its new Llama 4 open models, and OpenAI opted not to publish any report for its GPT-4.1 series.
Hanging over Google's head are commitments the tech giant made to regulators to maintain a high standard of AI safety testing and reporting. Two years ago, Google told the U.S. government it would publish safety reports for all "significant" public AI models. The company followed that promise with similar pledges to other countries, committing "to ensure public transparency" around AI products.
Kevin Bankston, a senior adviser on AI governance at the Center for Democracy and Technology, called the trend of sporadic and vague reports a "race to the bottom" on AI safety.
"Combined with reports that competing labs like OpenAI have cut back their safety testing ahead of release, this skimpy documentation for Google's top AI model tells a troubling story of a race to the bottom on AI safety and transparency as companies rush their models to market," he told TechCrunch.
Google has said in statements that, though not detailed in its technical reports, it conducts safety testing and "adversarial red teaming" for its models ahead of release.