Google’s Latest AI Model Report Lacks Key Safety Details, Experts Say


Google’s latest AI model report on Gemini 2.5 Pro has raised fresh concerns among AI researchers and policy experts, as critics argue the document lacks key safety details needed to assess the model’s potential risks.

Weeks after officially launching Gemini 2.5 Pro, billed as its most powerful AI model to date, Google published its technical report outlining internal safety evaluations. However, experts have pointed out that the report is notably sparse, omitting significant information about dangerous capabilities and offering little insight into whether Google followed its Frontier Safety Framework (FSF), a system designed to identify future AI capabilities that could cause “severe harm.”

“This [report] is very sparse, contains minimal information, and came out weeks after the model was already made available to the public,” Peter Wildeford, co-founder of the Institute for AI Policy and Strategy, told TechCrunch. “It’s impossible to verify if Google is living up to its public commitments and thus impossible to assess the safety and security of their models.”

Other industry voices, including Thomas Woodside from the Secure AI Project, echoed the sentiment, criticizing Google’s delayed and vague reporting. Woodside stressed that safety evaluations should be both timely and comprehensive, especially for models not yet publicly deployed, as they too can pose considerable risks.

Adding to the frustration, Google has yet to release a safety report for Gemini 2.5 Flash, its smaller and more efficient AI model introduced last week. A company spokesperson has promised the report is “coming soon.”

Despite being one of the early proponents of standardized AI model reports, Google joins a growing list of tech giants now facing scrutiny for scaled-back transparency. Both Meta and OpenAI have been criticized for providing limited safety disclosures on their latest models — a trend that experts like Kevin Bankston from the Center for Democracy and Technology fear signals a “race to the bottom” in AI safety practices.

While Google maintains that rigorous safety testing and adversarial red teaming are part of its release process, the lack of full disclosure in its reports continues to leave researchers and policymakers calling for greater accountability.

