According to TechCrunch, Google recently published a technical report for its Gemini 2.5 Pro AI model, but experts are criticizing the report for containing too little safety information. The report came out several weeks after Google launched the model, raising concerns about the company’s commitment to AI safety.
Before we dive deeper, let’s understand what Gemini 2.5 Pro is. Launched in March 2025, it is Google’s most advanced AI model. Think of it as a super-smart computer brain that can understand and generate text and code, and work with other kinds of information such as images and video. The model is important because it helps Google compete with rivals like OpenAI.
- It has a very large “memory”, known as a context window, that can handle up to 1 million tokens (pieces of text) at once
- It’s very good at reasoning, coding, and solving math and science problems
- It ranks at the top of important AI tests
- It can help create web apps and has a built-in “thinking” ability for step-by-step reasoning
- It’s available to developers through Google’s API and to users of Gemini Advanced (a minimal API sketch follows this list)
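For developers, access typically goes through Google’s generative AI SDK. The snippet below is a minimal sketch rather than Google’s official example: it assumes the Python package google-generativeai and the model identifier "gemini-2.5-pro", which may differ depending on the release stage (early access used preview or experimental names).

```python
# Minimal sketch: calling a Gemini model from Python.
# Assumes `pip install google-generativeai` and a valid API key.
# The model identifier "gemini-2.5-pro" is an assumption here; the
# exact name can vary between preview and general-availability releases.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder, not a real key

model = genai.GenerativeModel("gemini-2.5-pro")
response = model.generate_content(
    "Explain in two sentences why AI labs publish model safety reports."
)
print(response.text)
```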
When companies like Google create powerful AI models, they should publish detailed technical reports. These reports are like safety labels on products – they tell us what’s inside and if there are any risks. A good AI safety report should explain how the model works, what data was used to train it, and most importantly, what safety tests were done.
- Safety reports should show how the company tested for possible misuse
- They should explain how they checked for biases
- They need to include “dangerous capability” tests that check whether the AI could be misused to cause serious harm, for example by spreading false information
- They should explain what the company is doing to fix any problems found
The timing of Google’s report is raising eyebrows. It was published weeks after Gemini 2.5 Pro was already available to the public. Even more concerning, the last time Google published results from dangerous capability testing was in June 2024, well before this new model was released. Experts say this gap in testing is worrying.
Peter Wildeford, who co-founded the Institute for AI Policy and Strategy, criticized the report. Thomas Woodside from the Secure AI Project and Kevin Bankston from the Center for Democracy and Technology also expressed concerns. They say they cannot verify if Google is truly committed to AI safety based on this sparse report. Also, Google hasn’t published any safety report at all for another model called Gemini 2.5 Flash.
This problem isn’t just with Google. Other big AI companies like Meta and OpenAI are also not sharing enough information about their AI safety testing. This is happening even though these companies have promised government regulators they would be transparent about AI safety.
If Google doesn’t improve its safety reporting, it could face serious consequences:
- Possible government fines (like a €250 million fine in France, about ₹2,250 crore; a rough conversion appears after this list)
- Loss of public trust in Google’s AI products
- Stricter regulations that could limit how Google develops AI
- Damage to Google’s reputation as a responsible tech company
- Increased scrutiny from competition authorities
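For readers checking the rupee figure above: it follows from a simple conversion, assuming an exchange rate of roughly ₹90 per euro (an illustrative assumption; actual rates fluctuate).

```python
# Rough conversion of €250 million to Indian rupees (crore).
# The rate of ₹90 per euro is an assumed, illustrative figure.
fine_eur = 250_000_000
inr_per_eur = 90                      # assumed exchange rate
fine_inr = fine_eur * inr_per_eur     # 22,500,000,000 rupees
fine_crore = fine_inr / 10_000_000    # 1 crore = 10,000,000
print(f"≈ ₹{fine_crore:,.0f} crore")  # prints: ≈ ₹2,250 crore
```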
Experts are calling this trend a “race to the bottom” in AI safety standards. This means companies might be competing to release new AI models quickly without doing proper safety checks. For you and me, this matters because these powerful AI systems are increasingly part of our daily lives, and we need to know they’re safe to use. The question remains: will Google and other companies improve their safety reporting, or will it take government action to make them more transparent?