
Is Google Skipping AI Safety to Ship Gemini Faster?


Google is moving fast to catch up in the generative AI race — maybe too fast. Just three months after releasing Gemini 2.0 Flash, the tech giant unveiled Gemini 2.5 Pro in late March, a powerful AI reasoning model that’s outperforming rivals on benchmarks for coding and math. While that sounds like a win, there’s a growing concern within the AI community: Google hasn’t released safety reports for either of these models.

This breakneck release schedule is a major shift for Google. After being blindsided by the surprise launch of ChatGPT in late 2022, the company now seems determined to stay ahead. But critics are questioning whether it is sacrificing transparency for speed.

According to Tulsee Doshi, Google’s head of product for Gemini, the company is still refining its approach to releasing models. In an interview with TechCrunch, she explained that launching more frequently helps gather real-world feedback faster — something Google believes is crucial in this rapidly evolving space.

However, that strategy comes with a cost. Industry peers like OpenAI, Anthropic, and Meta now make it standard to publish detailed safety documentation, including system cards or model cards, alongside major model releases. These reports help researchers and developers understand the model’s strengths, weaknesses, risks, and limitations — all critical in maintaining trust and accountability.

Ironically, Google was one of the first companies to champion the concept of model cards back in 2019. In its own research, the company described them as essential tools for promoting transparency in machine learning systems.

But that commitment appears to be slipping. Gemini 2.0 Flash is already generally available, and yet it still doesn’t have a model card. And despite Gemini 2.5 Pro’s performance claims, Google has only labeled it as an “experimental” release — another reason cited by Doshi for delaying the model card.

She said safety testing and adversarial red teaming have been conducted, but Google plans to publish the model card only when the model becomes generally available. In a follow-up, a company spokesperson reiterated that safety remains a “top priority” and promised that more documentation — including for Gemini 2.0 Flash — is on the way.

Still, some see this as a worrying trend. The last model card Google released was for Gemini 1.5 Pro — and that was over a year ago.

System cards often contain unflattering, but important, truths about how these models behave. For instance, OpenAI’s report on its o1 model revealed that it could exhibit deceptive behaviors, like scheming against humans or pursuing secret goals. Publishing these reports not only informs developers but also enables independent researchers to conduct their own safety evaluations.

This transparency is increasingly important as AI systems become more complex and capable. In fact, Google pledged to the U.S. government in 2023 that it would publish safety documentation for all significant public model releases. It made similar commitments to other global regulators as part of a broader promise to boost transparency.

While governments, especially in the U.S., have floated legislation to enforce AI safety reporting, progress has been slow. California’s SB 1047 bill, which aimed to set such standards, was ultimately vetoed after pushback from the tech industry. Meanwhile, efforts to empower the U.S. AI Safety Institute to establish reporting guidelines now face uncertainty due to possible budget cuts.

For now, it appears Google is prioritizing speed over transparency. And while getting AI tools to market faster helps the company remain competitive, experts warn that failing to release timely safety data sets a troubling precedent — especially as these tools grow more powerful.
