The 2025 DORA Report from Google Cloud

October 13, 2025

Google Cloud recently published the 2025 DORA Report. It's the most comprehensive industry survey on how engineering teams are adopting AI, and it lands on a conclusion we've been drawing for a while: the tools aren't the hard part. The organizational system around them is.

What Stands Out

AI is an amplifier, not an equalizer. Strong teams get stronger. Weak teams see their problems magnified. This matches everything we see in practice: AI-assisted development doesn't fix broken processes; it accelerates them.

Engineering fundamentals still matter. Small batches, fast feedback, version control, good architecture, and a learning culture are prerequisites. Without them, AI brings little benefit and can even cause harm.

The key finding: "The greatest returns on AI investment come not from the tools themselves, but from a strategic focus on the underlying organisational system."

In other words, maybe it's not the tech at all.

How Zenable Addresses DORA's Recommendations

The report offers six practices for successful AI adoption. Here's how Zenable covers five of them:

  • Clarify and socialize AI policies: We cover this end to end, from document analysis to dynamic and static checks. We can evaluate not only the code, but your AI adoption practices as well.
  • Connect AI to internal context: That's exactly what guardrails are for. Requirements grounded in your codebase, your standards, your context.
  • Prioritize foundational practices: You can write a control for that. We help teams detect and adopt foundational practices tailored to their stack.
  • Fortify your safety nets: This is our core. Automated policy enforcement across your AI coding workflow, in the IDE, in PRs, and across your codebase.
  • Invest in your internal platform: Zenable is the platform layer that makes all of the above operational.
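To make "fortify your safety nets" concrete, here is a minimal, purely illustrative sketch of what automated policy enforcement on a pull request can look like: a check that scans the added lines of a diff against a set of rules. The rules, function names, and patterns below are hypothetical examples for this post, not Zenable's actual API; real guardrails are generated from your own codebase and review history rather than hand-written like this.

```python
import re

# Hypothetical, illustrative policy rules. In practice these would be
# derived from a team's own standards, not hardcoded.
POLICIES = [
    (re.compile(r"(?i)api[_-]?key\s*=\s*['\"]\w+['\"]"), "hardcoded API key"),
    (re.compile(r"\bprint\("), "debug print left in code"),
]

def check_diff(diff_text: str) -> list[str]:
    """Return policy violations found in the lines a diff adds."""
    violations = []
    for lineno, line in enumerate(diff_text.splitlines(), start=1):
        # Only inspect added lines; skip the '+++' file header.
        if not line.startswith("+") or line.startswith("+++"):
            continue
        for pattern, message in POLICIES:
            if pattern.search(line):
                violations.append(f"line {lineno}: {message}")
    return violations
```

Wired into CI, a non-empty result would block the merge; the same check can run in the IDE so the feedback arrives before the PR is even opened. That "shift left" of enforcement is the safety-net pattern the report is pointing at.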

What's Less Clear

The report isn't without gaps:

  • Vague on benefits. DORA claims results are "better" than last year, but shifts from percentage changes to abstract effect sizes. It's not apples-to-apples, making it hard to judge actual progress.
  • Vague on foundations. The seven "foundational practices" it advocates are described at a very high level. There's very little detail you could take away and act on immediately.
  • Reliant on self-reports. The evidence is surveys and case studies, not delivery metrics. Self-reporting tends to overstate benefits (as METR has shown), and this is an ongoing limitation of DORA studies.

The Bottom Line

The 2025 DORA report confirms what forward-thinking teams already know: AI coding tools create value when they're wrapped in clear policies, connected to your context, and backed by safety nets. Without those foundations, you're just generating code faster, not shipping better software.

That's exactly what Zenable does, and what makes it different:

  • Works with your existing tools. Keep your IDE, keep your coding agents. Zenable integrates with 12+ IDEs and all major AI assistants through MCP. We're not replacing your editor; we're making it smarter.
  • Guardrails that learn from you. Zenable reviews your code changes and existing reviews to automatically build customized quality guardrails. Your team's standards, enforced consistently, without manual configuration.
  • Continuous improvement, built in. We maintain your checked-in context files, find and suggest new requirements automatically, and surface gaps you didn't know you had. Your guardrails get better over time without extra effort.
  • Your code stays yours. We never retain or train on your code. Zero data risk.

Get started for free and see where your team stands.