Three AI governance mistakes we see in almost every organisation

We’ve worked with product companies across a range of industries, and the same governance gaps keep appearing. Not because the teams are careless — most are thoughtful — but because governance is usually built reactively, in response to an incident or a regulatory requirement, rather than proactively as systems grow.

Mistake 1: Policy without operational meaning

AI governance gets handed to legal or compliance teams, who apply a contract-style framework to a technical system. The result is policy that looks thorough on paper but has no operational meaning. Engineers don’t know how to act on it. Product managers don’t know which decisions require review. Nothing changes.

Governance needs to be co-designed with the people who actually build and operate the systems. The policies have to translate into specific, actionable steps at the level of a sprint, a deployment, or a model update.
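To make “actionable at the level of a deployment” concrete, here’s a minimal sketch of one way to encode a pre-deployment review as a checklist the team fills in before a model update ships. The field names and checks are illustrative assumptions, not a prescribed standard; the point is that each policy requirement maps to a question someone can actually answer inside a sprint.

```python
from dataclasses import dataclass

@dataclass
class DeploymentReview:
    """Pre-deployment checklist for a model update (illustrative fields only)."""
    model_name: str
    change_summary: str
    evaluations_rerun: bool = False         # offline evals re-run on the new version?
    bias_checks_passed: bool = False        # fairness checks passed agreed thresholds?
    rollback_plan_documented: bool = False  # can we revert quickly if outputs degrade?
    reviewer: str = ""                      # a named person, not a team alias

    def blockers(self) -> list[str]:
        """List unmet requirements; an empty list means the update can ship."""
        issues = []
        if not self.evaluations_rerun:
            issues.append("evaluations not re-run")
        if not self.bias_checks_passed:
            issues.append("bias checks missing or failed")
        if not self.rollback_plan_documented:
            issues.append("no rollback plan")
        if not self.reviewer:
            issues.append("no named reviewer")
        return issues
```

Whether this lives in code, a ticket template, or a form matters less than the fact that it is specific enough to block a release.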

Mistake 2: No model inventory

Many organisations don’t know how many AI systems they’re running. This sounds extreme, but it’s common. A model was embedded in a product three years ago, the team that built it has since turned over, and nobody is monitoring it or knows what data it was trained on.

A model inventory doesn’t need to be complex. A structured document that captures what each model does, who owns it, what data it uses, and when it was last reviewed is enough to start. You can’t govern what you can’t see.
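Here’s a sketch of what that starting point might look like. The field names, the example entry, and the 180-day review window are assumptions for illustration; pick whatever cadence and fields fit your risk profile.

```python
from datetime import date, timedelta

# One record per deployed model, mirroring the minimum fields suggested above.
MODEL_INVENTORY = [
    {
        "name": "churn-predictor",
        "purpose": "flags accounts at risk of cancelling",
        "owner": "growth-team@example.com",
        "training_data": "billing events and support tickets, 2019-2023",
        "last_reviewed": date(2024, 1, 15),
    },
]

def overdue_for_review(inventory, max_age_days=180):
    """Return entries whose last review is older than the agreed window."""
    cutoff = date.today() - timedelta(days=max_age_days)
    return [m for m in inventory if m["last_reviewed"] < cutoff]
```

The format matters far less than the habit of keeping it current; a spreadsheet works just as well, provided someone owns it.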

Mistake 3: Skipping the incident process

Most organisations have an incident process for infrastructure failures. Very few extend it to AI systems. When a model starts producing systematically bad outputs, there’s no defined escalation path, no severity framework, and no communication template for affected users.

An AI incident process should define: what constitutes an incident, who is responsible for triaging, how quickly the system should be taken offline or degraded, and what the post-incident review looks like.
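Sketched as code, with illustrative severity tiers and response targets (yours will differ), that definition might look something like this:

```python
from dataclasses import dataclass
from datetime import datetime
from enum import Enum

class Severity(Enum):
    # Illustrative tiers: tune the definitions and response targets to your context.
    SEV1 = "user-facing harm or legal exposure; degrade or disable within the hour"
    SEV2 = "systematically wrong outputs, contained impact; fix within one business day"
    SEV3 = "quality drift caught by monitoring; schedule into the next sprint"

@dataclass
class AIIncident:
    model_name: str
    detected_at: datetime
    severity: Severity
    triage_owner: str                  # a named person, decided in advance
    user_comms_sent: bool = False      # did affected users get the agreed notice?
    postmortem_done: bool = False      # blameless review, with actions assigned

    def requires_immediate_degradation(self) -> bool:
        """SEV1 incidents should trigger the kill switch or a fallback path."""
        return self.severity is Severity.SEV1
```

The severity definitions will be argued over, and that argument is the point: it forces the team to decide in advance what counts as bad enough to switch the system off.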