
I’ve long been fascinated by the concept of never events and why the term has never really crossed over from health to other sectors.
For the uninitiated, a never event in healthcare is simply something that should never happen. The concept was defined in the US in the early 2000s by Dr Ken Kizer to describe errors so clearly preventable that their occurrence signals a fundamental breakdown of process: surgery on the wrong person, a swab left behind after you’ve been sewn up, a mismatched blood transfusion… that kind of thing.
The power of the idea isn’t just moral; it’s structural. If an event is classed as something that should never happen, the chain of accountability becomes very clear. In healthcare, hospitals can lose funding or insurance coverage when never events occur. In theory, if every step in the process is followed — count the swabs in, count them back out, check the name on the bracelet — these outcomes should be impossible. Liability becomes simpler, and everyone’s focus shifts to prevention.
What’s always struck me as odd is that the concept never really migrated. There are a few parallels: aviation has categorical loss events — accidents that must be prevented by design. Nuclear and chemical industries talk about hazardous failure modes and build multiple layers of redundancy to avoid them. But outside those high-risk domains, most organisations have no shared language for the things that should never go wrong.
What we’re left with is a lot of implicit rules, some best-practice advice that may or may not be systematised, and human error. Which is why I’ve started to get twitchy every time I hear the phrase “human in the loop.” And I work in AI, so I hear it a lot.
Redefining “never”
In practice, every organisation has non-negotiables: the things that must not happen. A safeguarding concern reported but not acted on, or a data file accidentally shared. A compliance report unchecked and unsigned. They rarely make headlines, but they cause real harm — reputational, financial, and human.
The logic of never events applies perfectly: if we can clearly define a mistake as preventable, we can design systems to stop it. And I have to wonder whether AI could help by spotting things humans miss — a mismatch between two records, or a process that silently stalls because someone’s off sick.
This won’t be the glamorous end of AI; it doesn’t generate dazzling insights. But AI does notice things we don’t — and sometimes that’s what’s needed.
The problem with “human in the loop”: us
Whenever AI appears in sensitive contexts, someone will say, “and of course we’ll always keep a human in the loop.” It sounds reassuring and triggers much nodding of heads, but I’m not so sure we should take it as a given.
First, we’ll soon run out of capacity to insert ourselves into every AI-enabled loop — we simply won’t be able to keep up. Second, what if the loop itself is unreliable?
Humans forget, assume, skip steps, or mean to check something later. It’s not carelessness; it’s how we’re wired. But we keep treating “more human oversight” as the ultimate safety net, when in other domains we already trust automation completely. Air-traffic systems prevent mid-air collisions without waiting for manual confirmation. Banks let algorithms flag fraud faster than any compliance officer could. We intervene when the system says “this looks wrong,” not before.
Maybe it’s time to apply the same logic elsewhere — and admit that machines are sometimes the steadier member of the team.
I will never be as diligent as my AI assistant
AI doesn’t forget to attach files to emails — but we do. Except we’re all doing a lot less of that since Outlook started asking, “It looks like this email should have an attachment…” Turns out a simple automated routine can save us from our knack for basic errors.
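To make that concrete, here is roughly what such a routine amounts to. This isn’t Outlook’s actual code; the cue phrases and function name below are my own illustration of how a few lines of rule-based checking catch a classic human slip.

```python
import re

# Words and phrases that usually signal an attachment is expected.
# Illustrative only: not the real cue list any email client uses.
ATTACHMENT_CUES = re.compile(
    r"\b(attached|attachment|enclosed|please find attached)\b",
    re.IGNORECASE,
)

def should_warn_about_missing_attachment(body: str, attachments: list[str]) -> bool:
    """Return True if the email text promises an attachment but none is present."""
    return bool(ATTACHMENT_CUES.search(body)) and not attachments

# Example: the body mentions an attachment, but nothing is attached.
if should_warn_about_missing_attachment("Report attached for sign-off.", []):
    print("It looks like this email should have an attachment...")
```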
You might be thinking: we only miss things like that because they’re unimportant; humans are focused on the stuff that matters. Are we, though? Remember that time our actual Prime Minister left his daughter behind at the pub? We definitely miss the big stuff too. Ask any parent. Ask my mate who was left on the doorstep at the age of six while her mum did the school run with an empty back seat.
AI’s reliability, not its intelligence, is what makes it powerful. In contexts like safeguarding, finance, data protection or procurement, the biggest risks usually come from distraction or inconsistency, not bad intent. A well-trained AI system can cross-check, reconcile and flag anomalies without fatigue or bias — if you buy the right tool and train it well.
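As a sketch of what “cross-check and reconcile” means in practice, here is the simplest possible version: compare the same record as two systems hold it and flag any field that disagrees. The systems, field names and values are invented for illustration.

```python
def reconcile(record_a: dict, record_b: dict, fields: list[str]) -> list[str]:
    """Compare the same logical record from two systems and report every field
    where the values disagree or one system is missing the value entirely."""
    discrepancies = []
    for field in fields:
        a, b = record_a.get(field), record_b.get(field)
        if a != b:
            discrepancies.append(f"{field}: system A says {a!r}, system B says {b!r}")
    return discrepancies

# Example: a date of birth that differs between two systems is exactly the
# quiet mismatch a tired human skims past.
case_management = {"name": "J. Smith", "dob": "2017-03-04", "allergy": "penicillin"}
finance_system = {"name": "J. Smith", "dob": "2017-04-03", "allergy": None}

for issue in reconcile(case_management, finance_system, ["name", "dob", "allergy"]):
    print("FLAG:", issue)
```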
It can act as the calmest, most objective, most thorough member of the team: never rushed, never tired, always watching for the thing that must never happen.
Designing for dependability
I’m not here to convince you to “trust machines more”, and I’m not letting any of us off the hook for decisions where we are accountable for the consequences. Experience (human, mainly, but some machine-learned) tells me the real challenge is defining what never actually means when you’re in a room with professionals.
That step requires clarity, policy discipline, and human judgement about values and trade-offs. The less science-oriented the group, the harder the conversation can get — we’re more easily distracted by abstractions. But once defined, AI can enforce those boundaries with quiet reliability.
Think of it as a next-generation checklist — one that learns, adapts, and spots when a process starts drifting off course. We don’t ask surgeons to remember every step unaided; we design lists and delegate some jobs. Maybe AI can be the digital version of that: the tool that keeps us aligned when our attention wanes.
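If that sounds abstract, here is a toy version of the idea, with made-up steps and deadlines: a checklist that knows when each step was due and nudges when one quietly slips past its deadline with no one noticing.

```python
from datetime import datetime, timedelta

# A hypothetical checklist: each step has a deadline relative to the moment the
# process started. The steps and timings are invented, not from a real product.
CHECKLIST = [
    {"step": "safeguarding referral logged", "due_after": timedelta(hours=1)},
    {"step": "referral reviewed by duty manager", "due_after": timedelta(hours=24)},
    {"step": "outcome recorded and family informed", "due_after": timedelta(days=5)},
]

def overdue_steps(started_at: datetime, completed: set[str], now: datetime) -> list[str]:
    """Return every step whose deadline has passed without being marked complete.
    This is the 'process silently stalling' case: nothing clever, just persistence."""
    return [
        item["step"]
        for item in CHECKLIST
        if item["step"] not in completed and now > started_at + item["due_after"]
    ]

started = datetime(2024, 6, 3, 9, 0)
done = {"safeguarding referral logged"}
for step in overdue_steps(started, done, now=datetime(2024, 6, 5, 9, 0)):
    print("NUDGE:", step)
```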
The main (never) event
It’s fascinating to me that never events have stayed resolutely confined to healthcare, even though the principle belongs everywhere. I once read a paper by the excellent Dr Jeff Mesie arguing for their use in children’s social care, but no one quite seemed ready to deploy it in practice. In the world of policy, we tend to keep to the safe space of apportioning responsibilities through regulation (the ultimate sledgehammer for nutcracking) when we might just need better routines that become habits.
So this is my latest nominee for where AI can help — preventing the harm done by basic errors, and sharing information at pace so industries learn from one another. We’re not great at crossing the streams, but it makes us stronger when we do.
The real “never event” might be refusing to use tools that could stop harm before it happens. Machines cannot replace empathy — they’re here to take care of the parts we’re less good at: remembering, repeating, and noticing small deviations before they bite. I want to sit in a conference hall where someone says, “…and, of course, we should always have an AI in the loop,” and everyone nods as if to say, “Well, yes, obviously…”
Perhaps the safest systems of the future will have each of us doing what we’re best at: the human who understands why it matters, and the machine that makes sure it never gets missed.