This week the public vote closed for the iNetwork Innovation Awards. Every year they surface some of the most thoughtful, grounded innovations happening across local government and public services. Not the loudest. Not the glossiest. The ones that have quietly made a dent in how work actually gets done.
Most of us are “local residents”, but we don’t wake up desperate to read a shortlist about innovations in procurement reform or data standards. (Some of you do. I see you.) What we notice instead is what happens when something matters.
We notice when we pick up the phone because we’re worried about someone, or when a housing issue won’t resolve. We notice when we’re trying to get help and the system feels broken.
We also notice whether the person on the other end sounds confident. Whether they have the right information at their fingertips and the system joins up behind the scenes to get The Thing sorted out without us having to chase it three times.
When councils get the internal work right — governance, data sharing (my personal favourite), sensible use of new tools — it changes that experience. It doesn’t make the front page. It just makes life less painful when you need help.
And that’s why this moment in AI adoption feels important.
Licences have been bought and assistants have been switched on. Enterprise deals have been signed. There has been no shortage of enthusiasm – and yet, in a surprising number of organisations, the value still feels… unrealised.
Which brings me — inevitably — back to one of our favourite topics:
How do we get better at systematically applying decades of learning about good change management to AI adoption?
And why are some people still behaving as though Copilot rollout is a strategy?
If it were The Answer, we’d be seeing those promised productivity gains everywhere by now. We aren’t.
I had a small run-in with Copilot this week. I was trying to open a SharePoint document I’d been sent a link to. Instead of simply opening it like a normal piece of software, I found myself negotiating with it. It kept popping back in to ask whether it could help. I asked it to compare two documents; it thought for a very long time and then gently suggested I might try a different query instead.
Later — as if on cue — my LinkedIn feed served me a blog explaining that the real problem with Copilot is that we’re all prompting like small children and Must Try Harder.
Reader, I am not the problem.
Of course people can improve their prompting. Of course there are features we haven’t explored. But telling thousands of staff that productivity gains will appear if only they try harder is not an AI strategy: it’s wishful thinking dressed up as empowerment.
Giving everyone Copilot as your answer to AI strategy is the same as giving everyone Excel as your answer to data analytics.
Excel is brilliant. It is powerful and widely available. But no serious organisation believes that handing out spreadsheets automatically improves forecasting, financial discipline or performance management. Those things require defined use cases, elegant processes, visible leadership and a clear idea of what “better” actually looks like.
AI is no different, and the evidence isn’t ambiguous. Prosci’s long-running change management research shows that projects with strong, structured change support are far more likely to meet their objectives than those without it. And “strong change support” is not mystical: a clear case for change, proper sponsorship, champions who actually influence behaviour, reinforcement over time — all the good stuff we’ve known about for years.
Meanwhile, Microsoft’s Work Trend Index 2024 makes clear that AI experimentation is accelerating rapidly. Usage is up. Interest is up. But sustained productivity gains remain elusive. Adoption is not the same thing as transformation.
And yet we still see strategies that amount to:
Step one: Enterprise licence purchased.
Step two: Launch email sent.
Step three: Champions network created on Teams.
Six months later: “Why hasn’t productivity doubled?” “Why isn’t anyone contributing to that Teams chat?”
Hm. I wonder.
If human habits don’t change just because the tools do, perhaps the missing ingredient isn’t another all-staff email reminder or a licence upgrade.
Take North Yorkshire Council, recognised in this year’s iNetwork awards, and not for the first time. They did not simply switch on generic assistants and hope for the best: they built a pipeline for innovation that works across their organisation. They also defined bounded use cases. They involved practitioners early (or, more often, were led by the practitioners themselves with the tech squad just there to help out). They embedded assistants directly into existing workflows. They iterated and modelled good practice at every level. They governed it properly.
The result is what they affectionately call a “flock of bots”: domain-specific assistants supporting real service areas, not floating abstractions absorbing compute like a light left on in a cupboard.
But the important bit isn’t the bots (some of which are ours, which is not to say we’re not super proud of them). The important bit is the work they did around them.
They didn’t tell social workers to “use more AI” or “prompt better”. They also didn’t suggest that uneven take-up was a maturity issue. They designed tools to fit the context, set guardrails, created space to learn, and were honest about what success looked like.
That’s change management. Obvs.
And our friends in North Yorkshire are not alone. Across councils and businesses alike, the projects that are sticking – and there are lots that are – share the same characteristics: clear priority problems, strong senior sponsorship, thoughtful information governance, role-specific (light-touch) training, feedback loops that actually get used.
We’ve seen the same pattern outside local government. At Ambition Institute, their Technology Director didn’t simply announce “we have AI now”: she joined team meetings, listened to scepticism, encouraged experimentation and reshaped the tools to reflect Ambition’s tone of voice and mission. That patient, visible sponsorship has led to over 60% staff adoption, 11,000 prompts a month and around 200 staff days saved monthly. That’s not because people were suddenly better prompters; it’s because the change was handled properly.
There is, annoyingly, no substitute for Doing The Work.
Part of the reason the “just use it more/better” narrative grates is that it individualises what is actually a leadership responsibility. If productivity hasn’t improved, the implication is that staff haven’t leaned in hard enough.
But most professionals are not short of intelligence. They work hard; what they lack is time, clarity and, sometimes, psychological safety.
A housing officer does not need a masterclass in advanced prompt engineering: they need an assistant that understands policy and doesn’t hallucinate regulatory nonsense. A children’s social worker needs support that reflects local safeguarding thresholds. And they need tools intuitive enough that learning happens through doing, not through attending yet another optional webinar.
What I like about the iNetwork awards is that they reward teams who have done the less glamorous work. The governance sessions, workshops, iterations and the slightly awkward conversations about risk and responsibility.
They remind us that innovation is not the same as procurement.
If your AI programme is under-delivering, look at how you implemented it as well as what you implemented. Set success criteria — and exit criteria. Explain what’s changing. Reinforce new behaviours. Resist the temptation to imply that frustration equals incompetence.
If we respect that, AI can materially improve the moments that matter — including the calls we make when something feels wrong. And that’s worth doing properly.