AI has had quite a year. Tools that felt experimental not long ago now feel as ordinary as Teams notifications that pop up when you’re trying to present, or meetings that could have been an email.
OpenAI’s State of Enterprise AI report puts some scale on that shift: enterprise use of ChatGPT has grown eight‑fold since late 2024, and the average worker is now sending around 30% more messages than this time last year.
Microsoft’s data tells a similar story. Copilot now appears everywhere from email drafting to spreadsheet wrangling to those familiar moments of “please summarise this 42‑page policy document because I cannot.” The Copilot conversations Microsoft shares are revealing — although I can’t help wondering how much prompt content would need to be redacted for public consumption.
In short: generative AI has arrived, kicked off its shoes, made itself a cup of tea and settled in. Millions of people now use it to get through the working day — and, by most measures, they like it.
Staff using ChatGPT Enterprise commonly report saving 40–60 minutes a day. Civil servants in the UK Government’s Microsoft 365 Copilot experiment self‑reported an average 26 minutes saved per day, largely through reduced admin and less time trawling through shared drives.
These are real gains. In some teams, they are genuinely transformative. And yet, zoom out to the organisation — or the national economy — and the productivity dial we had our eye on this time last year has barely moved.
The UK’s productivity performance has been stubbornly weak for years — a phenomenon economists call the productivity puzzle. According to the latest ONS and Bank of England analysis, output per hour in the UK still lags many of our OECD peers, and investment in technology, skills and organisational change remains central to closing that gap. AI is diffusing rapidly at the task level because individual adoption is high, but the broader productivity data reinforces that tools alone don’t move the macro needle without complementary changes in process, management practice and capability. This echoes findings from academic work and think-tanks: technology tends to raise productivity only when firms reorganise work around it.
The view from the ground
Across the public sector and non‑profit organisations we worked with in 2025 — including councils, housing providers, charities, universities and further education colleges — our very own AI assistants returned over 3 million minutes of staff time in a single year. That’s more than 52,000 hours, quietly reclaimed from administrative drag.
This wasn’t pilot activity or one‑off experimentation. These assistants are now embedded into everyday work: interpreting complex policy and guidance, drafting internal and external communications, summarising long documents, helping win grant funding, supporting compliance and quality assurance. In other words, exactly the kinds of tasks people complain about most — and exactly the ones AI is currently best suited to help with.
At the individual and team level, the impact is immediate. People feel faster. Work feels lighter. In some cases, genuinely less exhausting – and more inclusive.
But we also know those gains don’t automatically show up in productivity statistics – at least not immediately, and not yet at scale. Not because the time savings aren’t real – they sit at the conservative end of our estimates, and are QA’d by our customers – but because the systems around AI-enabled teams sometimes haven’t moved with the tech.
What that reclaimed time does enable is no less important, in the meantime. For an organisation like The King’s Fund, it creates space for analysis, synthesis and sense‑making — time to think, challenge assumptions and shape debate rather than simply prepare material. For a children’s charity like Coram, it can mean just a bit less pressure in emotionally demanding roles, and more capacity to focus on families, practitioners and programme quality. For a college tutor or a social worker, it might be the difference between being tied up in paperwork and having the headspace to support young people properly, personalise learning and reflect on what is — and isn’t — working. Some of that will show up in staff retention, or in the outcomes unlocked for the people they work with – maybe even in the odd Ofsted report. And we hear the feedback so widely, loudly and consistently that we can’t help but feel pretty flippin’ proud of everything we’ve helped our partners achieve this year. We know how hard they work.
The same goes for our private sector clients, who are saving hours and hours of admin time to focus on the work they enjoy and that adds value.
The frontier divide: everyone knows it’s there
One of the most useful insights in OpenAI’s enterprise report is the distinction between “frontier firms” and everyone else.
Frontier firms treat AI as part of how work is done, not as an add‑on. They connect AI to internal data (securely), adapt workflows, invest in organisation‑wide training and redesign key processes. Many run portfolios of custom assistants that quietly handle documentation, analysis, compliance checks and customer support.
These aren’t marginal efficiencies. They represent new operating models. And, happily, that describes most of our own customer base (waves cheerily).
But many organisations are still in what I’d gently call the “AI optimism with a side of circular discussion” phase: enthusiastic teams, pockets of brilliance, extensive debate about risks, and a sense that everything will be amazing once the data, governance, skills, funding and workflows have been sorted out — ideally by someone else.
Several are in month nine of testing something, often without clear measures of success or an evaluation running alongside, which makes it hard to see how those unending pilots ever become production.
Meanwhile, newer staff — often straight from college education — are quietly using AI every day (on their phones if it’s not on their desktops), accelerating their work while the organisation debates whether to turn access on or off.
The blockers that keep showing up
Across OpenAI’s enterprise data and Microsoft’s Copilot telemetry, as well as Stanford HAI’s AI Index and Wharton’s multi‑year research, the same blockers appear again and again.
- Partial (or non‑existent) integration
Only around a quarter of enterprises have connected AI tools to governed internal data. Without that, AI becomes a smarty pants assistant with no access pass and a lot of creative suggestions.
- Messy, opaque workflows
AI thrives when it knows what it is meant to do. Many organisational processes are still ancestral and largely preserved through oral tradition.
- Uneven skills and confidence
A small group become power users; everyone else watches politely, unsure whether they’re “doing it right”.
- Risk handled by prohibition and avoidance rather than design and management
Blanket bans lead to patchy access — and riskier behaviour under the desk. Oh, the things we’ve seen…
- Leadership enthusiasm without leadership follow‑through
Buying licences is easy. Redesigning services, showing your own inner workings, and reallocating time is harder.
Frontier firms aren’t winning because they bought better models; they’re winning because they created better environments for using them. And, unsurprisingly, these are the places that aren’t just good at AI but good at innovation in general.
Workers feel faster; the organisation doesn’t. Why?
Talk to staff using AI day to day and the improvements are obvious. Writing is quicker. Information is easier to find. Repetitive tasks disappear. Satisfaction is high. But at organisational — and certainly national — level, productivity looks stubbornly flat. That isn’t a contradiction; it’s a feature of where we are in the adoption curve. We’re following a familiar pattern — well documented in Gartner’s hype cycle — where early enthusiasm outpaces organisational change, before productivity gains emerge more slowly.
Task‑level gains don’t automatically become system‑level gains. Tools spread faster than capability. And productivity diffusion is always slow — as it was with electricity, computing and broadband. As it always will be. What kind of innovation adopter you want to be, though, is a choice. You can choose not to be a laggard.
Copilot’s messy reality
The UK public sector Copilot experiment captures this perfectly. Staff liked the tool, used it frequently and saved meaningful time. But rollout was patchy, guidance unclear, training inconsistent and processes unchanged.
Microsoft puts it plainly: “AI at work is here. Now comes the hard part.” Translation: you now need to fix the system around the tool.
Where this leaves us
AI works. People like it. Time savings are real. Quality gains are real. Productivity gains, however, remain uneven — because organisational change is uneven.
2025 was the year AI became normal. 2026 will be the year organisations decide whether to actually change.
The next wave of productivity won’t come from better models. It will come from better management.
But then again — doesn’t everything?