
We are living in quite the moment for AI – and having a ringside seat for history is a genuine privilege. Organisations are deploying new tools, suppliers are running demos and, to help, bosses are sending people on compulsory two-day ‘prompt writing’ workshops while HR teams design AI training modules. But there’s a catch: we don’t really know what “good AI training” looks like.
First, we need to clear something up: this is an AI blog post about AI training, not training AI. I mean training as we understand it in our normal working lives: the thing where you go to a class, shadow a colleague or read a book and come away with a new skill or some new knowledge. As opposed to training in the machine-learning sense: the thing where you teach an AI by feeding it lots of examples so it can spot patterns and make predictions. But you got that, right?
The main difference is that your AI has a MUCH higher tolerance for training volume, duration and repetition, whereas you need frequent comfort breaks, snacks, caffeine and the occasional bit of humour to hold your attention. As you get older, you also get fussier about training being relevant and a really good use of your time. AI remains at the ‘keen puppy’ stage, hanging on your every word.
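For anyone who’d like that machine-learning sense made concrete, here’s a toy sketch. It uses scikit-learn, and the data is entirely invented for illustration:

```python
# A toy illustration of "training" in the machine-learning sense:
# feed the model labelled examples and it learns to predict new ones.
# Uses scikit-learn; the example data is invented.
from sklearn.tree import DecisionTreeClassifier

# Examples: [hours of daylight, temperature in °C] -> did staff cycle to work?
examples = [[8, 4], [9, 6], [15, 18], [16, 21], [10, 9], [14, 17]]
labels = [0, 0, 1, 1, 0, 1]  # 0 = no, 1 = yes

model = DecisionTreeClassifier().fit(examples, labels)  # the "training" step
print(model.predict([[15, 19]]))  # spot the pattern, make a prediction -> [1]
```

No comfort breaks required.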
This is the worst AI CPD we will ever have
We are not, I think, living in a golden age of AI training. To generalise slightly, in the interests of speed and subjectivity: across the private and non-profit sectors, big organisations are buying up licences for tools like Copilot and then arranging lots of one-hour demos while developing the odd pilot agent. Quality is mixed and impact is TBD. An offer of training ‘drop-ins’ is a great way to reach lots of people, but the busiest people, who could most do with an AI assistant, are the ones who still struggle to make time to go. In those circumstances, it’s difficult to say with any certainty that the potential benefits of the tech are being unlocked.
What we’re not seeing is a tonne of good evaluation of either pilots or training.
We’ve also seen a lot of organisations – especially the largest ones – running compulsory, sometimes eye-wateringly long, workshops on how to write prompts for generative AI models. The strapline is usually something like, “Anyone can be a prompt engineer!” We have yet to hear from someone who thinks it gave them more than you’d get from a decent, short, online ‘lunch and learn’ with a colleague who’s already cracked the code through a bit of trial and error and maybe a blog or two.
Folks with deeper pockets are outsourcing this work, inevitably: hiring external consultants to deliver “comprehensive AI training” packages so they can report to their board that everyone definitely had the training, and efficiencies are imminent… tick! Good job AI doesn’t really change, eh? That training you did in spring will definitely still apply next year.
To be fair, those packages will at least dig more deeply into risks and good versus bad practice. One or two might even touch on sustainability and help with a bit of strategy development.
In the darker recesses of the private sector, we’re also hearing about companies who expect staff to shift their workflows immediately – with the offer of training if they’re lucky, redundancy if they’re not. Good luck with that.
Some of the help on offer is useful insofar as it generates awareness and helps set an expectation of professional development and innovation. But training developed at this stage of gen AI evolution often skips over the messiness of the real work you want to apply it to: unclear tasks, poor data, multiple competing priorities, and constraints that don’t show up in demos. People who understand the tech better than you can teach you a lot, but they can’t always relate what they know to your day.
What the evidence tells us. So far.
It’s a touch unfair to paint everyone as a keen amateur. Here is what some actual evidence tells us:
- OECD: Skill needs and policies in the age of Artificial Intelligence (OECD Employment Outlook 2023).
TL;DR: (Re)training is necessary if you want to unlock productivity gains, but we’re not doing enough.
The OECD report argues that AI will significantly change the task and skill composition of a huge number of jobs, and that adult learning systems must adapt quickly. Key findings include that, while firms implementing AI usually provide training, existing public policy support and training systems are often insufficient for the speed and scale of change required.
- UK Government Copilot experiments.
TL;DR: Generative AI assistants like Copilot can definitely save time.
The Microsoft 365 Copilot Experiment: Cross-Government Findings Report (Sept–Dec 2024) involved 20,000+ civil servants. Key results: average reported time savings of ~26 minutes a day; 82% of users said they would not want to return to pre-Copilot conditions. But limitations showed up in complex or data-heavy tasks, in the fidelity of outputs (which feels like researcher-speak for ‘spouting nonsense’), and in the need for oversight.
- All serious research into continuing professional development (CPD).
TL;DR: Good CPD needs training that feels useful, keeps people interested, and helps them actually do their job better.
There are literally too many good, peer-reviewed, evidence-led reports to name that will tell you CPD that works has to be relevant (connecting directly to people’s real work and goals, so they can see the point of it), engaging (interactive, practical, and pitched at the right level, not just information delivery) and applied (giving people the chance to practise, reflect, and embed what they’ve learned so it sticks).
What your training should include: a field guide for managers
Here are some principles and practical ideas we advocate:
- Embed learning into real work
Instead of isolated classroom sessions, use real tasks. Pilot users or “champions” can surface common failures. Use messy data and ambiguous cases to normalise the need to keep thinking and talking.
- Use short, spaced, iterative sessions
Multiple small sessions (e.g. monthly or fortnightly), with reflection on what worked and what didn’t, beat a single long session.
- Include leaders and supervisors
People overseeing work need to understand AI’s capabilities and limitations. They must be able to ask: How did you use it? What did you check? What did you reject? You’ll uncover bad AI practice the same way you uncover any other kind of bad practice: people won’t be able to properly explain their choices.
- Build in reflection/feedback loops
Peer reviews of AI-augmented work are handy. Team discussions about decisions influenced by AI are gold. Try some post-task debriefs: what went well, what the surprises were, what errors turned up. Just for now; we’re all learning.
- If you’re worried: have some “AI-off” zones
Periodically get people to do analogous tasks without AI to preserve skills. This needs careful handling because it’s a bit like taking away someone’s toys, but it might be necessary if you’re seeing worrying evidence of over-reliance, or you need to train new staff.
- Measure what matters
Track time saved, yes, but also track error rates, quality of decisions, satisfaction, trust, confidence, and sense of professional judgement. Use qualitative insights as well as quantitative ones.
- Use AI to help develop training
For example, generate examples of good and bad prompts, workshop plans, sample error cases, role-plays and feedback templates; there’s a minimal sketch of the idea after this list. Save money and gain domain specificity.*
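To make that last idea concrete, here’s a minimal sketch of asking a large language model to draft paired weak/strong prompt examples for a workshop. It assumes the OpenAI Python SDK and an API key; the model name, task list and helper function are illustrative, not a recommendation.

```python
# A minimal sketch: asking an LLM to draft good/bad prompt examples
# for a workshop. Assumes the OpenAI Python SDK is installed and
# OPENAI_API_KEY is set; model name and tasks are illustrative.
from openai import OpenAI

client = OpenAI()

def draft_prompt_examples(task: str) -> str:
    """Ask the model for a weak prompt, a stronger rewrite, and why it's better."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; use whatever your organisation licenses
        messages=[
            {"role": "system",
             "content": "You design short, practical AI training materials."},
            {"role": "user",
             "content": (
                 f"For the workplace task '{task}', write one weak prompt, "
                 "one improved prompt, and two sentences on why the "
                 "improved version works better."
             )},
        ],
    )
    return response.choices[0].message.content

# Everything generated here still needs a human trainer's review.
for task in ["summarising a case file", "drafting a stakeholder email"]:
    print(draft_prompt_examples(task))
```

Draft materials become cheap to produce this way; the trainer’s job shifts to reviewing and tailoring them, which is exactly where the human judgement belongs.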
Things can only get better
Training for AI doesn’t need to be perfect yet, but it does need to be thoughtful. The goal isn’t to turn everyone into prompt engineers — it’s to build confidence, judgement, and the habits that make AI genuinely useful. If we can get that balance right, today’s awkward first steps will look like the beginning of something much better.
But we’re still figuring out what good AI training looks like, and that’s fine. What matters is creating space for people to learn together, share mistakes, and apply tools to the work that actually matters. That’s how the worst AI CPD we ever had becomes the best we’ve ever seen.
For now, keep it relevant, keep it short, keep it practical — and remember that, unlike the AI, your staff can’t be trained by brute force.
*Our partners at Torbay Council are great at this, but they’re not alone. It’s the kind of efficiency you need to find when you’re an SME or non-profit organisation and every scrap of resource matters. They have one of our retrieval-augmented generation (RAG) AI policy buddies for their social workers and SEND teams, enabling them to more easily find and apply the right procedures and best practice. Their policy buddy can explain, in any language, how to get the best out of it, and it will cheerfully use the large language model baked into the tech to help with prompts or build training modules from its library of verified content. Nice.
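For the curious, here’s a minimal sketch of the retrieval-augmented pattern described above: fetch the most relevant passages from a library of verified content, then confine the model’s answer to them. This is not Torbay’s actual implementation; the documents, names and TF-IDF retrieval step are purely illustrative (a real system would typically use semantic embeddings).

```python
# A minimal sketch of retrieval-augmented generation (RAG): retrieve
# the most relevant verified passages, then ground the LLM's answer
# in them. All content and names here are illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# A tiny stand-in for a library of verified policy content.
documents = [
    "Safeguarding referrals must be logged within 24 hours of first contact.",
    "An EHC needs assessment decision is due within 6 weeks of the request.",
    "Case notes should record the evidence behind every key decision.",
]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(documents)

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k passages most similar to the question."""
    scores = cosine_similarity(vectorizer.transform([question]), doc_vectors)[0]
    return [documents[i] for i in scores.argsort()[::-1][:k]]

def build_grounded_prompt(question: str) -> str:
    """Compose a prompt that confines the model to the retrieved passages."""
    context = "\n".join(retrieve(question))
    return (
        "Answer using ONLY the policy extracts below. If they don't "
        f"cover the question, say so.\n\nExtracts:\n{context}\n\n"
        f"Question: {question}"
    )

print(build_grounded_prompt("How quickly must a safeguarding referral be logged?"))
```

The point of the pattern is that the model answers from retrieved, verified extracts rather than from memory, which is what makes this approach a better fit for policy work.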