It’s April and I’m having an AI admin spring clean

Author: alex.steele@leadingai.co.uk

Published: 24/04/2026


Even if you don’t work in finance, or education, or anywhere that runs on a formal ‘year’, April carries a lot of “new term” energy after that nice run of long weekends. Around now, I always get the sense that I should probably get things in order – usually triggered by the arrival of better weather and an email from my accountant asking me to tell him what I got up to in the last tax year.

So I tidy up the admin. I make a plan, and I get ahead of things before everything accelerates madly towards summer.

Except of course I don’t. I’m busy. Everyone’s busy. It’s April and we can feel the weight of ‘things to do before summer’ starting to build. The good intentions are there, but it feels like a ‘future me’ problem.

I’m not that unusual; I’d bet we’re all mainly directing our ‘new year’ energy into reacting. We’re clearing inboxes, getting through meetings, keeping things moving, and carrying that familiar guilt that there are things we should be getting ahead of.

Getting a grip on AI probably sits squarely in that category. And rather than giving you another well-intentioned but ultimately ignorable set of recommendations that you can keep transposing to the next ‘to do’ list you write, and then the next one, here’s some advice that might help.

 

If you do nothing else on AI this spring, focus on four things that will actually make a difference.
  1. Someone needs to own this (properly)

AI has moved beyond the point where it’s just experimentation happening at the edges of your organisation. That doesn’t mean you need to respond with a heavyweight strategy or a new governance structure that generates more paperwork than progress, but it does mean you should be able to answer a very basic question without hesitation: who is responsible for how AI is being used here?

In the UK, there isn’t one single piece of legislation driving this; it’s more subtle. Frameworks like UK GDPR and the Equality Act 2010 already apply, and regulators are making it clear that AI doesn’t sit outside existing expectations around fairness, accountability and transparency. The direction of travel is straightforward, though: organisations are expected to know what they are doing with AI, and who is accountable for it.

In practice, this is much simpler than it sounds. It is not about creating a new role for the sake of it. It is about making sure this doesn’t sit in a gap between teams, or get treated as something that “just happens” because the tools are available. If something goes wrong, nobody will be particularly interested in which product you bought or who deployed the software. They will want to know who was responsible for how it was used.

And don’t just say “the IT team”. They are already on the hook for a lot of things, but they don’t solely control what gets bought, how it’s used in practice, or the decisions it influences. This needs to sit with someone who is actually empowered to grow your use of AI, to lead innovation and build capability across the organisation – and who understands the risks well enough to manage them sensibly.

 

  2. Be clear what decisions AI is influencing

There’s no point asking if people are using AI; they definitely are, even if it’s just ChatGPT on their phone, under the desk, or that iffy summary a Google search always generates. At this point, the smarter question is “what decisions is it shaping?”

Sometimes it’s obvious. Organisations are screening CVs, prioritising cases, deciding who gets a response first, suggesting actions in a workflow… Sometimes it’s less visible: drafting content that influences how a decision is framed, summarising information in a way that emphasises certain points over others, or nudging the order in which things are looked at. This is where the real change is happening: not just in the presence of AI, but under its influence.

Rules around automated decision-making are evolving, but the core expectation has remained consistent: where decisions affect people — particularly in areas like recruitment, access to services or financial outcomes — there needs to be human oversight, an ability to explain what has happened, and a route to challenge it if needed.

That can sound like a lot. In reality, it often comes down to being able to describe, in plain English, what the tool is doing, where it is used, and how a person can step in if something doesn’t look right. A short note of a conversation where you worked that out is usually far more useful than an elaborate framework that nobody ever refers back to.

 

  3. Own the risk; you cannot outsource it

There is still a tendency to assume that if a tool comes from a well-known provider, some of the responsibility travels with it. It doesn’t.

If your organisation is using a system, your organisation is accountable for the outcomes it produces. That applies whether the tool is built in-house, bought as a service, or embedded in a broader platform that people are using day to day.

This is not an argument for slowing everything down with procurement processes or technical reviews, but it is a case for asking a few straightforward questions and making sure the answers are understood and accessible. What data is being used? How are outputs generated? What safeguards exist? What happens when the tool produces something inaccurate or inappropriate – how will we know?

Asking the questions and writing those answers down somewhere is often enough. It gives you a shared understanding, and a starting point if you need to explain your approach to someone else. Don’t say ‘safeguards are built in’ unless you can also say what those safeguards are.

 

  4. The workforce capability bit is no longer theoretical. You need a strategy, and your strategy needs a delivery plan.

The question is not whether people can use AI, but whether they understand it well enough to use the tools properly. If you ask us (and some of you do, which we appreciate), that is becoming a capability issue more than a technology one.

At the simplest level, there are a few things that everyone needs to understand.

People need to know what gen AI is actually doing when they use it — that it is generating outputs based on patterns in data, not “knowing” things in the way a person does. Not even in the way a database kinda does. Everyone needs to have a basic sense of where information is coming from, especially when tools are connected to internal documents or external sources. And they need to understand that outputs can be wrong – especially if the source is ‘the internet’, or if it’s incomplete or biased – which means checking matters more, not less.

None of that is especially technical, but it is increasingly essential.

Then there is a second layer, which not everyone needs, but some people really do.

If you are designing services, making decisions that affect people, or choosing which tools to use, you need a clearer understanding of risk and responsibility. That includes how data is being used, how outputs might vary, where bias could show up, and what good oversight looks like in practice. It also means thinking about ethics in a practical way — not as a set of abstract principles, but as everyday questions about fairness, transparency and accountability in how work is done.

This is where organisations can get themselves into difficulty, not because people are using AI, but because they are using it without a shared understanding of how it works or where the boundaries are.

The good news is that this is totally manageable.

You do not need everyone to become a technical expert – but you need to notice that a few things have moved from being ‘a technical issue’ to ‘a universal issue for modern times, which we all need to know about’. You need a baseline level of confidence across the organisation, and a smaller group of people who can go further and support others. That means building simple guidance into workflows, sharing examples of what good looks like, or making it clear where AI is expected to be used and where it is not.

Handled well, this is what allows AI to be used effectively. People understand enough to use it with confidence, know when to question it, and have somewhere to go when they are unsure. That is a much stronger position than either blanket enthusiasm or blanket caution.

 

So… what should you actually do this week?

If you are already busy, and I know you are, the goal here is not to introduce another layer of process. At Leading AI we are very much ‘bonfire of the pointless bureaucracy’ folks by nature. But, awkwardly, we’re also ‘do follow the rules when it saves time and makes the outcome better’ people, so I’m saying there’s a sweet spot. Put just enough structure in place to stay in control of what is already happening and enable innovation in practice. If you’re in the UK, be grateful you’re not subject to the extended disco-length rules applied in the EU, but know they will come for us all if we can’t demonstrate that we can be trusted with the current, more principles-based, approach.

Once you have a clear view of what is being used, where the risks sit, and who is responsible, it becomes much easier to move with confidence. You can try things without second-guessing whether you have missed something important. You can build on what works, rather than starting from scratch each time. You can explain your approach to colleagues, boards and stakeholders in a way that feels grounded and credible, because it is.

So yes, it is April, and you already have a list and you know you can’t ‘do’ AI transformation in one big, fat go. But hopefully now you know what to do next.

Oh, and if you work in the UK, here’s a handy table for you. You’re welcome.

 

| Resource | Why it’s useful |
| --- | --- |
| AI Opportunities Action Plan: One Year On | Latest info on UK government priorities on AI, including skills, public services, compute and support for business. |
| Artificial Intelligence Playbook for the UK Government | Ten core principles and step‑by‑step guidance on safe, effective AI use in government and wider public sector bodies. |
| AI regulation in the UK: Government response to White Paper – courtesy of Burges Salmon | Plain‑English summary of the UK’s pro‑innovation regulatory framework and what’s coming next for organisations using AI. |
| Artificial Intelligence – UK Regulatory Outlook (from Osborne Clarke) | Concise, handy updates on UK AI bills, automated decision‑making rules, copyright, and related EU developments that still affect UK businesses. |
| Making government datasets ready for AI | Practical checklist and best practice for preparing public‑sector data so it can be safely and effectively used in AI projects. |
| National AI Strategy | High‑level vision for how the UK aims to develop and deploy AI across the economy and public services. |
| Free AI training for all – GOV.UK / Skills England AI Skills Boost | News post introducing the free, short online courses that build foundational AI skills for workers. |