May 23, 2025

The policy graveyard, or how to create guidance people actually use

Every organisation has policies - good and bad - but how do you avoid them gathering dust, and what will make your AI policy one of the good ones?

Be honest. When did you last read a policy that you didn’t have to write yourself? It doesn’t count if you were obliged to review and approve it. I mean, properly read it: reflect on the intent, follow the detail, and check what other policies might also apply before taking action.

It’s okay. I’m not here to policy-shame you. Policies are never a cracking read*. There are almost never any jokes (unless they’re part of an explanation of what constitutes an offensive remark), the stock images aren’t relatable, your time is short and the text is long. We all do the same thing: when we want to know what we’re supposed to do, we ask the nearest person who is likely to know the answer. If there’s absolutely no way around it, we do a search through some documents for the bit we think we need — then we just read that paragraph.

We know that’s what people do because, now and then, our customers ask their teams what they actually did before we installed an AI Policy Buddy. But we’ll come back to that.

Where did I put the AI policy we had that meeting about?

You’ve seen it somewhere. A thoughtful document on “ethical principles for AI use” that was circulated once, nodded at in a meeting, then largely forgotten — except to report to the board that it was adopted (cue nods of approval). Maybe you used Copilot to look for it later and found a draft still littered with the wilder suggestions that never made the final cut. It lives on the shared drive, gathering metaphorical dust — a monument to good intentions and poor follow-through. The policy it replaced is probably still on the intranet.

Welcome to the policy graveyard.

Rules are GREAT

In principle, I love policies. Procedures too. Also laws and regulations. I think they save time, help keep everyone safe and on the same page, underpin a functioning society, and manage risks. Having a policy in place tells you someone already asked the questions, looked at the evidence, had the discussion about what was best to do, and wrote down an answer. That means we can all pick up from where they left off and crack on with what we need to do. At their best, your organisation’s policies give you a framework that makes everyone’s jobs easier to do, and your life easier to live. They enable better decision-making, and that helps everyone.

So why, you might ask, did my last professional coach tell me I had a “near-pathological disregard for rules”?

I think I know what happened. I should have been clearer: I like good policies that make sense. I’m not a determined rule-breaker; anarchists rarely survive 20 years in the civil service. You don’t need to convince me that murder is bad or that I should keep to the speed limit. They’re good rules that keep us all safe. I’m happy to follow them. In fact, I’d recommend them to anyone. But several things have to be true for that to work: the principles are solid, the rules make sense, I have the capacity to follow them, they are well signposted, and they come with a clear set of consequences. Good things can happen if I follow them; everything goes sideways if I don’t.

Why so many policies don’t work

GenAI is a good example of how policy setting can go awry. The truth is, many organisations feel pressure to have an AI policy, but don’t really bottom out what it’s actually for — and don’t then follow through with raising awareness, applying sanctions where they find breaches, or building capability. Worst case, they cobble together something based on frameworks from big consultancies, policy think tanks, or global ethics boards. The result is a policy that’s probably too long, too vague, and too disconnected from the tools people actually use. Worst, worst case, it’s a long list of things you’re not allowed to use, written by someone who fears AI more than they use it. A banned book list by someone who didn’t do the reading — and failed to realise that banning a book makes it look so much more appealing.

Pro tip: If your AI policy includes the phrase “ontological autonomy” but not “Google Docs” or “ChatGPT,” something’s gone wrong. Ditto if you banned ChatGPT without offering a secure alternative.

What makes a policy real?

Last year, we worked with schools and colleges to design some GenAI use policies that people actually read and use. They’re not flashy, but they work — and they flew off the shelves. Why? Because they’re:

Short – and honestly, they could be even shorter

Specific – linked to real applications and common tasks

Actionable – with clear do’s and don’ts

Revisitable – updated as things change

But here’s the bit people often miss: the real value isn’t the document — it’s the discussion that creates it. That’s why we put them in everyone’s hands for free: to give them some talking points and key information. We like being helpful.

We’ve seen time and again that the conversations around agreeing a policy are where the clarity happens:

• “Wait, do we want staff using ChatGPT if we don’t know where that data goes?”

• “Is it OK to use AI to write reports? What kind of AI can I use for that?”

• “How do we know what tools people are already using?”

• “Wait… students hold the IP on their homework?”

These are the moments when you uncover assumptions, deal with edge cases, and, crucially, build shared understanding.

If that sounds messy, it should. To borrow from Patrick Lencioni, this is classic healthy conflict — where people feel safe to challenge each other, test ideas, and still move forward. You won’t land somewhere that everyone loves, but you’ll land somewhere people can live with — and that’s how good policy happens.

The final document is just the receipt.

Policy on paper vs. policy in practice

Even when you do arrive at a good policy, it’s easy for it to become “published but not present.” You’ve ticked the box, but can anyone actually find it? Do they know what’s in it? Does it help them in the moment they need it?

That’s why we created our Policy Buddy AI assistants — simple, conversational tools trained on your actual policy content. They make guidance accessible where and when people need it. No more hunting through PDFs and file structures. Just ask your question and get the answer — in your language, specific to your context.

This works brilliantly for all the buried organisational stuff, but we also use it for essential information like safeguarding policies, instructions for frontline support staff, and data protection. Because the best policy in the world isn’t useful if no one can find it when it counts.

We can see in the data that a team of around 1,000 staff — who rarely, if ever, looked up a policy before but spent significant amounts of time asking around — now use their Policy Buddy hundreds of times every month. That saves thousands (yes, thousands) of hours. More importantly, they understand the policies better, apply them more consistently, and build confidence — in their decisions and their digital skills.

Instead of a policy graveyard across the road that nobody much cares to visit, staff get a helpful, always-on librarian pulling out instant answers at their desktop.

You do need a GenAI usage policy, though

We can share ones we’ve seen. We can help you write yours. Just ask.

But you do need to establish some principles, set some guardrails, and clarify responsibilities — not just for teams ‘out there’ but for what the leadership team is going to do to grow skills and equip people for the future. And don’t forget to do the things you wrote into it: train staff, provide tools, raise awareness, and follow through on consequences.

But here’s the twist: you probably won’t need an AI policy forever.

We think these kinds of documents are probably transitional tools. Over time, GenAI won’t be a separate “thing you use.” It’ll be a normal part of everyday tools like your Microsoft suite, Google Workspace, or your case management system. We’re nearly there now. AI will become something you talk about as part of your privacy policy, your style guide, your inclusion strategy. But until you’ve rewritten all of those — get a GenAI policy to see you through this next wave of experimentation and change.

________________________________________

*Please prove me wrong. If you have a great, short policy, send it in. I’ll pop it on the fridge.
