BLOGS
May 29, 2025

Opportunity and risk: Why AI is neither (a) good nor (b) evil but still (c) both/neither

We often find ourselves in discussions where teams frame AI in moral extremes: good vs evil, opportunity vs risk, saviour vs destroyer. It’s hard to make some space to reflect on the question, but it's worth it.


In the heady days of October 2024 – when we weren’t sure Trump would win the election and most of us lived each day not knowing who JD Vance was – we pondered the question of whether AI was good or evil. That blog was written as a tribute to my deeply nerdish love of cinema, and as a check on our very human instinct to immediately assign new things to one of two buckets: good idea or bad idea.

It’s a question that’s easy to relate to because it’s a completely natural instinct. Like every animal, we have to assess whether the Unfamiliar Thing that enters our space is a threat or not – and we try to do it quickly, in case we need to either fight for survival or run away. The harder the Unfamiliar Thing is to understand, the more likely we are to play it safe in the interests of time: pop a big ‘risk’ flag on it, and give it a swerve. We warn others to do the same. Fair enough; your instincts have kept you alive this long. Your ability to make a snap judgement is why you didn’t click that text message about a missed delivery from "EvriTrackz247.biz" — and your credit balance thanks you for it.

These are the kinds of questions people feel, even if they’re rarely written into the procurement notice or project spec. We work with public sector teams a lot and there, as in the wider world, we often find ourselves in discussions where teams frame AI in moral extremes: good vs evil, opportunity vs risk, saviour vs destroyer. And it’s hard to make space to reflect on the question, because the volume of noise around us is growing all the time.

The recent Community Care article captured the tension really well: is AI in social work a groundbreaking opportunity — or an unacceptable risk?

Well, having built AI tools in collaboration with real frontline professionals: it’s both. Most innovation is. Which means you need to break AI down and study it a bit more before you chuck the whole thing in the ‘bad idea’ bucket. Or, to put this back into the language of cinema, you have to be the hero standing in front of a baying mob as they call for blood. You have to insist on trying to talk to the monster, and attempt to understand it better.

Think King Kong*. The mob is scared and they want the giant destroyed, but there’s always one person — often a woman or a child, usually a scientist — who sees there’s more to the monster than brute strength. They try to understand him, not just cage or kill him. See also: Iron Giant, Frankenstein, or, for our younger readers, Hiccup in How to Train Your Dragon.

Why risk dominates the conversation

Risk gets more airtime for a reason.

AI and machine learning can be used in ways that genuinely cause harm. Predictive analytics in children’s social care are a good example: proper evaluation of the work raised serious concerns about bias, lack of transparency, and disproportionate impacts on already marginalised families in England. These aren’t imagined risks. They’re well-documented, and they deserve scrutiny.

But that kind of threat assessment makes it more likely that we lump all AI into the same danger zone. A generative tool that helps social workers draft guidance, answer policy questions or save time on emails is not the same as a black-box model predicting which children are at risk of harm and pointing the finger at a family who hasn’t done anything. (We’ll call that the Minority Report problem for now, in case you only read this far because you thought this was a blog about cinema).

The point is: it’s all called “AI”, and the anxiety sticks.

The emotional truth we already know: reward and risk are inseparable

Here’s the emotional part no one likes to admit: everything with real reward carries risk.

We know this in our personal lives. Relationships can go wrong. Careers can fail. You could get hit by a bus when you're crossing the street. But we don’t lock ourselves in our homes forever: we manage risk, we make informed choices, and we get on with living. So why do we expect AI to be risk-free before we’re willing to engage with it? What’s the standard we’re holding it to?

Risk can rarely be eliminated. It can be understood, minimised, and made proportionate to the benefit. That’s what we’ve done with our RAG (retrieval-augmented generation) AI assistants: built in human oversight, prioritised transparency, and avoided decision-making automation entirely. It’s a tool, not a prophet.
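For the technically curious, the shape of those safeguards can be sketched in a few lines of Python. This is a toy illustration under stated assumptions, not our production code: the names (`POLICY_SNIPPETS`, `draft_answer`) are invented for this example, and a real assistant would use a proper search index and a language model rather than keyword matching. What it shows is the design principle: answers are grounded in retrieved sources, the sources are surfaced for transparency, and every output is routed to a human rather than acted on automatically.

```python
# Toy sketch of the RAG pattern described above. Illustrative names only:
# a real system would use a vector index and an LLM, not keyword overlap.

POLICY_SNIPPETS = {
    "escalation": "Concerns about a child's safety must be escalated to the duty manager.",
    "recording": "Case notes should be recorded within 24 hours of a visit.",
}

def retrieve(question: str) -> list[str]:
    """Return policy snippets whose key terms appear in the question."""
    q = question.lower()
    return [text for key, text in POLICY_SNIPPETS.items() if key in q]

def draft_answer(question: str) -> dict:
    """Draft a grounded answer; always routed to a human, never auto-actioned."""
    sources = retrieve(question)
    if not sources:
        answer = "No matching guidance found; please consult your policy team."
    else:
        answer = "Relevant guidance: " + " ".join(sources)
    return {
        "answer": answer,
        "sources": sources,            # transparency: show where the answer came from
        "requires_human_review": True  # oversight: the tool suggests, a person decides
    }
```

Note that `requires_human_review` is hard-coded to `True`: the point of the design is that there is no code path where the tool makes the decision itself.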

Let’s keep the Hollywood villains in check

A decent chunk of our fears come from the cultural baggage we all carry. The AI we see in films is rarely a quiet back-office assistant — it’s Skynet, HAL 9000, or that terrifying child-bot from M3GAN.

And Hollywood isn’t done with us yet: take the new Mission: Impossible films. The entire plot revolves around a rogue, unknowable, invisible AI so powerful it destabilises the world order. No spoilers, but the metaphor isn’t subtle: someone let the genie out of the bottle, and the impossible task is putting it back in. That wholly ignores the obvious problem: you can’t. Which is fine, because this genie also doesn’t exist.

These stories are entertaining (and all my respect goes to Tom Cruise’s stunning stunt work). But they frame AI as something unknowable (fair), uncontrollable (try harder…), inherently dangerous (seriously, try harder).

Meanwhile, in the real world

In real life, AI isn’t an abstract entity plotting world domination. It’s not even one thing. It’s helping someone write a report, summarise a policy, or draft an email. Most of it is deeply boring and incredibly useful.

We’ve already seen how this can work in practice. In North Yorkshire, our Policy Buddy has been used thousands of times by frontline staff to check safeguarding procedures, write better guidance, and navigate complexity faster. No one’s handing over decisions to it. It doesn’t replace professional judgment. But it does support good work, quietly, in the background.

Other organisations are using generative tools to streamline admin, help with bid writing, or get support plans done faster. These aren’t abstract futures. They’re working examples. And the people using them aren’t tech evangelists – they’re people trying to do their jobs a little better.

We’ve worked with forward-thinking organisations across sectors who approach GenAI in the same spirit: not to chase hype, but to carefully unlock value. We’ve helped teams design safe internal tools to handle HR queries, generate board reporting, and analyse data — all with clear safeguards and built-in human oversight. It’s not flashy. It’s just... really, really useful.

Embrace the ‘and’ – lean into ‘what if…?’

We need to get better at embracing complexity. AI is opportunity and risk.

The challenge — and the responsibility — is in how we design, govern and use it. That’s as true in local government and social work as it is in finance, HR, or product development.

We don’t need to rush to pick a team – or a bucket. We need to stay alert, stay human, and keep designing systems that do more good than harm. That’s not evil. It’s not perfect either. It’s just progress.

What if the enormous power of AI meant we could identify a risk we hadn’t seen before? What if it completely shifts the balance of how you spend your day, so you can get through your tasks, breathe for a second, and look further down the road?

So no, AI isn’t good or evil. It’s not risk-free, but it’s not chaos incarnate either. It’s a tool — a powerful one — that’s already being used with care, creativity and a fair bit of common sense.

Work the problem — but don’t forget to keep asking: What if…?

*You can take your pick of the 1933, 2005 or 2021 versions, but RKO’s original is a stone cold classic. You can skip Kong: Skull Island (2017). I wish I could have that time back, but maybe you can still save yourself.
