Explaining the explainable: when AI helps you understand AI risk

Author: alex.steele@leadingai.co.uk

Published: 19/10/2025

Leading AI

It’s rare that I read something and feel like I need a grown-up to explain it to me, but every so often I still stare blankly at some word soup and realise I am either (a) well outside my knowledge comfort zone, (b) even more tired than I thought, and/or (c) reading nonsense that may or may not require further study. This week, my Becher’s Brook – my Everest – was this paragraph in an AI newsletter:

“…a toxic brew of leverage (debt) in crypto trading led to the $700B sell-off over the weekend, where positions got programmatically liquidated. Yueqi Yang suggests that this can be seen as a dress rehearsal for what happens if AI’s “multilayered leverage” gets similarly margin-called (where, us talking now, companies like Oracle borrow billions to build data centers based on OpenAI’s future revenue commitments, but AI adoption disappoints and OpenAI can’t pay its $60B/year bills, the debt can’t be serviced, triggering a cascade through the entire circular financing loops).”

I had a feeling this was probably important – $700bn is a lot of money, and words like ‘triggering’ and ‘cascade’ are red flags – but the ominous sense of risk was wrapped in the kind of language most of us (back me up here) instinctively skim past. So, I did what any self-respecting pro does in the age of AI: I asked AI to explain it.

“Explain it to me like I’m a reasonably smart professional but not necessarily an expert in the financial markets.”

“Imagine the AI sector as a massive chain of IOUs. Cloud providers borrow billions to build data centres, betting that AI companies like OpenAI will soon make enough to pay them back. Meanwhile, AI companies make long-term commitments based on customers they expect to have. If real-world demand doesn’t keep pace with those forecasts, the repayment chain snaps — and because everyone’s debt depends on everyone else’s future profits, the shock can cascade through the system. It’s essentially a financial echo chamber built on optimism.”

Well, that doesn’t sound good. Just to make sure…

“Explain it to me like I’m a child. Or just a really tired adult with a limited attention span.”

“Imagine a group of local teenagers have heard there’s going to be a huge vegan parade in town. They borrow money from their parents, spend the weekend baking matcha blondies and cacao energy balls, and set up their stall by the high street. Then they realise they misheard — it’s actually a VW parade for Beetle enthusiasts. Instead of hungry vegans, they’re surrounded by classic car enthusiasts asking for bacon rolls.
The AI world’s a bit like that: everyone’s been gearing up for one kind of boom, and there’s a chance the crowd that turns up isn’t quite who they expected. It gets awkward when the parents ask for their money back.”

Herbie’s coming? Awesome!

“Explain it to me like my main joy in life is an abstract analogy.”

“Picture a hall of mirrors where each mirror is shining because it’s reflecting the others. It looks dazzling — endless light. But none of them actually makes light; they’re just bouncing it around. Then one mirror cracks, the reflections stop aligning, and suddenly the whole room goes dim. That’s what happens when AI’s “multilayered leverage” gets margin-called: the brightness we thought was growth turns out to be everyone reflecting everyone else’s optimism.”

Damn, ChatGPT… are you okay?

The lesson learned

I’d spent five minutes rereading the original text and wondering if it was just confusing to a non-expert or if I was finally experiencing the AI-driven cognitive decline I’d been warned about. Then, within seconds, AI gave me three different ways to see the same idea – each illuminating something slightly different but important, and each reminding me there are simple ways to explain complex ideas.

That’s what genuinely impresses me about AI-generated language. Not its cleverness or polish per se, but its ability to teach. You can ask it to explain something for your brain, in your tone, matched to your level of patience. It’s less like searching the internet and more like that teacher who explained long division a different way because the first explanation just wasn’t landing for you.

And it turned out to be a useful insight

Once I understood the point the writer was making, the practical implications clicked into place. Over one weekend, leverage in crypto trading had triggered a chain reaction: a $700 billion sell-off, with loans called in all at once, forcing sales that pushed prices down further and tripped yet more margin calls. When gen AI reaches the same point in the hype cycle, maybe the same will happen. But the real risk for most individuals and organisations just going about their day jobs isn’t crypto-style market crashes – it’s dependency. If a chunk of the AI sector is built on optimistic borrowing and circular commitments, then overconfidence at the top could ripple through to the tools we all use every day.
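If it helps to make that feedback loop concrete, here’s a toy sketch in Python – every number invented for illustration, nothing like a real market model – of how a single price shock can cascade through leveraged positions:

```python
# Toy sketch of a margin-call cascade. All numbers are invented
# for illustration; this is nothing like a real market model.

price = 100.0
impact_per_sale = 2.0  # assumed price drop caused by each forced sale
# Prices at which each leveraged position gets forcibly liquidated:
liquidation_prices = [95.0, 92.0, 91.0, 89.0, 86.0, 84.0]

price -= 6.0  # an initial shock nudges the price down
liquidated = set()

# Keep sweeping until no further positions are triggered.
changed = True
while changed:
    changed = False
    for threshold in liquidation_prices:
        if threshold not in liquidated and price <= threshold:
            liquidated.add(threshold)  # the position is margin-called...
            price -= impact_per_sale   # ...and the forced sale depresses the price
            changed = True

print(f"final price: {price:.1f}, "
      f"liquidated: {len(liquidated)}/{len(liquidation_prices)} positions")
```

In this made-up setup, a six-point shock snowballs into an eighteen-point fall, because each forced sale drags the price down through the next position’s liquidation threshold.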

Happily, for most professionals, that means thinking less like investors and more like risk managers, which is squarely back in our comfort zone.

So how significant is the risk and what’s the right strategy to manage it?

If a supplier collapses – and that’s the risk we’re really looking at for most organisations – it doesn’t necessarily mean they disappear overnight. More often the failure shows up as slow, creeping disruption: a service stops working properly, features vanish, prices rise, or support dries up. Your data might be stuck in a system you can’t easily access, and after a merger or rescue deal the product may quietly drift away from what you originally bought. In practice, supplier failure tends to look less like a crash and more like a fade, but the impact on your organisation can ultimately be just as sharp.

The risk is greatest where the upfront spend on development and marketing has been huge but the profits take time to materialise, or where a nimble start-up, less reliant on deep pockets, can undercut or out-innovate a more traditional player before the incumbent sees a return.

What should we do?

First, we need to keep things in perspective. The outlook is promising; it’s simply not without pressure points. The confidence behind today’s AI investment boom rests on a handful of big assumptions: that adoption will keep accelerating across every sector, that technical progress will continue at pace, that monetisation will scale smoothly, and that the infrastructure, energy and regulatory conditions needed to support it will stay cheap and stable.

Demand for chips, power, and skilled people could make growth more uneven than forecasts suggest, and regulation or public concern may shape the pace of adoption. Many organisations are still working out how to turn AI use into reliable returns, and some early investments may prove larger than necessary. Even so, the fundamentals remain strong; it’s just worth remembering that progress depends on a lot of moving parts and no supplier is immune to failure.

It’s time for some classic risk management: sketch out your mitigations and work out if you need a contingency plan. Here’s what we’d suggest:

  • Diversify vendors where possible: Avoid becoming too reliant on a single provider, and build contingency plans around credible alternatives.
  • Monitor your SLAs: Keep track of service-level agreements and watch for warning signs about your vendor’s health. Talk to IT and procurement teams about possible exposure to changes in the supplier market.
  • Data portability: Make sure your organisation’s data is easy to access and export, so you can switch to a different provider with minimal disruption if you ever need to.
  • Scenario planning: Understand which business-critical processes would be affected by a supplier failure or cost surge, and get involved in scenario planning as your organisation’s AI adoption deepens. You know the work, so you’ll know how the risk plays out.

In other words, diversification and good partnering aren’t just financial principles – they’re AI adoption essentials.