Episode 8: AI Liability, Copilot’s Dirty Secret & Two Agents Having a Row
No beer again this week — just water, Coke, and the usual dose of brilliant conversation (Ed. – Seriously? Who writes this stuff?).
AI liability — who’s actually responsible when it goes wrong?
Kieron sat down with Peter Lee, head of AI governance at Simmons & Simmons, to ask the question nobody has a clean answer to yet: when AI gives someone wrong information that affects their life, whose problem is it? The short answer? There’s no case law yet in England. The longer answer involves multi-layer liability chains, emerging insurance products, the EU AI Act, and why Leading AI’s obsession with privacy, accuracy and monitoring puts them further ahead than most. Peter’s comment? It’s really interesting that you’re thinking about this — because most aren’t.
Copilot is for entertainment purposes only.
Verbatim. Kieron found it in the terms and conditions. Microsoft’s own Copilot licence states — and this is a direct quote — “Copilot is for entertainment purposes only.” It also confirms Microsoft makes no warranty that responses won’t infringe copyright, defame anyone, or actually work as intended. And if you share the output? Entirely your problem. Which sits beautifully against everything said about AI liability ten minutes earlier.
ISO certifications, the EU AI Act and why it keeps Kieron awake at night
Leading AI holds both ISO 42001 and ISO 27001 — one of only a few hundred UK organisations to have achieved both at the time. The EU AI Act classes tools that affect people’s lives as “high risk”, and some of KnowledgeFlow’s tools clearly fall into that category. Being worried about it, they agree, is probably the right response.
Product of the week 🎵 (your jingle here)
Sentiment analysis is now live in the KnowledgeFlow admin console. The system flags when users push back on a response (“no, that’s not what I meant”) and when they praise one (“that’s great”): early warning signals before problems get reported. Combined with the ongoing work on client impact reports, it’s all part of the push to measure real-world outcomes, not just prompts and tokens.
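For the curious, here’s roughly what that kind of flagging could look like. This is a toy sketch, not KnowledgeFlow’s actual implementation — the phrase lists, function names and the pushback-rate metric are all assumptions for illustration:

```python
# Toy sketch of pushback/praise flagging -- illustrative only,
# not KnowledgeFlow's implementation.
import re

PUSHBACK_PATTERNS = [
    r"\bno[,.]? that'?s not what i meant\b",
    r"\bthat'?s (wrong|not right|incorrect)\b",
    r"\btry again\b",
]
PRAISE_PATTERNS = [
    r"\bthat'?s (great|perfect|exactly it)\b",
    r"\bthanks?\b",
]

def classify_reply(text: str) -> str:
    """Crude keyword sentiment: 'pushback', 'praise', or 'neutral'."""
    lowered = text.lower()
    if any(re.search(p, lowered) for p in PUSHBACK_PATTERNS):
        return "pushback"
    if any(re.search(p, lowered) for p in PRAISE_PATTERNS):
        return "praise"
    return "neutral"

# The pushback rate per conversation is the early-warning signal,
# surfaced in the admin console before anyone files a support ticket.
replies = ["No, that's not what I meant", "That's great", "ok"]
flags = [classify_reply(r) for r in replies]
print(flags)                                  # ['pushback', 'praise', 'neutral']
print(flags.count("pushback") / len(flags))   # 0.333... pushback rate
```

A real system would use a proper sentiment model rather than keyword matching, but the shape is the same: classify each user reply, aggregate per conversation, and alert on rising pushback.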
Smart targets, weekly parent reports and the 25% problem
Up to a quarter of teachers leave within their first year. Neil raises the question: what if better tools could change that? Smart targets written weekly instead of termly. Parent reports sent regularly instead of once a term. Personalised, data-driven, done in minutes. The conversation about what this could mean for teacher retention — and student outcomes — is a genuinely important one.
AI agents having an argument
Oscar (Kieron’s 19-year-old son) is building a multi-agent system: a project-manager agent running five AI agents, the clever ones on cheaper models. He set a $3 budget. Two of the agents started arguing with each other and burned through the lot. His solution: build firewalls between the agents so they can only communicate via the project manager. As Neil points out: that’s why project managers exist.
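A minimal sketch of the firewall idea — purely illustrative, not Oscar’s actual code. All the names here (ProjectManager, Agent, BudgetExceeded, the $0.01 message cost) are made up for the example. The point is the topology: workers hold no references to each other, so every message is metered and routed through the hub, and no pair of agents can get into a private, budget-burning argument.

```python
# Hub-and-spoke agent messaging with a central budget cap -- illustrative only.

class BudgetExceeded(Exception):
    pass

class ProjectManager:
    """Central hub: agents may only talk to each other through here."""
    def __init__(self, budget_usd: float):
        self.budget = budget_usd
        self.spent = 0.0
        self.agents: dict[str, "Agent"] = {}

    def register(self, agent: "Agent") -> None:
        self.agents[agent.name] = agent
        agent.manager = self  # each agent holds a reference to the hub only

    def route(self, sender: str, recipient: str, message: str, cost_usd: float) -> str:
        # The "firewall": every message passes through here, so spend is
        # metered centrally and runaway back-and-forth gets cut off.
        if self.spent + cost_usd > self.budget:
            raise BudgetExceeded(f"${self.spent:.2f} spent of ${self.budget:.2f}")
        self.spent += cost_usd
        return self.agents[recipient].receive(sender, message)

class Agent:
    def __init__(self, name: str):
        self.name = name
        self.manager: "ProjectManager | None" = None

    def send(self, recipient: str, message: str, cost_usd: float = 0.01) -> str:
        # No direct reference to other agents -- must go via the manager.
        return self.manager.route(self.name, recipient, message, cost_usd)

    def receive(self, sender: str, message: str) -> str:
        return f"{self.name} acks {sender}: {message!r}"

pm = ProjectManager(budget_usd=3.00)
for n in ("planner", "coder", "tester", "writer", "critic"):
    pm.register(Agent(n))
print(pm.agents["planner"].send("coder", "build the thing"))
```

Same lesson as the human version: if every conversation has to go through the project manager, arguments get expensive for exactly one round before someone notices.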
Vendor lock-in, the end of Salesforce, and helium
Neil raises a real-world case: a company replaced its risk management software entirely by hoovering up Teams transcripts, loading them into an LLM, and getting daily priorities out the other side. No third-party software needed. Then things get geopolitical: it turns out making AI chips requires helium, a third of the world’s helium comes from Qatar and can’t currently get out of the Strait of Hormuz, and after 40 days on a ship it starts to boil off. Token costs going up. Chip costs going up. Energy costs going up. The Large Hadron Collider once leaked a tonne of helium, and the scientists sounded hilarious on the radio. Neil wishes they’d had beer.
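The transcripts-to-priorities pattern is simple enough to sketch. Entirely illustrative — the episode doesn’t describe the company’s actual stack, and fetch_transcripts and call_llm below are hypothetical stand-ins for whatever Teams export and LLM API a real deployment would use:

```python
# Illustrative pipeline: meeting transcripts in, daily priorities out.
from datetime import date

def fetch_transcripts(day: date) -> list[str]:
    # Stand-in for a real Teams transcript export step.
    return [
        "Standup: shipping blocked on vendor API keys.",
        "Risk call: data-retention audit due Friday.",
    ]

def call_llm(prompt: str) -> str:
    # Stand-in for a real LLM API call.
    return "1. Unblock vendor API keys\n2. Prepare data-retention audit"

def daily_priorities(day: date) -> str:
    transcripts = "\n---\n".join(fetch_transcripts(day))
    prompt = (
        "From today's meeting transcripts, list the top priorities, "
        f"most urgent first:\n\n{transcripts}"
    )
    return call_llm(prompt)

print(daily_priorities(date.today()))
```

No risk-management vendor in the loop, which is rather the point Neil was making — the transcripts already contain the priorities; the LLM just surfaces them.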
Two mates. A bar. Thirty years of business between them. And all they want to talk about is AI.
Pull up a stool — we’ll get the beers in. 🍺