I unironically and unashamedly love watching The Traitors. And watching the first UK series of Celebrity Traitors has only cemented my profound fondness for the show — and for Claudia’s swagger. With apologies to anyone who hasn’t watched it: this post might not be your cup of tea. For the 12 million+ who are still talking about the finale: you are my people.
Spoiler alerts will follow. Look away now if you haven’t caught up and still think the hapless celebrity Faithfuls have a hope in hell.
You might think a nineteenth-century Highland castle, where 19 people have their devices confiscated and go without internet access — just a daily phone call home and a thrice-daily buffet of joy — wouldn’t have much to do with AI. But you’d be wrong. At this point, I see parallels to generative AI everywhere.
This analogy isn’t even that tenuous. The cast of The Traitors closely resemble the inner workings of a large language model — and that’s particularly true if you’re tracking the spectacularly smart and equally foolish behaviour of the Faithful. Both thrive on pattern-spotting, unearned confidence and incomplete information — and both can be dazzlingly right one moment and entirely wrong the next.
Take Joe Marler: an England rugby international known for his straight-talking humour, occasional eccentricity and, as of now, his exceptional emotional intelligence — traits that made him both unpredictable and incredibly perceptive in The Traitors. Calm, observant, occasionally silent for entire episodes — then suddenly, bang: he lands on a theory that feels surgically precise. Of course there are two Big Dogs, and of course there’s one in each team: you can’t put Jonathan Ross and Sir Stephen Fry in the same team, and you wouldn’t have a team of Traitors without one or the other. It’s obvious when you stop overthinking it. That’s an LLM at its best: joining the dots from cues to surface an insight that feels almost psychic but is essentially just highly probable.
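If you want to see what “highly probable” looks like without the swagger, here’s a toy sketch in Python. The cues, theories and numbers are all invented for illustration (no real model was consulted), but the mechanic is the whole trick: score the candidate theories against the cues, pick the most probable one, and deliver it like a revelation.

```python
# A toy sketch of insight-as-probability. Every cue, theory and number
# below is invented for illustration; the mechanism is the point.

cues = ["two big personalities", "one obvious pairing", "classic casting logic"]

# Hypothetical probabilities a model might assign to each theory,
# conditioned on the cues above.
theories = {
    "both Big Dogs are Faithful": 0.10,
    "both Big Dogs are Traitors": 0.15,
    "one Big Dog in each camp": 0.75,
}

# "Insight" is just the argmax, delivered with total confidence.
best_theory = max(theories, key=theories.get)
print(f"Given {len(cues)} cues, the most probable theory is: {best_theory}")
```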
Then there’s David Olusoga: acclaimed historian, broadcaster and writer whose intelligence, moral clarity and deep sense of justice define his work — qualities that, in The Traitors, sometimes made his certainty feel as unshakeable as his scholarship. Overfitting the data, confidently declaring certainty where there’s none, spinning a whole narrative from a stray glance. Classic hallucination: plausible, articulate, completely wrong. Although I’m still excited for his new series, Empire, which happily relies on historical evidence rather than guesswork. Not unlike a bit of retrieval-augmented generation, or RAG (please enquire within)…
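Since you asked (you didn’t), here’s a minimal sketch of the RAG idea. The archive, the keyword-overlap retriever and the prompt shape are all stand-ins I’ve invented; a real system would use embeddings and an actual LLM. The principle is the bit that matters: retrieve the evidence first, then generate from it, rather than generating from memory and hoping.

```python
# A minimal RAG sketch. The corpus, retriever and prompt are hypothetical
# stand-ins; the shape is the point: ground the answer in retrieved
# evidence before generating, instead of guessing from memory alone.

def retrieve(query: str, corpus: list[str], top_k: int = 2) -> list[str]:
    """Naive keyword-overlap scoring; a real retriever would use embeddings."""
    words = query.lower().split()
    scored = sorted(corpus, key=lambda doc: -sum(w in doc.lower() for w in words))
    return scored[:top_k]

def answer_with_rag(query: str, corpus: list[str]) -> str:
    evidence = retrieve(query, corpus)
    # In a real pipeline this prompt would go to an LLM; here we just show
    # that the answer is constrained to the retrieved sources.
    sources = "\n".join(f"- {doc}" for doc in evidence)
    return f"Answer '{query}' using ONLY these sources:\n{sources}"

archive = [
    "Colonial trade records, 1857, held at the national archive.",
    "Parliamentary debates on empire, Hansard, 1876.",
    "A stray glance across the breakfast table (not evidence).",
]
print(answer_with_rag("what does the trade record actually show?", archive))
```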
Then there’s the entirely lovable Nick Mohammed — Celia Imrie’s adopted son (probably). An exceptionally sharp-witted actor, comedian and writer — best known for Ted Lasso and Intelligence — whose quick thinking, strategic instincts and remarkable memory (he once performed an entire stand-up show reciting audience details from memory) made him one of The Traitors’ most self-aware and quietly tactical players. Nick Mohammed channels the fine-tuning process: constantly recalibrating to everyone else’s reactions, testing theories, apologising for them, then doubling down anyway. A reminder that social feedback doesn’t always lead to better reasoning; it just reinforces whichever story feels most comfortable at the time — and it can still lead to entirely the wrong call.
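To see why social feedback reinforces the comfortable story rather than the correct one, here’s a toy feedback loop. Every number is invented; the point is that the belief converges on whatever the room keeps signalling, whether or not the room is right.

```python
# A toy feedback loop (all values invented). Each round, the belief is
# nudged towards the group's reaction, regardless of whether that reaction
# tracks the truth. Comfortable stories get reinforced.

belief = 0.5         # confidence that "the quiet one is a Traitor"
learning_rate = 0.3  # how hard each reaction pulls the belief

# Nods (1) and frowns (0) from the round table. Note: no ground truth here.
reactions = [1, 1, 1, 0, 1]

for signal in reactions:
    belief += learning_rate * (signal - belief)  # move towards the feedback
    print(f"after reaction {signal}: belief = {belief:.2f}")

# The belief settles on the room's consensus, not on who the Traitor is:
# fine-tuning on vibes, exactly the failure mode described above.
```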
And a word for Kate Garraway — my random pick in the office sweepstake — who survived far longer than I dared hope at the close of episode one. As Alan put it himself, we can’t be sure she was murdered at all; she may still be lost somewhere in the castle. Every dataset has its outliers, wandering happily beyond the model’s comprehension — an echo of how the others kept circling the same blind spots, and proof that even outliers can be part of the pattern once you see them for what they are.
Here’s the thing: neither the contestants nor the AI know anything. Not really. They’re both just running probability maths in human or silicon form — trying to predict what makes sense based on partial information. Every “insight” is a guess dressed up as knowledge. Every conviction is just pattern-matching that’s hit a confidence threshold or conveniently reinforced an existing unconscious bias or generalisation.
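Here’s that confidence threshold as a sketch. The claims, probabilities and cut-off are all invented, and the thing to notice is that nothing about the underlying guess changes when it crosses the line; only the costume does.

```python
# A sketch of "a guess dressed up as knowledge". All claims, probabilities
# and the threshold are invented; crossing the line changes the label on
# the guess, not the guess itself.

CONFIDENCE_THRESHOLD = 0.7  # hypothetical cut-off for stating a guess as fact

guesses = {
    "the historian is a Traitor": 0.82,
    "the comedian is a Faithful": 0.55,
    "the singer is a Traitor": 0.71,
}

for claim, probability in guesses.items():
    verdict = "CONVICTION" if probability >= CONFIDENCE_THRESHOLD else "hunch"
    print(f"{claim}: p={probability:.2f} -> {verdict}")
```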
That’s why the castle feels so familiar to anyone who works with generative AI and large language models. We watch people form alliances around assumptions, misread emotion as evidence, and double down on bad theories because they’ve already invested too much to change their minds. It’s human reasoning with a training-data problem. (Have I mentioned we can fix that with RAG?)
Claudia Winkleman, of course, is the perfect prompt engineer — reframing context, injecting ambiguity, nudging new hypotheses into play. Sometimes it leads to revelation, sometimes to chaos, but never to certainty. That would be no fun. You have to let AI be AI. You have to let the Faithful play the game.
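For what it’s worth, here’s Claudia-as-prompt-engineer in sketch form. The template and the twists are mine, not the show’s: same players, same evidence, and each reframing nudges a different set of hypotheses into play.

```python
# A sketch of prompt engineering, Winkleman-style. The template and the
# twists are invented; the point is that the framing, not the underlying
# facts, changes what comes out.

BASE_CONTEXT = "You are a Faithful at the round table. Evidence so far: {evidence}."

twists = [
    "Tonight, there may be no murder at all.",
    "The Traitors may recruit. Or they may not.",
    "Someone at this table holds a shield. Discuss.",
]

evidence = "two failed missions and one suspicious laugh"
for twist in twists:
    prompt = f"{BASE_CONTEXT.format(evidence=evidence)} {twist}"
    print(prompt)  # same facts, different framing, different theories surfaced
```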
Maybe that’s the real lesson. Whether you’re a Traitor, a Faithful, or a chatbot, you’re still predicting, not knowing. The smart move is to stay curious, test your assumptions, and remember: confidence isn’t comprehension — it’s just plausible delivery.
And because no model can replicate genuine intuition or empathy, the most human player of all deserves the final word. So let’s take a moment for Cat Burns — calm, kind, quietly brilliant throughout. She played the game with empathy and composure, proving you can be decent and still devastatingly effective. She was also brave enough to say Jonathan’s name at the round table when everyone else just whispered it. Her new album How to be Human is every bit as good as you think it’s going to be — thoughtful, honest and beautifully crafted. Highly recommended listening.
That’s not an AI insight: that’s just me telling you to listen to something only a human could create. It really is excellent.