What are humans for now? Part 2: the self-checkout economy

Author: alex.steele@leadingai.co.uk

Published: 15/05/2026

At the end of my street is a Sainsbury’s Local immediately next door to a Tesco Metro, as they so often are in cities. Which one gets my money is determined – always – by one of three things, in ascending order of influence: availability of a particular item, which door opens automatically first as I approach, and how easy it is to use the self-checkout.

Sadly for Sainsbo, their self-checkout and I do not get along.

First, I have to remember to press the button to use the card machine before waving my phone confidently at the contactless thingy. The Tesco one just takes the payment. Fewer stages. I respect that.

Second, in Sainsbo, I too often scan something, nothing happens, I scan it again, nothing happens, and then I begin a dance where I attempt increasingly experimental scanning angles, including one manoeuvre where I try to surprise the machine by bringing the barcode in from the side while quietly whispering “ha!”. You’d be surprised how often it works.

Eventually I can place the item in the bagging area, at which point the machine usually demands intervention anyway (while Tesco’s camera has already looked at my face and decided no ID is needed, which is a mixed blessing). Very occasionally, a weary member of staff appears from nowhere carrying the emotional energy of someone who has already resolved seventeen low-level checkout glitches before 9am.

Tesco, in my tiny sample of two, has solved the same problem differently: there are even fewer staff, but the machines work better. The trade-off is that if something does go wrong, you can stand there for a very, very long time clutching a bottle of gin and a lemon and questioning your life choices.

It makes me painfully aware that one employee is now supervising what used to be the work of several people, we – the customers – are doing unpaid labour, and somewhere in the background a suit is describing this whole arrangement as “hyper-efficient customer throughput”.*

At roughly the same time, retailers have been warning about rising levels of shoplifting and calling for more police presence outside shops, stronger enforcement, and more support tackling retail crime.

And that’s when I started wondering whether we’ve really thought properly about how we invest in automation. Because I know supermarkets will tell me this helps keep food prices down – and maybe it does. But I’m not entirely convinced the equation still works if taxpayers end up funding the social consequences elsewhere instead.

Because part two in this little series is not really about self-checkouts and my lack of patience. It is about the strange economics of automation more generally, and the increasingly important question of how we decide the balance between investing in technology and investing in people.

The Iron Triangle: still a useful way to think about trade-offs

One of my favourite things about project management is that it occasionally produces concepts that are genuinely useful far beyond that little corner of management theory. The “Iron Triangle” is one of them.

Originally developed by Dr Martin Barnes in the 1960s, the idea is that every project is balancing some version of time, cost and quality. You can optimise heavily for one, maybe two, but trade-offs appear whether you acknowledge them or not. Need it fast and cheap? Quality may suffer. Need it fast and brilliant? It probably won’t be cheap. Need it cheap and excellent? It may take a while.

Automation needs its own version of the Iron Triangle, because organisations are making exactly these trade-offs all the time. Every automation decision is really a balancing act between cost, efficiency and quality, but also – for AI in particular – resilience, trust, accessibility, flexibility, human experience and accountability.

The “correct” balance changes depending on the situation.

If I’m buying milk while wearing a big coat over my PJs at 7am, I’m fine with the machine. When my 90-year-old mum is trying to find her favourite sandwich and pay in cash, she does not want a fully automated experience optimised around throughput and staffing reduction.

The problem is that organisations often treat automation as a technology decision when it is actually a design decision. And there’s a tonne of good evidence on how to make design and investment decisions, if you look for it.

“So-so automation”

One of the reasons I’ve become slightly obsessed with this topic is that economists already have a term for what many of us instinctively recognise when technology somehow makes everything feel both more advanced and more annoying at the same time. It sounds like a name we’d give it in our mildly irreverent company Teams chat, but actual MIT economist and Nobel laureate Daron Acemoglu calls it “so-so automation”.

The idea is simple: some automation removes workers from tasks without delivering enough real productivity gain to justify the disruption. Costs move around, experiences worsen, staff disappear, but the overall value created is debatable at best.

His examples include early automated customer service systems, which managed to be simultaneously expensive, frustrating and oddly bad at solving actual problems. Which I think most of us have experienced while screaming the name of a film into one of those early automated Odeon booking lines that somehow combined voice recognition with total contempt for the human spirit and regional accents.

And honestly, quite a lot of current AI deployment still risks drifting into the same territory despite the fact that the solutions are a million times better. And that’s because badly designed automation often does not eliminate work. It transfers work.

Customers become the cashier. Passengers become the travel agent. Patients become the administrator. Teachers become the IT support desk. One remaining employee becomes responsible for managing twenty confused humans and six malfunctioning systems simultaneously.

Meanwhile, the organisation reports a productivity gain because one budget line became smaller.

The hidden costs don’t disappear

There is now fairly strong evidence that self-checkout systems are associated with significantly higher levels of theft (not the cause, not necessarily the main driver, but a factor). Retail crime surveys in the UK suggest shoplifting has reached record levels, costing over £2bn annually, while criminology researchers have linked self-checkout theft to something called “Neutralisation Theory”.

*Third useful academic concept alert*

Neutralisation Theory basically suggests people become better at rationalising behaviour when the social and moral boundaries around it weaken. If there is no visible human “victim”, people find it easier to justify behaviour they might otherwise avoid. It’s a bit like how people are more likely to shout at you from the other side of a car windscreen; it still takes a moral leap to commit a crime when you absolutely know that’s what you’re doing, but someone already on the fence is getting a tiny nudge.

Humans are often the bit of the system maintaining social norms, trust, judgement and accountability. Remove them entirely and systems can start behaving differently in ways that budget lines do not immediately capture. Those changes show up in fraud or theft, but also in customer retention, staff burnout and the exclusion of people less comfortable with technology – and they often involve moving costs straight from the private sector to the public purse.

Economists call these “externalities”: costs created by a decision but carried by someone else. I think we used to call it passing the buck?

The AI rollout problem

One risk in the current AI boom is that many organisations still appear to think technology investment alone is the transformation strategy, but it isn’t and it never was. There’s increasing evidence that companies investing heavily in AI infrastructure without redesigning work or investing properly in people are struggling to realise the returns they expected – we’ve talked about it here a few times and it makes perfect sense when you think about it for more than six seconds. You cannot just bolt a generative AI system onto a broken process and declare victory.

Or as Deloitte apparently put it rather well: deploying AI without redesigning work is like “putting a jet engine on a horse-drawn carriage”. (Is it more terrifying for the horses or the passengers?)

The organisations seeing the best results are generally not the ones trying hardest to eliminate humans altogether. They are the ones working out where automation genuinely helps, where human judgement still matters, where trust matters, where flexibility matters and where people mainly want speed and convenience.

Machines are brilliant at consistency. Humans are still (mainly) better at ambiguity, reassurance, negotiation, exceptions, ethics, accountability and context. Especially all at once.

What are humans for when we automate?

That, really, is the question underneath all of this. Not “Can AI do this task?” but “What combination of humans and technology produces the best overall outcome here?”

Sometimes the answer will genuinely involve fewer people. Sometimes automation will improve quality, speed and accessibility simultaneously. Sometimes the answer will involve more human involvement, not less.

And sometimes the smartest thing an organisation can do is resist automating a process simply because the technology exists.

Because the future is not humans versus machines. It is humans and machines. And if we treat every human being in a system as an inefficiency to be removed, we probably shouldn’t be surprised when the system starts behaving a bit strangely.

 

*ChatGPT had loads of alternatives to this phrase, and you deserve to see a few more:

  • “frictionless retail transformation”
  • “an agile, customer-empowered retail journey”
  • “customer-centric operational innovation”
  • “a digitally enabled shopping ecosystem”
  • “unlocking efficiencies through assisted self-service”