I had an odd moment this week when two very different stories about the same bit of tech — which I happened to see on my telly on the same day — made me feel, first, somewhat depressed, and then very optimistic indeed.
Smart glasses.
This is a good news / bad news situation, so let’s get the bad news out of the way.
On BBC Breakfast, they discussed some pretty creepy accounts of men wearing AI-enabled smart glasses to record women in public without their consent. These bottom-of-the-barrel content creators then posted the interactions online, inviting comments on the key issues of the day — which appeared to be the women’s appearance, and their general lack of gratitude when a man appears from nowhere, asks for their phone number, and secretly films them while doing so.
Across the pond, a university in California issued a campus safety advisory after a man wearing the same kind of glasses allegedly approached women with unwanted comments and recorded them for an account called pickuplines.pov. Classy stuff, I think you’ll agree. Excited to send my daughters out into this enlightened world.
Meta and Ray-Ban point out that there’s a small LED that lights up when the glasses are recording. Others point out that the LED is easy to cover, and hard to see in bright light — for example, outdoors, in broad daylight.
I found it difficult not to think: if you’re the kind of person who secretly films people to post them online, then thank you for confirming exactly why consent matters.
But then there was Chris McCausland
Later that day, I watched Strictly Come Dancing legend and newest national treasure Chris McCausland’s BBC documentary Seeing into the Future — a thoughtful, genuinely moving, and often funny exploration of how emerging technologies, including AI and wearable tech, are transforming life for people with disabilities.
In the programme, McCausland, who is blind, visits Meta and other tech leaders in California, borrows some kit, and takes us through experiences that most of us take for granted: walking through open spaces independently, browsing in shops, ordering from a restaurant menu — all supported by live AI that interprets the world for him.
The documentary also shows just how far fairly standard AI on his smartphone has already transformed his day, reducing his reliance on family members for things like checking the weather or helping him choose what to wear by describing a T-shirt. He then tries newer AI glasses that offer live scene description: what’s in front of him, what colour things are, what’s in the sky above.
Crucially, this isn’t a gimmick. It’s freedom from having to ask someone else for help. One of my favourite lines from the programme cuts straight to the heart of why it matters: “The one thing blind people never have is two hands free.”
There’s no sentimentality, just the practical dignity of being able to navigate, choose, and engage with the world on your own terms. You’d have to have a heart of stone not to enjoy Chris giddy with joy at the prospect of shopping for new vinyl in a small local record shop.
Personalisation as a feature, not just a fancy bonus
One of the central promises of AI is personalisation: technology that adapts to your needs and preferences, rather than forcing you to adapt to the machine’s standard way of operating. We’ve seen this with our own assistants. When AI picks up your preferences, offers tailored help, and anticipates needs through careful training, it starts to feel less like a tool and more like a responsive helper.
That’s not just useful and convenient — which was the original design intent — it’s empowering. And that’s where we increasingly see AI deliver real value in workplaces.
Watching Chris McCausland reminded me of a North Yorkshire children’s social worker who — entirely unprompted — sent this email to the local project leader at a council that played a key role in developing Leading AI’s ‘Policy Buddy’ AI tool:
“I want to know more, so I am doing justice to the work I am doing and the children I work with — the AI buddy is a neurodivergent social worker’s bestie! … I have had three viability assessments, two assessment plans, one statement and one court care plan to get done in less than a week… I have managed to write them and get them to managers for QA. The AI buddy has helped me no end in finding the research, the evidence, and the language when I have brain block, so THANK YOU very much for creating the ultimate accessory for my brain!”
No: thank you.
We had, frankly, been thinking about this aspect of personalisation as a happy bonus — like the ability to use large language models to work across multiple languages from the same sources. But it’s so much more than that. Personalisation is about adapting technology to human variety, and that matters enormously for people who’ve long been left out of “mainstream tech” design. In turn, that means bringing more people into the room and into decision-making, which drives better outcomes whether you’re in the private or non-profit sector.
Historically, accessibility often meant building niche solutions for different needs and somehow trying to make them viable at scale. Now, you can quickly create something like a classroom assistant that helps explain a lesson in different ways — with words on a page or spoken out loud — without having to build an entirely separate product each time.
That’s the promise showing up in tools like smart glasses.
Why the same tech lands so differently
Part of the tension lies in how technology is marketed and perceived. Smart glasses like Ray-Ban Meta are sold as lifestyle accessories, with cameras, microphones and AI built in — but without strong, default privacy signals for bystanders. Critics worry that the design simply makes it too easy to record people without their consent, even if there’s a tiny LED meant to indicate recording. Social norms around consent haven’t kept pace with what the hardware allows.
Rules tend to follow moments like this — often clumsily, and usually later than anyone would like. Sometimes they’re blunt, but they can act as a forcing function, making people notice the need for consent where they previously didn’t. In Japan and South Korea, for example, phones are sold with camera shutter sounds that can’t be muted. That convention exists specifically to make non-consensual photography more obvious, and therefore less likely. It’s not perfect, but it’s a nudge.
The problem isn’t inherent to the underlying tech. What’s exposed instead is something more human: a sense of entitlement that has nothing to do with innovation, and everything to do with whose autonomy is being prioritised. Contrast issues like upskirting with what McCausland’s experience highlights: technology that supports individual autonomy rather than undermining it. Go, Chris. And, actually — well done, Meta.
So yes — mainstream tech is becoming more accessible
Accessibility has often been sidelined: expensive specialist solutions, difficult to attract investment to, slow to improve. But when accessibility features are built into mainstream AI — and supported by competitive markets — something different happens. You get faster iteration, broader testing, economies of scale, and tools that genuinely support diverse human needs.
This is why systems that personalise their behaviour — whether that’s remembering a neurodivergent colleague’s preferences or describing the world to someone who can’t see it — matter so much. These aren’t add-ons. They’re powerful equalisers.
At a moment when technology is evolving faster than our norms, one distinction keeps surfacing: accessibility expands freedom; misplaced entitlement narrows it.
Happily, this is a rare opportunity for a win-win. Everyone involved in these debates tells me they’re big fans of freedom, and when the technology is designed responsibly, freedom is exactly what it can bring.