There’s a long-form documentary on YouTube featuring Demis Hassabis talking to Hannah Fry about artificial intelligence, scientific discovery, and the work behind AlphaFold. It’s had over 10 million views.
That’s the same kind of viewing figures as something like The Traitors final. So maybe you can get people to pay attention to something if you tell the story well enough — even when you don’t stop midway to vote someone off.
The numbers suggest we’re genuinely interested in the big, slightly mind-bending side of AI — the one that promises scientific breakthroughs and rewrites what’s possible. That said, you can understand why, day to day, most of the conversation still circles back to chatbots.
Tools like ChatGPT, Microsoft Copilot and Google Gemini dominate how we think and talk about AI. And they matter. But — putting aside the big leap forward when OpenAI gave everyone access to ChatGPT a few years ago — they’re not necessarily the most interesting thing that’s happened.
This isn’t about which bit of AI gets the best viewing figures, though. The more important story is the one that sits behind that documentary — and behind a lot of the headlines we struggle to translate into our own daily lives. The world where breakthroughs become part of the systems you already rely on, often without you even noticing.
From a really excellent documentary to something that might affect your family
The work Hassabis is describing isn’t theoretical. AlphaFold solved a problem that had been sitting unsolved in biology for decades: predicting how proteins fold. It has now mapped hundreds of millions of protein structures and made them available to researchers around the world.
That changes the pace of drug discovery. Which, translated into real life, means this: at some point in the next few years, a treatment that helps someone you know—your mum, your partner, your friend—is more likely to exist, and to exist sooner, because of work like this.
That’s the level AI is now operating at. It’s not just helping you write longer emails; it’s changing the timeline of medical progress. Helped, let’s be clear, by a very human decision to make something open—something a different person, or group of people, could have decided to keep to themselves. Worth remembering on the more pessimistic days.
The same pattern, just less… visible
That “watch the big story, then forget about it” dynamic shows up with plenty of other breakthroughs too.
We’re quite happy to engage with the big story — the documentary, the ‘comments below’, the headline breakthrough. (And, to be fair, people like Hannah Fry have done a brilliant job of making that world accessible and genuinely interesting.) But we don’t always join the dots between that and the smaller, everyday changes.
Take the apps you already use. If you open Google Photos and type “dog on a beach” or “dad asleep on sofa”, it will find the photo without you tagging or organising anything. If you use Spotify, your Discover Weekly playlist is shaped by models trained on millions of listening patterns. If you scroll Netflix, what you see first has been carefully selected.
None of that feels like “AI” (except maybe when the facial recognition starts to tip into “hmm”). For the most part, it just feels like the product works better than it used to. And that’s the difference. AI has moved from something you notice to something you rely on. I, for one, don’t want to go back to remembering where or when I last saved a scan of my passport.
Breaking language barriers without making a fuss
The same is true for translation. Tools built into platforms like Zoom and Microsoft Teams now offer live captions and translation that are, in many cases, good enough to follow a conversation across languages.
What does that mean in practice? It means a colleague can contribute in their second language and still be understood. A parent can follow a school webinar without tripping over some particularly obscure bit of English school idiom. That international project runs just a bit more smoothly.
It’s not perfect. But then nothing is. It has crossed the line into being genuinely useful without requiring any real effort from us.
The bits that will never trend (but matter anyway)
There is also a whole category of AI work that won’t get 10 million views, but probably should. We may just need Hannah Fry to make more television.
Researchers are using AI to improve weather forecasting, model climate risks, and optimise energy systems. Some of that work again comes out of places like DeepMind, alongside universities and public research bodies.
And what does that mean for you? It might mean earlier warnings for extreme weather, better planning for flood risks, and more stable energy systems. Individually, these are small improvements. Collectively, they matter rather a lot.
Creative tools: from “wow” to “well, of course”
A couple of years ago, AI-generated images were the thing everyone was showing each other. Now, the more interesting change is that AI functionality is built in as standard to tools like Adobe Photoshop and Canva. It’s how I so easily snipped the bins out of my best holiday photos.
Designers use it to extend backgrounds, remove objects, and test ideas quickly. Video tools clean up audio and generate captions. Music tools assist with production. Which means your friend who runs a small business, or edits videos, or does a bit of design on the side, is quietly producing better work with less friction.
No one’s stopping to marvel at it anymore. Although the same people are still confidently declaring AI will be the death of the arts. The problem is: which AI? Which arts?
Where you can just be pleased it works
If AI helps identify cancer earlier, speeds up drug discovery, or improves weather prediction, you don’t need to overthink it. You can just be pleased that it exists.
The same goes for a lot of the invisible improvements: better photo search, more accessible meetings, and tools that remove the more tedious parts of creative work (no one really enjoyed painstakingly snipping around objects in Photoshop).
This is technology doing its job properly.
Where you might reasonably opt out
But not every use of AI is entitled to enthusiasm. There is a growing category of features that are technically impressive but not always necessary—and sometimes feel like a solution in search of a problem. Or, occasionally, the creator of new ones.
A small, everyday example (and less grim than the obvious alternatives): searching for a recipe. You type something very specific — say, a proper boeuf bourguignon, spelled more-or-less correctly — and Google serves up an AI-generated summary recipe at the top of the page. It’s a kind of compromise version: simplified, generic, correct but not necessarily right.
Sometimes that’s fine. Sometimes it’s exactly what you don’t want. But sometimes you have a very specific craving based on a genuinely excellent meal and you want to recreate it — not something adjacent to it.
If you’re after the real thing — the version someone has tested, refined, and cares about — the AI layer can get in the way. And use up a fair bit of energy in the process. So it’s reasonable to be selective: use AI where it genuinely improves the outcome; ignore it — or switch it off — where it flattens or dilutes what you’re trying to do. Go back to the original source when that matters more than speed.
Because yes, you can put some beef and wine in an Instant Pot. But there’s a reason people in France still add brandy, flame it, always include a couple of bay leaves, and give it four hours in the oven. Bon app’.