We’re calling it: generative AI is officially mainstream for professionals

Author: alex.steele@leadingai.co.uk

Published: 16/11/2025

Leading AI

Generative AI is no longer just “the thing we might test, if we have time” or “that uncanny chatbot the young people seem to like”. It’s officially a normal part of your day.

That’s a huge deal. Not just because adoption is high and rising — and because the analysis behind Gartner’s classic hype cycle is, as ever, holding up beautifully — but because the conversation has shifted entirely. Collectively, we’ve moved (for the most part) beyond the “evil AI overlord” storyline into something more nuanced, one that recognises the multitude of possible AI use cases (only a handful of which could accidentally unleash Skynet).

We’re past the question of if or when we adopt AI; this is the era of what’s next and how best.

But, intriguingly, the “is it cheating?” whispers still linger — and the confidence and capacity to innovate remain very unevenly distributed. That hesitation in large parts of the market means there are still two kinds of mainstream AI adopters: the pragmatists and the sluggish remainder.

From experiment to everyday use

Our inspiration this week is the recent Wharton School / GBK Collective Accountable Acceleration report, which shows 82% of enterprise leaders now use generative AI at least weekly, and 46% use it daily — a 17-point jump from last year, bringing us to that crucial “about half” threshold. An impressive 72% of enterprises now claim to formally track ROI for GenAI, with three in four reporting positive returns.

That tells us most of the private sector is past pilot budgets and tentative trials. Businesses are tracking, reporting, and integrating generative AI as standard. The message is simple: we’re really doing this. If you’re not, you’re officially late to the party.

We’ve (mostly) moved on from fearing AI, but not from feeling guilty about using it

When we first started writing about AI, the pattern was clear: everyone could see the potential for better productivity and more interesting work, but relatively few people truly trusted it. The tools were there; the confidence wasn’t.

Fast-forward to 2025 and two things have changed. First, the fear has faded. According to Wharton, 89% of leaders now believe generative AI enhances employee skills, even if 43% still worry about skill atrophy. Second, the awkwardness remains — and may even have deepened. The technology is mainstream, but the etiquette of using it is still fuzzy.

That’s where adoption splits. Some organisations — especially in the public sector — are still holding back or have taken a classic halfway route: rolling out Copilot while blocking ChatGPT. It’s a well-intentioned move that ignores both the frustration of their teams and the opportunities they’re missing. Meanwhile, others have already moved ahead, deploying safer, right-sized tools and building the policies and habits that turn “pilot” use into everyday practice.

For teams with no sanctioned option — or stuck with the (let’s say) variable quality of Copilot’s output — ChatGPT quickly becomes the quiet default. It’s the first-draft machine, the secret assistant. People use it on their phones if they have to, because it works. What they usually don’t have is a secure, approved alternative or the framework to build skills and safe habits around it.

That’s a big problem. In ill-informed environments, most users don’t realise that ChatGPT isn’t great for retrieving verifiable facts (Perplexity is better, RAG solutions like ours are better still). They don’t get taught that, while Copilot pops up everywhere, Gamma makes cleaner slides and Codex writes tighter code. They just know the thing on their phone is super useful and the official alternative isn’t — and they don’t know where their (your) data might have gone.
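
For the avoidance of doubt, here is a minimal, hypothetical sketch of the retrieval-augmented generation (RAG) pattern we mean: the answer is grounded in retrieved source passages and cites them, rather than relying on whatever the model half-remembers. The toy keyword retriever and the call_llm stub below are placeholders for illustration, not our product or any particular vendor’s API.

```python
# A toy illustration of the RAG pattern: retrieve relevant passages first,
# then ask the model to answer only from those passages and cite them.
# Placeholder names and documents throughout; swap in a real index and LLM client.

DOCS = {
    "policy.md": "Staff may use approved AI tools for drafting, provided outputs are reviewed.",
    "rollout.md": "Copilot is licensed for all departments; ChatGPT access is under review.",
    "training.md": "Prompt-writing workshops run monthly; recordings live on the intranet.",
}

def retrieve(query: str, k: int = 2) -> list[tuple[str, str]]:
    """Rank documents by naive keyword overlap and return the top-k passages."""
    q_words = set(query.lower().split())
    scored = sorted(
        ((len(q_words & set(text.lower().split())), name, text) for name, text in DOCS.items()),
        reverse=True,
    )
    return [(name, text) for _, name, text in scored[:k]]

def call_llm(prompt: str) -> str:
    """Stand-in for whichever model endpoint you actually use."""
    return "(model response, grounded in the sources above)\n" + prompt

def answer(query: str) -> str:
    """Build a prompt that forces the model to answer from retrieved sources only."""
    sources = retrieve(query)
    context = "\n".join(f"[{name}] {text}" for name, text in sources)
    prompt = (
        "Answer using only the sources below and cite them by filename.\n"
        f"Sources:\n{context}\n\nQuestion: {query}"
    )
    return call_llm(prompt)

if __name__ == "__main__":
    print(answer("Which AI tools are approved for staff?"))
```

In a real deployment the retriever would be a proper search or vector index over your organisation’s approved documents, which is exactly why the answers stay verifiable.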

When an office blocks a popular tool and the team quietly switches to mobile, it’s not rebellion; it’s a signal. The question isn’t whether people will use AI — they already are. The question is how openly they can do it, and how safely they’re being helped to learn.

Culture is now the bottleneck

If the tools are there and people are using them, what’s stopping the full leap? The Wharton report has an answer: it’s not technology anymore, it’s human capital. Training budgets aren’t keeping pace. Many firms now list lack of training as a top-ten barrier to adoption.

The biggest gains are going to those who don’t just deploy the tools but anchor them in culture and governance. And yes, the “cheating” vibe is part of that cultural issue: how do we recognise “using AI” as legitimate work?

As we said in AI is a team sport, the connection between AI and human judgement matters most. The Wharton data backs that up: it’s less about the model, and more about how the organisation uses it.

AI: from the ‘evil overlord’ era to your favourite secret assistant

One of our first blogs was Is AI (a) good or (b) evil? — written when the headlines were full of doom and the chatbots were still learning better manners. Even then, we knew the hero-or-villain narrative was too simple. Now everyone can see it: AI isn’t the intruder anymore; it’s a colleague, whether you hired it or not. It’s good if you use it for good things. It’s evil if you use it for bad things. It’s all about you. Well, us.

Here’s what the evolution generally looks like:

Phase 1: Fear — “Will AI take my job?”
The era of nervous laughs in team meetings and those cautious “just testing it for fun” prompts after hours. Everyone wanted to see what it could do, but no one wanted to admit they took it seriously.

Phase 2: Hype — “Holy crap, look at what this thing can do!”
Suddenly your LinkedIn timeline was full of posts starting with “I asked ChatGPT to…” and ending with a slightly dodgy poem about leadership. Teams spun up pilots; emails got longer but more eloquent. The mood was equal parts wonder and chaos.

Phase 3: Mainstream — “Okay, how should we work with AI every day?”
This is where we are now: the post-glamour, lessons-learned, post-panic stage. AI is the standard but invisible assistant in an operating model that hasn’t quite reoriented around the new tech — the thing everyone uses but too few mention out loud. It writes meeting notes, emails, and half the project plan, but somehow still doesn’t get a budget line.

And that’s the cultural tension. We’ve accepted that AI isn’t plotting against us, but we haven’t quite agreed that it’s okay to use it openly. The tools are normal; the norms aren’t. Which is why the “Is it cheating?” question still bubbles up — not because anyone’s doing anything wrong, for the most part, but because we haven’t yet agreed what “right” looks like.

So, what should we say to our colleagues today?

Here are three take-aways you’re welcome to borrow; they come up in every meeting we ever have:

a) Stop hiding your AI tools.
If your colleagues are using AI on their phones because the desktop version is blocked, that’s a sign: your policy and culture are lagging behind what people need and know is achievable. Encouraging openness reduces risk and improves outcomes. You don’t turn off email when people send data to the wrong addressee; you refresh their training. Why treat an AI chatbot any differently? Why share other lessons learned but not your best prompts and hacks? In fact, why are you calling it a hack when it’s just smarter working?

b) Define what “good use” of AI means in your context.
Cheating isn’t about using the tools per se — it’s about passing off AI output as your unassisted thinking when that’s not how it happened. The tool is the assistant, not the actor, and it deserves your support and a namecheck in the end credits.

c) Train for the human element.
The biggest differentiator isn’t access to large models (although investment does drive a digital divide). The real advantage is showing people how to use AI well. If 43% of leaders worry skills are declining despite 89% saying GenAI enhances them, you’re looking at a skill-gap paradox. Understand the options, buy what you can afford, and show people how to make the best of them.

When you blocked ChatGPT in your office, you didn’t stop people using it…

…you just drove it underground and compounded a culture of secrecy. The shift we’re seeing now is that hiding doesn’t make sense anymore. AI isn’t fringe; it’s foundational.

We’ve moved on from “Is AI good or evil?” but we still haven’t fully settled how we feel about it. Is it cheating? Maybe not. Is it transparent? Not yet. Is it indispensable? Hell yes.

And maybe we aren’t meant to have strong feelings about it anymore. I don’t have feelings about the internet. At this stage, it just is.

So if you’re leading a team, writing policy, or just trying to stay sane in a world where everyone’s quietly whispering their best prompts, your job now is to shift the conversation. Integrate. Experiment. And think about how this new capability can fundamentally change how you operate.