I wasn’t planning to write about Grok this week. Like many people, my first instinct was a tired sigh: another Musk-shaped distraction. But the news matters — not because Grok is clever, or new, or especially impressive, but because of what it tells us about where we are now.
Put simply: powerful AI capabilities are becoming normal, accessible and very, very easy to misuse. And the gap between what is technically possible and what is legal, ethical, or responsible is being papered over by slick interfaces and culture-war noise. The risk is that you start to treat your behaviour as purely a personal choice and forget that our society still has rules. Good ones. Rules designed to keep people safe.
Nothing that’s happened suddenly makes illegal things legal. Harassment is still harassment. Exploitation is still harmful. Threatening or abusive language isn’t magically acceptable because a chatbot produces it. But mainstream tech platforms are lowering the friction involved in crossing those lines — and making it easier for people to do harmful things without stopping to think.
It’s worth being clear that this isn’t true of all mainstream AI tools. Systems like ChatGPT or Claude are far from perfect and still make mistakes, but they are at least designed with guardrails intended to reduce the risk of harm, rather than treating safety as an optional extra or a punchline. Or, in the case of Grok, making a lack of safety effectively a premium feature. I could write another entire blog just on the ethics of that… decision.
That’s why I really liked Neil Watkins’ recent piece on talking to kids about AI. It doesn’t lean into panic or bans, but instead focuses on helping young people understand what these tools are, where the risks lie, and why responsibility still sits with the human using them. That feels exactly right — and not just for kids.
I was also struck by Kevin Yong’s piece, “AI should not just be a tool for the baddies”. It makes an uncomfortable but important point: choosing not to understand AI doesn’t make the risks disappear. It just leaves the field open to people who don’t care about harm, misuse or consequences. We can’t afford to dither while the bad guys race ahead.
Generative AI can now produce highly convincing impersonations — a fake voice note from a colleague, or a fabricated message attributed to a teacher or manager. The technology makes this easy. But when tools remove friction and downplay risk, people are more likely to experiment first and think later.
The way these tools are built matters. Design choices — about defaults, friction, logging, and safeguards — can create an extra layer of control that supports responsible use rather than undermining it. That’s the space we try to build our own tools in. Good design doesn’t remove human responsibility, but it does shape behaviour. Pretending those choices are neutral is itself a choice.
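To make that concrete, here is a minimal sketch of what “friction, logging, and safeguards” can look like in code. Everything in it is hypothetical: the `Request` type, the `looks_sensitive` keyword check and the confirmation step stand in for a real safety classifier and review flow, and none of it is any actual product’s API.

```python
import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("genai-wrapper")

# Hypothetical keyword list standing in for a real policy classifier.
SENSITIVE_MARKERS = {"impersonate", "voice of", "pretend to be"}


@dataclass
class Request:
    user_id: str
    prompt: str
    confirmed_sensitive: bool = False  # explicit opt-in adds friction


def looks_sensitive(prompt: str) -> bool:
    """Crude stand-in for a real safety classifier."""
    lowered = prompt.lower()
    return any(marker in lowered for marker in SENSITIVE_MARKERS)


def generate(request: Request, model_call) -> str:
    """Wrap a model call with logging and a confirmation step.

    `model_call` is any function taking a prompt and returning text;
    the wrapper itself is model-agnostic.
    """
    log.info("request from %s: %.60s", request.user_id, request.prompt)

    if looks_sensitive(request.prompt) and not request.confirmed_sensitive:
        # Deliberate friction: make the user pause and re-confirm
        # rather than silently refusing or silently complying.
        log.warning("sensitive request held for confirmation: %s", request.user_id)
        return "This request looks sensitive. Please confirm you intend this use."

    return model_call(request.prompt)


if __name__ == "__main__":
    # Stub model so the example runs without any external service.
    echo_model = lambda p: f"[generated text for: {p}]"
    print(generate(Request("alice", "write a haiku"), echo_model))
    print(generate(Request("bob", "impersonate my manager"), echo_model))
```

The point isn’t the specific check; it’s that pausing, logging and asking for confirmation are deliberate design decisions, and leaving them out is a design decision too.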
This is where Grok — and Musk’s position on it — becomes more than an eye-roll moment. Musk and his teams have publicly leaned into branding Grok’s edgy, adult, NSFW-oriented functionality as “spicy”, including a so-called “Spicy Mode” for image and video generation that lets users create nudity-oriented content. Musk has also shared screenshots and promotional language signalling a willingness to answer “spicy” questions more readily than other AI systems do. The problem isn’t just the tool. It’s the framing: rules are boring, safeguards are censorship, and anyone raising concerns is humourless or afraid of free speech. That framing is seductive, implying we’re all just squares if we aren’t totally okay with it.
We don’t need to clutch our pearls, but we also don’t need to play along.
Being responsible here doesn’t mean being anti-AI. It means understanding what’s happening, being honest about what’s still broken or dangerous, and not outsourcing our judgement to a chatbot — however confidently it speaks. It means adults acting like adults, and helping younger users do the same.
AI is now part of the everyday. That makes responsibility more important, not less. And if that’s uncomfortable for tech leaders who prefer chaos to accountability, that’s rather my point.