
Welcome to the 42 new techies who have joined us this week.
If you haven’t already, subscribe and join our community to receive weekly tech insights, updates, and interviews with industry experts straight to your inbox.
What Margaret Mead said:
“The most intractable problem today is not pollution or technology or war; but the lack of belief that the future is very much in the hands of the individual.”
When did she say this?
It’s a quote from her book Changing Culture: The Challenge for the Future, published in 1970. So yes, a while ago – but give us a minute to explain why we’re sharing it with you today.
On the blog this week we’ve been talking about AI-powered mental health tools. Chatbots, emotion recognition, XR-based therapy environments – the tech is incredible.
But the question at the heart of it isn’t about code or algorithms. It’s about care. And empathy. And whether a machine can ever truly understand what it means to be human, or if it can only pretend.
Margaret Mead’s words remind us that even in a world shaped by innovation, it’s still individuals – people – who carry the future forward. The tech we build reflects the choices we make about what matters.
So when we use AI to support mental health, the question becomes:
Are we building tech that makes people feel heard, safe, and seen? Or are we trying to replace something that can’t be replaced?
In all our conversations with tech innovators, leaders, and activists around the world, the same idea keeps coming back: tech can enhance human experiences, but it can’t replicate them.
A generative AI chatbot might speak kindly to someone in distress – and that may be valuable, even life-saving in that moment. But the bot doesn’t feel kindness. And the distinction matters.
When we interviewed healthcare futurist Dr. Koen Kas (CEO at Healthskouts), he said:
“A digital strategy – digital and data capabilities empowered by AI – should have the ability to unlock moments of customer and patient delight.”
And he’s right. If the goal is to enhance care (not to replace the human relationship at the centre of it), then emotional AI might be part of a powerful shift in mental health support. But we can’t mistake simulation for connection. A chatbot can offer comfort, but it doesn’t understand suffering.
On the blog, we looked closely at three recent reviews of AI in mental healthcare. Across dozens of studies, researchers have found promising results – but also real concerns. The consensus across all three reviews (taking into account a wide range of studies) is that AI should augment, not replace. It’s a technological support system, not a human substitute.
If you’re building AI, marketing it, or investing in it, then you’re helping to shape how people interact with it. And in mental healthcare, the stakes are high.
The words we use to describe AI tools influence how people perceive them. So we have to ask: does a product promise empathy itself, or does it support genuine empathy delivered by a human?
One builds trust. The other risks breaking it.
As Mead said over fifty years ago, the future is still in our hands. And how we design, regulate, market, and integrate AI into care will decide whether it empowers people or leaves them more isolated than before.
At LEAP, we’ll bring the leading minds in AI and health tech together under one roof. Pre-register now to be part of the conversation.
Have an idea for a topic you'd like us to cover? We're eager to hear it. Drop us a message and share your thoughts.
Catch you next week,
The LEAP Team