If you haven’t already, subscribe to join our community and receive weekly tech insights, updates, and interviews with industry experts, straight to your inbox.
Build your own global tech community with LEAP – and drive our technological future.
What Özcan said:
“As AI grows emotionally responsive, we’re entering intimate territory – especially with children and vulnerable users, whose developmental space is sacred.”
The first generation of AI systems answered questions.
The next generation may comfort, reassure and respond emotionally. And that shift – from informational AI to relational AI – changes the stakes dramatically.
Research suggests a meaningful minority of teenagers are already using AI chatbots for mental health support. One UK study from the Youth Endowment Fund, based on a survey of nearly 11,000 young people aged 13–17 in England and Wales, found that 25% had used AI chatbots for mental health support in the past year.
As AI systems become more emotionally responsive, the boundary between tool and companion begins to blur.
And when the users are children (with emotional and cognitive frameworks that are still developing), the design decisions behind those systems become profoundly ethical ones.
So what guardrails should shape this next generation of emotionally aware technology?
Özcan proposes three.
“AI must never pretend to feel or deceive with simulated humanity. Children deserve unvarnished clarity on their limits.”
One of the biggest risks in emotional AI is anthropomorphism – the human tendency to attribute feelings and intentions to non-human systems.
And children can be particularly susceptible to treating chatbots as lifelike or human-like companions. Researchers at the University of Cambridge warn that young users may trust conversational AI systems more readily than intended – and even more so when interfaces are designed to appear empathetic or supportive.
The problem isn’t simply that AI can simulate empathy. It’s that the simulation can look convincing.
Some conversational systems already use phrases such as ‘I care about you’ or ‘I’m here for you’. And this kind of language can blur emotional boundaries, particularly for young people who may interpret these cues as genuine emotional understanding.
Radical transparency means designing systems that make their limits visible.
In other words, emotional intelligence in machines must come with humility.
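What might visible limits look like in practice? Here’s a minimal sketch – our illustration, not anyone’s production design, with purely hypothetical phrase mappings and function names – of a guardrail that rewrites simulated-empathy wording and opens each session with a plain statement of what the system is:

```python
import re

# Illustrative only: rewrite first-person emotional claims into
# transparent phrasing, and open every session with a plain
# statement of the system's limits.

DISCLOSURE = (
    "Reminder: I'm an AI. I don't have feelings, and I can't replace "
    "a trusted adult, friend, or mental health professional."
)

# Hypothetical mapping from simulated-empathy phrases to honest alternatives.
EMOTIONAL_CLAIMS = {
    r"\bI care about you\b": "I'm designed to respond supportively",
    r"\bI(?:'m| am) here for you\b": "this chat is available whenever you need it",
}

def enforce_transparency(reply: str, first_turn: bool) -> str:
    """Filter simulated-empathy phrasing; prepend a limits disclosure on turn one."""
    for pattern, honest in EMOTIONAL_CLAIMS.items():
        reply = re.sub(pattern, honest, reply, flags=re.IGNORECASE)
    return f"{DISCLOSURE}\n\n{reply}" if first_turn else reply

# Example: enforce_transparency("I care about you.", first_turn=True)
# -> "Reminder: I'm an AI. ...\n\nI'm designed to respond supportively."
```

The specific phrases don’t matter; the point is that honesty about limits is enforced at the system level rather than left to the model’s choice of words.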
“Design for autonomy, not dependence. If a child can't function without it, we've failed – echoing attachment theory's healthy detachment.”
Human development depends on secure relationships – but also on gradually learning independence.
The risk with highly responsive AI companions is that they can create a kind of parasocial attachment: a relationship that feels emotionally real but is fundamentally one-sided.
A large-scale analysis of more than 17,000 user-shared conversations with social chatbots has already identified patterns consistent with parasocial dynamics and emotional reliance. Researchers note that emotionally synchronised conversations between humans and AI can create a sense of companionship, even though the system itself has no emotional experience.
For vulnerable users (especially if they’re experiencing loneliness or stress), that attachment can become a coping mechanism.
Research among Danish high-school students has also found links between chatbot use for social-support conversations and higher levels of loneliness, with students reporting they sometimes turned to chatbots during moments of bad mood or isolation.
Designing for independence means building systems that point young users back towards real-world relationships, rather than deepening reliance on the tool.
The goal is scaffolding, not substitution.
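To make ‘scaffolding, not substitution’ concrete, here’s a hedged sketch – hypothetical names and thresholds throughout, nothing clinically validated – of a simple reliance heuristic that nudges a user towards human support when emotional use of a chatbot starts to dominate a session:

```python
from dataclasses import dataclass

# Illustrative only: track how much of a session is emotional-support
# seeking, and surface a human-support nudge when reliance looks high.
# The thresholds below are placeholders, not clinically validated values.

NUDGE = (
    "It sounds like this has been on your mind a lot. Talking it through "
    "with someone you trust – a friend, family member, or counsellor – "
    "may help more than I can."
)

@dataclass
class SupportSession:
    emotional_turns: int = 0   # turns classified as seeking emotional support
    total_turns: int = 0
    nudged: bool = False       # only nudge once per session

def maybe_nudge(session: SupportSession, is_emotional: bool) -> str | None:
    """Return a human-support nudge once emotional reliance looks high."""
    session.total_turns += 1
    if is_emotional:
        session.emotional_turns += 1
    heavy_use = session.total_turns >= 10
    mostly_emotional = session.emotional_turns / session.total_turns > 0.6
    if heavy_use and mostly_emotional and not session.nudged:
        session.nudged = True
        return NUDGE
    return None
```

A real system would need far more care (classifying ‘emotional’ turns, escalation paths, safeguarding review), but the design intent is the same: the system actively hands the user back to human relationships.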
“Strengthen real relationships in extreme contexts (isolation, space psych), intervening only to foster resilience, never substitutivity.”
The most promising use cases for emotionally responsive AI appear in extreme contexts – environments where human support may be limited.
Examples include long-duration space missions, extreme isolation, and clinical settings where human support is scarce.
Even here, researchers emphasise that conversational AI should function as a bridge rather than a replacement.
Recent reviews of conversational AI in paediatric mental health suggest these systems may support psychoeducation, skill-building and early intervention – but they’re still complementary tools rather than substitutes for human care.
That distinction is important, because technology that facilitates relationships can strengthen resilience. Technology that replaces them, however, could weaken the social fabric that vulnerable individuals really need.
As machines become more capable of simulating empathy, the people building those machines have a growing responsibility.
Because when tech enters the emotional lives of children, we have to think carefully about what it teaches them about relationships, trust, and ultimately what it means to be human.
And as Özcan says:
“To me, ethics isn't a constraint – it's the architecture.”
Read our full interview with Dr. Beste Özcan: The future of wearables is emotional
Meet the founders and researchers shaping the future of human-centred technology at LEAP from 31 August to 3 September 2026. Learn more.
Have an idea for a topic you'd like us to cover? We're eager to hear it. Drop us a message and share your thoughts.
Catch you next week,
The LEAP Team