Digital doctors and emotional AI: Can tech be empathetic?

The emergence of chatbots, generative AI therapists, emotion recognition systems, and XR‑based therapy environments has ignited intense debate in mental healthcare. Can machines ever truly empathise with humans, or are they simply simulating the experience?

We drew on three recent peer-reviewed studies to map this debate: 

  1. Wang et al. (2025) offer a systematic review published in JMIR Mental Health that categorises generative AI (GenAI) interventions into three areas: diagnosis and assessment, therapeutic tools, and clinician support. The review also proposes an ethical framework, GenAI4MH, to guide responsible deployment.
  2. Ni et al. (2025) conduct a PRISMA‑ScR scoping review of AI‑driven mental health interventions, mapping tools across five phases: screening, treatment, post‑treatment follow‑up, clinical education, and population-level prevention.
  3. Xian et al. (2024) provide a scoping review of ethical debates around generative AI in mental health care, concluding that GenAI shows promise as a supplementary tool, but cannot replace trained human clinicians. 

Here’s what these three studies say about the emotional potential of AI – with extra insights from our own LEAP community. 

How AI is actually helping 

One of the most valuable functions of AI in mental healthcare right now is diagnosis and screening. AI tools are increasingly used to detect depression and anxiety – and even to stratify risk based on sentiment in social media language. Wang et al. note that around 47% of reviewed studies focus on this use case. And Ni et al. reinforce this view, showing widespread deployment of NLP and ML models in early triage and pre‑treatment stages. 

Therapeutic interventions are also increasingly effective, with chatbot therapy now well-studied. A comprehensive review of the field in 2023 showed that across 15 controlled trials, AI‑powered conversational agents significantly reduced symptoms of depression (Hedges’ g ≈ 0.64) and distress (g ≈ 0.7) – though many of the reviewed systems were rule-based rather than GenAI-driven. 
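
For context on those numbers (this is general statistical background, not a figure from the review itself): Hedges’ g is a standardised mean difference – the gap between the treatment and control group means, divided by their pooled standard deviation, with a correction for small samples. By convention, values around 0.5 are read as moderate effects and around 0.8 as large ones, so g ≈ 0.64 points to a meaningful reduction in symptoms. In LaTeX notation:

  % Standardised mean difference with a small-sample bias correction
  g = \frac{M_1 - M_2}{s_{\text{pooled}}}\left(1 - \frac{3}{4(n_1 + n_2) - 9}\right),
  \qquad
  s_{\text{pooled}} = \sqrt{\frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}}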

Wang et al. extend that understanding into LLM‑driven chatbots, noting both benefits and serious risks – such as privacy violations or emotional harm if predictions fail. 

In their review, Ni et al. describe how AI-driven tools also support post-treatment monitoring, tracking mood or relapse risk using digital diaries, smartphone sensor data, or emotion recognition algorithms. 

And GenAI is also being used as a support tool for clinicians themselves; for example, assisting mental health professionals with tasks like summarising session notes, triaging new patients, or generating psycho‑educational materials. Wang et al. place this in the third core category of GenAI usage. 

But what about empathy?

Empathy is fundamentally relational. As Xian et al. emphasise, generative AI lacks embodied emotional intelligence. So they caution that while GenAI can augment mental healthcare, it cannot replicate human empathy – and should remain a support tool, not a substitute for trained professionals. 

The ethical frameworks proposed in Wang et al.’s review also emphasise the importance of privacy, user safety, and transparency. These are all qualities that connect to trust; and trust is the foundation of empathy in therapeutic contexts. 

Tech can simulate empathy in tone, pacing, and emotional mirroring. It can help users feel understood in moments of isolation – we know this, because it’s already happening in practice. And this can offer powerful benefits in accessibility and early intervention.

But the researchers agree that AI can’t truly empathise. It lacks lived experience, moral judgement, and relational context; and without these, trust is fragile. 

Empathetic or not, digital health tools can transform healthcare at scale 

When we interviewed LEAP speaker Dr. Koen Kas (CEO at Healthskouts), he told us that AI-powered digital health tools can absolutely change the way we approach healthcare; and that access to those tools is a real advantage for healthcare providers worldwide. 

Speaking about his work at Healthskouts, he said:

“There wasn’t a single resource showing all the digital health tools around on the planet. We created that.”

He described a future where patient and caregiver friction is understood and removed through digital solutions that enhance human interaction:

“A digital strategy – digital and data capabilities empowered by AI – should have the ability to unlock moments of customer and patient delight.”

AI may not replace human empathy, but it can empower and inspire patients to take a proactive approach to their health. 

A balanced path to the future 

All three reviews emphasise that successful deployment requires:

  • Human oversight alongside AI tools.
  • Clear ethical frameworks that guard privacy and emotional safety.
  • Alignment with clinical workflows, not bypassing professionals.

And our interview with Dr. Kas reinforces this:

“Show me how both the customer and the end‑user benefits,” he said, urging that digital strategy must facilitate and enhance existing human-delivered care rather than replace it.

AI chatbots and GenAI tools can enhance access, provide early support, and lighten clinical burdens. But they simulate empathy rather than feel it. That can be enough to bridge access gaps, but not to replace therapeutic connection. 

According to these studies, true empathy remains human. For AI to play a responsible role in mental healthcare, we need oversight and partnership with clinicians. Because only then can AI support emotional healing without mistaking simulation for connection.
